No Clocks

3231 bookmarks
OHK Consultants: Strategy ❘ Innovation ❘ Development ❘ OHK Consultants
At OHK Consultants, we specialize in providing strategic, innovative, and development solutions to our clients. Our goal is to help organizations achieve their objectives by offering customized services that are tailored to their specific needs. We offer strategy, intelligence, and innovation consul
Data identification and collection are sometimes considered the most difficult and demanding aspects of GIS projects.
All relevant spatial and non-spatial data must be identified and collected at the parcel level of interest. Spatial data might include the parcel boundaries themselves, which can often be obtained from a single entity in the local government and, in some cases, derived from recent satellite imagery. Non-spatial data could include property characteristics such as the age of structures, building materials, square footage, and so forth, often found in property tax records. Let's break down the types of data you might need to collect for a comprehensive property valuation:
Property assessment: The methods vary by jurisdiction, but three primary approaches are commonly used.
Market approach: values a property based on what similar properties have sold for in the same area, considering the property's location, size, condition, and features alongside the prices at which comparable properties have recently sold.
Cost approach: often used for unique or special-purpose properties or new construction, values a property based on how much it would cost to replace it with an identical or similar property, accounting for the cost of materials and labor required to reproduce or replace it, minus depreciation.
Income approach: typically used for investment or commercial properties, values a property based on its income, calculating the net present value of future income streams given the yield an investor would expect for that investment.
Depending on the property's nature and data availability, these methods can sometimes be combined. The specifics can vary greatly depending on local laws and practices, so it's best to contact the local assessor's office for the most accurate information.
Each method requires specific types of information or measurements to complete the evaluation:
Market approach: the recent sale prices of comparable properties in the same area, plus characteristics of the property, including the number of bedrooms and bathrooms, square footage, lot size, age, and condition.
Cost approach: the cost to construct a similar property, including labor and materials (which might require data on local construction costs and labor rates); the value of the land as if it were vacant (which could be based on sale prices of comparable land parcels in the same area); and the depreciation of the property, whether due to physical wear and tear, functional obsolescence (e.g., an outdated layout), or external factors (e.g., changes in the neighborhood).
Income approach: the potential income the property could generate (for rental properties, the potential rental income; for businesses, the income the business generates); the operating expenses of the property, such as maintenance, utilities, insurance, and property management costs; and the capitalization rate, i.e., the rate of return a reasonable investor would expect on the property type, which could be based on rates for comparable properties.
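To make the arithmetic behind these approaches concrete, here is a minimal R sketch of each calculation. All figures, rates, and variable names are illustrative assumptions, not data from any particular assessor or jurisdiction.

```r
# Illustrative valuation arithmetic; every input value below is a made-up assumption.

# Income approach: value = net operating income / capitalization rate
gross_income  <- 36000                         # potential annual rental income
operating_exp <- 9000                          # maintenance, insurance, management, ...
cap_rate      <- 0.06                          # yield an investor would expect
income_value  <- (gross_income - operating_exp) / cap_rate

# Cost approach: replacement cost - depreciation + land value
replacement_cost <- 320000                     # labor and materials to rebuild today
depreciation     <- 0.15 * replacement_cost    # wear and tear, obsolescence, external factors
land_value       <- 90000                      # from sales of comparable vacant parcels
cost_value       <- replacement_cost - depreciation + land_value

# Market approach: scale recent comparable sales to the subject property,
# here crudely via price per square foot of three comparables
comp_prices  <- c(410000, 398000, 425000)
comp_sqft    <- c(1850, 1800, 1950)
subject_sqft <- 1900
market_value <- mean(comp_prices / comp_sqft) * subject_sqft

c(income = income_value, cost = cost_value, market = market_value)
```

In practice each figure would be drawn from the data sources listed below rather than hard-coded.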
Data Type #1—Spatial Data: these data have a geographic component and are important for property valuation. They can usually be generated with GIS tools if not already available.
Parcel Boundaries: the specific outlines of each property. They typically come in the form of shapefiles and indicate the precise geographical boundaries of a property.
Future Development Plans: information about any future infrastructure or development plans in the area, which could significantly impact the future value of a property.
Satellite Images: high-resolution satellite images can provide a lot of useful information about a property, including the size and shape of buildings, the presence of features like pools or solar panels, and the general condition of the property.
Topographic Data: this could include Digital Elevation Models (DEMs) showing the elevation of the land, which can affect property value.
Land Use Data: shows how land in the area is used (residential, commercial, industrial, agricultural, etc.). Land use can have a significant impact on property values.
Road Networks: proximity to transportation infrastructure can influence a property's value. This includes public transit stations or stops; proximity to public transportation can be a significant factor in property valuation.
Environmental Data: this might include flood zones, proximity to bodies of water, forest cover, soil quality, etc.
Amenities: locations of schools, parks, shopping centers, hospitals, and other amenities.
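As a sketch of how a few of these spatial layers might be combined, the following R snippet uses the sf package to read parcel boundaries, compute parcel areas, and measure each parcel's distance to its nearest amenity. The file names, layer contents, and the projected CRS are placeholders for illustration.

```r
library(sf)

# Hypothetical inputs: parcel polygons and amenity points (schools, parks, hospitals, ...)
parcels   <- st_read("parcels.shp")       # parcel boundary polygons
amenities <- st_read("amenities.gpkg")    # point locations of amenities

# Reproject to a projected CRS so areas and distances come out in meters
# (EPSG:3857 is a placeholder; a local projected CRS is preferable in practice)
parcels   <- st_transform(parcels, 3857)
amenities <- st_transform(amenities, 3857)

# Parcel area as a valuation input
parcels$area_m2 <- as.numeric(st_area(parcels))

# Distance from each parcel to its nearest amenity
nearest_idx <- st_nearest_feature(parcels, amenities)
parcels$dist_to_amenity_m <- as.numeric(
  st_distance(parcels, amenities[nearest_idx, ], by_element = TRUE)
)
```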
Data Type #2—Non-Spatial Data: these data don't have a geographic component but are still important for property valuation.
Structural Characteristics: the age of the buildings, their square footage, the number of rooms, the types of materials used, and other physical characteristics.
Building Codes Compliance: details about the building's compliance with local codes can impact its value, including information about any code violations and the results of any inspections.
Rental History: if a property has been rented out in the past, information about the rental history can be useful, including the length of each rental period, the amount of rent charged, and the reliability of the tenants.
Ownership Details: the owner's name, purchase date, purchase price, etc.
Sale History: information about previous property sales, including dates and prices.
Tax Records: past tax assessments and payments can be useful for predicting future tax amounts.
Local Market Data: data about recent sales of comparable properties in the area.
Zoning Regulations: the property's zoning designation and what kind of structures or businesses are allowed there.
Environmental Efficiency Ratings: ratings related to energy efficiency, such as LEED certification, or other measures of the property's environmental impact.
Disaster History: information about any natural disasters that have affected the property, such as floods, earthquakes, or wildfires.
Crime Statistics: detailed data about the prevalence and types of crime in the area can significantly impact a property's value.
Liens and Judgments: any liens or judgments against the property could affect its marketability and value.
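Non-spatial attributes typically arrive as tables keyed by a parcel identifier, so attaching them to the parcel geometries is an ordinary join. A hedged sketch follows; the file names and the "parcel_id" key are assumptions about how the local records are structured.

```r
library(sf)
library(dplyr)

parcels     <- st_read("parcels.shp")        # spatial: parcel polygons
tax_records <- read.csv("tax_records.csv")   # non-spatial: assessments and payments
sales       <- read.csv("sale_history.csv")  # non-spatial: prior sale dates and prices

# Join the non-spatial tables onto the parcels by the shared parcel identifier
parcels_enriched <- parcels |>
  left_join(tax_records, by = "parcel_id") |>
  left_join(sales, by = "parcel_id")
```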
When collecting data, ensuring ISO or INSPIRE compliance can be important for interoperability, standardization, and data sharing. While it's crucial to consult the specific ISO or INSPIRE requirements for your region or project, here are some general considerations for data formats and for importing data into open-source GIS software such as QGIS.
Spatial Data Infrastructure (SDI) Considerations—ISO (International Organization for Standardization) standards play a critical role in guiding the creation and management of Spatial Data Infrastructure (SDI). Here's how the data could be formulated with references to relevant ISO standards:
Data Availability (ISO 19115: Metadata): according to this standard, there should be adequate metadata to describe the available spatial data for the required geographic areas. This ensures that users can find the appropriate datasets and understand their content, source, and usability.
Data Accuracy (ISO 19157: Data Quality): this standard provides a framework for specifying and reporting data quality, including accuracy. The SDI will ensure the spatial data is accurate and up to date, with quality measures implemented according to this standard.
Interoperability (ISO 19119: Services): this standard defines how geographic information services are specified and how they interact. It matters where there is an ambition to integrate with other systems used by, for example, tax authorities; in such cases the data can integrate effectively with those systems, promoting seamless data sharing and usage.
Sustainability (ISO 19101: Reference Model): this standard provides a framework for establishing and maintaining an SDI. It will guide the project in ensuring the system is robust, maintained, and updated over time so it remains relevant and useful.
Schematic Mapping of Existing Data: this process entails visualizing or representing existing spatial data in an organized, intuitive, and easily understandable format. It can cover the various geographic and spatial data an organization may have collected over time, provides an overview of the current data situation, and can help identify patterns, relationships, or discrepancies that might not be readily apparent from raw data.
Schematic Mapping of Periodic Reports: periodic reports contain valuable insights into temporal changes, trends, and patterns. Schematic mapping of such reports visually represents these insights over time, often revealing otherwise hidden temporal patterns or changes. This makes it easier for decision-makers to understand trends and make informed decisions.
Schematic Mapping of the Scheme of Service of Selected Institutions: this pertains to the visual representation of how the different services of an organization or institution are interconnected spatially, how they function, and how they can influence one another. In an SDI context, it can map the flow of spatial data within and between organizations, highlighting bottlenecks, inefficiencies, or potential areas for improvement.
·ohkconsultants.com·
100-Step Land Due Diligence Checklist for 2025 - The Land Geek
Learn every step of the land due diligence process in this expert guide. From confirming access to reviewing zoning, this 100-step checklist will help you make confident, informed land buying decisions.
·thelandgeek.com·
Exploring geometa: An R Package for Managing Geographic Metadata – R Consortium
geometa provides an essential object-oriented data model in R, enabling users to efficiently manage geographic metadata. The package facilitates handling of ISO and OGC standard geographic metadata and their dissemination on the web, ensuring that spatial data and maps are available in an open, internationally recognized format.
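A minimal sketch of what that looks like in practice, following the patterns in geometa's documented examples; the method names below come from those examples and should be verified against the current package reference before use.

```r
library(geometa)

# Build a minimal ISO 19115 metadata record using geometa's R6 classes
md <- ISOMetadata$new()
md$setFileIdentifier("parcel-valuation-dataset-2024")   # identifier is illustrative
md$setLanguage("eng")
md$setCharacterSet("utf8")
md$setDateStamp(Sys.time())
md$addHierarchyLevel("dataset")

# Serialize to ISO/TC 211 XML for publication in a metadata catalogue
xml <- md$encode()
md$save("parcel-valuation-metadata.xml")
```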
·r-consortium.org·
Access Uber's H3 Library
Provides access to Uber's H3 library for geospatial indexing via its JavaScript transpilation, h3-js, and V8.
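A hedged sketch of the kind of call this enables; the function names follow recent h3jsr releases (older versions used point_to_h3() and similar names), so check the documentation of your installed version.

```r
library(sf)
library(h3jsr)

# A sample point in longitude/latitude (WGS84)
pt <- st_sfc(st_point(c(151.2093, -33.8688)), crs = 4326)

# Index the point into an H3 cell at resolution 9, then recover the hexagon boundary
cell    <- point_to_cell(pt, res = 9)
hexagon <- cell_to_polygon(cell)
```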
·obrl-soil.github.io·
Extension Types for Spatial Data for Use with Arrow
Provides extension types and conversions between R-native object types and Arrow columnar types. This includes integration among the arrow, nanoarrow, sf, and wk packages such that spatial metadata is preserved wherever possible. Extension type implementations ensure first-class geometry data type support in the arrow and nanoarrow packages.
·geoarrow.org·
2025-12-05 AI Newsletter - Posit
Several frontier model releases, the first viable AI-generated technical diagrams, and an evaluation suite for R code generation.
·posit.co·
Unraveled! The H3 Geospatial Indexing System - Geospatial World
Grid systems are vital for analyzing large spatial data sets and partitioning areas of the Earth into identifiable grid cells. Keeping this in view, Uber developed the H3 Geospatial Indexing System.
·geospatialworld.net·
mapgl: WebGL Maps in R with Mapbox and MapLibre
Provides an interface to the Mapbox GL JS and MapLibre GL JS interactive mapping libraries to help users create custom interactive maps in R. Users can create interactive globe visualizations; layer sf objects to create filled maps, circle maps, heatmaps, and three-dimensional graphics; and customize map styles and views. The package also includes utilities to use Mapbox and MapLibre maps in Shiny web applications.
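A minimal sketch of that workflow for an sf polygon layer. The add_fill_layer() call follows mapgl's documented add_*_layer() pattern, but the exact argument names and the fit_bounds() helper are assumptions to verify against the package reference.

```r
library(mapgl)
library(sf)

parcels <- st_read("parcels.gpkg")   # hypothetical sf polygon layer

# A MapLibre map (no access token required) with the parcels drawn as a fill layer
maplibre() |>
  fit_bounds(parcels) |>
  add_fill_layer(
    id = "parcels",
    source = parcels,
    fill_color = "steelblue",
    fill_opacity = 0.5
  )
```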
·walker-data.com·
Using Sprites | Guides | Map design | MapTiler
Every map can use its own set of icons for displaying points of interest, highway shields, peaks, etc. In special cases, maps can also include patterns, helping users distinguish between similar polygon features. A set of icons and patterns and a file defining which icon should be used for what purpose is called a sprite.
·docs.maptiler.com·
Introduction to STAC GeoParquet
Introducing STAC GeoParquet, a specification and library for storing and serving STAC metadata as GeoParquet.
·cloudnativegeo.org·
Geospatial Data Pipeline Management: Modern approaches vs traditional methods - Matt Forrest
Having the right formats is one thing. Getting your data into those formats reliably, at scale, and on schedule is another thing entirely. You can read all you want about GeoParquet and Zarr and COG, but if you can't create a repeatable process to convert your legacy shapefiles to GeoParquet or your NetCDF files to
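As a small illustration of what a repeatable conversion step can look like in R, here is a sketch that batch-converts legacy shapefiles to GeoParquet. It assumes the folder names shown and a GDAL build (3.5+) that includes the Parquet driver; without that driver st_write() will fail.

```r
library(sf)

# Placeholder folders for the legacy inputs and converted outputs
in_dir  <- "legacy_data"
out_dir <- "parquet_data"
dir.create(out_dir, showWarnings = FALSE)

shapefiles <- list.files(in_dir, pattern = "\\.shp$", full.names = TRUE)

for (shp in shapefiles) {
  out <- file.path(out_dir, sub("\\.shp$", ".parquet", basename(shp)))
  st_read(shp, quiet = TRUE) |>
    st_write(out, driver = "Parquet", delete_dsn = TRUE)
}
```

In a production pipeline this loop would sit inside whatever scheduler or orchestration tool runs the rest of the workflow.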
·forrest.nyc·
Leveraging Azure Batch and Geospatial Open Source Standards to Map the World | Azure
By: Zoe Statman-Weil & Mark Mathis, Impact Observatory, Inc.
Global decision makers need timely, accurate maps
Land use and land cover (LULC) maps are used by decision makers in governments, civil society, industries, and finance to observe how the world is changing, and to understand and manage the impact of their actions. Historically, LULC maps are produced using expensive, semi-automated techniques requiring significant human input, leading to significant delays between collection of satellite images and production of maps and limiting the ability to get regular and frequent temporal updates to users. Making the detailed, accurate maps the whole world needs to understand our rapidly changing planet, with timely updates, requires automation.
A groundbreaking artificial intelligence-powered 2020 global LULC map was produced for Esri on Microsoft Azure by Impact Observatory, a mission-driven technology company bringing AI algorithms and on-demand data to environmental monitoring and sustainability risk analysis. This map will be used to help decision makers address challenges in climate change mitigation and adaptation, biodiversity preservation, and sustainable development.
The Impact Observatory LULC machine learning (ML) model was trained on an Azure NC12s v2 virtual machine (VM) powered by NVIDIA® Tesla® P100 GPUs using over 5 billion pixels hand-labeled into one of ten classes: trees, water, built area, scrub/shrub, flooded vegetation, bare ground, cropland, grassland, snow/ice, and clouds. The model was then deployed over more than 450,000 Copernicus Sentinel-2 Level-2A 10-meter resolution, surface reflectance corrected images, each 100 km x 100 km in size and totaling 500 terabytes of satellite imagery (1 terabyte = 10^12 bytes) hosted on the Microsoft Planetary Computer. The processing leveraged geospatial open standards, Azure Batch, and other Azure resources to efficiently produce the final dataset at scale and at a low cost.
Geospatial Open Standards support distributed processing
The Microsoft Planetary Computer and Impact Observatory (IO) make extensive use of geospatial open standards, specifically Cloud Optimized GeoTIFF (COG) and SpatioTemporal Asset Catalog (STAC). Use of these standards enabled the team to produce the Esri 2020 Land Cover map using distributed processing at scale.
GeoTIFF is a widely used open standard for geospatial data based on the common TIFF image file format, able to support imagery with bands beyond the usual red, green, and blue visible light bands, and containing additional metadata to locate the image on the surface of the Earth. A COG is a regular GeoTIFF file, aimed at being hosted on an HTTP file server, with an internal organization that enables more efficient workflows on the cloud. Not only can COGs be read from the cloud without needing to duplicate the data to a local filesystem, but a portion of the file can be read using HTTP GET range requests, allowing for targeted reading and efficient processing. Azure Blob Storage is an ideal solution for hosting COGs as it is an unstructured data storage system accessible via HTTP requests. The LULC map was produced using Sentinel-2 COGs hosted on Microsoft's Planetary Computer in Blob Storage, and all prediction rasters produced from the model were saved as COGs to Blob Storage.
The STAC specification is a common language used to index geospatial data for easy search and discovery.
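As a small aside, this is what reading a portion of a COG straight over HTTP looks like from R, using GDAL's /vsicurl/ virtual filesystem; the URL and window extent are placeholders.

```r
library(terra)

# Open a Cloud Optimized GeoTIFF over HTTP without downloading the whole file;
# GDAL issues HTTP range requests for only the tiles and overviews it needs.
cog_url <- "https://example.blob.core.windows.net/sentinel2/B04.tif"  # placeholder URL
r <- rast(paste0("/vsicurl/", cog_url))

# Crop to a small window: only the overlapping internal blocks are fetched
window <- ext(399960, 409960, 5190240, 5200240)   # coordinates in the raster's CRS
chip   <- crop(r, window)
```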
IO searched the Planetary Computer's STAC catalog to identify Sentinel-2 imagery for certain locations, times, and cloud coverage. IO applied a community-supported implementation of the STAC interface to create its own STAC catalog on Azure App Services with Azure Database for PostgreSQL as the underlying data store. IO's STAC catalog was used to index data throughout the model deployment pipeline and thus served both as a tool for checkpointing pipeline progress and as an index of the final product.
COGs and STAC, both easily leveraged in Azure, provide a scalable and highly flexible framework for processing geospatial data.
Azure Batch enabled Impact Observatory to map the globe at record scale & speed
Azure Batch was used by IO to efficiently deploy the model over satellite images in parallel at a large scale. IO bundled the ML model and the deployment and processing code into Docker containers, and ran Batch tasks within these containers on a Batch pool of compute nodes.
The data processing pipeline consisted of three primary tasks: 1) deploying the model over one 100 km x 100 km Sentinel-2 COG by chipping it into hundreds of overlapping 5 km x 5 km smaller images, running those chips through the model, and finally merging the chips back together; 2) computing a class-weighted mode across all model predictions for a given Sentinel-2 image footprint; and 3) combining the class-weighted modes produced in #2 for a given Military Grid Reference System (MGRS) zone into one COG. IO relied heavily on Batch's task dependency capabilities, which allowed, for example, the class-weighted mode task (#2) to be scheduled for execution only when the relevant set of model deployment tasks (#1) had completed successfully.
While the model was trained on a GPU-enabled VM, the model deployment over the image chips was executed on CPU-based virtual machines, enabling resource-efficient computation at scale. Due to the task-dependent nature of the pipeline, all tasks needed to be run on the same pool, and thus the same VM type. RAM and network bandwidth requirements fluctuated across tasks, but high CPU usage ended up being the defining factor in VM choice. In the end, the data was processed on low-priority Standard Azure D4 v2 virtual machines powered by Intel® Xeon® scalable processors, with seven task slots allocated per node.
It took over one million core hours to process the data for the entire LULC map. With the scaling flexibility of Batch, IO was able to process over 10% of the Earth's surface a day. The completed Esri 2020 Land Cover map is now freely available on Esri Living Atlas and the Microsoft Planetary Computer.
For additional information visit https://www.impactobservatory.com/
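For the STAC side, a hedged R sketch of a comparable search against the Planetary Computer's catalog using the rstac package; the collection name, bounding box, and date range are illustrative.

```r
library(rstac)

# Query the Planetary Computer STAC API for Sentinel-2 L2A scenes
catalog <- stac("https://planetarycomputer.microsoft.com/api/stac/v1")

items <- catalog |>
  stac_search(
    collections = "sentinel-2-l2a",
    bbox = c(16.1, 48.1, 16.6, 48.3),        # lon/lat bounding box
    datetime = "2020-06-01/2020-08-31",
    limit = 50
  ) |>
  get_request() |>
  items_sign(sign_fn = sign_planetary_computer())  # signed asset URLs for download
```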
·techcommunity.microsoft.com·
Storing and querying your geospatial data in Azure
While Azure Maps is known for great use cases around visualizing and interacting with a map and location data, you probably also need secure and reliable storage for that data that offers the flexibility to query your (location) data. In this blog post, we explore the different options for storing and querying geospatial data in Azure, including Azure Cosmos DB, Azure SQL Database, and Azure Blob Storage. Storing and querying geospatial data in Azure is a powerful and flexible way to manage and analyze large sets of geographic information.
Azure Cosmos DB is a globally distributed, multi-model database that supports document, key-value, graph, and column-family data models. One of the key features of Cosmos DB is its support for geospatial data, which allows you to store and query data in the form of points, lines, and polygons. Cosmos DB also supports spatial indexing and advanced querying capabilities, making it a great choice for applications that require real-time, low-latency access to geospatial data. Example query:
SELECT f.id FROM Families f WHERE ST_DISTANCE(f.location, {"type": "Point", "coordinates":[31.9, -4.8]}) < 30000
Read more about geospatial and GeoJSON location data in Azure Cosmos DB in the Azure documentation.
Another option for storing and querying geospatial data in Azure is Azure SQL Database. SQL Database is a fully managed, relational database service that supports the spatial data types and functions of SQL Server. This allows you to store and query geospatial data using standard SQL syntax, and it also includes spatial indexing and querying capabilities. SQL Database is a good choice for applications that require a traditional relational database model and support for SQL-based querying. Read more about spatial data in Azure SQL Database in the Azure documentation.
Finally, Azure Blob Storage can be used to store and query large amounts of unstructured data, including geospatial data. Blob Storage allows you to store data in the form of blobs, which can be accessed via a URL. This makes it a great option for storing large files, such as satellite imagery or shapefiles. While Blob Storage does not include built-in support for spatial querying, it can be used in conjunction with other Azure services, such as Azure Data Lake Storage or Azure Databricks, to perform spatial analysis on the data. This sample uses satellite imagery stored in Azure Blob Storage: https://samples.azuremaps.com/?sample=tile-layer-options
Lastly, to see a sample that pulls Azure Maps and Azure databases together, see the Microsoft Learn topic Geospatial data processing and analytics - Azure Example Scenarios, which discusses:
Azure Database for PostgreSQL: a fully managed relational database service based on the community edition of the open-source PostgreSQL database engine.
PostGIS: an extension for the PostgreSQL database that integrates with GIS servers. PostGIS can run SQL location queries that involve geographic objects.
In conclusion, Azure offers a variety of options for storing and querying geospatial data, including Azure Cosmos DB, Azure SQL Database, and Azure Blob Storage. Each of these services has its own set of features and capabilities, and choosing the right one will depend on the specific needs of your application. Whether you need low-latency access to real-time data, support for traditional SQL-based querying, or the ability to store and analyze large amounts of unstructured data, Azure has the tools you need to get the job done.
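To round out the PostGIS mention, here is a hedged sketch of running a spatial query against an Azure Database for PostgreSQL server (with the PostGIS extension enabled) from R; the connection details, table, and column names are all placeholders.

```r
library(DBI)
library(RPostgres)
library(sf)

# Placeholder connection details for an Azure Database for PostgreSQL server
con <- dbConnect(
  Postgres(),
  host     = "myserver.postgres.database.azure.com",
  dbname   = "gis",
  user     = "gisuser",
  password = Sys.getenv("PGPASSWORD")
)

# Let PostGIS do the spatial filtering; sf returns the result as an sf data frame
parcels_near <- st_read(con, query = "
  SELECT id, geom
  FROM parcels
  WHERE ST_DWithin(
    geom::geography,
    ST_SetSRID(ST_MakePoint(31.9, -4.8), 4326)::geography,
    30000
  )")

dbDisconnect(con)
```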
·techcommunity.microsoft.com·
Data Table Back-End for dplyr
Provides a data.table backend for dplyr. The goal of dtplyr is to allow you to write dplyr code that is automatically translated to the equivalent, but usually much faster, data.table code.
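A minimal sketch of the translation in action, using nothing beyond standard dtplyr usage and R's built-in mtcars data set.

```r
library(dtplyr)
library(dplyr)

# Wrap a data frame so subsequent dplyr verbs are translated to data.table
cars_dt <- lazy_dt(mtcars)

result <- cars_dt |>
  group_by(cyl) |>
  summarise(avg_mpg = mean(mpg), n = n())

show_query(result)   # prints the generated data.table code
as_tibble(result)    # forces evaluation and returns an ordinary tibble
```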
·dtplyr.tidyverse.org·