02-AREAS

Farm
Investing in farmland regeneration.
·farm.vc·
Topographic Map Colors - Coolors
Get inspired by these beautiful topographic map color schemes and make something cool!
·coolors.co·
Maps Color Palettes - Coolors
Get inspired by thousands of beautiful color schemes and make something cool!
·coolors.co·
OpenFreeMap
OpenFreeMap – Open-Source Map Hosting lets you display custom maps on your website and apps for free.
·openfreemap.org·
Home
Pixi Documentation — Next-gen package manager for reproducible development setups
·pixi.prefix.dev·
ArcGIS Hub
Discover, analyze and download data from ArcGIS Hub. Download in CSV, KML, Zip, GeoJSON, GeoTIFF or PNG. Find API links for GeoServices, WMS, and WFS. Analyze with charts and thematic maps. Take the next step and create StoryMaps and Web Maps.
·atlas.eia.gov·
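Hubs like this sit in front of standard download and API endpoints, so pulling a dataset into R is usually one call. A hedged sketch using sf's standard reader; the URL below is a placeholder for illustration, not a real atlas.eia.gov endpoint.

```r
# A hedged sketch: reading an ArcGIS Hub dataset into R through its advertised
# GeoJSON download. The URL is a placeholder, not a real endpoint.
library(sf)

url <- "https://example-hub.arcgis.com/datasets/power-plants.geojson"  # hypothetical
plants <- st_read(url)            # sf reads GeoJSON (and WFS) through GDAL
plot(st_geometry(plants))         # quick visual sanity check
```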
MassGIS Data Layers
Each digital dataset name below links to a complete data layer description. On each page you will find metadata and links to free data downloads.
·mass.gov·
Landscape Visualizations in R and Unity
Functions for the retrieval, manipulation, and visualization of geospatial data, with an aim towards producing 3D landscape visualizations in the Unity 3D rendering engine. Functions are also provided for retrieving elevation data and base map tiles from the USGS National Map (https://apps.nationalmap.gov/services/).
·docs.ropensci.org·
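This description matches rOpenSci's terrainr package; under that assumption, a minimal sketch of fetching National Map tiles for an area of interest (get_tiles() and its services argument follow terrainr's docs; the coordinates are arbitrary).

```r
# A minimal sketch, assuming this entry describes rOpenSci's terrainr package.
library(sf)
library(terrainr)

# Two arbitrary corner points whose bounding box defines the area of interest.
aoi <- st_as_sf(
  data.frame(x = c(-71.35, -71.25), y = c(42.25, 42.35)),
  coords = c("x", "y"), crs = 4326
)

# Fetch elevation and orthoimagery tiles from the USGS National Map.
tiles <- get_tiles(aoi, services = c("elevation", "ortho"))
```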
Documenting functions
The basics of roxygen2 tags and how to use them for documenting functions.
Instead of including examples directly in the documentation, you can put them in separate files and use @example path/relative/to/package/root to insert them into the documentation.
·roxygen2.r-lib.org·
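A minimal sketch of the tag described above: @example (singular) pulls the example in from an external file, in contrast to an inline @examples section. The function and the file path are hypothetical; the path is interpreted relative to the package root.

```r
# R/add.R -- a function documented with roxygen2. The @example tag (singular)
# pulls in an external file instead of an inline @examples section.
# man/examples/add.R is a hypothetical path, relative to the package root.

#' Add two numbers
#'
#' @param x,y Numbers to add.
#' @return The sum of `x` and `y`.
#' @example man/examples/add.R
#' @export
add <- function(x, y) {
  x + y
}
```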
Ports & Adapters Architecture
This post is part of The Software Architecture Chronicles, a series of posts about Software Architecture. In them, I write about what I’ve…
The traditional approach will likely bring us problems on both the front end and the back end. On the front end we end up with business logic leaking into the UI (i.e., when we put use-case logic in a controller or view, making it unusable from other UI screens), or even the UI leaking into the business logic (i.e., when we add methods to our entities because of some logic we need in a template).
A port is a consumer agnostic entry and exit point to/from the application. In many languages, it will be an interface. For example, it can be an interface used to perform searches in a search engine. In our application, we will use this interface as an entry and/or exit point with no knowledge of the concrete implementation that will actually be injected where the interface is defined as a type hint.
An adapter is a class that transforms (adapts) one interface into another. For example, an adapter implements interface A and, when instantiated, receives an object implementing interface B through its constructor. The adapter is then injected wherever interface A is needed; it accepts method calls made against interface A and transforms and proxies them to the inner object that implements interface B.
The adapters on the left side, representing the UI, are called the Primary or Driving Adapters because they are the ones to start some action on the application, while the adapters on the right side, representing the connections to the backend tools, are called the Secondary or Driven Adapters because they always react to an action of a primary adapter.
·medium.com·
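R has no formal interfaces, but the pattern still translates. A minimal sketch in which the "port" is a documented function contract and a driven (secondary) adapter wraps a backend client; every name here is invented for illustration, not taken from the article.

```r
# Port: the application depends only on this contract, a function
# search(query) -> character vector of document ids. The caller of this
# use case (e.g., a controller) would be the driving (primary) adapter.
run_search_use_case <- function(query, search_port) {
  ids <- search_port(query)          # no knowledge of the concrete engine
  paste("Found:", paste(ids, collapse = ", "))
}

# Driven (secondary) adapter: adapts a hypothetical search-engine client
# to the port's contract.
make_search_adapter <- function(client) {
  function(query) client$query_documents(match = query)$ids
}

# A fake client standing in for the real backend tool.
fake_client <- list(
  query_documents = function(match) list(ids = c("doc-1", "doc-7"))
)

# Wiring at the composition root: inject the adapter where the port is needed.
run_search_use_case("farmland", make_search_adapter(fake_client))
#> [1] "Found: doc-1, doc-7"
```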
The Modern Geospatial Data Stack: Trends, Tools, and What They Mean for You - Matt Forrest
The geospatial technology landscape is changing fast. What used to be a world of shapefiles, desktop software, and siloed workflows is now becoming cloud-native, AI-driven, and analytics-focused. This shift isn’t just technical: it’s reshaping how geospatial professionals build, analyze, and share data. In this post, I’ll break down the key trends shaping the modern geospatial data stack, highlight the tools and platforms that are leading innovation, and explain what this means for practitioners, teams, and organizations.
File Formats and Catalogs: The Foundation of Cloud-Native Geospatial
Modern analytics workflows are no longer small, local projects—they’re massive, distributed, and data-heavy. That’s why cloud-native file formats and data catalogs are at the center of the stack.
Apache Iceberg and other table formats are becoming the backbone of large-scale geospatial data management.
Cloud-optimized formats (like GeoParquet and COGs) make spatial data portable, efficient, and accessible.
Specialized systems like Earthmover are also focusing on specific file types, in this case climate data.
👉 What this means for you: If you’re still relying on ad hoc file storage, you’re missing out on performance and scalability. Learning how to use catalogs like Iceberg lets you fully leverage file-level optimizations, versioning, and schema evolution, all critical for handling large and evolving geospatial datasets.
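As a small illustration of the cloud-optimized vector side, one R route to (Geo)Parquet-style files is the sfarrow package; the output path is hypothetical, and the demo dataset ships with sf.

```r
# Writing vector data to a columnar Parquet file from R via sfarrow,
# one route to the cloud-optimized formats mentioned above.
library(sf)
library(sfarrow)

nc <- st_read(system.file("shape/nc.shp", package = "sf"))  # demo data shipped with sf
st_write_parquet(nc, "nc.parquet")     # columnar, compressed, scan-friendly
nc2 <- st_read_parquet("nc.parquet")   # round-trip back into an sf data frame
```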
Data Processing: Beyond the Spatial Join
For years, the hallmark of a spatial database was the ability to run a point-in-polygon query. But in 2025, that capability has been commoditized. Most OLAP systems and modern databases can handle these joins at scale—even without compute layers optimized for geospatial.
The real differentiator now is advanced geospatial processing:
Zonal statistics for climate and land-use analysis
Mobility data pipelines for transportation and urban planning
Feature engineering for AI and machine learning workflows
Platforms like Wherobots and Coiled are focusing directly on these workloads, while Apache Spark has begun supporting vector data types. Traditional relational databases still play a role—especially as AI applications demand fast transactional access—but the future belongs to systems that optimize for large analytical queries across massive datasets.
👉 What this means for you: Stop thinking of “point-in-polygon” as the benchmark. Systems that can go deeper—into advanced feature generation and distributed geospatial computation—will define the next generation of spatial analytics.
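For reference, the commoditized baseline looks like this in R's sf: a point-in-polygon join, using only standard sf API on the demo dataset the package ships with.

```r
# The commoditized baseline described above: a point-in-polygon join with sf.
library(sf)

polys <- st_read(system.file("shape/nc.shp", package = "sf"))  # demo polygons
pts   <- st_sf(geometry = st_sample(polys, size = 100))        # 100 random points

# Attach each point's containing polygon attributes; st_within is the
# point-in-polygon predicate.
joined <- st_join(pts, polys, join = st_within)
head(joined)
```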
Transformation and Orchestration: Moving Beyond Simple Scripts
In the past, geospatial data pipelines often relied on one-off Python scripts. Today, that approach simply doesn’t scale.
Specialized spatial ELT tools like Seer AI and BigGeo are emerging to handle geospatial-specific transformations.
Orchestration platforms such as Apache Airflow and Astronomer are essential for managing dependencies, scheduling, and ensuring upstream data integrity.
👉 What this means for you: Don’t think of orchestration as overhead—it’s how you guarantee reliable and reproducible data pipelines. If your team is serious about analytics, orchestration is no longer optional.
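Airflow and Astronomer live on the Python side; as an R-flavored sketch of the same idea (declared dependencies, reruns only when inputs change), here is a minimal pipeline using the targets package. read_parcels() and summarise_by_town() are hypothetical user functions, and the data path is invented.

```r
# _targets.R: a minimal pipeline sketch using the {targets} package, an R
# analogue of the dependency-aware orchestration described above.
# read_parcels() and summarise_by_town() are hypothetical user functions.
library(targets)
tar_option_set(packages = c("sf", "dplyr"))

list(
  tar_target(raw_file, "data/parcels.gpkg", format = "file"),  # tracked input file
  tar_target(parcels, read_parcels(raw_file)),                 # reruns only if the file changes
  tar_target(summary, summarise_by_town(parcels))              # downstream dependency
)
```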
Analytical Tools: From Niche to End-to-End
The analytics ecosystem for geospatial continues to expand, giving users more choice than ever.
Specialized platforms: Foursquare, Dekart, Superset, Preset
End-to-end systems: CARTO and Fused, which combine geospatial with AI, data management, and visualization
👉 What this means for you: The decision is no longer “which GIS platform do I use?” Instead, it’s about picking the right tool for the specific stage of your workflow: sometimes a lightweight visualizer, sometimes a comprehensive enterprise solution.
GIS: The Rise of Web-Native Platforms
Web GIS is where most of the visible innovation is happening. Platforms like Felt and Atlas are reimagining the GIS experience: collaborative, browser-based, and designed for simplicity without losing power.
👉 What this means for you: Expect the center of gravity in GIS to continue shifting from desktop to the web. Professionals who adapt to these tools will be better positioned for collaborative, cloud-based work environments.
AI: A New Category of Geospatial Tools
One of the most exciting areas is the emergence of AI-native geospatial platforms. These tools are built with machine learning and agentic AI in mind from the start.
Vertical-focused AI: Aino (planning), Contour (cities)
GIS-focused AI: Bunting Labs, optimizing traditional GIS workloads with AI
Agentic AI for geospatial: Klarety and Monarcha, building agents as spatial tools
👉 What this means for you: AI isn’t just an add-on anymore; it’s a defining capability. Expect AI-powered agents and models to become critical in workflows from automated labeling to decision support.
Python Ecosystem: Expanding AI and Spatial ML
Python remains the glue of modern geospatial, and the ecosystem keeps growing:
TorchGeo has matured into an independent framework for spatial deep learning.
GeoAI from Dr. Qiusheng Wu provides new capabilities for applying ML to spatial data.
👉 What this means for you: If you’re serious about geospatial and AI, Python is unavoidable. The tools are expanding, and open source continues to lead the way.
Final Takeaway: Where the Modern Geospatial Stack Is Headed
The geospatial data stack is no longer about static maps or one-off analyses. It’s about:
Scalable architectures (Iceberg, GeoParquet, COGs)
Advanced processing (beyond spatial joins)
Reliable pipelines (orchestration + transformation)
AI-native design (feature engineering, agents, ML-ready workflows)
The modern stack is maturing into a foundation for spatial intelligence at scale. If you’re a GIS professional, data engineer, or analyst, now is the time to expand your toolkit, because the organizations that master this new stack will define the future of geospatial.
·forrest.nyc·
7 Geospatial Data Visualization – Geospatial Data Science with R
Emphasizing clarity, accuracy, and purpose-driven design, the guide covers how to create compelling static and interactive visualizations tailored to diverse analytical and communicative objectives. Throughout, we will adhere to best practices in cartographic design to ensure that each visualization conveys its intended message clearly and truthfully.
Successful geospatial visualization demands attention to several core principles that ensure the resulting map or graphic is both informative and comprehensible. Visualizations must balance aesthetic considerations with functional accuracy to effectively convey spatial information.
Clarity and Simplicity
Clarity in geospatial visualization means the core message of a map is immediately apparent to viewers. Overloading a map with too much information or too many design elements can confuse rather than enlighten. Designers should strive for minimalism: include only the critical spatial features and data relevant to the map’s purpose. An expert data visualization principle is to “remove any elements that do not contribute to understanding the data”, such as excessive labels, grid lines, or decorative clutter. By simplifying the visual display, we direct the audience’s attention to the most important patterns or locations. Key guidelines for simplicity include:
Limit the number of thematic layers shown on a single map.
Use intuitive symbols and clear labeling for features.
Avoid excessive annotation or unnecessary visual noise.
In practice, a clean design with ample white space and straightforward symbology will enhance comprehension and engagement. The goal is a map that communicates rather than overwhelms.
Appropriate Color Schemes
Choose color palettes that are perceptually uniform and appropriate for the data. Avoid arbitrary or excessively bright colors that could mislead or cause eye strain. Instead, use scientifically derived colormaps like viridis or plasma, which are designed to be uniform and colorblind-friendly. For example, a sequential palette (light to dark in one hue) makes sense for a unipolar data range (e.g., population density), whereas a diverging palette (two hues fading to a neutral midpoint) is best for data with a meaningful center (e.g., above/below-average comparisons). Ensure that the colors have sufficient contrast against each other and the map background for readability; legend text and boundary lines should also be clearly visible. (A poorly chosen palette or low-contrast colors can render a map useless to colorblind viewers, or to anyone viewing it in print rather than on screen.) Maintain consistency in color use across multiple maps; if one map shows the intensity of something in red, another map in the report should ideally use red for that same intensity concept.
Clear Legends and Annotations
Provide explanatory legends, titles, and annotations to guide the viewer. A well-crafted legend is crucial for interpreting a map’s symbology; it should be clear, concise, and not overly cluttered. Viewers should not struggle to match colors or symbols to their meaning. As a cartography guide points out, a confusing legend can undermine your visualization, whereas a clear legend “guides [readers] seamlessly through your data story”. Tips include: place the legend in an unobtrusive but noticeable location (corners often work well), use intuitive labels (e.g., “Population (millions)” rather than “Pop_val”), and avoid too many categories. Additionally, titles and subtitles are important: they frame what the map is about. A good title states the what/where/when (e.g., “Population Distribution by Region, 2020”) so the audience immediately knows the context. Annotations like callout labels or arrows can highlight key insights (e.g., “Area X has the highest value”). However, avoid excessive text on the map itself; maintain a balance so the map doesn’t become a textbook diagram unless that’s intended. The goal is to inform without overwhelming.
·warin.ca·
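A small sketch applying several of the principles above at once: a sequential, colorblind-friendly viridis palette, a what/where/when title, and a legend with an intuitive label. This uses only standard sf and ggplot2 API on sf's demo dataset.

```r
# Choropleth applying the guide's principles: perceptually uniform palette,
# descriptive title, readable legend label, minimal decoration.
library(sf)
library(ggplot2)

nc <- st_read(system.file("shape/nc.shp", package = "sf"))  # demo dataset from sf

ggplot(nc) +
  geom_sf(aes(fill = BIR74), color = "white", linewidth = 0.2) +
  scale_fill_viridis_c(name = "Births, 1974") +               # intuitive legend label
  labs(title = "Births by County, North Carolina, 1974") +    # what/where/when
  theme_minimal()                                             # no decorative clutter
```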
Exploring geometa: An R Package for Managing Geographic Metadata – R Consortium
geometa provides an essential object-oriented data model in R, enabling users to efficiently manage geographic metadata. The package facilitates handling of ISO and OGC standard geographic metadata and their dissemination on the web, ensuring that spatial data and maps are available in an open, internationally recognized format.
·r-consortium.org·
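A hedged sketch of building and serializing a metadata record with geometa's R6-style classes; the method names follow the package documentation, but treat the exact forms as assumptions, and the identifier is invented.

```r
# A hedged sketch of geometa's R6-style API for an ISO 19115 metadata record;
# method names follow the package docs but are assumptions here.
library(geometa)

md <- ISOMetadata$new()
md$setFileIdentifier("my-dataset-id")   # hypothetical identifier
md$setLanguage("eng")
md$setCharacterSet("utf8")

xml <- md$encode()   # serialize to ISO/OGC XML for dissemination on the web
```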
Data Table Back-End for dplyr
Provides a data.table backend for dplyr. The goal of dtplyr is to allow you to write dplyr code that is automatically translated to the equivalent, but usually much faster, data.table code.
·dtplyr.tidyverse.org·
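A minimal sketch of that translation: write ordinary dplyr verbs against a lazy_dt() wrapper, inspect the data.table code dtplyr generates, then force computation.

```r
# Write dplyr verbs, let dtplyr translate them to data.table code.
library(dplyr)
library(dtplyr)

dt <- lazy_dt(mtcars)            # wrap a data frame as a lazy data.table

result <- dt |>
  filter(cyl == 4) |>
  group_by(gear) |>
  summarise(mpg = mean(mpg))

show_query(result)               # inspect the generated data.table call
as_tibble(result)                # force computation, return a tibble
```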
Wrappers for GDAL Utilities Executables
R’s sf package ships with self-contained GDAL executables, including a bare-bones interface to several GDAL-related utility programs collectively known as the GDAL utilities. For each of those utilities, this package provides an R wrapper whose formal arguments closely mirror those of the GDAL command-line interface. The utilities operate on data stored in files and typically write their output to other files. Therefore, to process data stored in any of R’s more common spatial formats (i.e., those supported by the sf and terra packages), first write the data to disk, then process the files with the package’s wrapper functions before reading the output back into R. GDAL function arguments introduced in GDAL version 3.5.2 or earlier are supported.
·joshobrien.github.io·
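A minimal sketch of that file-in, file-out workflow using the gdal_translate wrapper. Per the package's design the argument names mirror the GDAL CLI flags (-of, -outsize), though the exact forms used here are assumptions; the output path is hypothetical, and the demo raster ships with terra.

```r
# File-in, file-out: downsample a demo GeoTIFF with the gdal_translate wrapper,
# then read the result back into R. Argument forms mirror the GDAL CLI flags
# and are assumptions here; the output path is hypothetical.
library(gdalUtilities)

src <- system.file("ex/elev.tif", package = "terra")   # demo raster from terra
gdal_translate(src, "elev_small.tif",
               of = "GTiff", outsize = c("50%", "50%"))

r <- terra::rast("elev_small.tif")                     # back into R for analysis
```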