No Clocks

3463 bookmarks
tipg
Simple and Fast Geospatial OGC Features and Tiles API for PostGIS.
·developmentseed.org·
Yes, Postgres can do session vars - but should you use them?
Prompted by some comments and complaints about Postgres’ missing user-variables story on a Reddit post about real-world PostgreSQL pain points, I thought I’d elaborate a bit on session vars - a little-known piece of Postgres functionality, even though this “alley” has existed for ages...
The obvious and better-known SQL way to keep some transient state is via temp tables! They give some nice data type guarantees, performance, and editor happiness, to name a few benefits. But don’t use them for high-frequency use cases! A few temp tables per second might already be too much, and a disaster might be waiting to happen… because CREATE TEMP TABLE actually writes into the system catalogs behind the scenes, which might not be directly obvious. In cases of violent misuse - think frequent, short-lived temp tables with a lot of columns, plus an unoptimized and overloaded autovacuum together with long-running queries - this can lead to extreme catalog bloat (mostly on pg_attribute) and unnecessary IO on every session start / relcache fill / query planning. It’s also hard to recover from without some full locking, so for critical high-velocity DBs it might be a good idea to revoke temp table privileges altogether - for app / mortal users at least (it’s not possible for superusers).
The 2nd most obvious way to keep some DB-side session state around would probably be to use normal, persistent tables, right? Already better than temp tables, as there’s no danger of bloating the system catalog, right? NO. Pushing transient data through WAL (including to replicas and backup systems) is pretty bad and pointless, and only to be recommended for tiny use cases. In the Postgres world, exactly for these kinds of transient use cases, special UNLOGGED tables should be used! These can relieve the IO pressure on the system / whole cluster considerably. One of course just needs to account for their semi-persistent nature - and the fact that they won’t be private anymore. That means using RLS in case of secret data, or just using sufficiently random keys to avoid collisions.
·kmoppel.github.io·
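For reference, a minimal sketch of the options the post discusses, in Python with psycopg2 against a throwaway local database (the connection string and the myapp.* keys are placeholders):

```python
# Sketch of the session-state options discussed above, using psycopg2.
# Connection details are placeholders; adjust for your environment.
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")
with conn, conn.cursor() as cur:
    # Session vars: any "namespaced.key" name works; is_local=false keeps
    # the value for the whole session, true scopes it to the transaction.
    cur.execute("SELECT set_config('myapp.current_user_id', '42', false)")
    cur.execute("SELECT current_setting('myapp.current_user_id', true)")
    print(cur.fetchone()[0])  # -> '42' (settings are always text)

    # Unlogged table: skips WAL, so it's cheap for transient data, but
    # not crash-safe, not replicated, and visible to other sessions.
    cur.execute("""
        CREATE UNLOGGED TABLE IF NOT EXISTS session_scratch (
            session_key text,
            payload     jsonb
        )
    """)
conn.close()
```

set_config() / current_setting() are the built-in session-var mechanism the post refers to; the UNLOGGED table trades crash-safety for far less WAL and catalog churn than per-session temp tables.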
How should I handle and persist external API responses in my database?
I’m working with an external service that has two main resources: Products and Deals. When I create these items through their API, I get back JSON responses that contain lots of data. Right now I only need certain fields from these responses, but I’m worried that later on I might need other parts of the data that I’m not currently storing. So I’m thinking about saving the complete JSON response somewhere just in case. What’s the recommended approach for handling this situation? I need advice o...
·community.latenode.com·
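One common answer to the question above is to store the full response in a jsonb column and promote only the fields you currently query into real columns. A rough sketch, assuming Postgres with psycopg2; the table layout and field names are made up:

```python
# Hypothetical "keep the raw JSON, promote what you need" pattern.
import json
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS products (
    id          bigserial PRIMARY KEY,
    external_id text UNIQUE NOT NULL,  -- fields you query on today...
    name        text,
    raw         jsonb NOT NULL         -- ...plus the full response, just in case
)
"""

def save_product(conn, api_response: dict) -> None:
    with conn, conn.cursor() as cur:
        cur.execute(DDL)
        cur.execute(
            """
            INSERT INTO products (external_id, name, raw)
            VALUES (%s, %s, %s)
            ON CONFLICT (external_id) DO UPDATE
                SET name = EXCLUDED.name, raw = EXCLUDED.raw
            """,
            (api_response["id"], api_response["name"], json.dumps(api_response)),
        )
```

Fields you need later can be read ad hoc with raw->>'some_field' and backfilled into proper columns once they prove useful.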
TypR
·we-data-ch.github.io·
Igloo
Discover how the sitrep() pattern simplifies R package maintenance and surfaces configuration errors in one go.
Stop playing “diagnostics ping-pong” with your users. This post explores why the _sitrep() (situation report) pattern — popularized by the usethis package — is a game-changer for R packages wrapping APIs or external software. Learn how to build structured validation functions that power both early error-handling and comprehensive system reports, featuring real-world implementation examples from the meetupr and freesurfer packages.
·drmowinckels.io·
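The post is about R, but the pattern itself is portable. A loose Python analog, with entirely hypothetical checks, showing the core idea: each check returns a structured result that can power both early validation and a one-shot report:

```python
# Rough Python analog of the _sitrep() idea: each check returns a
# structured result, so the same checks can drive early error-handling
# and a comprehensive "situation report".
from dataclasses import dataclass
import os
import shutil

@dataclass
class Check:
    name: str
    ok: bool
    advice: str = ""

def check_api_key() -> Check:
    ok = bool(os.environ.get("MYAPI_KEY"))  # hypothetical env var
    return Check("API key set", ok, "Set MYAPI_KEY in your environment.")

def check_binary() -> Check:
    ok = shutil.which("mytool") is not None  # hypothetical external tool
    return Check("mytool on PATH", ok, "Install mytool and re-run.")

def sitrep() -> None:
    """Print every check at once instead of failing one at a time."""
    for c in (check_api_key(), check_binary()):
        mark = "ok " if c.ok else "FAIL"
        print(f"[{mark}] {c.name}" + ("" if c.ok else f" -> {c.advice}"))
```

This is what ends the "diagnostics ping-pong": one sitrep() call surfaces every configuration problem in a single pass.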
Geospatial API Fundamentals
Geospatial APIs enable seamless access to spatial data, powering mapping, analysis, and urban analytics with standardized operations and protocols.
Geospatial APIs are software abstraction layers that provide standardized methods to query, analyze, and visualize spatial data from diverse sources. They support essential functionalities including 2D/3D map rendering, geocoding, coordinate transforms, and real-time sensor data integration. Modern designs employ RESTful architectures and OGC standards to enhance interoperability, performance, and scalability across geospatial applications.
A geospatial Application Programming Interface (API) is a software abstraction layer—typically a web service or client library—that exposes standardized operations for querying, rendering, analyzing, and modeling spatial data, including vector features, raster coverages, multi-dimensional sensor observations, and geospatial attributes. Geospatial APIs are foundational for scientific computing, urban analytics, planetary research, public health surveillance, and geospatial-AI workflows, enabling programmable access to distributed spatial resources and seamless integration across data repositories, sensor infrastructures, visualization platforms, and analytic pipelines.
·emergentmind.com·
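To make the "standardized operations" concrete, here is what a minimal OGC API - Features request looks like (the same interface tipg, bookmarked above, serves). The base URL and collection id are placeholders; assumes the requests package:

```python
# Minimal OGC API - Features query; the base URL is a placeholder.
import requests

BASE = "https://example.com/ogcapi"  # hypothetical OGC Features server

# Discover collections, then fetch features inside a bounding box.
collections = requests.get(f"{BASE}/collections", timeout=30).json()
print(len(collections["collections"]), "collections available")

resp = requests.get(
    f"{BASE}/collections/public.parks/items",  # hypothetical collection id
    params={"bbox": "-122.5,37.7,-122.3,37.9", "limit": 100},
    timeout=30,
)
feature_collection = resp.json()  # a GeoJSON FeatureCollection
print(len(feature_collection["features"]), "features returned")
```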
Adopting semantic types - Taxi
Learn how Taxi uses semantic typing to describe the meaning of data, not just its structure
Types are meant to be shared across systems, while models are system-specific. Your project structure should reflect this separation.
A well-implemented Taxi ecosystem has clear separation between shared semantics and system-specific implementations.
A mature implementation typically includes:

Shared Taxonomy
- A collection of semantic types
- Broadly shared across the organization
- Version controlled and carefully governed
- Published as a reusable package

Service Implementations
- Models and service definitions using types from the taxonomy
- System-specific structures
- Published to a TaxiQL server (like Orbital)
- Each service depends on the shared taxonomy

Data Consumers
- Import the shared taxonomy only
- Don’t depend on service-specific models
- Query data using TaxiQL
- Receive data mapped to their needs

Best Practices

Type Development
- Focus on business concepts
- Keep types focused and single-purpose
- Document type meanings clearly
- Version types carefully

Model Development
- Use semantic types for fields
- Keep models service-specific
- Don’t share models between services

Service Integration
- Publish service contracts to the TaxiQL server
- Use semantic types in operation signatures
- Let TaxiQL handle data mapping

Measuring Success

Your implementation is successful when:
- Services can evolve independently
- Data integration requires minimal code
- New consumers can easily discover and use data
- Changes to one service don’t cascade to others
- Semantic meaning is preserved across systems
·taxilang.org·
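Taxi's own syntax isn't reproduced here; as a loose illustration only, a Python sketch of the separation the docs describe - shared semantic types versus system-specific structural models - using typing.NewType:

```python
# Loose analog of semantic types: the *meaning* (CustomerEmail) is shared,
# while each system keeps its own structural model around it.
from dataclasses import dataclass
from typing import NewType

# Shared taxonomy: semantic types carry meaning, not structure.
CustomerId = NewType("CustomerId", str)
CustomerEmail = NewType("CustomerEmail", str)

# System-specific models reuse the shared semantics.
@dataclass
class CrmContact:            # shape owned by the CRM service
    id: CustomerId
    primary_email: CustomerEmail

@dataclass
class BillingAccount:        # different shape, same semantics
    customer: CustomerId
    invoice_email: CustomerEmail
```

Consumers that only depend on the shared types (CustomerId, CustomerEmail) are insulated from either service renaming or restructuring its fields, which is the decoupling the "Measuring Success" list above describes.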
Welcome to Taxi 👋 - Taxi
Connect all your APIs & data sources dynamically, without writing integration code
·taxilang.org·
Client Interface for openEO Servers
Access data and processing functionalities of openEO compliant back-ends in R.
·open-eo.github.io·
openEO
openEO develops an open API to connect various clients to big EO cloud back-ends in a simple and unified way.
·openeo.org·
Marshalling | Deep Notes
Marshalling is the process of transforming the memory representation of an object into a data format suitable for storage or transmission, and it is typically used when data must be moved between different parts of a computer program or from one program to another. Marshalling is similar to serialization and is used to communicate with remote objects, in this case via a serialized object. It simplifies complex communication by using composite objects to communicate instead of primitives. The inverse of marshalling is called unmarshalling (or demarshalling, similar to deserialization).
·deepaksood619.github.io·
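A tiny illustration in Python, using json as the wire format (pickle would be the binary analog):

```python
# Marshalling: in-memory object -> transmissible bytes; unmarshalling
# reverses it. JSON shown here; pickle is the binary equivalent.
import json
from dataclasses import dataclass, asdict

@dataclass
class Point:
    x: float
    y: float

wire = json.dumps(asdict(Point(1.0, 2.0))).encode("utf-8")  # marshal
obj = Point(**json.loads(wire))                             # unmarshal
assert obj == Point(1.0, 2.0)
```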
Complete GIS Data Format Guide - WKT, WKB, GeoJSON Explained
Comprehensive guide to GIS data formats including WKT, WKB, and GeoJSON. Learn conversion techniques, best practices, and practical applications for spatial data management.
·gis-tools.com·
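A quick sketch of round-tripping one geometry through the three formats, assuming the shapely package is installed:

```python
# One geometry expressed as WKT, WKB, and a GeoJSON-style dict.
from shapely.geometry import Point, mapping, shape
from shapely import wkt, wkb

pt = Point(-122.4, 37.8)

as_wkt = pt.wkt           # 'POINT (-122.4 37.8)' - human-readable text
as_wkb = pt.wkb           # compact binary (bytes), as stored by PostGIS
as_geojson = mapping(pt)  # {'type': 'Point', 'coordinates': (...)}

# And back again:
assert wkt.loads(as_wkt).equals(pt)
assert wkb.loads(as_wkb).equals(pt)
assert shape(as_geojson).equals(pt)
```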
OSM.PBF
The OSM.PBF (Protocolbuffer Binary Format) is a compressed binary format used to store OpenStreetMap (OSM) data.
·semanticgis.org·
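A minimal sketch of reading such a file in Python, assuming the pyosmium package and a downloaded extract (the file path and tag filter are placeholders):

```python
# Counting tagged nodes in an .osm.pbf extract with pyosmium.
import osmium

class CafeCounter(osmium.SimpleHandler):
    def __init__(self):
        super().__init__()
        self.cafes = 0

    def node(self, n):
        # Called once per node in the file; tags are key/value strings.
        if n.tags.get("amenity") == "cafe":
            self.cafes += 1

handler = CafeCounter()
handler.apply_file("extract.osm.pbf")  # path is a placeholder
print(handler.cafes, "cafes")
```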
Koop Core
Koop-core, as implemented in the main server.js file, is where all providers, output-services, caches, and authorization plugins are registered. It exposes an Express server which can be instructed to listen on port 80 or any port of your choosing. This doc walks through the steps for creating this file from scratch.
·koopjs.github.io·
Koop
Koop - an open source geospatial ETL engine
·koopjs.github.io·
12 Geospatial Data Integration – Geospatial Data Science with R
Clearly Define Integration Objectives: Before starting, articulate the goals of the integration. Are you creating a unified base map for a city, conducting a multi-factor suitability analysis, or building a predictive model? Clear objectives guide the selection of data sources and methods and provide a focus for resolving trade-offs (e.g., whether to prioritize resolution or coverage).
Rigorous Metadata Documentation: Maintain detailed metadata for each dataset and for the integrated product. This metadata should document data sources, collection dates, coordinate systems, processing steps, and known limitations or accuracy levels. Adhering to standards like ISO 19115 or FGDC metadata ensures interoperability and clarity. Good metadata allows others (and your future self) to understand the provenance and quality of the data, which is crucial for reproducibility and for assessing whether the integrated data is fit for a given purpose.
Conduct Robust Validation and Quality Control: After integration, validate the results both statistically and visually. This can include comparing integrated outputs against ground truth or withheld data, checking that attribute values fall in expected ranges, and mapping the data to visually inspect for obvious errors or misalignments. Any anomalies discovered should be investigated and, if possible, corrected. It’s also wise to test the integration process on a subset of data first. Thorough testing and validation help ensure that errors have not been introduced during integration and that the final dataset accurately represents reality. In practice, this might involve computing error metrics, performing consistency checks, or having domain experts review the integrated data.
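As a toy instance of that validation step - comparing integrated values against withheld ground truth with numpy (the numbers are invented):

```python
# Toy validation of an integrated layer against withheld ground truth.
import numpy as np

truth = np.array([10.2, 11.0, 9.8, 12.4])       # withheld observations
integrated = np.array([10.0, 11.3, 9.9, 12.0])  # values from the merge

rmse = np.sqrt(np.mean((integrated - truth) ** 2))   # error metric
bias = np.mean(integrated - truth)                   # systematic offset
in_range = np.all((integrated > 0) & (integrated < 100))  # range check

print(f"RMSE={rmse:.3f} bias={bias:.3f} plausible-range={in_range}")
```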
Planners routinely merge datasets like demographic information, infrastructure networks, land use maps, and environmental data to get a 360-degree view of cities and regions. Such integration aids in designing sustainable cities by, for example, optimizing transportation routes, analyzing the distribution of green spaces relative to population density, and assessing energy use patterns across neighborhoods. By seeing how various factors overlap spatially, planners can identify areas that need new facilities, predict growth hotspots, or evaluate the impacts of proposed developments in a holistic way. The result is more informed urban policies and designs that account for the interplay of social, economic, and environmental factors in space.
Cloud-Based Integration Platforms: The use of cloud computing is transforming how geospatial data is integrated and shared. Cloud-based GIS and data warehouses allow practitioners to store and process very large datasets collaboratively and on-demand. This enables real-time data integration where multiple users or automated systems can contribute and update spatial data through web services. Cloud platforms also provide scalable computing power for intensive tasks like massive raster mosaicking or big data spatial analytics. The result is faster integration workflows, the ability to handle “big geodata,” and improved accessibility (since datasets and tools can be accessed from anywhere). We are likely to see more organizations adopting cloud-native geospatial integration solutions, which also facilitate integration of streaming data (e.g., live sensor feeds) seamlessly.
In summary, geospatial data integration is moving towards more streamlined, scalable, and intelligent workflows. Cloud infrastructure provides the backbone for handling data at scale and in real time. The proliferation of IoT and big data is expanding the breadth of information that can be integrated, offering more detail and temporal depth to analyses. And advances in AI and machine learning promise to automate complex fusion tasks and improve the quality of integrated data. Together, these trends will continue to break down barriers between data silos and unlock deeper insights into the spatial processes that affect our world.
Because attributes may have different units or scales, normalization or scaling is performed to facilitate meaningful comparisons. Normalizing attributes puts them on a common scale (such as 0 to 1) or adjusts for differences like population per unit area, so that no single attribute unduly dominates due to unit magnitude.
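A minimal min-max scaling example in Python (values invented):

```python
# Min-max scaling of two attributes with different units onto [0, 1],
# so neither dominates a combined suitability score.
import numpy as np

def minmax(x: np.ndarray) -> np.ndarray:
    return (x - x.min()) / (x.max() - x.min())

pop_density = np.array([120.0, 850.0, 4300.0])          # people / km^2
median_income = np.array([38000.0, 52000.0, 91000.0])   # dollars

# Equal weights here; without scaling, income would swamp density.
score = 0.5 * minmax(pop_density) + 0.5 * minmax(median_income)
```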
Location (Coordinates) – The geographic positioning of data, typically defined through latitude and longitude (or other coordinate systems). Location information pinpoints where an observation or feature is on the Earth. Precise coordinates are crucial for mapping features and performing spatial queries (e.g., finding all hospitals within 10 km of a city center). Coordinates may be expressed in various reference systems, but most commonly in decimal degrees of latitude/longitude for global reference (e.g., WGS84, the standard used by GPS). Location data provides the spatial frame on which all other information is layered.
Attributes – Descriptive information linked to each geographic location, representing what is at that location. Attributes can be qualitative or quantitative data describing the feature or event at the given coordinates. Examples include the name, type, or function of a feature (e.g., a hospital’s name and capacity), environmental measurements (temperature, land-use category), or demographic indicators (population density, median income) associated with an area. Attribute data provide context to the location, allowing deeper analysis beyond mere position. For instance, points representing schools might carry attributes for student enrollment and school performance; a land parcel polygon might have attributes for land use type and ownership.
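A small sketch of combining the two components - coordinates and attributes - into one spatial layer, assuming geopandas and pandas are installed (the data is made up):

```python
# Locations (lon/lat) plus attributes joined into a single spatial layer.
import pandas as pd
import geopandas as gpd

df = pd.DataFrame({
    "name": ["General Hospital", "Eastside Clinic"],
    "capacity": [420, 75],          # attribute data
    "lon": [-122.41, -122.27],      # location data
    "lat": [37.77, 37.80],
})

gdf = gpd.GeoDataFrame(
    df.drop(columns=["lon", "lat"]),
    geometry=gpd.points_from_xy(df["lon"], df["lat"]),
    crs="EPSG:4326",  # WGS84, the GPS standard mentioned above
)
```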
·warin.ca·
SiteIntel™
Get lender-ready feasibility reports in 10 minutes for $795. Zoning, flood & utilities data from FEMA, ArcGIS, TxDOT. Replace $10K studies.
·siteintel.lovable.app·