The AIOTI WG Standardisation Focus Group on Semantic Interoperability has prepared a report on Data-to-Ontology Mapping. A key challenge people face when using ontologies is […]
Uncovering Financial Crime with DuckDB and Graph Queries
You can process graphs in DuckDB! In this post, we show how to use DuckDB and the DuckPGQ community extension to analyze financial data for fraudulent patterns with the SQL/PGQ graph syntax that's part of SQL:2023.
The summer has been quite busy, and we are thrilled to announce the release of Gephi Lite v1.0! This marks the first version of Gephi Lite we are really proud of.
FalkorDB/QueryWeaver: An open-source Text2SQL tool that transforms natural language into SQL using graph-powered schema understanding. Ask your database questions in plain English; QueryWeaver handles the weaving.
QLever's distinguishing features · ad-freiburg/qlever Wiki · GitHub
Graph database implementing the RDF and SPARQL standards. Very fast and scales to hundreds of billions of triples on a single commodity machine. - ad-freiburg/qlever
When we present QLever, people often ask "how is this possible?", as our speed and scale are on another level. We now have a wiki page that goes into a bit more detail on why and how this is possible. In short:
• Purpose built for large scale graph data, not retrofitted
• Indexing optimized for fast queries without full in-memory loading
• Designed in C++ for efficiency and low overhead
• Integrated full text and spatial search in the same engine
• Fast interactive queries even on hundreds of billions of triples
Qlever: graph database implementing the RDF and SPARQL standards. Very fast and scales to hundreds of billions of triples on a single commodity machine.
Sounds too good to be true; has anyone tested this out?
https://lnkd.in/esXKt79J #GraphDatabase #ontology #RDF
Labeled Meta Property Graphs (LMPG): A Property-Centric Approach to Graph Database Architecture
Discover how LMPG transforms graph databases by treating properties as first-class citizens rather than simple node attributes. This comprehensive technical guide explores RushDB's groundbreaking architecture that enables automatic schema evolution, property-first queries, and cross-domain analytics impossible in traditional property graphs or RDF systems.
Simplify graph embeddings ↙️↙️↙️
Developing a fast vector indexing datastore engine 🚂 at `arrowspace` led me to define a fast way of doing graph embeddings.
What I came up with is a process categorised as inductive graph embedding, i.e. inferring the embedding of an added node without retraining on the graph.
`arrowspace` works similarly to Laplacian Eigenmaps, with some relevant tweaks to achieve performance, as described in https://lnkd.in/eGgeKbdM
The method is a sequence of linear operations; compared to similar algorithms, it uses spectral properties instead of random walks to achieve faster training speed 🚄 How much faster will be the subject of a future blog post.
Practical comparison summary:
* Inductiveness: `arrowspace` (spectral operator on features) and GraphSAGE are inductive; DeepWalk/node2vec are typically transductive
* Online cost: `arrowspace`’s operator application is lightweight; GraphSAGE requires model inference; node2vec/DeepWalk usually require rerunning or approximations to add nodes
* Quality: Laplacian embeddings benchmark strongly against node2vec and are competitive with deep methods (VGAE) depending on graph properties and metrics, suggesting `arrowspace`’s embeddings will be solid baselines or better for community-structured retrieval tasks
* Integration: `arrowspace` emphasizes Rust/native vector indexing with spectral augmentation, complementing external training stacks rather than replacing them.
This simplifies this kind of process compared to deep-learning and random-walk approaches.
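For a concrete picture of the idea, here is a minimal numpy sketch of a Laplacian-Eigenmaps-style pipeline with a Nyström-style out-of-sample step for the inductive part. This is a generic illustration, not `arrowspace`'s actual implementation; the function names and the neighbor-averaging extension are assumptions.

```python
import numpy as np

def laplacian_eigenmaps(W, dim=2):
    """Embed the nodes of a graph with adjacency matrix W into `dim`
    dimensions using the smallest non-trivial eigenvectors of the
    symmetrically normalized Laplacian D^{-1/2} (D - W) D^{-1/2}."""
    D = np.diag(W.sum(axis=1))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
    L_sym = d_inv_sqrt @ (D - W) @ d_inv_sqrt
    vals, vecs = np.linalg.eigh(L_sym)          # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~0); map back via D^{-1/2}
    return d_inv_sqrt @ vecs[:, 1:dim + 1]

def embed_new_node(Y, weights):
    """Inductive out-of-sample step: embed an unseen node as the
    weight-normalized average of its neighbors' embeddings -
    a sequence of linear operations, no retraining required."""
    w = np.asarray(weights, dtype=float)
    return (w @ Y) / w.sum()

# Toy graph: a 4-node path 0-1-2-3
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
Y = laplacian_eigenmaps(W, dim=2)
# A new node attached to nodes 1 and 2 with unit edge weights
y_new = embed_new_node(Y, [0, 1, 1, 0])
```

The online cost for a new node is one sparse matrix-vector product, which is why this family of methods stays cheap compared with re-running random walks.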
Please follow for more updates.
#graphembeddings #graphs #embeddings #search #algorithm
Tree-KG: An Expandable Knowledge Graph Construction Framework for Knowledge-intensive Domains
Songjie Niu, Kaisen Yang, Rui Zhao, Yichao Liu, Zonglin Li, Hongning Wang, Wenguang Chen. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025.
Text2graph is an online service for transforming free text into knowledge-graph form (nodes and relationships). The graph can also be exported as Cypher or Gremlin statements for quick import into your favourite database.
A Survey on Temporal Knowledge Graph: Representation Learning and...
Knowledge graphs have garnered significant research attention and are widely used to enhance downstream applications. However, most current studies mainly focus on static knowledge graphs.
Time and space in the Unified Knowledge Graph environment
On Oct 2, 2025, Lyubo Blagoev published "Time and space in the Unified Knowledge Graph environment" (PDF on ResearchGate).
painter-network-exploration: Construction of a large painter network with ~3000 painters using the PainterPalette dataset, connecting painters if they lived at the same place for long enough time.
Cognee - AI Agents with LangGraph + cognee: Persistent Semantic Memory
Build AI agents with LangGraph and cognee: persistent semantic memory across sessions for cleaner context and higher accuracy. See the demo—get started now.
Full Steam Ahead! Fast-Tracking Your Graph Creation with Nodestream
Discover what's possible with Nodestream: a declarative framework for building, maintaining, and analyzing graph data, compatible with Neo4j & Amazon Neptune.
city2graph is a Python library that converts geospatial datasets into graphs (networks).
🚀 city2graph v0.1.6 is now live! 🚀
city2graph is a Python library that converts geospatial datasets into graphs (networks).
🔗 GitHub https://lnkd.in/gmu6bsKR
What's New:
🛣️ Metapaths for Heterogeneous Graphs - Generate node connections by a variety of relations (e.g. amenity → street → street → amenity)
🗺️ Contiguity Graph - Analyse spatial adjacency and neighborhood relationships with the new contiguity graph support
🔄 OD Matrix - Work seamlessly with OD matrices for migration and mobility flow analysis
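To make the metapath feature concrete, here is a generic stdlib-Python sketch of what "connect nodes by a chain of typed relations" means. This is an illustration of the idea only, not city2graph's actual API; the graph representation and function name are assumptions.

```python
# Toy heterogeneous graph: adjacency lists plus a node-type map.
adj = {"a1": ["s1"], "s1": ["a1", "s2"], "s2": ["s1", "a2"], "a2": ["s2"]}
kind = {"a1": "amenity", "a2": "amenity", "s1": "street", "s2": "street"}

def metapath_edges(adj, kind, path_kinds):
    """Yield (start, end) pairs linked by a walk whose node types follow
    path_kinds, e.g. amenity -> street -> street -> amenity."""
    def walk(node, remaining):
        if not remaining:
            yield node
            return
        for nbr in adj[node]:
            if kind[nbr] == remaining[0]:
                yield from walk(nbr, remaining[1:])
    for start in adj:
        if kind[start] == path_kinds[0]:
            for end in walk(start, path_kinds[1:]):
                yield start, end

edges = set(metapath_edges(adj, kind, ["amenity", "street", "street", "amenity"]))
# a1 and a2 become connected because the walk a1 - s1 - s2 - a2
# matches the amenity -> street -> street -> amenity metapath
```

The derived edges can then be analysed as an ordinary homogeneous graph, which is the usual payoff of metapath projection.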
You can now install the latest version via pip and conda.
For more examples, please see the documentation: https://city2graph.net/
As always, contributors are most welcome!
#UrbanAnalytics #GraphAnalysis #OpenSource #DataScience #GeoSpatial #NetworkScience #UrbanPlanning #Python #SpatialAnalysis
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
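As a rough sketch of the shape-to-form direction the paper describes: map each property constraint to one form field, with cardinality driving whether the field is required. This is a deliberately simplified illustration, not the paper's method; real SHACL is RDF (typically parsed with a library such as rdflib), while the dict-based shape below is a hypothetical stand-in.

```python
# Hypothetical, flattened stand-in for a SHACL NodeShape with two
# sh:property constraints (sh:path, sh:datatype, sh:minCount).
shape = {
    "targetClass": "ex:Person",
    "properties": [
        {"path": "ex:name", "datatype": "xsd:string", "minCount": 1},
        {"path": "ex:age", "datatype": "xsd:integer", "minCount": 0},
    ],
}

DATATYPE_TO_INPUT = {"xsd:string": "text", "xsd:integer": "number"}

def shape_to_form(shape):
    """Render one HTML form per node shape, one input per property shape.
    minCount >= 1 becomes a required field."""
    rows = [f'<form data-target-class="{shape["targetClass"]}">']
    for p in shape["properties"]:
        required = " required" if p.get("minCount", 0) >= 1 else ""
        input_type = DATATYPE_TO_INPUT.get(p["datatype"], "text")
        rows.append(f'  <label>{p["path"]} '
                    f'<input type="{input_type}" name="{p["path"]}"{required}></label>')
    rows.append("</form>")
    return "\n".join(rows)

html = shape_to_form(shape)
```

Submitted values would then be written back as triples of the target class, which is where the paper's process-modeling and OWL-consistency machinery takes over.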
Introducing GraphQA: An Agent for Asking Graphs Questions | Catio
GraphQA is Catio’s new open-source agent for natural-language questions over architecture graphs, fusing LLMs with graph algorithms to deliver fast, structure-aware answers for dependencies, flows, and system reasoning.
Ladybug: The Next Chapter for Embedded Graph Databases | LinkedIn
It's with deep gratitude for the amazing product the #KuzuDB team created, and a mix of necessity and excitement, that I announce the launch of Ladybug. This is a new open-source project and a community-driven fork of the popular embedded graph database.
Happy to add support for LadybugDB in G.V() - Graph Database Client & Visualization Tooling, picking up right where we left off with our KuzuDB integration.
How to achieve logical inference performantly on huge data volumes
Lots of people are talking about semantic layers. Okay, welcome to the party! The big question in our space is how to achieve logical inference performantly on huge data volumes, given the inherent combinatorial explosion that search algorithms (on which inference algorithms are based) have always confronted. After all, semantic layers are about offering inference services: the very services that Edgar Codd envisioned DBMSes eventually supporting in the first paper on the relational model.
So what are the leading approaches in terms of performance?
1. GPU Datalog
2. High-speed OWL reasoners like RDFox
3. Rete networks like Sparkling Logic's Rete-NT
4. High-speed FOL provers like Vampire
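To ground the discussion, here is a toy bottom-up Datalog evaluation of transitive closure in Python. It is the naive fixpoint (real engines use semi-naive evaluation, and GPU Datalog engines run these joins as massively parallel relational operations); the combinatorial growth of the join is exactly the performance problem the post is about.

```python
# path(X,Y) :- edge(X,Y).
# path(X,Z) :- path(X,Y), edge(Y,Z).
edges = {(1, 2), (2, 3), (3, 4)}

def transitive_closure(edges):
    """Naive bottom-up fixpoint: keep joining path with edge
    until no new facts are derived."""
    path = set(edges)
    while True:
        new = {(x, z) for (x, y) in path
                      for (y2, z) in edges if y == y2} - path
        if not new:
            return path
        path |= new

paths = transitive_closure(edges)
# Derives (1,3), (2,4), then (1,4) on top of the three base edges
```

On a chain of n edges this materializes O(n^2) path facts, and with branching the joins explode much faster, which is why the engines above compete on join throughput rather than on expressiveness.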
Let's get down to brass tacks. RDFox posts some impressive benchmarks, but they aren't exactly obsoleting GPU Datalog, and I haven't seen any good data on RDFox vs Relational AI. If you have benchmarks on that, I'd love to see them. Rete-NT and RDFox are heavily proprietary, so understanding how the performance has been achieved is not really possible for the broader community beyond these vendors' consultants. And RDFox is now owned by Samsung, further complicating the picture.
That leaves us with the open-source GPU Datalogs and high-speed FOL provers. That's what's worth studying right now in semantic layers, not engaging in dogmatic debates between the relational model, the property graph model, RDF, and "name your emerging data model." Performance has ALWAYS been the name of the game in automated theorem proving. We still struggle to handle inference on large datasets. We need to quit focusing on non-issues and work to streamline existing high-speed inference methods for business usage. GPU Datalog on CUDA seems promising. I imagine the future will bring further optimizations.