Time and space in the Unified Knowledge Graph environment
On Oct 2, 2025, Lyubo Blagoev published “Time and space in the Unified Knowledge Graph environment” (PDF on ResearchGate).
Cognee - AI Agents with LangGraph + cognee: Persistent Semantic Memory
Build AI agents with LangGraph and cognee: persistent semantic memory across sessions for cleaner context and higher accuracy.
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
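As a rough illustration of the shape-to-form mapping the paper describes, here is a minimal Python sketch: a SHACL property shape reduced to plain dictionaries and turned into form-field descriptors. All names (`person_shape`, `form_fields`, the widget mapping) are illustrative assumptions, not the paper's actual implementation.

```python
# A toy sketch of the SHACL-to-form idea: each sh:property constraint
# (path, datatype, minCount) becomes one input field in a generated form.
# Names and data are illustrative, not the paper's model.
person_shape = {
    "targetClass": "ex:Person",
    "properties": [
        {"path": "ex:name", "datatype": "xsd:string", "minCount": 1},
        {"path": "ex:age", "datatype": "xsd:integer"},
    ],
}

def form_fields(shape):
    """Map each property constraint to an (input name, widget, required) triple."""
    widget = {"xsd:string": "text", "xsd:integer": "number"}
    return [
        (p["path"], widget.get(p["datatype"], "text"), p.get("minCount", 0) >= 1)
        for p in shape["properties"]
    ]

for name, kind, required in form_fields(person_shape):
    print(f"field={name} widget={kind} required={required}")
```

The same lookup logic generalizes: constraints like `sh:in` or `sh:maxCount` would map to select boxes or repeatable fields in the same way.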
How to achieve logical inference performantly on huge data volumes
Lots of people talking about semantic layers. Okay, welcome to the party! The big question in our space is how to achieve logical inference performantly on huge data volumes, given the inherent problems of combinatorial explosion that search algorithms (on which inference algorithms are based) have always confronted. After all, semantic layers are about offering inference services, the services that Edgar Codd envisioned DBMSes on the relational model eventually supporting in the very first paper on the relational model.
So what are the leading approaches in terms of performance?
1. GPU Datalog
2. High-speed OWL reasoners like RDFox
3. Rete networks like Sparkling Logic's Rete-NT
4. High-speed FOL provers like Vampire
Let's get down to brass tacks. RDFox posts some impressive benchmarks, but they aren't exactly obsoleting GPU Datalog, and I haven't seen any good data on RDFox vs RelationalAI. If you have benchmarks on that, I'd love to see them. Rete-NT and RDFox are heavily proprietary, so understanding how the performance has been achieved is not really possible for the broader community beyond these vendors' consultants. And RDFox is now owned by Samsung, further complicating the picture.
That leaves us with the open-source GPU Datalogs and high-speed FOL provers. That's what's worth studying right now in semantic layers, not engaging in dogmatic debates between relational model, property graph model, RDF, and "name your emerging data model." Performance has ALWAYS been the name of the game in automated theorem proving. We still struggle to handle inference on large datasets. We need to quit focusing on non-issues and work to streamline existing high-speed inference methods for business usage. GPU Datalog on CUDA seems promising. I imagine the future will bring further optimizations.
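For readers new to Datalog, the fixpoint evaluation that GPU Datalog engines parallelize can be sketched in a few lines of plain Python. This toy semi-naive transitive closure illustrates the principle only; it says nothing about how any particular engine is implemented.

```python
# Naive-to-semi-naive Datalog fixpoint for one recursive rule, in plain Python.
# reach(x, y) :- edge(x, y).
# reach(x, z) :- reach(x, y), edge(y, z).
def transitive_closure(edges):
    """Compute all reachable pairs; 'delta' holds only newly derived facts,
    so each round joins fresh facts against the base relation (semi-naive)."""
    reach = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta
                      for (y2, z) in edges if y == y2} - reach
        reach |= new
        delta = new
    return reach

facts = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(facts)))
# -> [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

GPU engines speed up exactly the join in the inner set comprehension, evaluating it as a massively parallel relational join per iteration.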
Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing
Knowledge graphs offer many advantages in the fields of materials science and manufacturing technology. But how can we explore knowledge graphs in a meaningful way?
The current article “Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing” shows what such a solution could look like: https://lnkd.in/ehiK5php.
Special thanks to Matthias Büschelberger as main author and many thanks to all co-authors Konstantinos Tsitseklis, Lukas Morand, Anastasios Zafeiropoulos, Yoav Nahshon, Symeon Papavassiliou, and Dirk Helm for the great collaboration as part of our wonderful DiMAT project.
Would you like to see such a chatbot acting on a knowledge graph in action? Take a look at the video.
#datamanagement #FAIRData #dataspace #ontology #knowledgegraph #AI #materials #sustainability #digitalisation #InsideMaterial
Fraunhofer IWM, National Technical University of Athens
Your agents NEED a semantic layer 🫵
Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems.
Why? Because semantic similarity doesn't capture relationships.
Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both.
These relationships are invisible to embeddings.
What semantic layers provide:
Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context.
Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins.
Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and debugging agent behavior.
Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with structured vocabulary, not statistical word associations.
The technical implementation pattern:
Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries.
Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources.
Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals.
Layer 4: Separation of first-party data from derived artifacts with clear tagging for safe regeneration.
The result: Agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins.
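The single-query traversal above can be mimicked with a toy in-memory graph. The data and the `traverse` helper are hypothetical, purely to show how chained typed edges replace five service calls with application-level joins.

```python
# Toy typed-edge graph: (node, predicate) -> target node. All data invented.
graph = {
    ("widget-9", "described_in"): "guide-3",
    ("guide-3", "improved_by"): "tip-17",
    ("tip-17", "authored_by"): "expert-ana",
}

def traverse(start, *predicates):
    """Follow a chain of typed edges in one pass; None if any hop is missing."""
    node = start
    for pred in predicates:
        node = graph.get((node, pred))
        if node is None:
            return None
    return node

# Product -> AssemblyGuide -> CommunityTip -> Expert, in one call:
print(traverse("widget-9", "described_in", "improved_by", "authored_by"))
# -> expert-ana
```

A graph database does the same thing with an indexed path query; the point is that the joins happen inside the store, not in application code.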
Model Context Protocol is emerging as the open standard for semantic tool modeling. Not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically.
The competitive moat isn't your model choice.
The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
Can LLMs Really Build Knowledge Graphs We Can Trust?
There’s a growing trend: “Let’s use LLMs to build knowledge graphs.”
It sounds like the perfect shortcut - take unstructured data, prompt an LLM, and get a ready-to-use graph.
But… are we sure those graphs are trustworthy?
Before that, let’s pause for a second:
💡 Why build knowledge graphs at all?
Because they solve one of AI’s biggest weaknesses - lack of structure and reasoning.
Graphs let us connect facts, entities, and relationships in a way that’s transparent, queryable, and explainable. They give context, memory, and logic - everything that raw text or embeddings alone can’t provide.
Yet, here’s the catch when using LLMs to build them:
🔹 Short context window - LLMs can only “see” a limited amount of data at once, losing consistency across larger corpora.
🔹 Hallucinations - when context runs out or ambiguity appears, models confidently invent facts or relations that never existed.
🔹 Lack of provenance - LLM outputs don’t preserve why or how a link was made. Without traceability, you can’t audit or explain your graph.
🔹 Temporal instability - the same prompt can yield different graphs tomorrow, because stochastic generation ≠ deterministic structure.
🔹 Scalability & cost - large-scale graph construction requires persistent context and reasoning, which LLMs weren’t designed for.
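A minimal sketch of what provenance tracking could look like, using a plain dataclass: every extracted triple carries a pointer back to the text and extractor that produced it. The field names and example values are illustrative assumptions, not any specific tool's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedTriple:
    subject: str
    predicate: str
    object: str
    source_doc: str   # document the fact was extracted from
    char_span: tuple  # offsets of the supporting passage
    extractor: str    # model/version that proposed the edge (illustrative)

t = ProvenancedTriple("MethodologyDoc-A", "referencedBy", "Complaint-B",
                      source_doc="tickets/2024-031.txt",
                      char_span=(120, 158),
                      extractor="extractor-v1")

# Auditing a link becomes a lookup, not a guess:
print(f"{t.subject} -[{t.predicate}]-> {t.object} "
      f"(from {t.source_doc} {t.char_span}, by {t.extractor})")
```

With this shape, re-running extraction after a model upgrade is a filter on `extractor`, and explaining a graph edge is a jump to `source_doc` at `char_span`.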
Building knowledge graphs isn’t just data extraction - it’s engineering meaning. It demands consistency, provenance, and explainability, not just text generation.
LLMs can assist in this process, but they shouldn’t be the architect.
The next step is finding a way to make graphs both trustworthy and instant - without compromising one for the other.
"GraphRAG chatter is louder than its footprint in production."
That line from Ben Lorica's piece on Gradient Flow stopped me in my tracks: https://lnkd.in/dmC-ykAu
I was reading it because of my deep interest in graph-based reasoning, and while the content is excellent, I was genuinely surprised by the assessment of GraphRAG adoption. The article suggests that a year after the initial buzz, GraphRAG remains mostly confined to graph vendors and specialists, with little traction in mainstream AI engineering teams.
Here's the thing: at GraphAware, we have GraphRAG running in production: our AskTheDocs conversational interface in Hume uses this approach to help customers query documentation, and the feedback has been consistently positive. It's not an experiment—it's a production feature our users rely on daily.
So I have a question for my network (yes, I know you're a bit biased—many of you are graph experts, after all 😊):
Where is GraphRAG actually working in production?
I'm not looking for POCs, experiments, or "we're exploring it." I want to hear about real, deployed systems serving actual users. Success stories. Production use cases. The implementations that are quietly delivering value while the tech commentary wonders if anyone is using this stuff.
If you have direct or indirect experience with GraphRAG in production, I'd love to hear from you:
- Drop a comment below
- Send me a DM
- Email me directly
I want to give these cases a voice and learn from what's actually working out there.
Who's building with GraphRAG beyond the buzz?
#GraphRAG #KnowledgeGraphs #AI #ProductionAI #RAG
Let's talk ontologies. They are all the rage.
I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business.
A triple is:
subject, predicate, object
I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and AI.
I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output but I'm not sure that is the best path here.
Who can help me understand how to move from whiteboarding to something more formal?
Where to actually store all these triples?
At what point does it become a 'knowledge graph'?
Are there tools or products that help with this?
Or is there a new language to learn to store it properly? (I think yes)
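To make the "where do triples live" question concrete: at its core, every triplestore is a set of (subject, predicate, object) tuples plus pattern matching, which a few lines of Python can sketch (example data invented). Real stores such as Jena, GraphDB, or rdflib layer SPARQL, persistence, and inference on top of exactly this idea.

```python
# The simplest possible triplestore: a set of (s, p, o) tuples with
# wildcard matching. Example data is invented for illustration.
triples = {
    ("Acme Ltd", "supplies", "WidgetCo"),
    ("WidgetCo", "locatedIn", "Leeds"),
    ("Acme Ltd", "locatedIn", "Leeds"),
}

def match(s=None, p=None, o=None):
    """Return every triple matching the bound terms (None = wildcard),
    i.e. the basic graph pattern at the heart of SPARQL."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# "Who is located in Leeds?"  (?x locatedIn Leeds)
print(sorted(match(p="locatedIn", o="Leeds")))
```

So yes, there is a new language to learn, but it is small: Turtle for writing triples down and SPARQL for querying them, both of which map directly onto this tuple-and-pattern model.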
#ontology #help
As requested, these are the FIRST set of slides for my Ontobras Tutorial on the Unified Foundational Ontology i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), and as announced here: https://lnkd.in/eeKmVW-5.
The Brazilian community is one of the most active and lively communities in ontologies these days, and the event brought together many people from academia, government and industry.
The slides for the SECOND part can be found here:
https://lnkd.in/eD2xhPKj
Thanks again for the invitation Jose M Parente de Oliveira.
#ontology #ontologies #conceptualmodeling #semantics
Semantics, Cybersecurity, and Services (SCS)/University of Twente
SuperMemory is just one example of a growing ecosystem of knowledge graph systems
SuperMemory is just one example of a growing ecosystem of knowledge graph systems (Graphiti by Zep, Fast Graph RAG, TrustGraph...). Some are in Python, some in TypeScript with the added benefit of graph visualization. Even in Rust and Go there is a growing list of open-source graph-RAG.
Ontology (LLM generated in particular) seems to have its own moment in the sun with a growing interest in RDF, OWL, SHACL and whatnot. Whether the big guys (OpenAI, Microsoft...) will launch something ontological remains to be seen. They likely leave it to triple-store vendors to figure it out.
https://lnkd.in/e3HAiC8c #KnowledgeGraph #GraphRAG
Why is it so hard to build an ontology-first architecture yet so necessary for the future of organizations?
Because it forces you to slow down before you speed up.
It means defining what exists in your organization before building systems to act on it. It requires clarity, discipline, and the ability to model multiple perspectives without losing coherence. Ontology-first doesn’t mean everyone must agree; it means connecting different views through layers: application ontologies for context, domain ontologies for shared objects, mid-level ontologies for reusable patterns, and a top-level ontology for common sense.
Without a shared map of what things mean, every new system just adds noise.
Ontology-first architecture isn’t about technology; it’s about truth, structure, and long-term adaptability. It’s the foundation that allows AI to enhance human power and impact without losing context or control.
It’s hard because it demands that we think, model, and connect before we automate.
But that’s also why it’s the only path toward a world where human ingenuity can truly be enhanced by AI.
Small insights that unfolded while tracing and expanding on Irina Malkova’s mindset map (Joe Reis' SOURCE: https://lnkd.in/dVGgxPdF)
12 learning points
• Telemetry → the nervous system of AI: Event data from software is no longer an error log - it forms the sensory layer, the neural pattern of reality for AI.
• Dashboard → dialogue: Data no longer lives in tables but in question and response. Attention shifts from the screen to the conversation.
• Graphs → fabric of meaning: Connections are not technical structures but semantic threads - turning the graph into a landscape of thought.
• ROI → measurable, because we choose to measure it: Once the agent lives inside the workflow, its impact can be traced all the way to business outcomes.
Measurability is not a technical, but a strategic decision.
• Agent → decision within the process, not beside it: An agent is not another tool but the embedding of decision-making into daily work - the age of AI-in-flow replacing traditional BI.
• Data Cloud → activated data: Data gains meaning in motion, not in storage.
The Salesforce model makes that movement visible and automatable.
• Dashboard blindness → cognitive overload: Visualization tools don’t remove complexity - they shift it to the eyes. Agents interpret instead of merely displaying.
• Graph-RAG → thinking against density: Enterprise corpora are too homogeneous for vector search; graph semantics dissolves the overcrowding of entities.
• Data taxonomy → corporate self-knowledge: The data team’s real task is not measurement but mapping the conceptual landscape of the organization - they are its quiet ontologists.
• Agentic ROI → the feedback loop of decisions: Conversational agents make it possible to trace how a recommendation turns into real action - something BI could never show.
• Data team → AI change agent: Data professionals carry the rare skill of building under uncertainty. That skill now becomes a strategic advantage.
• AI → thinking partner: AI does not write for us - it thinks with us.
Between text and meaning, a dialogue emerges instead of automation.
7 quiet directions that might have unfolded, had the conversation lasted longer:
• Agentic enterprise → Meta-agentic reasoning: Perhaps this will be the first, quiet form of organizational consciousness.
• Data cloud → Ontology-ready stack: Where data becomes not only accessible but interpretable.
• Telemetry → Self-reflective feedback: When data streams not only observe but look back at themselves, a learning system emerges - not supervised, but self-aware.
• Graph-RAG → Dynamic knowledge fabric: Where connections are no longer static edges but moving contexts.
• ROI → Cognitive value creation: Cognitive efficiency, too, can be a business metric.
• Data team → Thinking ecosystem: The data department slowly evolves into a cognitive community.
• Prompt → Model of thought: A question is not an input but a form - a way of showing AI one’s own pattern of reasoning.
A sophisticated knowledge graph memory system that stores interconnected information with rich semantic structure using Neo4j.
Repository: shuruheel/mcp-neo4j-shan
KGGen: Extracting Knowledge Graphs from Plain Text with Language Models
Recent interest in building foundation models for KGs has highlighted a fundamental challenge: knowledge-graph data is relatively scarce. The best-known KGs are primarily human-labeled, created by...
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence
We love to talk about scaling graphs: billions of nodes, trillions of relationships and distributed clusters. But, in practice, larger graphs often become harder to understand. As Labelled Property Graphs (LPGs) grow, their structure remains sound, but their meaning starts to drift. Queries still run, but the answers become useless.
In my latest post, I explore why semantic coherence collapses faster than infrastructure can scale up, what 'cognitive coherence' really means in graph systems and how the flexibility of LPGs can empower and endanger knowledge integrity.
Full article: 'Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence' https://lnkd.in/epmwGM9u
#GraphRAG #KnowledgeGraph #LabeledPropertyGraph #LPG #SemanticAI #AIExplainability #GraphThinking #RDF #AKG #KGL
Unlock Cross-Domain Insight: Uncover Hidden Opportunities in Your Data with Knowledge Graphs and Ontologies
From Siloed Data to Missed Opportunities: organizations today sit on massive troves of data – customer transactions, logs, metrics, documents – often scattered across departments and trapped in spreadsheets or relational tables. The data is diverse, dispersed, and growing at unfathomable rates.
Following yesterday's announcements from OpenAI, brands start to have real ways to operate inside ChatGPT. At a very high-level this is the map for anyone considering entering (or expanding) into the ChatGPT ecosystem: Conversational Prompts / UX: optimize how ChatGPT “asks” for or surfaces brand se
Is OpenAI quietly moving toward knowledge graphs?
Yesterday’s OpenAI DevDay was all about new no-code tools to create agents. Impressive. But what caught my attention wasn’t what they announced… it’s what they didn’t talk about.
During the summer, OpenAI released a Cookbook update introducing the concept of Temporal Agents, connecting it to Subject–Predicate–Object triples: the very foundation of a knowledge graph.
If you’ve ever worked with graphs, you know this means something big:
they’re not just building agents anymore; they’re building memory, relationships, and meaning.
When you see “London – isCapitalOf – United Kingdom” in their official docs, you realize they’re experimenting with how to represent knowledge itself.
And with any good knowledge graph… comes an ontology.
So here’s my prediction:
ChatGPT-6 will come with a built-in graph that connects everything about you.
The question is: do you want their AI to know everything about you?
Or do you want to build your own sovereign AI, one that you own, built from open-source intelligence and collective knowledge?
Would love to know what you think. Is that me hallucinating or is that a weak signal?👇
New project makes Wikipedia data more accessible to AI | TechCrunch
Called the Wikidata Embedding Project, the system applies a vector-based semantic search to the existing data on Wikipedia and its sister platforms, consisting of nearly 120 million entries.
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
Inspired by the talented Jessica Talisman, here is a new infographic microsim that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph:
https://lnkd.in/g66HRBhn
You can include this interactive microsim in all of your semantics/ontology and agentic AI courses with just a single line of HTML.
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
For all the excitement around large language models, the latest research from Simona-Vasilica Oprea and Georgiana Stănescu (Electronics 14:1313, 2025) offers a reality check. Automatic ontology generation, even with novel prompting techniques like Memoryless CQ-by-CQ and Ontogenia, remains a partial