GraphNews

4902 bookmarks
Ontologies, Business Data Models, and Where Semantics Actually Break
This post by Pierre Bonnet raises real concerns that many enterprises are feeling right now. Knowledge graphs and AI have made semantic modeling accessible, but not always disciplined. Poorly built ontologies absolutely can create silos rather than remove them.

That said, there is an important distinction worth clarifying. A Business Data Model is a critical enterprise artifact. It captures what an organization chooses to track, report on, and operationalize. It is purpose driven, scoped, and optimized for business outcomes. An ontology, properly used, serves a different role. It is not a replacement for business data modeling, nor is it a technical derivative of it. Ontology constrains meaning at a deeper level by defining what kinds of things exist in a domain, how they persist, and how they relate, independent of systems, workflows, or reporting needs.

Several of the "mistakes" identified here are exactly why ontology exists in the first place:
• Modeling workflows instead of stable domain entities leads to brittle systems
• Collapsing conceptual meaning into logical or technical artifacts obscures intent
• Treating metadata as documentation rather than as part of a formal semantic structure limits reuse and reasoning

Where the framing becomes risky is in suggesting that ontology should be derived from business data models. In practice, that reverses the dependency. When business models are built without ontological constraints, today's organizational assumptions get hard-coded into tomorrow's AI systems. That is where semantic debt and long-term integration failures come from.

The most durable pattern looks like this:
• Ontology constrains meaning and category boundaries
• Domain and business models specialize those constraints for enterprise use
• Logical and physical models optimize for execution and performance

When this order is respected, ontologies reduce silos rather than create them, and AI systems gain stability rather than fragility. The real issue is not ontology versus business data modeling. It is whether meaning is treated as foundational infrastructure or as a byproduct of implementation. That distinction determines whether semantic systems scale beyond pilots and remain profitable over time.

#Ontology #SemanticInfrastructure #EnterpriseArchitecture #KnowledgeGraphs #DataStrategy #AIGovernance #SemanticAI #DataModeling #ExplainableAI #DigitalTransformation
·linkedin.com·
Where context graphs materialize | LinkedIn
The Missing Layer in Enterprise Ontologies Enterprise software got very good at storing state, but it’s still bad at storing decisions. Most systems can tell you what’s true right now and what happened, but they don’t preserve why a choice was made in the moment—what inputs were considered, which co
·linkedin.com·
Graph Retrieval-Augmented Generation: A Survey | ACM Transactions on Information Systems

● First comprehensive GraphRAG overview. This paper provides the first systematic survey of Graph-based Retrieval-Augmented Generation (GraphRAG), where structured graph data (nodes and relationships) is used to enhance LLM outputs by enabling richer, multi-hop reasoning compared with text-only RAG.
● Formalised workflow across three stages. It defines a GraphRAG pipeline of (1) graph-based indexing, (2) graph-guided retrieval (using non-parametric, LLM, or graph neural retrievers), and (3) graph-enhanced generation, improving precision and relational context in LLM responses.
● Practical impact across domains. The survey highlights how GraphRAG boosts complex tasks, from question answering and recommendation systems to domain-specific reasoning in healthcare, finance, and e-commerce, by bridging graph knowledge and large-model reasoning.
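The survey's three stages can be sketched end-to-end on a toy in-memory graph. Everything below (the entities, relation names, and prompt format) is invented for illustration and is not taken from the paper:

```python
# Minimal GraphRAG sketch: (1) index a graph, (2) retrieve multi-hop facts,
# (3) serialize them into an LLM prompt. Toy data, illustrative only.
from collections import deque

# (1) Graph-based indexing: entities with typed, directed relationships.
graph = {
    "Aspirin":    [("treats", "Headache"), ("interacts_with", "Warfarin")],
    "Warfarin":   [("treats", "Thrombosis")],
    "Headache":   [],
    "Thrombosis": [],
}

def retrieve(seed, hops=2):
    """(2) Graph-guided retrieval: BFS collecting facts within `hops` of seed."""
    facts, seen, frontier = [], {seed}, deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == hops:
            continue
        for rel, nbr in graph.get(node, []):
            facts.append((node, rel, nbr))
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return facts

def build_prompt(question, facts):
    """(3) Graph-enhanced generation: put relational context into the prompt."""
    lines = [f"{s} --{r}--> {o}" for s, r, o in facts]
    return "Facts:\n" + "\n".join(lines) + f"\nQuestion: {question}"

facts = retrieve("Aspirin", hops=2)
prompt = build_prompt("What does Aspirin interact with?", facts)
```

A production system would swap the dict for a graph database and the plain BFS for one of the LLM- or graph-neural retrievers the survey catalogues; the division of labour between the three stages stays the same.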

·dl.acm.org·
An OWL ontology and RDF knowledge graph of the top 100 IMDb movies, modeled in Protégé and stored in GraphDB. Demonstrates semantic modeling, data ingestion, and SPARQL querying.
An OWL ontology and RDF knowledge graph of the top 100 IMDb movies, modeled in Protégé and stored in GraphDB. Demonstrates semantic modeling, data ingestion, and SPARQL querying. - marcusv02/film-k...
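The repo's SPARQL layer can be imitated in a few lines: a triple store is just a list of (subject, predicate, object) tuples, and a basic SELECT is a pattern match with variables. The movie data below is made up for illustration, not taken from the repository's IMDb graph:

```python
# Toy triple store with a SPARQL-like basic graph pattern match.
# Invented data; real SPARQL over GraphDB does far more (joins, filters, OWL).
triples = [
    ("ex:PulpFiction", "rdf:type",    "ex:Film"),
    ("ex:PulpFiction", "ex:director", "ex:Tarantino"),
    ("ex:PulpFiction", "ex:year",     1994),
    ("ex:Tarantino",   "rdf:type",    "ex:Director"),
]

def match(pattern):
    """Match one triple pattern; '?x' strings are variables to bind."""
    results = []
    for triple in triples:
        binding = {}
        for pat, val in zip(pattern, triple):
            if isinstance(pat, str) and pat.startswith("?"):
                binding[pat] = val       # bind variable to this position
            elif pat != val:
                break                    # constant mismatch: skip triple
        else:
            results.append(binding)
    return results

# Analogue of: SELECT ?film WHERE { ?film ex:director ex:Tarantino }
films = match(("?film", "ex:director", "ex:Tarantino"))
```

Real SPARQL adds joins across multiple patterns, `FILTER`, and entailment over the OWL ontology; this sketch only shows the single-pattern core.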
·github.com·
Graph-based approaches compared to vectors: they are not mutually exclusive – the strongest agent architectures are hybrid.
When the conversation turns to AI agents, even technically savvy people keep asking what's special about graph-based approaches compared to vectors. This has always felt like a strange question, because in fact they are not mutually exclusive – the strongest agent architectures are hybrid.
·linkedin.com·
Can GPT-5.2 extract specific domain entities inside a 256K token Victorian novel using ONLY an RDF Ontology?
🎯 Can GPT-5.2 extract specific domain entities inside a 256K-token Victorian novel using ONLY an RDF ontology? (Updated GraphRAG repo)

There's a lot of talk about GPT-5.2's context attention and its ability to code. I was curious how this affected its ability to "understand" ontologies. I downloaded the full text of Charles Dickens' "Oliver Twist" from Kaggle (19,000 rows, 975,000 characters, 256K tokens) and scattered my jaguar corpus of animals, cars, and guitars into random places to put it to the ultimate test 🧪:

📖 256K tokens of irrelevant Victorian novel → must ignore
🐆 Wildlife jaguar info scattered throughout → must extract
🚗 Car jaguar mentions mixed in → must ignore
🎸 Guitar jaguar mentions mixed in → must ignore

The only thing I gave the model was an RDF ontology. No instructions. No examples. No "please ignore cars and guitars." Just the ontology.

✨ And it worked. Every wildlife jaguar extracted. Every car and guitar ignored.

🤔 Why are RDF/OWL ontologies better than text descriptions? Certain market-leading LPG vendors will tell you that text descriptions work just as well as ontologies. They have to; they can't store semantic ontologies anyway. Here's why they're wrong:

❌ Text is ambiguous. Ontologies aren't. Prompt: "Extract information about jaguars (the animal, not cars or guitars)." You're trusting the LLM to interpret "animal" correctly. What if your domain is more nuanced? What if "animal" isn't clear enough?

❌ "Converting RDF to natural language" is backwards. I've seen this pattern: "Use an LLM to convert your ontology to natural language, then use that for extraction!" Here's why it fails:
🔄 You're only proving the LLM can read RDF. If it can convert RDF → NL, it can just use the RDF directly.
⚠️ You're introducing a second error source. If the LLM misinterprets something in step 1, that error multiplies when you use the NL version.
🔥 It burns a lot more tokens.
🗑️ You're throwing away machine-readability. RDF can be stored in databases, validated with SHACL, and reasoned over with OWL. Natural language can't.

✅ Maybe LLMs don't "reason." But they sure simulate it. Can an LLM truly perform OWL reasoning? Debatable. But here's what's NOT debatable:
🧠 LLMs have been trained on massive amounts of RDF, OWL, and SPARQL, just like they've been trained on Python, C++, etc.
📐 They can predict what valid RDF looks like.
🌳 They can simulate class inheritance.
🔍 They can pattern-match ontological structures.

Is that "reasoning"? I don't know. But when I give GPT-5.2 an RDF ontology, it behaves as if it understands it. And that's enough for me.

💡 How do you think this will affect future RAG systems?

📦 I've updated my open source repo with the new corpus and model: https://lnkd.in/dmf5HDRm
🔗 If you missed the original Jaguar GraphRAG post that started this: https://lnkd.in/dzag69dH

#GraphRAG #KnowledgeGraphs #SemanticWeb #RDF #SPARQL #AI #LLM #Ontology #OpenSource #neo4j #graphdb #GPT5 #AgenticAI
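One way to picture what the ontology buys you is as a hard filter over candidate extractions: class and property constraints decide what is kept, with no prompt engineering involved. The sketch below is pure Python with hypothetical class and property names; it is not the author's actual RDF ontology or repo code:

```python
# Hedged sketch: ontology-style constraints filtering candidate extractions.
# Class/property names are invented, not from the jaguar corpus.
ONTOLOGY = {
    "classes": {"Animal", "Car", "Guitar"},
    "target_class": "Animal",
    # property -> (domain class, expected value type), like rdfs:domain/range
    "properties": {
        "hasHabitat":     ("Animal", str),
        "hasTopSpeedKmh": ("Animal", int),
    },
}

def validate(triples):
    """Keep only triples whose subject class and property fit the ontology."""
    kept = []
    for subj_class, prop, value in triples:
        spec = ONTOLOGY["properties"].get(prop)
        if spec is None:
            continue  # unknown property: discard
        domain, rng = spec
        # subject must be the target class, match the property's domain,
        # and the value must have the declared range type
        if subj_class == ONTOLOGY["target_class"] == domain and isinstance(value, rng):
            kept.append((subj_class, prop, value))
    return kept

candidates = [
    ("Animal", "hasHabitat",     "rainforest"),  # wildlife jaguar: keep
    ("Car",    "hasTopSpeedKmh", 250),           # Jaguar the car: reject
    ("Guitar", "hasHabitat",     "stage"),       # Jaguar the guitar: reject
]
kept = validate(candidates)
```

In the real setup these constraints live in RDF/OWL and the LLM applies them itself; the point of the sketch is only that they are machine-checkable rules, not prose the model has to interpret.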
·linkedin.com·
A semantic digital twin of a business is not built from a context graph alone. And it’s not built from an ontology alone.
Over the last few days, there's been an intense discussion about CONTEXT GRAPHS — sparked by work from people like Jaya Gupta and Animesh Koratana. It's an important conversation, and it points to something bigger. A semantic digital twin of a business is not built from a context graph alone. And it's not built from an ontology alone.

👉 It is the combination of both.

Ontology defines what the business means. It captures:
• business concepts and relationships
• rules, constraints, and permissions
• metric definitions and accountability

Ontology is NORMATIVE. It defines what is valid, comparable, and allowed. Without ontology, meaning drifts and decisions can't be governed. You may have data — but you don't have authority.

The context graph captures how the business behaves. It records:
• decision traces and trajectories
• observed human and system activity
• experience over time

The context graph is EMPIRICAL. It remembers what happened — without rewriting the rules. Without it, there's no explanation, no learning, and no institutional memory. You may be correct — but you are blind.

Together, they form the semantic digital twin. A business is not just definitions or events. It is meaning plus experience, rules plus history, authority plus memory. The combination of an ontology and a context graph is the semantic digital twin of the business.

This isn't academic. It's the foundation for explainable decisions, safe automation, and enterprise AI that can reason about the organization itself.

Curious how others think about this split between meaning and behavior.

#DigitalTwin #ContextGraph #Ontology #EnterpriseAI
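The normative/empirical split maps cleanly onto code: the ontology is a set of validity rules, and the context graph is an append-only log that only accepts traces the rules admit. A minimal sketch, with invented decision kinds and fields:

```python
# Sketch of the post's split: normative ontology + empirical context graph.
# Decision kinds and required fields are illustrative, not from any real system.
from datetime import datetime, timezone

# Ontology (NORMATIVE): which decision kinds exist and what inputs
# each one requires before it may be recorded at all.
ONTOLOGY = {
    "DiscountApproval": {"required_inputs": {"customer_id", "margin_pct"}},
    "CreditCheck":      {"required_inputs": {"customer_id", "score"}},
}

# Context graph (EMPIRICAL): append-only decision traces.
context_graph = []

def record_decision(kind, inputs, outcome):
    """Validate a decision against the ontology, then append its trace."""
    spec = ONTOLOGY.get(kind)
    if spec is None:
        raise ValueError(f"unknown decision kind: {kind}")
    missing = spec["required_inputs"] - inputs.keys()
    if missing:
        raise ValueError(f"missing required inputs: {missing}")
    context_graph.append({
        "kind": kind,
        "inputs": dict(inputs),
        "outcome": outcome,
        "at": datetime.now(timezone.utc).isoformat(),
    })

record_decision("DiscountApproval",
                {"customer_id": "C-17", "margin_pct": 12.5}, "approved")
```

The ontology never stores what happened, and the log never redefines what is valid — which is exactly the authority-versus-memory division the post argues for.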
·linkedin.com·
How do you build a context graph
Authored by Animesh Koratana, founder and CEO of PlayerZero We recently wrote about context graphs, the layer that captures decision traces rather than just data. The argument: the next trillion-dollar platforms won't be built by adding AI to existing systems of record, but by capturing the reasonin
·linkedin.com·
The Art of Taxonomy Workarounds
“With the greatest of ease / I stay on track like the greatest of skis.” – Stress Eater, Czarface, Kool Keith, Rocket Science In all my years working in taxonomy, there have been few times I …
·informationpanopticon.blog·
An Intent Map collects individual feedback loops, measures alignment to an ontology, and ensures valuable metadata flows into the Context Graph.
Kay Iversen has a great post on the combination of ontologies and context graphs for creating semantic digital twins, inspired by posts by Animesh Koratana and Jaya Gupta.
·linkedin.com·
A NotebookLM slide deck created from the "Context Graphs: AI's Trillion-Dollar Opportunity" article
Here is a NotebookLM slide deck created from the "Context Graphs: AI's Trillion-Dollar Opportunity" article (see https://lnkd.in/e8SQm-Zz), which I saw in this post by Anthony Alcaraz (see https://lnkd.in/eFMhMmEG). I completely agree with the hypothesis that graphs are the way to capture all the data used to reach decisions, which is the only real way to have provenance and explainability. For example, that is what I'm doing at MyFeeds-AI, which you can read about at investor.myfeeds.ai or see in this presentation: https://lnkd.in/eTyAndHg
·linkedin.com·
Semantic Web Market Size, Share, Growth & Forecast [2030]

📈 Semantic Web Market Set for Strong Growth Toward 2030

Recent market research indicates that the global Semantic Web market is expected to grow significantly toward 2030, fueled by increased adoption of knowledge graphs, semantic data integration, and AI-driven data processing.

This growth reflects a broader shift: organizations are moving beyond traditional data pipelines toward architectures that can capture meaning, context, and relationships. Technologies such as RDF, OWL, SPARQL, and SHACL are increasingly used to address challenges around data interoperability, governance, and explainable AI.

As enterprises and public organizations prepare for stricter data-sharing requirements and more advanced AI use cases, semantic technologies are no longer experimental; they are becoming foundational infrastructure.

🔎 The article highlights a clear trend: semantics are moving from the margins into the mainstream of enterprise data strategy.

·marketsandmarkets.com·