A Graph RAG (Retrieval-Augmented Generation) chat application that combines OpenAI GPT with knowledge graphs stored in GraphDB
After seeing yet another Graph RAG demo using Neo4j with no ontology, I decided to show what real semantic Graph RAG looks like. The Problem with Most Graph RAG Demos: Everyone's building Graph RAG with LPG databases (Neo4j, TigerGraph, ArangoDB, etc.) and calling it "knowledge graphs." But here's the thing: without formal ontologies, you don't have a knowledge graph—you just have a graph database. The difference? ❌ LPG: Nodes and edges are just strings. No semantics. No reasoning. No standards. ✅ RDF/SPARQL: Formal ontologies (RDFS/OWL) that define domain knowledge. Machine-readable semantics. W3C standards. Built-in reasoning. So I Built a Real Semantic Graph RAG Using: - Microsoft Agent Framework - AI orchestration - Formal ontologies - RDFS/OWL knowledge representation - Ontotext GraphDB - RDF triple store - SPARQL - semantic querying - GPT-5 - ontology-aware extraction It's all on GitHub, a simple template as boilerplate for your project. The "Jaguar problem": What does "Yesterday I was hit by a Jaguar" really mean? It is impossible to know without concept awareness. To demonstrate why ontologies matter, I created a corpus with mixed content: 🐆 Wildlife jaguars (Panthera onca) 🚗 Jaguar cars (E-Type, XK-E) 🎸 Fender Jaguar guitars I fed this to GPT-5 along with a jaguar conservation ontology. The result? The LLM automatically extracted ONLY wildlife-related entities—filtering out cars and guitars—because it understood the semantic domain from the ontology. No post-processing. No manual cleanup. Just intelligent, concept-aware extraction (a minimal sketch follows this post). This is impossible with LPG databases because they lack formal semantic structure. Labels like (:Jaguar) are just strings—the LLM has no way to know whether you mean the animal, the car, or the guitar. Knowledge Graphs = "Data for AI": LLMs don't need more data—they need structured, semantic data they can reason over. That's what formal ontologies provide: ✅ Domain context ✅ Class hierarchies ✅ Property definitions ✅ Relationship semantics ✅ Reasoning rules This transforms Graph RAG from keyword matching into true semantic retrieval. Check out the full implementation; the repo includes: a complete Graph RAG implementation with Microsoft Agent Framework, a working jaguar conservation knowledge graph, and a Jupyter notebook on ontology-aware extraction from mixed-content text: https://lnkd.in/dmf5HDRm And if you have gotten this far, you realize that most of this post was written by Cursor ... That goes for the code too. 😁 Your Turn: I know this is a contentious topic. Many teams are heavily invested in LPG-based Graph RAG. What are your thoughts on RDF vs. LPG for Graph RAG? Drop a comment below! #GraphRAG #KnowledgeGraphs #SemanticWeb #RDF #SPARQL #AI #MachineLearning #LLM #Ontology #KnowledgeRepresentation #OpenSource #neo4j #graphdb #agentic-framework #ontotext #agenticai
·linkedin.com·
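The extraction-and-load loop the post describes is easy to sketch. Below is a minimal illustration in Python, not the repo's actual code: the ontology snippet, prompt, model name, and GraphDB repository URL are all assumptions, and the real project orchestrates this through Microsoft Agent Framework rather than raw client calls.

```python
# Minimal sketch of ontology-aware extraction into GraphDB. Every name here
# (IRIs, prompt, model, endpoint) is illustrative, not taken from the repo.
import requests
from openai import OpenAI

# Hypothetical repository; GraphDB's RDF4J-style /statements endpoint
# accepts POSTed Turtle.
GRAPHDB_STATEMENTS = "http://localhost:7200/repositories/jaguar/statements"

ONTOLOGY_TTL = """
@prefix :     <http://example.org/conservation#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
:Jaguar  a rdfs:Class ; rdfs:subClassOf :Animal ;
         rdfs:comment "Panthera onca, the wildlife species" .
:Habitat a rdfs:Class .
:livesIn a rdf:Property ; rdfs:domain :Jaguar ; rdfs:range :Habitat .
"""

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_wildlife_triples(text: str) -> str:
    """The ontology rides along in the prompt, so the model emits triples
    only for entities that fit its classes; cars and guitars have no home."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in model name; the post used GPT-5
        messages=[
            {"role": "system", "content":
                "Extract RDF triples as Turtle. Use only the classes and "
                "properties defined in this ontology and ignore entities "
                "outside its domain:\n" + ONTOLOGY_TTL},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content  # real code would validate this

def load_into_graphdb(turtle: str) -> None:
    r = requests.post(GRAPHDB_STATEMENTS, data=turtle.encode("utf-8"),
                      headers={"Content-Type": "text/turtle"})
    r.raise_for_status()

corpus = "Yesterday I was hit by a Jaguar near a riverbank in the Pantanal."
load_into_graphdb(extract_wildlife_triples(corpus))
```

The design point the post makes is visible in the system prompt: the ontology itself, not post-hoc filtering, is what scopes the extraction.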
Beyond RDF vs LPG: Operational Ontologies, Hybrid Semantics, and Why We Still Chose a Property Graph | LinkedIn
How to stay sane about “semantic Graph RAG” when your job is shipping reliable systems, not winning ontology theology wars. You don’t wake up in the morning thinking about OWL profiles or SPARQL entailment regimes.
·linkedin.com·
Connected Data London 2024: Semantics, a Disco Ball Jacket and an Escalator Metaphor in Hindsight | Teodora Petkova
This text is about my impressions from Connected Data London 2024. And about working towards a shared space of present and possible collaborative actions based on connected data and content. Intro: Shiny Happy Data People 20 years after the article in which Sir Tim Berners-Lee imagined a paper on which you can click with…
·teodorapetkova.com·
The idea that chips and ontology is what you want to short is batsh*t crazy
“The idea that chips and ontology is what you want to short is batsh*t crazy.” Whilst I couldn't agree more about how good chips (both the silicon & potato varieties) & ontologies are, the context & semantics, as always, are important... The Context: Michael Burry of Big Short fame is shorting AI as a trend, with puts on Nvidia & Palantir disclosed in the latest regulatory filings for his fund Scion Asset Management - $187 million against Nvidia and $912 million against Palantir as of Sept. 30. The circularity of the latest AI boom and the limitations of Large Language Models are among the many reasons cited for the apparent AI bubble which Burry believes will burst. Whether you consider it a formal Ontology or not, Palantir & Alex Karp are some of the few to use the 'O word' openly in product marketing - something long considered a brave move by many a frontier technology company! https://lnkd.in/eh7SAS8P Ontologies are also a key component of research & development to overcome many of the limitations of contemporary 'AI' systems and the factors contributing to the AI bubble Burry references. Plug: Interested in learning about what industry leaders are doing to overcome these limitations and develop AI systems with true reasoning capabilities? Come to this year's Connected Data London conference and engage in the debate, discussions and learning. This year it's at the Leonardo Royal Hotel Tower Bridge on 20th & 21st November, and tickets are selling fast! https://lnkd.in/entfkddD CNBC article below with video interview: https://lnkd.in/eHDpnWAW
·linkedin.com·
“Shorting Ontology” — Why Michael Burry Might Not Be Wrong | LinkedIn
“The idea that chips and ontology is what you want to short is batsh*t crazy.” — Alex Karp, CNBC, November 2025 When Palantir’s CEO, Alex Karp, lashed out at Michael Burry — the “Big Short” investor who bet against Palantir and Nvidia — he wasn’t just defending his balance sheet.
·linkedin.com·
🧭 What a Knowledge Graph Really Is — Insights from a Great Debate 💬 | LinkedIn
After publishing my recent post “What Are Knowledge Graphs, Really?”, I received fascinating and sometimes contrasting perspectives from experts across ontology, AI, and systems modeling. Here’s a glimpse of the discussion that unfolded 👇 🧩 Dave Duggal reminded us that combining multiple database
·linkedin.com·
The Schema Paradox: Why LPGs Are Both Structured and Free
The Schema Paradox: Why LPGs Are Both Structured and Free In the world of data and AI, we are often forced to choose between rigid structure and complete flexibility. But labelled property graphs (LPGs) quietly break that rule. They evolve structure through use, building ontology through action. In this new piece, I explore how LPGs balance order and chaos to form living schemas that grow alongside the data and its context. Integrated with GraphRAG and Applied Knowledge Graphs (AKGs), they become engines of adaptive intelligence, not just models of data. This isn’t theory; it's how modern systems are learning to reason contextually, adapt dynamically and evolve continuously. Full article: https://lnkd.in/eUdmQjyH #GraphData #KnowledgeGraph #KG #GraphRAG #AppliedKnowledgeGraph #AKG #LPG #DataArchitecture #AI #KnowledgeEngineering
·linkedin.com·
One question keeps coming up about UDA: why don't we call them ontologies?
One question keeps coming up about UDA: why don't we call them ontologies? We actually tried that. People said 'ontology' was too abstract, too academic, that they felt dumb. So we were asked to step back: what were we really asking for? Conceptual models of business domains. Turns out people already had the right intuitions: domain-driven design, domain graph services, database modeling, etc. We literally did a search-replace: 'ontology' became 'domain model'. They understood overnight 😅 But there's more to it. Most ontology frameworks are just RDF, OWL, and SHACL. Upper does use those as building blocks and adds what's missing: information architecture, federation for collaborative modeling, and bootstrap properties. Domain models that are self-describing, self-referencing, self-governing. 'Ontology' just doesn't capture that precision. So 'domain model' it is, not 'ontology'.
·linkedin.com·
Ontology Bill of Material? Do we really need it?
Ontology Bill of Materials? Do we really need it? In software engineering, we have SBOMs, Maven, Gradle, pip, and npm. We have decades of best practices for dependency management, version pinning, and granular control. We can exclude transitive dependencies we don't want. In ontology engineering and semantic modeling... we have owl:imports. We're trying to build mission-critical, enterprise-scale knowledge graphs, but our core dependency mechanism often feels like a step back in time. We talk about logical rigor, but we're living in "dependency hell." So: "How do you manage different versions of an ontology? How do you deal with the complexity of imports? How do you propagate changes?" And the answer right now is: with great difficulty, and a lot of custom workarounds. The owl:imports axiom is a logical "all-or-nothing" merge. It's defined as a transitive closure. This is the direct cause of our most common and painful problems: - The "diamond problem": your ontology imports Model-A (which needs Common-v1) and Model-B (which needs Common-v2). Your tool just pulls in both, creating a logical mess of conflicting axioms. A software build would fail and force you to resolve this. - Model bloat: you want to use one class from a massive upper ontology (e.g., schema.org)? Congratulations, you just imported the entire thing, plus everything it imports. And good luck with the RAM spikes, lags, ... - No granular control: this is the big one. In Maven or Gradle, you can exclude a problematic transitive dependency. In OWL, this is simply not possible at the specification level. You get everything. So, yes, we need the concept of an "Ontology Bill of Materials" (OBOM). We need a manifest file that lives with our ontology (and helps us build it) and provides a reproducible "build." We need our tools (Protege, OWL API, ...) to treat this as a first-class citizen. This manifest would: - List all direct dependencies. - Pin their exact versions (via versionIRI or even a content hash). - Resolve and list the full transitive dependency graph, so we know exactly what we are loading. - Detect problematic imports, cyclic dependencies, ... (a toy resolver along these lines is sketched after this post). The "duct tape" we use today (custom build scripts, manual copy-pasting of elements, and so on) is just an admission that owl:imports is not enough. It's time to adopt the mature engineering practices that software teams have relied on for decades. So how do you deal with complex ontology/model dependencies? How do you and your teams manage this chaos today? #Ontology #KnowledgeGraph #SemanticWeb #RDF #OWL
·linkedin.com·
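To make the OBOM idea concrete, here is a toy resolver showing the behavior the post asks for: pinned versions, a reproducible transitive closure, exclusions, and a hard failure on the diamond problem. The registry format, IRIs, and version scheme are invented for illustration; nothing like this exists in the OWL specification today.

```python
# Minimal sketch of an OBOM-style resolver. The registry, IRIs, and version
# scheme are invented for illustration; no such mechanism exists in OWL.
from collections import deque

# Hypothetical lock-file input: (ontology IRI, pinned version) -> direct imports
REGISTRY = {
    ("ex:model-a", "1.0"): [("ex:common", "1.0")],
    ("ex:model-b", "1.0"): [("ex:common", "2.0")],
    ("ex:common", "1.0"): [],
    ("ex:common", "2.0"): [],
}

def resolve(direct_deps, excludes=frozenset()):
    """Walk the transitive import graph, pinning exactly one version per
    ontology. Unlike owl:imports, it fails loudly on a diamond conflict
    and supports excluding a transitive dependency outright."""
    pinned, queue = {}, deque(direct_deps)
    while queue:
        iri, version = queue.popleft()
        if iri in excludes:           # the granular control owl:imports lacks
            continue
        if iri in pinned:             # already seen (also breaks import cycles)
            if pinned[iri] != version:
                raise ValueError(f"diamond conflict on {iri}: "
                                 f"{pinned[iri]} vs {version}")
            continue
        pinned[iri] = version
        queue.extend(REGISTRY[(iri, version)])
    return pinned                     # reproducible, inspectable "build"

print(resolve([("ex:model-a", "1.0")]))
# {'ex:model-a': '1.0', 'ex:common': '1.0'}

try:
    resolve([("ex:model-a", "1.0"), ("ex:model-b", "1.0")])
except ValueError as err:
    print(err)  # diamond conflict on ex:common: 1.0 vs 2.0
```

The point of the sketch is the failure mode: a build that refuses to merge conflicting axiom sets is exactly what owl:imports cannot do at the specification level.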
The O-word, “ontology” is here!
The O-word, “ontology” is here! Traditionally, you couldn’t say the word “ontology” in tech circles without getting a side-eye. Now? Everyone’s suddenly an ontology expert. And honestly… I’m here for it. As someone who’s been deep in this space, this moment is exciting. We’re finally having the right conversations about semantic interoperability and the relationship with Agentic AI. But here’s the thing: before we reinvent the wheel, we need to understand the road already paved. 🧠 Homework if you’re diving into this space (link in comments): 1️⃣ Read the original Semantic Web vision article by Tim Berners‑Lee, James Hendler & Ora Lassila. It laid out a future we’re finally ready for. Before you complain that “it’s complicated” or “that never worked and failed”, recall that this was a vision that laid out a roadmap of what was needed. Learn about the W3C standards that have emerged from this vision. Honored that I got to write a book with Ora! 2️⃣ Explore ISWC (International Semantic Web Conference). This scientific community was created to research what it would take to fulfill the Semantic Web vision. It’s the top scientific conference in this space, running for over 20 years. I’m proud to call it my academic home (been attending since 2008). ISWC will take place next week in Nara, Japan, and I’m excited to be keynoting the Knowledge Base Construction from Pre-Trained Language Models Workshop and to be part of the panel “Reimagining Knowledge: The Future and Relevance of Symbolic Representation in the Age of LLMs.” Take a look at the program and accepted papers if you want to know where the puck is heading! 3️⃣ Learn the history of knowledge graphs. It didn’t start with Google. It’s not just about graph databases. The Semantic Web has been a huge influence, in addition to so many events over 50+ years that have worked to connect data and knowledge at scale. Prof. Claudio Gutierrez and I wrote a paper that goes into this history. Why does this matter? Because we’re in a moment where many talk about “semantics” and “knowledge”, but often without acknowledging the deep foundations. AI agents, interoperability, and scalable intelligence depend on these foundations. The tech, standards and tools exist. If you rebuild from scratch, you waste time. But if you stand on these shoulders, you build faster and smarter. Learn about the W3C standards: RDF, OWL, SPARQL, SHACL, SKOS, etc. Take a look at open source projects like Apache Jena, RDFLib, QLever, Protege. If something’s broken, or if you don’t like how it’s done, don’t start from scratch. Improve it. Contribute. Build on what’s already there. So if you’re posting about ontologies or knowledge graphs, please ask yourself: - Did I look at the classical Semantic Web work (yes, that 2001 article) and the history of knowledge graphs? - Am I building on the shoulders of giants, rather than re‑starting? - If I disagree with a standard/open source project, am I choosing to contribute instead of ignoring it?
·linkedin.com·
Reusing Ontologies makes Your life easier
𝐑𝐞𝐮𝐬𝐢𝐧𝐠 Ontologies 𝐌𝐚𝐤𝐞𝐬 Your 𝐋𝐢𝐟𝐞 𝐄𝐚𝐬𝐢𝐞𝐫 𝐃𝐚𝐭𝐚 contains tremendous 𝐯𝐚𝐥𝐮𝐞. Unfortunately, it is often only used in a specific application, even though it would be useful in other contexts as well. However, 𝐬𝐡𝐚𝐫𝐢𝐧𝐠 data is 𝐧𝐨𝐭 a 𝐭𝐫𝐢𝐯𝐢𝐚𝐥 task. To share data effectively within an organization, we need to 𝐚𝐥𝐢𝐠𝐧 our data with a 𝐜𝐨𝐦𝐦𝐨𝐧 𝐦𝐨𝐝𝐞𝐥. The first thought that comes to mind when hearing about the concept of shared data models (also known as ontologies) is often to develop a new one from 𝐬𝐜𝐫𝐚𝐭𝐜𝐡 quickly. That allows for a fast start and often a slow, yet inevitable, 𝐜𝐡𝐚𝐨𝐬. Ontologies aim to provide a well-described and carefully disambiguated meaning. They are about finding consensus, which is a process rather than a quick win. In that regard, using standardized ontologies is tremendously helpful. (1.) Because they are the product of a collaborative process of 𝐞𝐱𝐩𝐞𝐫𝐭𝐬, many potential 𝐩𝐢𝐭𝐟𝐚𝐥𝐥𝐬 have already been considered and 𝐞𝐥𝐢𝐦𝐢𝐧𝐚𝐭𝐞𝐝. They are established and well used. (2.) They are often abstract enough to be 𝐚𝐝𝐚𝐩𝐭𝐚𝐛𝐥𝐞 to more specific domains. Reused ontologies are not a dead end. They are a 𝐬𝐭𝐚𝐫𝐭𝐢𝐧𝐠 𝐩𝐨𝐢𝐧𝐭 for making data your own. (3.) [𝘈𝘯𝘥 𝘵𝘩𝘪𝘴 𝘪𝘴 𝘮𝘺 𝘧𝘢𝘷𝘰𝘳𝘪𝘵𝘦:] They are 𝐛𝐚𝐜𝐤𝐞𝐝 𝐛𝐲 one or more established 𝐨𝐫𝐠𝐚𝐧𝐢𝐳𝐚𝐭𝐢𝐨𝐧𝐬. Often, it is so much 𝐞𝐚𝐬𝐢𝐞𝐫 to 𝐜𝐨𝐧𝐯𝐢𝐧𝐜𝐞 people to use the standard pushed by Google or the guys who standardize the internet itself, rather than your own definitions. That does not mean that there is no need to create your own ontologies. However, your use case is likely not as unique as you think. And it might be useful to extend an existing ontology to your needs or use one as a blueprint. Want to hear more about how graphs can solve your data problems? Join our next webinar: https://lnkd.in/e6JgQzhP
·linkedin.com·
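A tiny rdflib sketch of the post's point (2), extending a reused vocabulary rather than starting from scratch; the subclass IRI and comment are made-up examples, not a recommendation from the post.

```python
# Illustration: reuse schema.org as a starting point and specialize it.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://example.com/ns#")   # hypothetical company namespace

g = Graph()
g.bind("schema", SCHEMA)
g.bind("ex", EX)

# Our domain-specific term extends a well-known one instead of starting fresh:
g.add((EX.RefurbishedLaptop, RDF.type, RDFS.Class))
g.add((EX.RefurbishedLaptop, RDFS.subClassOf, SCHEMA.Product))
g.add((EX.RefurbishedLaptop, RDFS.comment,
       Literal("A schema:Product restored to working order before resale.")))

print(g.serialize(format="turtle"))
```

Anything that understands schema:Product still understands the new term, which is the whole argument for reuse.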
Two Meanings of “Semantic Layer” and Why Both Matter in the Age of AI
"Semantic layer” means different things depending on who you ask. In my latest newsletter, published on Medium first this time, I look at the two definitions and how they can work together. Are you using a semantic layer? if so, which type? #SemanticLayer #DataGovernance #AnalyticsEngineering #DataandAI | 25 comments on LinkedIn
·linkedin.com·
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
·mdpi.com·
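The core transformation the abstract describes is easy to picture. Here is a minimal sketch that derives HTML form fields from a SHACL shape; the shape, the sh:name labels, and the datatype-to-widget mapping are simplified illustrations, not the authors' implementation.

```python
# Minimal sketch: derive an HTML form from a SHACL shape with rdflib.
from rdflib import Graph
from rdflib.namespace import SH, XSD

SHAPE_TTL = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/> .

ex:PersonShape a sh:NodeShape ;
    sh:property [ sh:path ex:name ; sh:name "Name" ;
                  sh:datatype xsd:string ; sh:minCount 1 ] ;
    sh:property [ sh:path ex:birthDate ; sh:name "Birth date" ;
                  sh:datatype xsd:date ] .
"""

# Illustrative mapping from XSD datatypes to HTML input widgets
WIDGETS = {XSD.string: "text", XSD.date: "date", XSD.integer: "number"}

g = Graph()
g.parse(data=SHAPE_TTL, format="turtle")

fields = []
for prop in g.objects(predicate=SH.property):   # each property shape
    label = g.value(prop, SH.name)
    widget = WIDGETS.get(g.value(prop, SH.datatype), "text")
    required = " required" if g.value(prop, SH.minCount) else ""
    fields.append(f'<label>{label} <input type="{widget}"{required}></label>')

print("<form>\n  " + "\n  ".join(fields) + "\n</form>")
```

The same constraints that validate the knowledge graph drive the UI, which is what makes the approach attractive: sh:minCount becomes a required field rather than a post-submit error.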
Your agents NEED a semantic layer
Your agents NEED a semantic layer 🫵 Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems. Why? Because semantic similarity doesn't capture relationships. Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both. These relationships are invisible to embeddings. What semantic layers provide: Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context. Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins. Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and for debugging agent behavior. Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with structured vocabulary, not statistical word associations. The technical implementation pattern: Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries. Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources. Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals. Layer 4: Separation of first-party data from derived artifacts, with clear tagging for safe regeneration. The result: Agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins (a minimal sketch follows this post). Model Context Protocol is emerging as the open standard for semantic tool modeling: not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically. The competitive moat isn't your model choice. The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
·linkedin.com·
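The single-query traversal claim is straightforward to demonstrate. Below is a minimal rdflib sketch with made-up predicates, where one SPARQL property path walks the whole Product-to-Expert chain; in an LPG store the equivalent would be a single Cypher MATCH, and in a service architecture it would be several API calls with application-level joins.

```python
# Minimal sketch of the single-query traversal, using rdflib and made-up
# predicates (ex:described_in, ex:improved_by, ex:authored_by).
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:widget  ex:described_in ex:guide42 .
ex:guide42 ex:improved_by  ex:tip7 .
ex:tip7    ex:authored_by  ex:drsmith .
ex:drsmith ex:name         "J. Smith" .
""", format="turtle")

QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?expert ?name WHERE {
  # Product -> described_in -> AssemblyGuide -> improved_by -> CommunityTip
  #         -> authored_by -> Expert, all in one property path:
  ex:widget ex:described_in/ex:improved_by/ex:authored_by ?expert .
  ?expert ex:name ?name .
}
"""
for expert, name in g.query(QUERY):
    print(expert, name)  # http://example.org/drsmith J. Smith
```

The join logic lives in the graph engine rather than in agent code, which is exactly the separation the post argues for.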