Why Ontologies Are More Critical Than Ever
Thought for the Day: Why Ontologies Are More Critical Than Ever
·linkedin.com·
A Graph RAG (Retrieval-Augmented Generation) chat application that combines OpenAI GPT with knowledge graphs stored in GraphDB
After seeing yet another Graph RAG demo using Neo4j with no ontology, I decided to show what real semantic Graph RAG looks like. The problem with most Graph RAG demos: everyone's building Graph RAG with LPG databases (Neo4j, TigerGraph, ArangoDB, etc.) and calling it "knowledge graphs." But here's the thing: without formal ontologies, you don't have a knowledge graph, just a graph database. The difference? ❌ LPG: nodes and edges are just strings. No semantics. No reasoning. No standards. ✅ RDF/SPARQL: formal ontologies (RDFS/OWL) that define domain knowledge. Machine-readable semantics. W3C standards. Built-in reasoning. So I built a real semantic Graph RAG using: Microsoft Agent Framework for AI orchestration, formal ontologies (RDFS/OWL) for knowledge representation, Ontotext GraphDB as the RDF triple store, SPARQL for semantic querying, and GPT-5 for ontology-aware extraction. It's all on GitHub, a simple template as boilerplate for your project. The "Jaguar problem": what does "Yesterday I was hit by a Jaguar" really mean? It is impossible to know without concept awareness. To demonstrate why ontologies matter, I created a corpus with mixed content: 🐆 wildlife jaguars (Panthera onca), 🚗 Jaguar cars (E-Type, XK-E), and 🎾 Fender Jaguar guitars. I fed this to GPT-5 along with a jaguar conservation ontology. The result? The LLM automatically extracted ONLY wildlife-related entities, filtering out cars and guitars, because it understood the semantic domain from the ontology. No post-processing. No manual cleanup. Just intelligent, concept-aware extraction. This is impossible with LPG databases because they lack formal semantic structure. Labels like (:Jaguar) are just strings; the LLM has no way to know if you mean the animal, the car, or the guitar. Knowledge graphs are "data for AI": LLMs don't need more data, they need structured, semantic data they can reason over. That's what formal ontologies provide: ✅ domain context ✅ class hierarchies ✅ property definitions ✅ relationship semantics ✅ reasoning rules. This transforms Graph RAG from keyword matching into true semantic retrieval. Check out the full implementation; the repo includes a complete Graph RAG implementation with Microsoft Agent Framework, a working jaguar conservation knowledge graph, and a Jupyter notebook on ontology-aware extraction from mixed-content text: https://lnkd.in/dmf5HDRm And if you have gotten this far, you realize that most of this post was written by Cursor ... That goes for the code too. 😁 Your turn: I know this is a contentious topic. Many teams are heavily invested in LPG-based Graph RAG. What are your thoughts on RDF vs. LPG for Graph RAG? Drop a comment below! #GraphRAG #KnowledgeGraphs #SemanticWeb #RDF #SPARQL #AI #MachineLearning #LLM #Ontology #KnowledgeRepresentation #OpenSource #neo4j #graphdb #agentic-framework #ontotext #agenticai
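For a concrete sense of what "ontology-aware extraction" can look like, here is a minimal Python sketch, assuming rdflib is installed and a local Turtle file named jaguar_conservation.ttl (a hypothetical name) holds the conservation ontology; the actual LLM call is left to the reader:

```python
# A minimal sketch of ontology-aware extraction, assuming rdflib is
# installed and jaguar_conservation.ttl (hypothetical) holds the ontology.
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("jaguar_conservation.ttl", format="turtle")

# Gather class labels and comments so the LLM sees the semantic
# domain explicitly rather than guessing from bare strings.
class_context = []
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label) or cls
    comment = g.value(cls, RDFS.comment) or ""
    class_context.append(f"- {label}: {comment}")

prompt = (
    "Extract only entities that instantiate the ontology classes below; "
    "ignore anything outside this domain.\n"
    + "\n".join(class_context)
    + "\n\nText: Yesterday I was hit by a Jaguar."
)
print(prompt)  # hand this to GPT-5 (or any LLM) for extraction
```

The point of the sketch: the ontology's classes and comments travel into the prompt, so the model filters by semantic domain rather than by the string "Jaguar".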
·linkedin.com·
ATOM: AdapTive and OptiMized dynamic temporal knowledge graph construction using LLMs
✅ Some state-of-the-art methods for knowledge graph (KG) construction that implement incrementality build a graph from around 3k atomic facts in 4–7 hours, while ATOM achieves the same in just 20 minutes using only 8 parallel threads and a batch size of 40 for asynchronous LLM API calls. ❓ What's the secret behind this performance? 👉 The architecture. The parallel design. ❌ Incrementality in KG construction was key, but it significantly limits scalability: the method must first build the KG and compare it with the previous version before moving on to the next chunk. That's why we eliminated this in iText2KG. ❓ Why is scalability so important? The short answer: real-time analytics. Fast dynamic TKG construction enables LLMs to reason over the resulting graphs and generate responses instantly, in real time. Discover more of the secrets behind this parallel architecture by reading the full paper (link in the first comment).
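A rough Python sketch of the parallel pattern described here; extract_atomic_facts is a hypothetical stand-in for the real asynchronous LLM API call, and the concurrency cap mirrors the quoted 8 parallel workers (this is not ATOM's actual code):

```python
# Sketch of batched, asynchronous fact extraction; extract_atomic_facts
# is a hypothetical stand-in for an async LLM API call.
import asyncio

async def extract_atomic_facts(chunk: str) -> list[str]:
    await asyncio.sleep(0.1)  # stands in for LLM call latency
    return [f"fact <- {chunk}"]

async def build_tkg(chunks: list[str], concurrency: int = 8) -> list[str]:
    sem = asyncio.Semaphore(concurrency)  # at most 8 in-flight calls

    async def bounded(chunk: str) -> list[str]:
        async with sem:
            return await extract_atomic_facts(chunk)

    # Chunks are processed independently: no chunk waits on the
    # previous graph merge, which is the scalability point above.
    results = await asyncio.gather(*(bounded(c) for c in chunks))
    return [fact for facts in results for fact in facts]

facts = asyncio.run(build_tkg([f"chunk {i}" for i in range(40)]))
print(f"extracted {len(facts)} atomic facts")
```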
·linkedin.com·
Beyond RDF vs LPG: Operational Ontologies, Hybrid Semantics, and Why We Still Chose a Property Graph
How to stay sane about “semantic Graph RAG” when your job is shipping reliable systems, not winning ontology theology wars. You don’t wake up in the morning thinking about OWL profiles or SPARQL entailment regimes.
·linkedin.com·
Knowledge Graphs and GraphRAG have sorta taken over my life the last two months or so, so I thought I would share some very important books for learners and builders
Knowledge Graphs and GraphRAG have sorta taken over my life the last two months or so, so I thought I would share some very important books for learners and builders. Knowledge Graphs: I'm going to really enjoy this KG book a lot more now. It's simple reading, in my opinion. Text as Data: if you work in Data Science and AI, just buy this book right now and then read it. You need to know this. This is my favorite NLP book. Orange Book (sorry, long title): that is the best builder book I have found so far. It shows how to build with GraphRAG, and you should check it out. I really enjoyed reading this book and use it all the time. Just wanted to make some recommendations as I have been looking through a lot of my books for ideas lately. These are diamonds. Find them where you like to shop for books! #100daysofnetworks
·linkedin.com·
Connected Data London 2024: Semantics, a Disco Ball Jacket and an Escalator Metaphor in Hindsight | Teodora Petkova
This text is about my impressions from Connected Data London 2024, and about working towards a shared space of present and possible collaborative actions based on connected data and content. Intro: Shiny Happy Data People. 20 years after the article in which Sir Tim Berners-Lee imagined a paper on which you can click with

·teodorapetkova.com·
The idea that chips and ontology is what you want to short is batsh*t crazy
"The idea that chips and ontology is what you want to short is batsh*t crazy." Whilst I couldn't agree more about how good chips (both the silicon and potato varieties) and ontologies are, the context and semantics, as always, are important. The context: Michael Burry of Big Short fame is shorting AI as a trend, with puts on Nvidia and Palantir disclosed in the latest regulatory filings for his fund Scion Asset Management: $187 million against Nvidia and $912 million against Palantir as of Sept. 30. The circularity of the latest AI boom and the limitations of Large Language Models are amongst the many reasons cited for the apparent AI bubble that Burry believes will burst. Whether you consider it a formal ontology or not, Palantir and Alex Karp are among the few to use the 'O word' openly in product marketing, something long considered a brave move by many a frontier technology company! https://lnkd.in/eh7SAS8P Ontologies are also a key component of research and development to overcome many of the limitations of contemporary 'AI' systems and the factors contributing to the AI bubble Burry references. Plug: interested in learning about what industry leaders are doing to overcome these limitations and develop AI systems with true reasoning capabilities? Come to this year's Connected Data London conference and engage in the debate, discussions and learning. This year it's at the Leonardo Royal Hotel Tower Bridge on 20th & 21st November and tickets are selling fast! https://lnkd.in/entfkddD CNBC article below with video interview: https://lnkd.in/eHDpnWAW
·linkedin.com·
Using Knowledge Graphs to Accelerate and Standardize AI-Generated Technical Documentation | by Michael Iantosca | Oct, 2025 | Medium
Using Knowledge Graphs to Accelerate and Standardize AI-Generated Technical Documentation for Avalara Connector Guides: A Practical Implementation Guide for Structured, Scalable Documentation
·medium.com·
“Shorting Ontology” — Why Michael Burry Might Not Be Wrong
“The idea that chips and ontology is what you want to short is batsh*t crazy.” — Alex Karp, CNBC, November 2025. When Palantir’s CEO, Alex Karp, lashed out at Michael Burry, the “Big Short” investor who bet against Palantir and Nvidia, he wasn’t just defending his balance sheet.
·linkedin.com·
🧭 What a Knowledge Graph Really Is — Insights from a Great Debate 💬 | LinkedIn
After publishing my recent post “What Are Knowledge Graphs, Really?”, I received fascinating and sometimes contrasting perspectives from experts across ontology, AI, and systems modeling. Here’s a glimpse of the discussion that unfolded 👇 đŸ§© Dave Duggal reminded us that combining multiple database

·linkedin.com·
The Schema Paradox: Why LPGs Are Both Structured and Free
In the world of data and AI, we are often forced to choose between rigid structure and complete flexibility. But labelled property graphs (LPGs) quietly break that rule. They evolve structure through use, building ontology through action. In this new piece, I explore how LPGs balance order and chaos to form living schemas that grow alongside the data and its context. Integrated with GraphRAG and Applied Knowledge Graphs (AKGs), they become engines of adaptive intelligence, not just models of data. This isn’t theory; it’s how modern systems are learning to reason contextually, adapt dynamically and evolve continuously. Full article: https://lnkd.in/eUdmQjyH #GraphData #KnowledgeGraph #KG #GraphRAG #AppliedKnowledgeGraph #AKG #LPG #DataArchitecture #AI #KnowledgeEngineering
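As a minimal illustration of that "structure through use" idea, a hedged sketch with the official Neo4j Python driver; the URI and credentials are placeholders, and it assumes a running Neo4j instance:

```python
# Minimal sketch of "structure through use" with the official neo4j
# Python driver; connection details are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # No migration step: writing a node with a new label and new
    # properties is itself what extends the graph's structure.
    session.run("CREATE (:Sensor {model: $model, firmware: $fw})",
                model="TH-100", fw="1.2.0")
    # The schema is observed after the fact, not declared up front.
    labels = [record["label"] for record in session.run("CALL db.labels()")]
    print(labels)

driver.close()
```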
·linkedin.com·
One question keeps coming up about UDA: why don't we call them ontologies?
One question keeps coming up about UDA: why don't we call them ontologies? We actually tried that. People said 'ontology' was too abstract, too academic, that it made them feel dumb. So we were asked to step back: what were we really asking for? Conceptual models of business domains. Turns out people already had the right intuitions: domain-driven design, domain graph services, database modeling, etc. We literally did a search-and-replace: 'ontology' became 'domain model'. They understood overnight 😅 But there's more to it. Most ontology frameworks are just RDF, OWL, and SHACL. Upper does use those as building blocks and adds what's missing: information architecture, federation for collaborative modeling, and bootstrap properties. Domain models that are self-describing, self-referencing, self-governing. 'Ontology' just doesn't capture that precision. So 'domain model' it is, not 'ontology'.
·linkedin.com·
Text2KGBench-LettrIA: A Refined Benchmark for Text2Graph Systems
🚀 LLMs can be powerful tools to extract information from texts and automatically populate Knowledge Graphs guided by ontologies given as inputs. BUT how good are they? To answer this question, we need benchmarks! 💡 With Lettria, we built the Text2KGBench-LettrIA benchmark covering 19 different ontologies in various domains (company, film, food, politician, sports, monument, etc.) and consisting of nearly 5k sentences strictly annotated with triples conforming to these ontologies (208 classes, 426 properties), yielding more than 17k triples. What's more? We threw a lot of compute at comparing the performance and efficiency of numerous closed LLMs and variants (GPT-4, Claude 3, Gemini) and numerous fine-tuned open-weights models (Mistral 3, Qwen 3, Gemma 3, Phi 4). ✹ Key takeaway: when provided with high-quality data, fine-tuned open models largely outperform their larger, proprietary counterparts! 📄 Curious about the detailed results? Read our paper at https://lnkd.in/e-EZCjWm See the presentation at https://lnkd.in/eEdCCpdA that I have just given at the Knowledge Base Construction from Pre-Trained Language Models Workshop, co-located with the ISWC - International Semantic Web Conference. You want to use these results in your operations? Sign up to use the newly released PERSEUS model: https://lnkd.in/e7exyJHc Joint work with Julien PLU, Oscar Moreno Escobar, Edouard Trouillez, Axelle Gapin, Pasquale Lisena, Thibault Ehrhart #iswc2025 #LLMs #KnowledgeGraphs #NLP #Research EURECOM, Charles Borderie
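For readers new to Text2Graph evaluation, a hedged sketch of the usual triple-level scoring against gold annotations; the example triples are invented, and the paper's exact metric definitions may differ:

```python
# Triple-level precision/recall/F1 against gold annotations, the usual
# way Text2Graph systems are scored; example triples are invented.
def triple_prf(gold: set[tuple], pred: set[tuple]) -> tuple[float, float, float]:
    tp = len(gold & pred)                      # exactly matching triples
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("Inception", "director", "Christopher_Nolan"),
        ("Inception", "releaseYear", "2010")}
pred = {("Inception", "director", "Christopher_Nolan"),
        ("Inception", "genre", "SciFi")}
print(triple_prf(gold, pred))  # (0.5, 0.5, 0.5)
```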
·linkedin.com·
Clinical Knowledge Graph
Clinical Knowledge Graph (CKG) is a platform with a twofold objective: 1) build a graph database with experimental data and data imported from diverse biomedical databases; 2) automate knowledge discovery

·github.com·
The audiobook version of "Knowledge Graphs and LLMs in Action" is now available
🎧 Exciting news! The audiobook version of "Knowledge Graphs and LLMs in Action" is now available! Are you busy but would love to learn how to build powerful and explainable AI solutions? No problem! Manning has just released the audio version of our book. Now you can listen while you're: - Running and training for your next marathon 🏃 - Commuting to the office 🚗 - Sitting in the parking lot waiting for your kids to finish their violin lesson đŸŽ» Your schedule is packed, but that shouldn't stop you from mastering these powerful AI techniques. Get your copy here: https://hubs.la/Q03MVhhk0 And don't forget to use discount code: lagraphs40 for 40% off! Clever solutions for smart people.
·linkedin.com·
Ontology Bill of Material? Do we really need it?
In software engineering, we have SBOMs, Maven, Gradle, pip, and npm. We have decades of best practices for dependency management, version pinning, and granular control. We can exclude transitive dependencies we don't want. In ontology engineering and semantic modeling... we have owl:imports. We're trying to build mission-critical, enterprise-scale knowledge graphs, but our core dependency mechanism often feels like a step back in time. We talk about logical rigor, but we're living in "dependency hell." So: "How do you manage different versions of an ontology? How do you work through the complexity of imports? How do you propagate changes?" And the answer right now is: with great difficulty, and a lot of custom workarounds. The owl:imports axiom is a logical "all-or-nothing" merge, defined as a transitive closure. This is the direct cause of our most common and painful problems: - The "diamond problem": your ontology imports Model-A (which needs Common-v1) and Model-B (which needs Common-v2). Your tool just pulls in both, creating a logical mess of conflicting axioms. A software build would fail and force you to resolve this. - Model bloat: you want to use one class from a massive upper ontology (e.g. schema.org)? Congratulations, you just imported the entire thing, plus everything it imports. And good luck with the RAM spikes, lags, ... - No granular control: this is the big one. In Maven or Gradle, you can exclude a problematic transitive dependency. In OWL, this is simply not possible at the specification level. You get everything. So, yes, we need the concept of an "Ontology Bill of Materials" (OBOM). We need a manifest file that lives with our ontology (and helps us build it) and provides a reproducible "build." We need our tools (Protege, OWL API, ...) to treat this as a first-class citizen. This manifest would: - List all direct dependencies. - Pin their exact versions (via versionIRI or even a content hash). - Resolve and list the full transitive dependency graph, so we know exactly what we are loading (a sketch of that resolution step follows below). - Detect problematic imports, cyclic dependencies, ... The "duct tape" we use today, like custom build scripts and manual copy-pasting of elements, is just an admission that owl:imports is not enough. It's time to adopt the mature engineering practices that software teams have relied on for decades. So how do you deal with complex ontology/model dependencies? How do you and your teams manage this chaos today? #Ontology #KnowledgeGraph #SemanticWeb #RDF #OWL
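As referenced above, a hedged Python sketch of that transitive-closure resolution step using rdflib; the ontology IRI is a placeholder, and a real manifest would pin versions rather than fetch live:

```python
# Sketch of resolving the transitive owl:imports closure, i.e. the
# dependency listing an "OBOM" manifest would pin; assumes rdflib and
# network-resolvable ontology IRIs (example.org is a placeholder).
from rdflib import Graph, OWL

def imports_closure(iri: str, seen: set[str] | None = None) -> set[str]:
    seen = set() if seen is None else seen
    if iri in seen:  # diamond or cycle: this IRI is already resolved
        return seen
    seen.add(iri)
    g = Graph()
    g.parse(iri)  # rdflib content-negotiates the RDF serialization
    for dep in g.objects(predicate=OWL.imports):
        imports_closure(str(dep), seen)
    return seen

# Every IRI printed here is what a manifest would list and version-pin.
for dep in sorted(imports_closure("http://example.org/my-ontology")):
    print(dep)
```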
·linkedin.com·
The O-word, “ontology” is here!
Traditionally, you couldn’t say the word “ontology” in tech circles without getting a side-eye. Now? Everyone’s suddenly an ontology expert. And honestly
 I’m here for it. As someone who’s been deep in this space, this moment is exciting. We’re finally having the right conversations about semantic interoperability and the relationship with Agentic AI. But here’s the thing: before we reinvent the wheel, we need to understand the road already paved. 🧠 Homework if you’re diving into this space (link in comments): 1ïžâƒŁ Read the original Semantic Web vision article by Tim Berners‑Lee, James Hendler & Ora Lassila. It laid out a future we’re finally ready for. Before you complain that “it’s complicated” or “that never worked and failed”, recall that this was a vision that laid out a roadmap of what was needed. Learn about the W3C standards that have emerged from this vision. Honored that I got to write a book with Ora! 2ïžâƒŁ Explore ISWC (the International Semantic Web Conference). This scientific community was created to research what it would take to fulfill the Semantic Web vision. It’s the top scientific conference in this space, running for over 20 years. I’m proud to call it my academic home (been attending since 2008). ISWC will take place next week in Nara, Japan, and I’m excited to be keynoting the Knowledge Base Construction from Pre-Trained Language Models Workshop and to be part of the panel “Reimagining Knowledge: The Future and Relevance of Symbolic Representation in the Age of LLMs.” Take a look at the program and accepted papers if you want to know where the puck is heading! 3ïžâƒŁ Learn the history of knowledge graphs. It didn’t start with Google, and it’s not just about graph databases. The Semantic Web has been a huge influence, in addition to so many events over 50+ years that have worked to connect data and knowledge at scale. Prof. Claudio Gutierrez and I wrote a paper that goes into this history. Why does this matter? Because we’re in a moment where many talk about “semantics” and “knowledge”, but often without acknowledging the deep foundations. AI agents, interoperability, and scalable intelligence depend on these foundations. The tech, standards and tools exist. If you rebuild from scratch, you waste time. But if you stand on these shoulders, you build faster and smarter. Learn the W3C standards: RDF, OWL, SPARQL, SHACL, SKOS, etc. Take a look at open-source projects like Apache Jena, RDFLib, QLever, Protege. If something’s broken, or if you don’t like how it’s done, don’t start from scratch. Improve it. Contribute. Build on what’s already there. So if you’re posting about ontologies or knowledge graphs, please ask yourself: - Did I look at the classical Semantic Web work (yes, that 2001 article) and the history of knowledge graphs? - Am I building on the shoulders of giants, rather than re-starting? - If I disagree with a standard or open-source project, am I choosing to contribute instead of ignoring it?
·linkedin.com·