A simple one pager on LLMs, Knowledge Graphs, Ontologies (what is)
This is a very simple post, but if you are confused about LLMs, Knowledge Graphs and Ontologies, if you have questions like "what is a knowledge graph?", "can LLMs do it all?" or "do we still need ontologies?", I hope this post can bring some simple, fundamental orientation. Warning: this is not a tre
HiRAG: Retrieval-Augmented Generation with Hierarchical Knowledge
Graph-based Retrieval-Augmented Generation (RAG) methods have significantly enhanced the performance of large language models (LLMs) in domain-specific tasks. However, existing RAG methods do not...
T-Box: The secret sauce of knowledge graphs and AI
Ever wondered how knowledge graphs “understand” the world? Meet the T-Box, the part that tells your graph what exists and how it can relate.
Think of it like building a LEGO set:
T-Box (Terminological Box) = the instruction manual (defines the pieces and how they fit)
A-Box (Assertional Box) = the LEGO pieces you actually have (your data, your instances)
Why it’s important for RDF knowledge graphs:
- Gives your data structure and rules, so your graph doesn’t turn into spaghetti
- Enables reasoning, letting the system infer new facts automatically
- Keeps your graph consistent and maintainable, even as it grows
Why it’s better than other models:
- Traditional databases just store rows and columns; relationships have no meaning
- RDF + T-Box = data that can explain itself and connect across domains
Why AI loves it:
- AI can reason over knowledge, not just crunch numbers
- Enables smarter recommendations, insights, and predictions based on structured knowledge
Quick analogy:
T-Box = blueprint/instruction manual (the ontology / what is possible)
A-Box = the real-world building (the facts / what is true)
Together = AI-friendly, smart knowledge graph
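To make the split concrete, here is a minimal sketch in Python using rdflib; the ex: namespace, classes, and properties are invented for illustration:

```python
# T-Box vs A-Box in one tiny RDF graph (illustrative names only).
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# T-Box: what kinds of things exist and how they may relate (the "instruction manual").
g.add((EX.Company, RDF.type, RDFS.Class))
g.add((EX.Person, RDF.type, RDFS.Class))
g.add((EX.worksFor, RDF.type, RDF.Property))
g.add((EX.worksFor, RDFS.domain, EX.Person))
g.add((EX.worksFor, RDFS.range, EX.Company))

# A-Box: the actual pieces you have (your instances and facts).
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.acme, RDF.type, EX.Company))
g.add((EX.alice, EX.worksFor, EX.acme))

print(g.serialize(format="turtle"))
```

With an RDFS/OWL reasoner attached, the T-Box is what lets the system infer, for example, that anything appearing as the subject of ex:worksFor is a Person.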
#KnowledgeGraph #RDF #AI #SemanticWeb #DataScience #GraphData
Youtu-GraphRAG: Vertically Unified Agents for Graph...
Graph retrieval-augmented generation (GraphRAG) has effectively enhanced large language models in complex reasoning by organizing fragmented knowledge into explicitly structured graphs. Prior...
If you could hire the smartest engineers and drop them into your code base, would you expect miracles overnight? No, of course not! Because even if they are the best coders, they don't have context on your project, engineering processes and culture, security and compliance rules, user personas, business priorities, etc. The same is true of the very best agents... they may know how to write (mostly) technically correct code, and have the context of your source code, but they're still missing tons of context.
Building agents that can deliver high-quality outcomes, faster, is going to require much more than your source code, rules and a few prompts. Agents need the same full-lifecycle context your engineers gain after months and years on the job. LLMs will never have access to your company's engineering systems to train on, so something has to bridge the knowledge gap, and it shouldn't be you, one prompt at a time. This is why we're building what we call our Knowledge Graph at GitLab.
It's not just indexing files and code; it's mapping the relationships across your entire development environment. When an agent understands that a particular code block contains three security vulnerabilities, impacts two downstream services, and connects to a broader epic about performance improvements, it can make recommendations and changes that go beyond merely technically correct code.
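A rough sketch of the kind of relationship map this describes, using networkx; the file name, vulnerability IDs, and edge labels are invented for illustration and are not GitLab's actual Knowledge Graph schema:

```python
# Hypothetical illustration only: a tiny development-environment graph an agent could consult.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("payments/charge.py", "CVE-2024-0001", relation="HAS_VULNERABILITY")
kg.add_edge("payments/charge.py", "CVE-2024-0002", relation="HAS_VULNERABILITY")
kg.add_edge("payments/charge.py", "SAST-finding-17", relation="HAS_VULNERABILITY")
kg.add_edge("payments/charge.py", "billing-service", relation="IMPACTS")
kg.add_edge("payments/charge.py", "checkout-service", relation="IMPACTS")
kg.add_edge("payments/charge.py", "Epic: performance improvements", relation="PART_OF")

# Before proposing a change, an agent could walk the neighbourhood of the file it is editing.
for _, target, data in kg.out_edges("payments/charge.py", data=True):
    print(f"{data['relation']} -> {target}")
```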
This kind of contextual reasoning is what separates valuable AI agents from expensive, slow, LLM driven search tools. We're moving toward a world where institutional knowledge becomes portable and queryable. The context of a veteran engineer who knows "why we built it this way" or "what happened last time we tried this approach" can now be captured, connected, and made available to both human teammates and AI agents. See the awesome demos below and I look forward to sharing more later this month in our 18.4 beta update!
Applying Text Embedding Models for Efficient Analysis in Labeled...
Labeled property graphs often contain rich textual attributes that can enhance analytical tasks when properly leveraged. This work explores the use of pretrained text embedding models to enable...
GoAI: Enhancing AI Students' Learning Paths and Idea Generation via Graph of AI Ideas
💡 Graph of Ideas -- LLMs paired with knowledge graphs can be great partners for ideation, exploration, and research.
We've all seen the classic detective corkboard, with pinned notes and pictures, all strung together with red twine. 🕵️ The digital version could be a mind-map, but you still have to draw everything by hand.
What if you could just build one from a giant pile of documents?
Enter GoAI - a fascinating approach that just dropped on arXiv combining knowledge graphs with LLMs for AI research idea generation. While the paper focuses on a graph of research papers, the approach is generalizable.
Here's what caught my attention:
🔗 It builds knowledge graphs from AI papers where nodes are papers/concepts and edges capture semantic citation relationships - basically mapping how ideas actually connect and build on each other
🎯 The "Idea Studio" feature gives you feedback on innovation, clarity, and feasibility of your research ideas - like having a research mentor in your pocket
📈 Experiments show it helps produce clearer, more novel, and more impactful research ideas compared to traditional LLM approaches
The key insight? Current LLMs miss the semantic structure and prerequisite relationships in academic knowledge. This framework bridges that gap by making the connections explicit.
As AI research accelerates, this approach can be used for any situation where you're looking for what's missing, rather than answering a question about what exists.
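To make the idea tangible, here is a toy sketch (not the GoAI implementation) of a paper/concept graph with semantic citation edges and a naive "what's missing" query, written with networkx; all node names and relations are made up:

```python
# Toy graph of ideas: nodes are papers or concepts, edges carry a semantic citation relation.
import networkx as nx

ideas = nx.DiGraph()
ideas.add_edge("Paper A: GraphRAG", "Paper B: RAG", relation="extends")
ideas.add_edge("Paper A: GraphRAG", "Knowledge Graphs", relation="uses")
ideas.add_edge("Paper C: Agentic RAG", "Paper B: RAG", relation="extends")

# A naive gap query: concepts a related paper builds on that "my" paper does not touch yet.
related = {target for _, target in ideas.out_edges("Paper A: GraphRAG")}
mine = {target for _, target in ideas.out_edges("Paper C: Agentic RAG")}
print("Possible gaps to explore:", related - mine)  # -> {'Knowledge Graphs'}
```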
Read all the details in the paper...
https://lnkd.in/ekGtCx9T
Graph-Code: A Graph-Based RAG System for Any Codebase
The ultimate RAG for your monorepo. Query, understand, and edit multi-language codebases with the power of AI and knowledge graphs - vitali87/code-graph-rag
Neuro-symbolic AI: The key to truly intelligent systems
Unlock true AI potential with neuro-symbolic AI. Learn how combining LLMs with knowledge graphs solves unreliable and inaccurate outputs for enterprise success.
Tried Automating Knowledge Graphs — Ended Up Rewriting Everything I Knew
This post captures the desire for a shortcut to #KnowledgeGraphs, the inability of #LLMs to reliably generate #StructuredKnowledge, and the lengths folks will go to realize even basic #semantic queries (the author manually encoded 1,000 #RDF triples, but didn't use #OWL). https://lnkd.in/eJE_27gS
#Ontologists by nature are generally rigorous, if not a tad pedantic, as they seek to structure #domain knowledge. 25 years of #SemanticWeb and this is still primarily a manual, tedious, time-consuming and error-prone process. In part, #DeepLearning is a reaction to #structured, #labelled, manually #curated #data (#SymbolicAI). When #GenAI exploded on the scene a couple of years ago, #Ontologists were quick to note the limitations of LLMs.
Now some #Ontologists are having a "Road to Damascus" moment: they are aspirationally looking to Language Models as an interface for #Ontologies to lower the barrier to ontology creation and use, which are then used for #GraphRAG. But this is a circular firing squad given the LLM weaknesses they have decried. This isn't a solution, it's a Hail Mary. They are lowering the standards on quality and setting up the even more tedious task of identifying non-obvious, low-level LLM errors in an #Ontology (the same issue developers have run into with LLM CodeGen: good for prototypes, not for production code).
The answer is not to resign ourselves and subordinate ontologies to LLMs, but to take the high road, using #UpperOntologies to ease and speed the design, use and maintenance of #KGs. An upper ontology is a graph of high-level concepts, types and policies independent of a specific #domain implementation. It provides an abstraction layer with re-usable primitives, building blocks and services that streamline and automate domain modeling tasks (i.e., a #DSL for DSLs). Importantly, an upper ontology drives well-formed and consistent objects and relationships and provides for governance (e.g., security/identity, change management). This is what we do at EnterpriseWeb. #Deterministic, reliable, trusted ontologies should be the center of #BusinessArchitecture, not a side-car to an LLM.
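For readers unfamiliar with the term, here is a minimal rdflib sketch of the layering an upper ontology provides; the namespaces and class names are invented for illustration and are not EnterpriseWeb's model:

```python
# Minimal sketch: a domain ontology reusing upper-ontology primitives via rdfs:subClassOf.
from rdflib import Graph, Namespace, RDF, RDFS

UPPER = Namespace("http://example.org/upper#")   # domain-independent, reusable primitives
TELCO = Namespace("http://example.org/telco#")   # one specific domain model

g = Graph()
g.bind("upper", UPPER)
g.bind("telco", TELCO)

# Upper ontology: high-level types and a generic governance hook.
g.add((UPPER.PhysicalObject, RDF.type, RDFS.Class))
g.add((UPPER.governedBy, RDF.type, RDF.Property))

# Domain ontology: well-formed by construction because it specializes upper-level types.
g.add((TELCO.Router, RDFS.subClassOf, UPPER.PhysicalObject))
g.add((TELCO.router42, RDF.type, TELCO.Router))

print(g.serialize(format="turtle"))
```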
Enterprise Adoption of GraphRAG: The CRUD Challenge
GraphRAG and other retrieval-augmented generation (RAG) workflows are currently attracting a lot of attention. Their prototypes are impressive, with data ingestion, embedding generation, knowledge graph creation, and answer generation all functioning smoothly.
However, without proper CRUD (Create, Read, Update, Delete) support, these systems are limited to academic experimentation rather than becoming enterprise-ready solutions.
Update: Knowledge is constantly evolving. Regulations change, medical guidelines are updated, and product catalogues are revised. If a system cannot reliably update its information, it will produce outdated answers and quickly lose credibility.
Delete: Incorrect or obsolete information must be deleted. In regulated industries such as healthcare, finance and law, retaining deleted data can lead to compliance issues. Without a deletion mechanism, incorrect or obsolete information can persist in the system long after it should have been removed.
This is an issue that many GraphRAG pilots face. Although the proof of concept looks promising, limitations become evident when someone asks, "What happens when the source of truth changes?"
While reading and creation are straightforward, updates and deletions determine whether a system remains a prototype or becomes a reliable enterprise tool. Most implementations stop at "read": retrieval and answer generation work, but real-world enterprise systems never stand still.
In order for GraphRAG, and RAG in general, to transition from research labs to widespread enterprise adoption, support for CRUD must be a fundamental aspect of the design process.
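As a concrete illustration, here is a minimal sketch of the Update and Delete operations against a Neo4j-backed knowledge graph, assuming the official neo4j Python driver (5.x); the node label, properties, and connection details are placeholders:

```python
# Update/Delete sketch for a GraphRAG knowledge store backed by Neo4j (illustrative only).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def update_fact(tx, guideline_id: str, new_text: str):
    # MERGE acts as an upsert: create the node if it is missing, then overwrite the stale text.
    tx.run(
        "MERGE (g:Guideline {id: $id}) SET g.text = $text, g.updated = datetime()",
        id=guideline_id, text=new_text,
    )

def delete_fact(tx, guideline_id: str):
    # DETACH DELETE removes the node and every relationship attached to it,
    # so obsolete knowledge cannot keep surfacing in retrieval.
    tx.run("MATCH (g:Guideline {id: $id}) DETACH DELETE g", id=guideline_id)

with driver.session() as session:
    session.execute_write(update_fact, "guideline-123", "Revised dosage recommendation ...")
    session.execute_write(delete_fact, "guideline-007")
driver.close()
```

Note that in a full GraphRAG pipeline the corresponding embeddings and chunk references would have to be updated or removed alongside the graph nodes, which is exactly where many prototypes fall short.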
#GraphRAG #RAG #KnowledgeGraph #EnterpriseAI #CRUD #EnterpriseAdoption #TrustworthyAI #DataManagement
Guy van den Broeck (UCLA) | Theoretical Aspects of Trustworthy AI
https://simons.berkeley.edu/talks/guy-van-den-broeck-ucla-2025-04-29
Today, many expect AI to ta...
Blue Morpho: A new solution for building AI apps on top of knowledge bases
Blue Morpho helps you build AI agents that understand your business context, using ontologies and knowledge graphs.
Knowledge Graphs work great with LLMs. The problem is that building KGs from unstructured data is hard.
Blue Morpho promises a system that turns PDFs and text files into knowledge graphs. KGs are then used to augment LLMs with the right context to answer queries, make decisions, produce reports, and automate workflows.
How it works:
1. Upload documents (pdf or txt).
2. Define your ontology: concepts, properties, and relationships. (Coming soon: ontology generation via AI assistant.)
3. Extract a knowledge graph from documents based on that ontology. Entities are automatically deduplicated across chunks and documents, so every mention of “Walmart,” for example, resolves to the same node.
4. Build agents on top. Connect external agents via MCP, or use Blue Morpho's built-in agents: Q&A ("text-to-cypher") and Dashboard Generation.
Blue Morpho differentiation:
- Strong focus on reliability. Guardrails in place to make sure LLMs follow instructions and the ontology.
- Entity deduplication, with AI reviewing edge cases.
- Easy to iterate on ontologies: they are versioned, extraction runs are versioned as well with all their parameters, and changes only trigger necessary recomputes.
- Vector embeddings are only used in very special circumstances, coupled with other techniques.
Link in comments. Jérémy Thomas
#KnowledgeGraph #AI #Agents #MCP #NewRelease #Ontology #LLMs #GenAI #Application
--
Connected Data London 2025 is coming! 20-21 November, Leonardo Royal Hotel London Tower Bridge
Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech #Ontology
🎟️ Ticket sales are open. Benefit from early bird prices with discounts up to 30%. https://lnkd.in/diXHEXNE
📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
Box's Invisible Moat: The permission graph driving 28% operating margins
Everyone's racing to build AI agents.
Few are thinking about data permissions.
Box spent two decades building a boring moat: a detailed map of who can touch what document, when, why, and with what proof.
This invisible metadata layer is now their key moat against irrelevance.
Q2 FY26:
→ Revenue: $294M (+9% YoY)
→ Gross margin: 81.4%
→ Operating margin: 28.6%
→ Net retention: 103%
→ Enterprise Advanced: 10% of revenue (up from 5%)
Slow-growth, high-margin business at a crossroads.
The Permission Graph
Every document in Box has a shadow: its permission metadata. Who created it, modified it, can access it. What compliance rules govern it. Which systems can call it.
When an AI agent requests a contract, it needs more than the PDF. It needs proof it's allowed to see it, verification it's the right version, an audit trail.
Twenty years of accumulated governance that can't be easily replicated.
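A purely hypothetical sketch of what an agent-facing permission check could look like; this is not Box's API, and the fields and names are invented:

```python
# Illustrative permission-graph check: the agent only gets a document the graph says it may see.
from dataclasses import dataclass, field

@dataclass
class DocumentRecord:
    doc_id: str
    version: str
    allowed_principals: set = field(default_factory=set)
    compliance_tags: set = field(default_factory=set)

def fetch_for_agent(doc: DocumentRecord, agent_principal: str, audit_log: list) -> str | None:
    # Deny by default, and leave an audit trail either way.
    if agent_principal not in doc.allowed_principals:
        audit_log.append(f"DENY {agent_principal} -> {doc.doc_id}")
        return None
    audit_log.append(f"ALLOW {agent_principal} -> {doc.doc_id} (version {doc.version})")
    return f"contents of {doc.doc_id} @ {doc.version}"

audit: list[str] = []
contract = DocumentRecord("contract-17", "v3", {"sales-agent"}, {"SOX"})
print(fetch_for_agent(contract, "support-agent", audit))  # None: wrong principal
print(audit)
```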
Why This Matters Now
The CEO Aaron Levie recently told CNBC: "If you don't maintain access controls well, AI agents will find the wrong information - leading to wrong answers or security incidents."
Every enterprise faces the same AI crisis: scattered data with inconsistent permissions, no unified governance, one breach risking progress.
The permission graph solves this.
The Context Control Problem
Box recently launched Enterprise Advanced: AI agents, workflow automation, document generation. They are adding contextual layers because they see a future where AI agents call their API while users never see Box.
Microsoft owns the experience.
Box becomes plumbing.
This push is their attempt to stay visible. But it's still Product Rails, not Operating Rails. They're adding features to documents, not deepening their permission moat.
The Bull vs Bear Case
Bull: Enterprises will pay for bulletproof governance even if transformation happens elsewhere. The permission graph remains valuable.
Bear: Microsoft acquires or partners with Varonis + Cloudfuze to recreate the graph. The moat may not be deep enough.
Every SaaS Company's Dilemma
Box isn't alone. Every legacy SaaS faces the same question: how do you avoid becoming invisible infrastructure?
They're all trying the same failing playbook. Add AI features, claim "AI-native," hope the moat holds.
Box's advantage: the permission graph is genuinely hard to replicate.
Box's disadvantage: they still think like a document storage company.
Market's View
Box has 81% gross margins on commodity storage because of the permission graph. Yet the market values them at 24x forward P/E, not pricing in the graph premium.
The other factor is that Box is led by Aaron Levie. He's a founder who's spent two decades obsessing over one problem: enterprise content governance.
That obsession matters now more than ever.
The question isn't whether the permission graph has value. It's whether Box can deepen the moat before others make it irrelevant.
(Full version sent to subscribers)
A new notebook exploring Semantic Entity Resolution & Extraction using DSPy and Google's new LangExtract library.
Just released a new notebook exploring Semantic Entity Resolution & Extraction using DSPy (Community) and Google's new LangExtract library.
Inspired by Russell Jurney’s excellent work on semantic entity resolution, this demo follows his approach of combining:
✅ embeddings,
✅ kNN blocking,
✅ and LLM matching with DSPy (Community).
On top of that, I added a general extraction layer to test-drive LangExtract, a Gemini-powered, open-source Python library for reliable structured information extraction. The goal? Detect and merge mentions of the same real-world entities across text.
It’s an end-to-end flow tackling one of the most persistent data challenges.
Check it out, experiment with your own data, 𝐞𝐧𝐣𝐨𝐲 𝐭𝐡𝐞 𝐬𝐮𝐦𝐦𝐞𝐫 and let me know your thoughts!
cc Paco Nathan you might like this 😉
https://wor.ai/8kQ2qa
Stop manually building your company's brain. ❌
Having reviewed the excellent DeepLearning.AI lecture on Agentic Knowledge Graph Construction by Andreas Kollegger, and while writing a book on agentic graph systems with Sam Julien, it has become clear to me that agentic systems represent a shift in how we build and maintain knowledge graphs (KGs).
Most organizations are sitting on a goldmine of data spread across CSVs, documents, and databases.
The dream is to connect it all into a unified Knowledge Graph, an intelligent brain that understands your entire business.
The reality? It's a brutal, expensive, and unscalable manual process.
But a new approach is changing everything.
Here’s the new playbook for building intelligent systems:
🧠 Deploy an AI Agent Workforce
Instead of rigid scripts, you use a cognitive assembly line of specialized AI agents. A Proposer agent designs the data model, a Critic refines it, and an Extractor pulls the facts.
This modular approach is proven to reduce errors and improve the accuracy and coherence of the final graph.
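A deliberately simplified sketch of the Proposer/Critic loop, with a placeholder call_llm function standing in for whichever LLM client you use; the function names and prompts are invented, not taken from the lecture:

```python
# Proposer -> Critic refinement loop (skeleton); the Extractor stage is omitted.
def call_llm(prompt: str) -> str:
    # Placeholder: swap in your real LLM client here.
    # Returning "OK" lets the control flow below run end to end as a dry run.
    return "OK"

def propose_schema(sample_docs: list[str]) -> str:
    return call_llm("Propose a graph schema (node labels, relationships) for:\n" + "\n".join(sample_docs))

def critique_schema(schema: str) -> str:
    return call_llm("List concrete problems with this graph schema, or reply 'OK':\n" + schema)

def refine_schema(sample_docs: list[str], rounds: int = 3) -> str:
    schema = propose_schema(sample_docs)
    for _ in range(rounds):
        feedback = critique_schema(schema)
        if feedback.strip() == "OK":
            break
        schema = call_llm(f"Revise the schema.\nSchema:\n{schema}\nFeedback:\n{feedback}")
    return schema  # the Extractor agent would then pull facts that conform to this schema

print(refine_schema(["Great table, but delivery was slow."]))
```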
🎨 Treat AI as a Designer, Not Just a Doer
The agents act as data architects. In discovery mode, they analyze unstructured data (like customer reviews) and propose a new logical structure from scratch.
In an enterprise with an existing data model, they switch to alignment mode, mapping new information to the established structure.
🏛️ Use a 3-Part Graph Architecture
This technique is key to managing data quality and uncertainty. You create three interconnected graphs:
The Domain Graph: Your single source of truth, built from trusted, structured data.
The Lexical Graph: The raw, original text from your documents, preserving the evidence.
The Subject Graph: An AI-generated bridge that connects them. It holds extracted insights that are validated before being linked to your trusted data.
Jaro-Winkler is a string comparison algorithm that scores how similar two strings are. Here it can be used for entity resolution: identifying and linking entities extracted from the unstructured text (Subject Graph) to the official entities in the structured database (Domain Graph).
For example, the algorithm compares a product name extracted from a customer review (e.g., "the gothenburg table") with the official product names in the database. If the Jaro-Winkler similarity score is above a certain threshold, the system automatically creates a CORRESPONDS_TO relationship, effectively linking the customer's comment to the correct product in the supply chain graph.
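A small sketch of that linking step, assuming the jellyfish library for the Jaro-Winkler score; the catalogue, threshold, and relationship name mirror the example above but are otherwise illustrative:

```python
# Entity resolution sketch: link a review mention to a catalogue entity via Jaro-Winkler.
import jellyfish

catalog = ["Gothenburg Table", "Stockholm Chair", "Malmo Desk"]  # Domain Graph entities
mention = "the gothenburg table"                                 # extracted into the Subject Graph

THRESHOLD = 0.85
best_match, best_score = None, 0.0
for product in catalog:
    score = jellyfish.jaro_winkler_similarity(mention.lower(), product.lower())
    if score > best_score:
        best_match, best_score = product, score

if best_score >= THRESHOLD:
    # In the graph store, this is where the CORRESPONDS_TO relationship would be created.
    print(f"CORRESPONDS_TO: '{mention}' -> '{best_match}' (score={best_score:.2f})")
else:
    print(f"No link created; best candidate '{best_match}' scored {best_score:.2f}")
```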
🤝 Augment Humans, Don't Replace Them
The workflow is Propose, then Approve. AI does the heavy lifting, but a human expert makes the final call.
This process is made reliable by tools like Pydantic and Outlines, which enforce a rigid contract on the AI's output, ensuring every piece of data is perfectly structured and consistent.
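A tiny sketch of that "rigid contract" idea using Pydantic alone (Outlines would additionally constrain the generation itself); the model and field names are invented for illustration:

```python
# Schema enforcement sketch: malformed LLM output fails loudly instead of polluting the graph.
from pydantic import BaseModel, ValidationError

class ExtractedEntity(BaseModel):
    name: str
    entity_type: str
    source_quote: str  # keeps a pointer back to the Lexical Graph evidence

raw = {"name": "the gothenburg table", "entity_type": "Product",
       "source_quote": "I love the gothenburg table"}

try:
    entity = ExtractedEntity(**raw)
    print(entity)
except ValidationError as err:
    print("Rejected extraction:", err)
```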
And once discovered and validated, a schema can be enforced.
FinReflectKG: Agentic Construction and Evaluation of Financial Knowledge Graphs
Sharing our recent research 𝐅𝐢𝐧𝐑𝐞𝐟𝐥𝐞𝐜𝐭𝐊𝐆: 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐂𝐨𝐧𝐬𝐭𝐫𝐮𝐜𝐭𝐢𝐨𝐧 𝐚𝐧𝐝 𝐄𝐯𝐚𝐥𝐮𝐚𝐭𝐢𝐨𝐧 𝐨𝐟 𝐅𝐢𝐧𝐚𝐧𝐜𝐢𝐚𝐥 𝐊𝐧𝐨𝐰𝐥𝐞𝐝𝐠𝐞 𝐆𝐫𝐚𝐩𝐡𝐬. It is the largest financial knowledge graph built from unstructured data. The preprint of our article is out on arXiv now (link is in the comments). It is coauthored with Abhinav Arun | Fabrizio Dimino | Tejas Prakash Agrawal
While LLMs make it easier than ever to generate knowledge graphs, the real challenge lies in ensuring quality without hallucinations, with strong coverage, precision, comprehensiveness, and relevance. FinReflectKG tackles this through an iterative, evaluation-driven agentic approach, carefully optimized across multiple evaluation metrics to deliver a trustworthy and high-quality knowledge graph.
Designed to power use cases like entity search, question answering, signal generation, predictive modeling, and financial network analysis, FinReflectKG sets a new benchmark for building reliable financial KGs and showcases the potential of agentic workflows in LLM-driven systems.
We will be creating a suite of benchmarks using FinReflectKG for KG-related tasks in financial services. More details to come soon.
SynaLinks is an open-source framework designed to make it easier to partner language models (LMs) with your graph technologies. Since most companies are not in a position to train their own language models from scratch, SynaLinks empowers you to adapt existing LMs on the market to specialized tasks.
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help you understand the relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights.
In Knowledge Graphs and LLMs in Action you will learn how to:
Model knowledge graphs with an iterative top-down approach based on business needs
Create a knowledge graph starting from ontologies, taxonomies, and structured data
Use machine learning algorithms to hone and complete your graphs
Build knowledge graphs from unstructured text data sources
Reason on the knowledge graph and apply machine learning algorithms
Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graph (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you'll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You'll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.