Nice piece comparing Vector DBs and Knowledge Graphs.
I think this becomes even more true when you start talking about temporal knowledge graphs, in which you are effectively describing temporal causality and contingent assertions.
Building AI Agents with LLMs, RAG, and Knowledge Graphs: A practical guide to autonomous and modern AI agents
𝐁𝐨𝐨𝐤 𝐩𝐫𝐨𝐦𝐨𝐭𝐢𝐨𝐧 𝐛𝐞𝐜𝐚𝐮𝐬𝐞 𝐭𝐡𝐢𝐬 𝐨𝐧𝐞 𝐢𝐬 𝐰𝐨𝐫𝐭𝐡 𝐢𝐭.. 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐀𝐈 𝐚𝐭 𝐢𝐭𝐬 𝐛𝐞𝐬𝐭..
This masterpiece was written by Salvatore Raieli and Gabriele Iuculano, it is available to order from today, and it's already a 𝐁𝐞𝐬𝐭𝐬𝐞𝐥𝐥𝐞𝐫!
While many resources focus on LLMs or basic agentic workflows, what makes this book stand out is its deep dive into grounding LLMs with real-world data and action through the powerful combination of 𝘙𝘦𝘵𝘳𝘪𝘦𝘷𝘢𝘭-𝘈𝘶𝘨𝘮𝘦𝘯𝘵𝘦𝘥 𝘎𝘦𝘯𝘦𝘳𝘢𝘵𝘪𝘰𝘯 (𝘙𝘈𝘎) 𝘢𝘯𝘥 𝘒𝘯𝘰𝘸𝘭𝘦𝘥𝘨𝘦 𝘎𝘳𝘢𝘱𝘩𝘴.
This isn't just about building Agents; it's about building AI that reasons, retrieves accurate information, and acts autonomously by leveraging structured knowledge alongside advanced LLMs.
The book offers a practical roadmap, packed with concrete Python examples and real-world case studies, guiding you from concept to deployment of intelligent, robust, and hallucination-minimized AI solutions, even orchestrating multi-agent systems.
Order your copy here - https://packt.link/RpzGM
#AI #LLMs #KnowledgeGraphs #AIAgents #RAG #GenerativeAI #MachineLearning
Semantic Backbone to Business Value: How Meaning Drives Real Results | LinkedIn
Introduction: Data with meaning is powerful. But the real advantage comes when meaning leads directly to action—when your semantic backbone becomes the brain and nervous system of your organization.
Semantics in use (part 3): an interview with Saritha V.Kuriakose, VP Research Data Management at Novo Nordisk | LinkedIn
We continue our series of examples of the use of semantics and ontologies across organizations with an interview with Saritha V. Kuriakose from Novo Nordisk, talking about the pervasive and foundational use of ontologies in pharmaceutical R&D.
how Knowledge Graphs could be used to provide context
📚 Definition number 0️⃣0️⃣0️⃣0️⃣0️⃣1️⃣0️⃣1️⃣
🌊 It is pretty easy to see how context has been making really big waves recently. Not long ago, there were announcements about the Model Context Protocol (MCP). There's even a saying that Prompt Engineers have changed their job titles to Context Engineers. 😅
🔔 In my last few posts about definitions, I tried to show how Knowledge Graphs can be used to provide context, as they are built with two types of real definitions expressed in a formalised language. Next, I explained how objective and linguistic nominal definitions in Natural Language can be linked to the models of external things encoded in a formal way, to increase human-machine semantic interoperability.
🔄 Quick recap: in KGs, objective definitions define objects external to the language, while linguistic definitions relate words to other expressions of that language. This holds regardless of the nature of the language under consideration - formalised or natural. Objective definitions are real definitions when they uniquely specify certain objects via their characteristics - again regardless of the nature of the language. Not all objective definitions are real definitions, and no linguistic definition is a real definition.
💡 Classical objective definitions are an example of clear definitions. Another type of real definition that can be encountered in either formalised or Natural Language is the contextual definition. An example of such a definition is ‘Logarithm of a number A with base B is such a number C that B to the power of C is equal to A’. Obviously, this familiar mathematical definition could be expressed in a formalised language as well. This makes Knowledge Graphs capable of providing context via contextual definitions, in addition to the other types of definitions covered so far.
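Written formally, the same contextual definition reads:
log_B(A) = C ⇔ B^C = A
The defined term gets its meaning from the whole statement rather than from a stand-alone equivalent phrase, which is what makes the definition contextual.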
🤷🏼‍♂️ At the same time, another question appears: how can we keep track of all these different types of definitions and always know which one is which for a given modelled object? In my previous posts, I have shown how definitions can be linked via ‘rdfs:comment’ and ‘skos:definition’. However, that is still pretty generic. It is possible to extend the base vocabulary provided by SKOS and add custom properties for this purpose. Quick reminder: a property in a KG corresponds to a relation between two other objects. Properties that allow adding multiple types of definitions in Natural Language can be created as instances of owl:AnnotationProperty, as follows:
namespace:contextualDefinition a owl:AnnotationProperty .
After that, this new annotation property can be used in the same way as the more generic properties to link definitions to objects in KGs. 🤓
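For illustration, here is a minimal sketch of that usage in Turtle - the namespace: prefix is a placeholder and the Logarithm class is just an example:

@prefix owl:       <http://www.w3.org/2002/07/owl#> .
@prefix skos:      <http://www.w3.org/2004/02/skos/core#> .
@prefix namespace: <http://example.org/definitions#> .

# the custom annotation property declared above
namespace:contextualDefinition a owl:AnnotationProperty .

# attaching a contextual definition in Natural Language to a modelled object,
# alongside the more generic skos:definition
namespace:Logarithm a owl:Class ;
    skos:definition "A function relating a number, a base and an exponent."@en ;
    namespace:contextualDefinition "Logarithm of a number A with base B is such a number C that B to the power of C is equal to A."@en .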
🏄‍♂️ The above shows that getting context right can be a tricky endeavour indeed. In my next posts, I will try to describe some other types of definitions, so they can also be added to KGs. If you'd like to level up your KG in this way, please stay tuned. 🎸😎🤙🏻
#ai #knowledgegraphs #definitions
Why Ontologies Matter and Why They are Hard to Develop | Jun 25, 2025
This blog post explores why building ontologies is essential yet notoriously difficult, and proposes a faster, more adaptive approach that bridges technical and domain expertise
Confession: until last week, I thought graphs were new
Confession: until last week, I thought graphs were new.
I shared what I thought was a fresh idea: that enterprise structured data should be modeled as a graph to make it digestible for today’s AI with its short context windows and text-based architecture.
My post attracted graph leaders with roots in the Semantic Web. I learned that ontology was the big idea when the Semantic Web launched in 2001, and it fell out of fashion by 2008. Then Google brought it back in 2012, rebranded as the “knowledge graph”, and graphs became a mainstay in SEO.
We’re living through the third wave of graphs, now driven by the need to feed data to AI agents. Graphs are indeed not new.
But there's no way I - or most enterprise data leaders of my generation - would have known that. I started my data career in 2013 - peak love for data lakes and disregard for schemas. I hadn't met a single ontologist until 3 months ago (hi Madonnalisa C.!). And I deal with tables in the enterprise domain, not documents in the public domain. These are two different worlds.
Or are they?..
This 1999 quote from Tim Berners-Lee, the father of the Semantic Web, hit me:
“I have a dream for the Web [in which computers] become capable of analyzing all the data... When it [emerges], the day-to-day mechanisms of trade, bureaucracy, and our daily lives will be handled by machines talking to machines... The ‘intelligent agents’... will finally materialize.”
We don't talk about this enough - but we are all one community:
➡️ Semantic Web folks
➡️ Enterprise data teams
➡️ SEO and content teams
➡️ Data providers like Scale AI and Surge AI
In the grand scheme of things, we are all just feeding data into computers hoping to realize Tim’s dream.
That’s when my initial shame turned into wonder.
What if we all reimagined our jobs by learning from each other?
What if enterprise data teams:
▶️ Prioritized algorithmic discoverability of their data assets, like SEOs do?
▶️ Pursued missing data that improves AI outcomes, like Scale AI does?
▶️ Took ownership of all data—not just the tables?
Would we be the generation that finally realizes the dream?
What a time to be alive.
Why Knowledge Graphs are Critical to Agent Context
How should we organize knowledge to provide the best context for agents? We show how knowledge graphs could play a key role in enhancing context for agents.
Credible Intervals for Knowledge Graph Accuracy Estimation
Knowledge Graphs (KGs) are widely used in data-driven applications and downstream tasks, such as virtual assistants, recommendation systems, and semantic search. The accuracy of KGs directly...
Into the Heart of a UX-driven Knowledge Graph | LinkedIn
How is fitness related to a bench? What is suitable for small spaces and can fit by both a sofa and a bed, serving as table but also being flexible to function as a bedside table? And what is a relevant product to complement a bed? Imagine all these questions answered by a furniture website. In one
Leveraging Knowledge Graphs and Large Language Models to Track and...
This study addresses the challenges of tracking and analyzing students' learning trajectories, particularly the issue of inadequate knowledge coverage in course assessments. Traditional assessment...
Unlocking Transparency: Semantics in Ride-Hailing for Consumers | LinkedIn
by Timothy Coleman A recent Guardian report drew attention to a key issue in the ride-hailing industry, spotlighting Uber’s use of sophisticated algorithms to enhance profits while prompting questions about clarity for drivers and passengers. Studies from Columbia Business School and the University
There’s a lot of buzz about #semanticlayers on LinkedIn these days. So what is a semantic layer?
According to AtScale, “The semantic layer is a metadata and abstraction layer built on the source data (e.g., data warehouse, data lake, or data mart). The metadata is defined so that the data model gets enriched and becomes simple enough for the business user to understand.”
It’s a metadata layer.
Which can be taken a step further. A metadata layer is best implemented using metadata standards that support interoperability and extensibility.
There are open standards such as Dublin Core Metadata Initiative and there are home-grown standards, established within organizations and domains.
If you want to design and build semantic layers, build from metadata standards or build a metadata standard, according to #FAIR principles (findable, accessible, interoperable, reusable).
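As a small illustration of building on an open metadata standard - the dataset, team, and ex: namespace below are made up - describing a data asset with Dublin Core terms in Turtle might look like this:

@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix dctype:  <http://purl.org/dc/dcmitype/> .
@prefix ex:      <http://example.org/assets/> .

# a data asset described with open-standard metadata,
# keeping it findable, accessible, interoperable, and reusable (FAIR)
ex:quarterly-sales a dctype:Dataset ;
    dcterms:title       "Quarterly sales figures"@en ;
    dcterms:description "Aggregated sales per region, refreshed monthly."@en ;
    dcterms:creator     ex:finance-data-team ;
    dcterms:license     <https://creativecommons.org/licenses/by/4.0/> ;
    dcterms:modified    "2025-06-30"^^<http://www.w3.org/2001/XMLSchema#date> .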
Some interesting and BRILLIANT ✨folks to check out in the metadata domain space:
Ole Olesen-Bagneux (O2) (check out his upcoming book about the #metagrid)
Lisa N. Cao
Robin Fay
Jenna Jordan
Larry Swanson
Resources in comments 👇👇👇
AutoSchemaKG: Autonomous Knowledge Graph Construction through...
We present AutoSchemaKG, a framework for fully autonomous knowledge graph construction that eliminates the need for predefined schemas. Our system leverages large language models to simultaneously...
The Question That Changes Everything: "But This Doesn't Look Like an Ontology" | LinkedIn
After publishing my article on the Missing Semantic Center, a brilliant colleague asked me a question that gets to the heart of our technology stack: "But Tavi - this doesn't look like an OWL 2 DL ontology. What's going on here?" This question highlights a profound aspect of why systems have struggl
Building Truly Autonomous AI: A Semantic Architecture Approach | LinkedIn
I've been working on autonomous AI systems, and wanted to share some thoughts on what I believe makes them effective. The challenge isn't just making AI that follows instructions well, but creating systems that can reason, and act independently.
When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?
🧠 When Is an Ontological Approach Not the Right Fit for Sharing and Reusing System Knowledge in Design and Development?
Ontologies promise knowledge integration, traceability, reuse, and machine reasoning across the full engineering system lifecycle. From functional models to field failures, ontologies offer a way to encode and connect it all.
💥 However, ontologies are not a silver bullet.
There are plenty of scenarios where an ontology is not just unnecessary, it might actually slow you down, confuse your team, or waste resources.
So when exactly does the ontological approach become more burden than benefit?
🚀 Based on my understanding and current work in this space, for engineering design it's important to recognise situations where adopting a semantic model is not the most effective approach:
1. When tasks are highly localised and routine
If you're just tweaking part drawings, running standard FEA simulations, or updating well-established design details, then the knowledge already lives in your tools and practices. Adding an ontology might feel like installing a satellite dish to tune a local radio station.
2. When terminology is unstable or fragmented
Ontologies depend on consistent language. If every department speaks its own dialect, and no one agrees on terms, you can't build shared meaning. You’ll end up formalising confusion instead of clarifying it.
3. When speed matters more than structure
In prototyping labs, testing grounds, or urgent production lines, agility rules. Engineers solve problems fast, often through direct collaboration. Taking time to define formal semantics? Not always practical. Sometimes the best model is a whiteboard and a sharp marker.
4. When the knowledge won’t be reused
Not all projects aim for longevity or cross-team learning. If you're building something once, for one purpose, with no intention of scaling or sharing, skip the ontology. It’s like building a library catalog for a single book.
5. When the infrastructure isn't there
Ontological engineering isn’t magic. It needs tools, training, and people who understand the stack. If your team lacks the skills or platforms, even the best-designed ontology will gather dust in a forgotten folder.
Use the Right Tool for the Real Problem
Ontologies are powerful, but not sacred. They shine when you need to connect knowledge across domains, ensure long-term traceability, or enable intelligent automation. But they’re not a requirement for every task just because they’re clever.
The real challenge is not whether to use ontologies, but knowing when they genuinely improve clarity, consistency, and collaboration, and when they just complicate the obvious.
🧠 Feedback and critique are welcome; this is a living conversation.
Felician Campean
#KnowledgeManagement #SystemsEngineering #Ontology #MBSE #DigitalEngineering #RiskAnalysis #AIinEngineering #OntologyEngineering #SemanticInteroperability #SystemReliability #FailureAnalysis #KnowledgeIntegration
LLMs already contain overlapping world models. You just have to ask them right.
Ontologists reply to an LLM output, “That’s not a real ontology—it’s not a formal conceptualization.”
But that’s just the No True Scotsman fallacy dressed up in OWL. Boring. Not growth-oriented. Look forward, angel.
A foundation model is a compression of human knowledge. The real problem isn't that we "lack a conceptualization". The real problem with FMs is that they contain too many. FMs contain conceptualizations—plural. Messy? Sure. But usable.
At Stardog, we’re turning this latent structure into real ontologies using symbolic knowledge distillation. Prompt orchestration → structure extraction → formal encoding. OWL, SHACL, and friends. Shake till mixed. Rinse. Repeat. Secret sauce simmered and reduced.
This isn't theoretical hard. We avoid that. It’s merely engineering hard. We LTF into that!
But the payoff means bootstrapping rich, new ontologies at scale: faster, cheaper, with lineage. It's the intersection of FM latent space, formal ontology, and user intent expressed via CQs. We call it the Symbolic Latent Layer (SLL). Cute eh?
The future of enterprise AI isn’t just documents. It’s distilling structured symbolic knowledge from LLMs and plugging it into agents, workflows, and reasoning engines.
You don’t need a priesthood to get a formal ontology anymore. You need a good prompt and a smarter pipeline and the right EKG platform.
There's a lot more to say about this so I said it at Stardog Labs https://lnkd.in/eY5Sibed
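For readers who haven't seen what the "formal encoding" step of such a pipeline produces, here is a generic, hypothetical fragment (not Stardog's actual output) of the kind of OWL axioms and SHACL shape a distilled conceptualization might end up as:

@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sh:   <http://www.w3.org/ns/shacl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/distilled#> .

# a class and property distilled from the model's latent conceptualization
ex:Supplier a owl:Class ;
    rdfs:subClassOf ex:Organization .

ex:suppliesTo a owl:ObjectProperty ;
    rdfs:domain ex:Supplier ;
    rdfs:range  ex:Organization .

# a SHACL shape that makes the extracted structure checkable against data
ex:SupplierShape a sh:NodeShape ;
    sh:targetClass ex:Supplier ;
    sh:property [
        sh:path     ex:legalName ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
    ] .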
Graph is the new star schema. Change my mind.
Why? Your agents can't be autonomous unless your structured data is a graph.
It is really very simple.
1️⃣ To act autonomously, an agent must reason across structured data.
Every autonomous decision - human or agent - hinges on a judgment: have I done enough? “Enough” boils down to driving the probability of success over some threshold.
2️⃣ You can’t just point the agent at your structured data store.
Context windows are too small. Schema sprawl is too real.
If you think it works, you probably haven’t tried it.
3️⃣ The agent must first retrieve - with RAG - the right tables, columns, and snippets. Decision making is a retrieval problem before it's a reasoning problem.
4️⃣ Standard RAG breaks on enterprise metadata.
The corpus is too entity-rich.
Semantic similarity already breaks on enterprise help articles - it won't perform on column descriptions.
5️⃣ To make structured RAG work, you need a graph.
Just like unstructured RAG needed links between articles, structured RAG needs links between tables, fields, and - most importantly - meaning.
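A minimal sketch of what those links could look like in RDF - the ex: catalog vocabulary, table, and column names here are hypothetical placeholders, not a prescribed model:

@prefix ex:   <http://example.org/catalog#> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

# a table, its columns, and the business meaning they carry,
# so an agent retrieving "net revenue" lands on the right column
ex:fct_orders a ex:Table ;
    ex:hasColumn ex:fct_orders_net_amount ;
    ex:joinsTo   ex:dim_customer .

ex:fct_orders_net_amount a ex:Column ;
    ex:meaning ex:NetRevenue .

ex:NetRevenue a skos:Concept ;
    skos:prefLabel "Net revenue"@en ;
    skos:related   ex:GrossRevenue .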
Yes, graphs are painful. But so was deep learning—until the return was undeniable. Agents need reasoning over structured data. That makes graphs non-optional. The rest is just engineering.
Let’s stop modeling for reporting—and start modeling for autonomy.
Semantics in use (part 1): an interview with Martin Rezk, Sr. Ontologist at Google. | LinkedIn
To highlight the different uses and impact of semantics and ontologies, I wanted to present a series of interviews with professionals from different industries and roles. This is going to be a distributed post.