Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents. They no longer just describe the data; they store the instructions agents use to operate on that data.
Traditional software architectures separate code from data, with logic hardcoded in application layers while data resides in storage layers.
The ontology-based approach fundamentally challenges this separation by storing behavioral rules and tool definitions as graph data that agents actively query during execution.
Ontologies in these systems operate as runtime-queryable metadata rather than compile-time specifications.
This is meta-programming at the database level, and the technical implications are profound:
Traditional approach: Your agent has hardcoded tools. Each tool is a Python function that knows exactly what query to run, which entity types to expect, and how to navigate relationships.
Ontology-as-meta-tool approach: Your agent has THREE generic tools that query the ontology at runtime to figure out how to operate.
Here's the technical breakdown:
Tool 1 does semantic search and returns mixed entity types (could be Artist nodes, Subject nodes, whatever matches the vector similarity).
Tool 2 queries the ontology: "For this entity type, what property serves as the unique identifier?" The ontology responds because properties are marked with "inverseFunctional" annotations. Now the agent knows how to retrieve specific instances.
Tool 3 queries the ontology again: "Which relationships from this entity type are marked as contextualizing?" The ontology returns relationship types. The agent then constructs a dynamic Cypher query using those relationship types as parameters.
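Here's a minimal sketch of what those three generic tools could look like, assuming a Neo4j backend. Every name below (the OntologyClass/OntologyProperty labels, the HAS_PROPERTY/HAS_RELATIONSHIP edges, the entity_embeddings vector index) is illustrative, not a fixed schema:

```python
# Minimal sketch: generic tools that read their rules from the ontology.
# All labels, property names, and the index name are assumptions for illustration.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def semantic_search(session, embedding, k=5):
    # Tool 1: vector similarity search; returns mixed entity types.
    return session.run(
        "CALL db.index.vector.queryNodes('entity_embeddings', $k, $embedding) "
        "YIELD node, score RETURN node, labels(node) AS types, score",
        k=k, embedding=embedding).data()

def identifier_property(session, entity_type):
    # Tool 2: ask the ontology which property uniquely identifies this type.
    rec = session.run(
        "MATCH (:OntologyClass {name: $type})-[:HAS_PROPERTY]->"
        "(p {inverseFunctional: true}) RETURN p.name AS id_property",
        type=entity_type).single()
    return rec["id_property"] if rec else None

def contextualizing_rels(session, entity_type):
    # Tool 3a: ask the ontology which relationships add context for this type.
    return [r["rel_type"] for r in session.run(
        "MATCH (:OntologyClass {name: $type})-[:HAS_RELATIONSHIP]->"
        "(rel {contextualizing: true}) RETURN rel.name AS rel_type",
        type=entity_type)]

def expand_context(session, entity_type, id_prop, id_value, rel_types):
    # Tool 3b: build the traversal dynamically. Cypher cannot parameterize
    # labels or relationship types, so they are interpolated from the
    # ontology's answers rather than hardcoded.
    query = (f"MATCH (n:{entity_type} {{{id_prop}: $id_value}})"
             f"-[r:{'|'.join(rel_types)}]-(m) RETURN type(r) AS rel, m")
    return session.run(query, id_value=id_value).data()
```

Swap the ontology and these same functions keep working; none of them mentions Artist or Subject.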
The breakthrough: The same three tools work for ANY domain. Swap the art gallery ontology for a medical ontology, and the agent adapts instantly because it's reading navigation rules from the graph, not from code.
This is self-referential architecture. The system queries its own structure to determine its own behavior. The ontology becomes executable metadata - not documentation about the system, but instructions that drive the system.
The technical pattern (see the sketch after this list):
Store tool definitions as (:Tool) nodes with Cypher implementations as properties
Mark relationships with custom annotations (contextualizing: true/false)
Mark properties with OWL annotations (inverseFunctional for identifiers)
Agent queries these annotations at runtime to construct dynamic queries
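A sketch of how those annotations might be seeded as ordinary graph data, reusing the illustrative names from the earlier snippet (this is one possible layout, not a prescribed schema):

```python
# Store the ontology annotations themselves as plain graph data.
# Labels, relationship names, and the Tool node layout are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

SETUP = """
CREATE (artist:OntologyClass {name: 'Artist'})
CREATE (pid:OntologyProperty {name: 'artistId', inverseFunctional: true})
CREATE (artist)-[:HAS_PROPERTY]->(pid)
CREATE (created:OntologyRelationship {name: 'CREATED', contextualizing: true})
CREATE (artist)-[:HAS_RELATIONSHIP]->(created)
CREATE (:Tool {name: 'get_instance',
               implementation: 'MATCH (n) WHERE n[$idProp] = $idValue RETURN n'})
"""

with driver.session() as session:
    session.run(SETUP)
```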
Result: You move from procedural logic (IF entity_type == "Artist" THEN...) to declarative logic (query the ontology to learn the rules).
The system can now analyze its own schema, identify missing capabilities, and propose new tool definitions.
It's not just configurable - it's introspective.
What technical patterns have you found for making agent capabilities declarative rather than hardcoded?
Unifying Data Structures and Knowledge Graphs | by Mark Burgess | Oct, 2025 | Medium
Unifying Data Structures and Knowledge Graphs: Why we get confused about the difference between data and knowledge. This article is about a technical issue around the use of Knowledge Graphs to …
The Schema Paradox: Why LPGs Are Both Structured and Free
In the world of data and AI, we are often forced to choose between rigid structure and complete flexibility. But labelled property graphs (LPGs) quietly break that rule. They evolve structure through use, building ontology through action.
In this new piece, I explore how LPGs balance order and chaos to form living schemas that grow alongside the data and its context. Integrated with GraphRAG and Applied Knowledge Graphs (AKGs), they become engines of adaptive intelligence, not just models of data.
This isn’t theory; it’s how modern systems are learning to reason contextually, adapt dynamically and evolve continuously.
Full article: https://lnkd.in/eUdmQjyH
#GraphData #KnowledgeGraph #KG #GraphRAG #AppliedKnowledgeGraph #AKG #LPG #DataArchitecture #AI #KnowledgeEngineering
AIOTI WG Standardisation Focus Group on Semantic Interoperability has prepared a report on Data to Ontology Mapping. A key challenge people face when using ontologies is […]
Time and space in the Unified Knowledge Graph environment
PDF published by Lyubo Blagoev on ResearchGate, Oct 2, 2025.
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
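The paper describes its own pipeline; purely as an illustration of the core move (SHACL shape in, HTML form out), here is a rough Python/rdflib sketch. The file name, shape IRI, and the Turtle format are assumptions, not the authors' implementation:

```python
# Rough sketch: render one HTML input per SHACL property shape.
# This illustrates the idea only; names and files are hypothetical.
from rdflib import Graph, Namespace, URIRef

SH = Namespace("http://www.w3.org/ns/shacl#")

def shape_to_form(shacl_file, shape_iri):
    g = Graph()
    g.parse(shacl_file, format="turtle")  # Turtle assumed
    fields = []
    for prop in g.objects(shape_iri, SH.property):
        path = g.value(prop, SH.path)               # property to populate
        label = g.value(prop, SH.name) or str(path).rsplit("/", 1)[-1]
        min_count = g.value(prop, SH.minCount)      # minCount >= 1 => required
        required = " required" if min_count and int(min_count) >= 1 else ""
        fields.append(f'<label>{label}</label><input name="{path}"{required}>')
    return "<form>" + "".join(fields) + "</form>"

# Hypothetical usage:
print(shape_to_form("shapes.ttl", URIRef("http://example.org/PersonShape")))
```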
As requested, this is the FIRST set of slides for my Ontobras Tutorial on the Unified Foundational Ontology, i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), and as announced here: https://lnkd.in/eeKmVW-5.
The Brazilian community is one of the most active and lively communities in ontologies these days and the event joined many people from academia, government and industry.
The slides for the SECOND part can be found here:
https://lnkd.in/eD2xhPKj
Thanks again for the invitation, Jose M Parente de Oliveira.
#ontology #ontologies #conceptualmodeling #semantics
Semantics, Cybersecurity, and Services (SCS)/University of Twente
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence
We love to talk about scaling graphs: billions of nodes, trillions of relationships and distributed clusters. But, in practice, larger graphs often become harder to understand. As Labelled Property Graphs (LPGs) grow, their structure remains sound, but their meaning starts to drift. Queries still run, but the answers become useless.
In my latest post, I explore why semantic coherence collapses faster than infrastructure can scale up, what 'cognitive coherence' really means in graph systems and how the flexibility of LPGs can empower and endanger knowledge integrity.
Full article: 'Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence' https://lnkd.in/epmwGM9u
#GraphRAG #KnowledgeGraph #LabeledPropertyGraph #LPG #SemanticAI #AIExplainability #GraphThinking #RDF #AKG #KGL
Announcing the formation of a Data Façades W3C Community Group
I am excited to announce the formation of a Data Façades W3C Community Group.
Façade-X, initially introduced at SEMANTICS 2021 and successfully implemented by the SPARQL Anything project, provides a simple yet powerful, homogeneous view over diverse and heterogeneous data sources (e.g., CSV, JSON, XML, and many others). With the recent v1.0.0 release of SPARQL Anything, the time was right to work on the long-term stability and widespread adoption of this approach by developing an open, vendor-neutral technology.
The Façade-X concept was born to allow SPARQL users to query data in any structured format in plain SPARQL. Therefore, the choice of a W3C community group to lead efforts on specifications is just natural. Specifications will enhance its reliability, foster innovation, and encourage various vendors and projects—including graph database developers — to provide their own compatible implementations.
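For flavor, here is a minimal sketch of what a Façade-X style query can look like from Python, assuming SPARQL Anything is running as a local SPARQL endpoint (the endpoint URL and people.json are illustrative):

```python
# Query a plain JSON file with plain SPARQL via a SPARQL Anything endpoint.
# The endpoint URL and the file name are assumptions for illustration.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX xyz: <http://sparql.xyz/facade-x/data/>
SELECT ?name WHERE {
  # The SERVICE IRI names the source; Facade-X exposes it as triples.
  SERVICE <x-sparql-anything:location=people.json> {
    ?person xyz:name ?name .
  }
}
"""

sparql = SPARQLWrapper("http://localhost:3000/sparql")  # assumed local server
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["name"]["value"])
```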
The primary goals of the Data Façades Community Group are to:
Define the core specification of the Façade-X method.
Define Standard Mappings: Formalize the required mappings and profiles for connecting Façade-X to common data formats.
Define the specification of the query dialect: Provide a reference for the SPARQL dialect, configuration conventions (like SERVICE IRIs), and the functions/magic properties used.
Establish Governance: Create a monitored, robust process for adding support for new data formats.
Foster Collaboration: Build connections with relevant W3C groups (e.g., RDF & SPARQL, Data Shapes) and encourage involvement from developers, businesses, and adopters.
Join us!
With Luigi Asprino Ivo Velitchkov Justin Dowdy Paul Mulholland Andy Seaborne Ryan Shaw ...
CG: https://lnkd.in/eSxuqsvn
Github: https://lnkd.in/dkHGT8N3
SPARQL Anything #RDF #SPARQL #W3C #FX
Snowflake Unites Industry Leaders to Unlock AI's Potential with the Open Semantic Interchange
So I am worried.
https://lnkd.in/gfpkjUNZ
A semantic exchange format in YAML?
Because there is nothing to build on already?
https://lnkd.in/gB-iEeXn
:(
RDF, the Semantic Web Project, Linked Data, and Knowledge Graphs have always promised an Internet where data is richly interconnected, queryable, and semantically coherent. The vision was not flawed, but adoption has remained elusive.
T-Box: The secret sauce of knowledge graphs and AI
Ever wondered how knowledge graphs “understand” the world? Meet the T-Box, the part that tells your graph what exists and how it can relate.
Think of it like building a LEGO set:
T-Box (Terminological Box) = the instruction manual (defines the pieces and how they fit)
A-Box (Assertional Box) = the LEGO pieces you actually have (your data, your instances)
Why it’s important for RDF knowledge graphs:
- Gives your data structure and rules, so your graph doesn’t turn into spaghetti
- Enables reasoning, letting the system infer new facts automatically
- Keeps your graph consistent and maintainable, even as it grows
Why it’s better than other models:
- Traditional databases just store rows and columns; relationships have no meaning
- RDF + T-Box = data that can explain itself and connect across domains
Why AI loves it:
- AI can reason over knowledge, not just crunch numbers
- Enables smarter recommendations, insights, and predictions based on structured knowledge
Quick analogy:
T-Box = blueprint/instruction manual (the ontology / what is possible)
A-Box = the real-world building (the facts / what is true)
Together = AI-friendly, smart knowledge graph
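A toy version of that split in Python, with rdflib plus owlrl for the reasoning (the ex: vocabulary is made up): the T-Box asserts that every Artist is a Person, the A-Box asserts one artist, and the reasoner derives the rest.

```python
# Toy T-Box/A-Box demo. The ex: names are illustrative.
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")
g = Graph()

# T-Box (the instruction manual): every Artist is a Person.
g.add((EX.Artist, RDFS.subClassOf, EX.Person))

# A-Box (the pieces): Vermeer is an Artist.
g.add((EX.vermeer, RDF.type, EX.Artist))

# Reasoning: materialize the RDFS entailments in place.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# Never asserted, only inferred:
print((EX.vermeer, RDF.type, EX.Person) in g)  # True
```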
#KnowledgeGraph #RDF #AI #SemanticWeb #DataScience #GraphData
Showrooms vs. Production Reality: Why is RDF still not widely used?
The debate around RDF never really goes away. Advocates highlight its strong foundations, interoperability and precision. However, critics point to its steep learning curve, unwieldy tools, and limited adoption beyond academia and government circles.
So why is RDF still a hard sell in enterprise settings? The answer lies less in ignorance and more in reality.
Enterprises operate in dynamic environments. Data is constantly being created, updated, versioned and retired. Complex CRUD operations, integration pipelines and governance processes are not exceptions, but part of the daily routine. RDF, with its emphasis on formal representation, often finds it difficult to keep up with this level of operational activity.
Performance matters, too. Systems that appear elegant in theory often encounter scaling and latency issues in practice. Enterprises cannot afford philosophical debates when customers expect instant results and compliance teams demand verifiable evidence.
Usability is another factor. While RDF tooling is powerful, it is geared towards specialists. Enterprises need platforms that are usable by architects, data stewards, analysts and developers, without requiring them to master semantic web standards.
Meanwhile, pragmatic approaches to GraphRAG — combining graph models with embeddings — are gaining traction. While they may lack the rigour of RDF, they offer faster integration, better performance and easier adoption. For many enterprises, 'good enough and working' is preferable to 'perfect but unused'.
This doesn’t mean that RDF has no place. It remains relevant in classical information systems where interoperability and formal semantics are essential, such as in the healthcare, government and regulated industries.
However, the centre of gravity has shifted. In today's LLM and GraphRAG pipelines, with all their complexity and pragmatic constraints, enterprises prioritise solutions that work, scale and can be trusted. Therefore, the real question may no longer be “Why don’t enterprises adopt RDF?”, but rather, “Can RDF remain relevant in the noisy, fast-moving world of enterprise AI?”
#KnowledgeGraphs #EnterpriseAI #GraphRAG #RDF #DataArchitecture #AIinEnterprise #LLM #AIAdoption
Every knowledge system has to wrestle with a deceptively simple question: what do we assert, and what do we derive? That line between assertion and derivation is where Object-Role Modeling (ORM) and the Resource Description Framework (RDF) with the Web Ontology Language (OWL) go in radically different directions.