Most agentic systems hardcode their capabilities. This does not scale. Ontologies as executable metadata for the four core agent capabilities can solve this.
·linkedin.com·
Visualizing Knowledge Graphs
A practical guide to visualizing and exploring knowledge graphs (RDF/OWL and property graphs) with yFiles: predicate-aware analysis, schema vs. instance views, appropriate layouts, semantic styling, and interaction patterns like predicate filters and progressive disclosure.
·yfiles.com·
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents. They store the instructions agents use to operate on that data.

Traditional software architectures separate code from data: logic is hardcoded in application layers while data resides in storage layers. The ontology-based approach fundamentally challenges this separation by storing behavioral rules and tool definitions as graph data that agents actively query during execution. Ontologies in these systems operate as runtime-queryable metadata rather than compile-time specifications. This is meta-programming at the database level, and the technical implications are profound.

Traditional approach: your agent has hardcoded tools. Each tool is a Python function that knows exactly what query to run, which entity types to expect, and how to navigate relationships.

Ontology-as-meta-tool approach: your agent has THREE generic tools that query the ontology at runtime to figure out how to operate. Here's the technical breakdown:
- Tool 1 does semantic search and returns mixed entity types (could be Artist nodes, Subject nodes, whatever matches the vector similarity).
- Tool 2 queries the ontology: "For this entity type, what property serves as the unique identifier?" The ontology can answer because properties are marked with inverseFunctional annotations. Now the agent knows how to retrieve specific instances.
- Tool 3 queries the ontology again: "Which relationships from this entity type are marked as contextualizing?" The ontology returns relationship types, and the agent constructs a dynamic Cypher query using those relationship types as parameters.

The breakthrough: the same three tools work for ANY domain. Swap the art gallery ontology for a medical ontology, and the agent adapts instantly because it reads navigation rules from the graph, not from code. This is self-referential architecture: the system queries its own structure to determine its own behavior. The ontology becomes executable metadata, not documentation about the system but instructions that drive the system.

The technical pattern:
- Store tool definitions as (:Tool) nodes with Cypher implementations as properties.
- Mark relationships with custom annotations (contextualizing: true/false).
- Mark properties with OWL annotations (inverseFunctional for identifiers).
- The agent queries these annotations at runtime to construct dynamic queries.

Result: you move from procedural logic (IF entity_type == "Artist" THEN...) to declarative logic (query the ontology to learn the rules). The system can now analyze its own schema, identify missing capabilities, and propose new tool definitions. It's not just configurable, it's introspective.

What technical patterns have you found for making agent capabilities declarative rather than hardcoded?
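Since the post stays at the prose level, here is a minimal, self-contained Python sketch of the runtime-lookup idea. The toy ontology dictionary, the annotation names, and the Cypher template are illustrative assumptions, not the author's actual schema; a production system would query (:Tool) nodes and OWL annotations in the graph itself, and Tool 1 (semantic search) is omitted.

```python
# Sketch: generic tools that read navigation rules from an ontology at
# runtime instead of hardcoding them. The ontology here is a plain dict
# standing in for annotation queries against a real graph store.

ONTOLOGY = {
    "Artist": {
        "identifier_property": "name",  # marked inverseFunctional
        "relationships": {
            "CREATED": {"contextualizing": True},
            "BORN_IN": {"contextualizing": False},
        },
    },
    "Subject": {
        "identifier_property": "label",
        "relationships": {
            "DEPICTED_IN": {"contextualizing": True},
        },
    },
}

def tool_identifier_for(entity_type: str) -> str:
    """Tool 2: ask the ontology which property uniquely identifies a type."""
    return ONTOLOGY[entity_type]["identifier_property"]

def tool_context_relationships(entity_type: str) -> list[str]:
    """Tool 3: ask the ontology which relationships are contextualizing."""
    rels = ONTOLOGY[entity_type]["relationships"]
    return [r for r, meta in rels.items() if meta["contextualizing"]]

def build_context_query(entity_type: str, value: str) -> str:
    """Construct a dynamic Cypher query from ontology answers alone."""
    id_prop = tool_identifier_for(entity_type)
    rel_union = "|".join(tool_context_relationships(entity_type))
    return (
        f"MATCH (n:{entity_type} {{{id_prop}: '{value}'}})"
        f"-[:{rel_union}]-(ctx) RETURN n, ctx"
    )

print(build_context_query("Artist", "Frida Kahlo"))
# MATCH (n:Artist {name: 'Frida Kahlo'})-[:CREATED]-(ctx) RETURN n, ctx
```

The point of the sketch: swapping ONTOLOGY for a different domain changes the generated queries without touching any function body, which is the declarative-over-procedural move the post describes.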
·linkedin.com·
ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs, inspired by atomic bonds.
Alhamdulillah, ATOM is finally here! A scalable and fast approach that can build and continuously update temporal knowledge graphs, inspired by atomic bonds. Just as matter is formed from atoms, and galaxies are formed from stars, knowledge is likely to be formed from atomic knowledge graphs.

Atomic knowledge graphs were born from our intention to solve a common problem in LLM-based KG construction methods: exhaustivity and stability. Often, these methods produce unstable KGs that change when rerunning the construction process, even without changing anything. Moreover, they fail to capture all facts in the input documents and usually overlook the temporal and dynamic aspects of real-world data.

What is the solution? Atomic facts that are temporally aware. Instead of constructing knowledge graphs from raw documents, we split them into atomic facts, which are self-contained and concise propositions. Temporal atomic KGs are constructed from each atomic fact. Then, we defined how the temporal atomic KGs would be merged at the atomic level and how the temporal aspects would be handled. We designed a binary merge algorithm that combines two TKGs and a parallel merge process that merges all TKGs simultaneously. The entire architecture operates in parallel.

ATOM employs dual-time modeling that distinguishes observation time from validity time and has 3 main modules:
- Module 1 (Atomic Fact Decomposition) splits input documents observed at time t into atomic facts using LLM-based prompting, where each temporal atomic fact is a short, self-contained snippet that conveys exactly one piece of information.
- Module 2 (Atomic TKGs Construction) extracts 5-tuples in parallel from each atomic fact to construct atomic temporal KGs, while embedding nodes and relations and addressing temporal resolution during extraction.
- Module 3 (Parallel Atomic Merge) employs a binary merge algorithm to merge pairs of atomic TKGs through iterative pairwise merging in parallel until convergence, with three resolution phases: (1) entity resolution, (2) relation name resolution, and (3) temporal resolution, which merges observation and validity time sets for relations with similar (e_s, r_p, e_o). The resulting TKG snapshot is then merged with the previous DTKG to yield the updated DTKG.

Results: empirical evaluations demonstrate that ATOM achieves ~18% higher exhaustivity, ~17% better stability, and over 90% latency reduction compared to baseline methods (including iText2KG), demonstrating strong scalability potential for dynamic TKG construction.

Check out ATOM's architecture and code:
Preprint paper: https://lnkd.in/dsJzDaQc
Code: https://lnkd.in/drZUyisV
Website: (coming soon)
Example use cases: (coming soon)

Special thanks to the dream team: Ludovic Moncla, Khalid Benabdeslem, Rémy Cazabet, Pierre Cléau
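Since the merge procedure is the heart of the method, here is a minimal hedged Python sketch of Module 3's iterative pairwise (binary) merge loop. The TKG representation, a dict keyed by (e_s, r_p, e_o) holding observation-time and validity-time sets, and all names are simplifying assumptions for illustration; ATOM's actual modules add embedding-based entity and relation-name resolution that this toy skips.

```python
from concurrent.futures import ThreadPoolExecutor

# A TKG is modeled as {(e_s, r_p, e_o): (obs_times, valid_times)}.
TKG = dict

def merge_pair(a: TKG, b: TKG) -> TKG:
    """Binary merge: union the relations; for matching (e_s, r_p, e_o),
    merge the observation-time and validity-time sets."""
    out = dict(a)
    for key, (obs, val) in b.items():
        if key in out:
            obs0, val0 = out[key]
            out[key] = (obs0 | obs, val0 | val)
        else:
            out[key] = (obs, val)
    return out

def parallel_merge(tkgs: list[TKG]) -> TKG:
    """Iteratively merge pairs in parallel until one TKG remains."""
    with ThreadPoolExecutor() as pool:
        while len(tkgs) > 1:
            pairs = zip(tkgs[0::2], tkgs[1::2])
            merged = list(pool.map(lambda p: merge_pair(*p), pairs))
            if len(tkgs) % 2:  # odd one out: carry it to the next round
                merged.append(tkgs[-1])
            tkgs = merged
    return tkgs[0]

# Two atomic TKGs asserting the same relation at different times
g1 = {("Marie Curie", "won", "Nobel Prize"): ({1903}, {1903})}
g2 = {("Marie Curie", "won", "Nobel Prize"): ({1911}, {1911})}
print(parallel_merge([g1, g2]))
# {('Marie Curie', 'won', 'Nobel Prize'): ({1903, 1911}, {1903, 1911})}
```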
·linkedin.com·
Using Knowledge Graphs to Accelerate and Standardize AI-Generated Technical Documentation | by Michael Iantosca | Oct, 2025 | Medium
Using Knowledge Graphs to Accelerate and Standardize AI-Generated Technical Documentation for Avalara Connector Guides: A Practical Implementation Guide for Structured, Scalable Documentation …
·medium.com·
The Schema Paradox: Why LPGs Are Both Structured and Free
In the world of data and AI, we are often forced to choose between rigid structure and complete flexibility. But labelled property graphs (LPGs) quietly break that rule. They evolve structure through use, building ontology through action. In this new piece, I explore how LPGs balance order and chaos to form living schemas that grow alongside the data and its context. Integrated with GraphRAG and Applied Knowledge Graphs (AKGs), they become engines of adaptive intelligence, not just models of data. This isn't theory; it's how modern systems are learning to reason contextually, adapt dynamically, and evolve continuously.

Full article: https://lnkd.in/eUdmQjyH

#GraphData #KnowledgeGraph #KG #GraphRAG #AppliedKnowledgeGraph #AKG #LPG #DataArchitecture #AI #KnowledgeEngineering
·linkedin.com·
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
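To make the shape-to-form idea concrete, here is a small illustrative sketch (not the paper's system) that renders a SHACL NodeShape as an HTML form using rdflib. The example shape, the reliance on sh:name/sh:datatype/sh:minCount, and the datatype-to-input-type mapping are assumptions for illustration.

```python
from rdflib import Graph, Namespace, RDF

SH = Namespace("http://www.w3.org/ns/shacl#")

SHAPES = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:PersonShape a sh:NodeShape ;
  sh:property [ sh:path ex:name ; sh:name "Name" ;
                sh:datatype xsd:string ; sh:minCount 1 ] ;
  sh:property [ sh:path ex:birthDate ; sh:name "Birth date" ;
                sh:datatype xsd:date ] .
"""

# Map XSD datatypes to HTML input types (assumed, minimal subset)
XSD_TO_INPUT = {"string": "text", "date": "date", "integer": "number"}

def shape_to_form(g: Graph, shape) -> str:
    """Emit one <input> per sh:property of the given NodeShape."""
    fields = []
    for pshape in g.objects(shape, SH.property):
        path = g.value(pshape, SH.path)
        label = g.value(pshape, SH.name) or path
        dtype = g.value(pshape, SH.datatype)
        itype = XSD_TO_INPUT.get(str(dtype).rsplit("#", 1)[-1], "text")
        required = " required" if g.value(pshape, SH.minCount) else ""
        fields.append(
            f'<label>{label}<input type="{itype}" name="{path}"{required}></label>'
        )
    return "<form>\n  " + "\n  ".join(fields) + "\n</form>"

g = Graph().parse(data=SHAPES, format="turtle")
shape = next(g.subjects(RDF.type, SH.NodeShape))
print(shape_to_form(g, shape))
```

The paper goes further, treating the multi-form interaction itself as a business process and the application model as a knowledge graph amenable to OWL reasoning; this sketch covers only the constraint-to-widget step.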
·mdpi.com·
How to achieve logical inference performantly on huge data volumes
Lots of people are talking about semantic layers. Okay, welcome to the party! The big question in our space is how to achieve logical inference performantly on huge data volumes, given the inherent problem of combinatorial explosion that search algorithms (on which inference algorithms are based) have always confronted. After all, semantic layers are about offering inference services, the services that Edgar Codd envisioned DBMSes on the relational model eventually supporting in the very first paper on the relational model.

So what are the leading approaches in terms of performance?
1. GPU Datalog
2. High-speed OWL reasoners like RDFox
3. Rete networks like Sparkling Logic's Rete-NT
4. High-speed FOL provers like Vampire

Let's get down to brass tacks. RDFox posts some impressive benchmarks, but they aren't exactly obsoleting GPU Datalog, and I haven't seen any good data on RDFox vs. Relational AI. If you have benchmarks on that, I'd love to see them. Rete-NT and RDFox are heavily proprietary, so understanding how the performance has been achieved is not really possible for the broader community beyond these vendors' consultants. And RDFox is now owned by Samsung, further complicating the picture.

That leaves us with the open-source GPU Datalogs and high-speed FOL provers. That's what's worth studying right now in semantic layers, not engaging in dogmatic debates between the relational model, the property graph model, RDF, and "name your emerging data model." Performance has ALWAYS been the name of the game in automated theorem proving. We still struggle to handle inference on large datasets. We need to quit focusing on non-issues and work to streamline existing high-speed inference methods for business usage. GPU Datalog on CUDA seems promising. I imagine the future will bring further optimizations.
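For readers new to the Datalog side of this discussion, here is a toy semi-naive evaluation of transitive closure in Python: the fixpoint computation that GPU Datalog engines parallelize and optimize. It is a sketch of the technique only, not any vendor's engine, and real engines work over indexed relations rather than Python sets.

```python
# Semi-naive Datalog evaluation of:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).

def transitive_closure(edges: set[tuple[str, str]]) -> set[tuple[str, str]]:
    path = set(edges)
    delta = set(edges)  # only the facts derived in the last round
    while delta:
        # join the new facts against edge/2, not the whole relation
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path  # semi-naive: discard already-known facts
        path |= delta
    return path

print(sorted(transitive_closure({("a", "b"), ("b", "c"), ("c", "d")})))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

The combinatorial explosion the post mentions lives in that join step; restricting it to the delta is the classic optimization, and GPU Datalog pushes the same join onto massively parallel hardware.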
·linkedin.com·
Unified Foundational Ontology tutorial
As requested, this is the FIRST set of slides for my Ontobras tutorial on the Unified Foundational Ontology, i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), as announced here: https://lnkd.in/eeKmVW-5. The Brazilian community is one of the most active and lively communities in ontologies these days, and the event brought together many people from academia, government, and industry. The slides for the SECOND part can be found here: https://lnkd.in/eD2xhPKj. Thanks again for the invitation, Jose M Parente de Oliveira. #ontology #ontologies #conceptualmodeling #semantics Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·