GraphNews

#KnowledgeGraph #semantics #innovation
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents. 🔳 They store the instructions agents use to operate on that data.

Traditional software architectures separate code from data: logic is hardcoded in application layers while data resides in storage layers. The ontology-based approach fundamentally challenges this separation by storing behavioral rules and tool definitions as graph data that agents actively query during execution. Ontologies in these systems operate as runtime-queryable metadata rather than compile-time specifications. This is meta-programming at the database level, and the technical implications are profound.

Traditional approach: your agent has hardcoded tools. Each tool is a Python function that knows exactly what query to run, which entity types to expect, and how to navigate relationships.

Ontology-as-meta-tool approach: your agent has THREE generic tools that query the ontology at runtime to figure out how to operate. Here's the technical breakdown:

- Tool 1 does semantic search and returns mixed entity types (could be Artist nodes, Subject nodes, whatever matches the vector similarity).
- Tool 2 queries the ontology: "For this entity type, what property serves as the unique identifier?" The ontology responds because properties are marked with "inverseFunctional" annotations. Now the agent knows how to retrieve specific instances.
- Tool 3 queries the ontology again: "Which relationships from this entity type are marked as contextualizing?" The ontology returns relationship types, and the agent constructs a dynamic Cypher query using those relationship types as parameters.

The breakthrough: the same three tools work for ANY domain. Swap the art gallery ontology for a medical ontology, and the agent adapts instantly because it's reading navigation rules from the graph, not from code.

This is self-referential architecture. The system queries its own structure to determine its own behavior. The ontology becomes executable metadata: not documentation about the system, but instructions that drive the system.

The technical pattern (sketched in code below):

- Store tool definitions as (:Tool) nodes with Cypher implementations as properties
- Mark relationships with custom annotations (contextualizing: true/false)
- Mark properties with OWL annotations (inverseFunctional for identifiers)
- Have the agent query these annotations at runtime to construct dynamic queries

Result: you move from procedural logic (IF entity_type == "Artist" THEN...) to declarative logic (query the ontology to learn the rules). The system can now analyze its own schema, identify missing capabilities, and propose new tool definitions. It's not just configurable; it's introspective.

What technical patterns have you found for making agent capabilities declarative rather than hardcoded?
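To make the pattern concrete, here is a minimal sketch in Python with the neo4j driver, assuming the ontology lives in the same graph as the data, modeled as (:Class), (:Property), and (:Relationship) nodes. The schema, property names, and the 'entity_embeddings' vector index name are illustrative assumptions, not the post author's actual implementation.

```python
# Sketch of the "three generic tools" pattern. All labels, property names,
# and the vector index name are hypothetical.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def tool_semantic_search(embedding, k=5):
    """Tool 1: vector search over all entities; returns mixed entity types."""
    with driver.session() as session:
        result = session.run(
            "CALL db.index.vector.queryNodes('entity_embeddings', $k, $emb) "
            "YIELD node, score "
            "RETURN labels(node) AS types, properties(node) AS props, score",
            k=k, emb=embedding,
        )
        return [record.data() for record in result]

def tool_identifier_for(entity_type):
    """Tool 2: ask the ontology which property uniquely identifies a type."""
    with driver.session() as session:
        record = session.run(
            "MATCH (:Class {name: $type})-[:HAS_PROPERTY]->"
            "(p:Property {inverseFunctional: true}) "
            "RETURN p.name AS id_property",
            type=entity_type,
        ).single()
        return record["id_property"] if record else None

def tool_context_for(entity_type, id_value):
    """Tool 3: read contextualizing relationship types from the ontology,
    then build the data query dynamically from the answer."""
    with driver.session() as session:
        rel_types = [
            r["rel"] for r in session.run(
                "MATCH (:Class {name: $type})-[:HAS_RELATIONSHIP]->"
                "(rt:Relationship {contextualizing: true}) "
                "RETURN rt.name AS rel",
                type=entity_type,
            )
        ]
        id_prop = tool_identifier_for(entity_type)
        if not rel_types or id_prop is None:
            return []
        # Cypher cannot parameterize relationship types, so the ontology's
        # answer is spliced into the query text itself.
        query = (
            f"MATCH (n:{entity_type} {{{id_prop}: $id}})"
            f"-[r:{'|'.join(rel_types)}]-(ctx) "
            "RETURN type(r) AS rel, properties(ctx) AS context"
        )
        return [record.data() for record in session.run(query, id=id_value)]
```

Swapping domains touches only the ontology nodes in the graph; none of the three functions change, which is the point of the pattern.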
·linkedin.com·
“Shorting Ontology” — Why Michael Burry Might Not Be Wrong | LinkedIn
“The idea that chips and ontology is what you want to short is batsh*t crazy.” — Alex Karp, CNBC, November 2025. When Palantir’s CEO, Alex Karp, lashed out at Michael Burry, the “Big Short” investor who bet against Palantir and Nvidia, he wasn’t just defending his balance sheet.
·linkedin.com·
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value. The reports on agents are starting to sound samey: go vertical not horizontal; redesign workflows end-to-end; clean your data; stop doing pilots that automate inefficiencies; price for outcomes when the agent does the work. All true. All necessary. All needing repetition ad nauseam.

So it’s refreshing to see a switch-up in Bain’s Technology Report 2025: the real leverage now sits with semantics, a shared layer of meaning.

Bain notes that protocols are maturing. MCP and A2A let agents pass tool calls, tokens, and results between layers. Useful plumbing. But there’s still no shared vocabulary that says what an invoice, policy, or work order is, how it moves through states, and how it maps to APIs, tables, and approvals. Without that, cross-vendor reliability will keep stalling.

They go further: whoever lands a pragmatic semantic layer first gets winner-takes-most network effects. Define the dictionary and you steer the value flow. This isn’t just a feature. It’s a control point.

Bain frames the stack clearly:

- Systems of record (data, rules, compliance)
- Agent operating systems (orchestration, planning, memory)
- Outcome interfaces (natural language requests, user-facing actions)

The bottleneck is semantics. And there’s a pricing twist: if agents do the work, semantics define what “done” means. That unlocks outcome-based pricing, charging for tasks completed or value delivered, not log-ons.

Bain is blunt: the open, any-to-any agent utopia will smash against vendor incentives, messy data, IP, and security. Translation: walled gardens lead first. Start where governance is clear and data is good enough, then use that traction to shape the semantics others will later adopt.

This is where I’m seeing convergence. In practice, a knowledge graph can provide that shared meaning: identity, relationships, and policy. One workable pattern, sketched below: the agent plans with an LLM, resolves entities and checks rules in the graph, then acts through typed APIs, writing back as events the graph can audit. That’s the missing vocabulary and the enforcement that protocols alone can’t cover.

Tony Seale puts it well: “Neural and symbolic systems are not rivals; they are complements… a knowledge graph provides the symbolic backbone… to ground AI in shared semantics and enforce consistency.”

To me, this is optimistic, because it moves the conversation from “make the model smarter” to “make the system understandable.” Agents don’t need perfection if they are predictable, composable, and auditable. Semantics deliver that. It’s also how smaller players compete with hyperscalers: you don’t need to win the model race to win the meaning race. With semantics, agents become infrastructure.

The next few years won’t be won by who builds the biggest model. It’ll be won by who defines the smallest shared meaning.
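For the "workable pattern" above, a minimal sketch, assuming a Neo4j graph holds entity identities and policy rules; every name in it (the Entity/Policy/Event schema, typed_api_call, the step format an LLM planner would emit) is a hypothetical stand-in, not Bain's or any vendor's actual interface.

```python
# Sketch of the plan -> ground -> act -> audit loop. The graph schema and
# all function names are illustrative assumptions.
from datetime import datetime, timezone
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def resolve_entity(mention: str):
    """Ground a free-text mention to a canonical identity in the graph."""
    with driver.session() as session:
        record = session.run(
            "MATCH (e:Entity) WHERE e.name = $m OR $m IN e.aliases "
            "RETURN e.id AS id LIMIT 1",
            m=mention,
        ).single()
        return record["id"] if record else None

def policy_allows(entity_id: str, action: str) -> bool:
    """Check the graph's policy rules before acting."""
    with driver.session() as session:
        record = session.run(
            "MATCH (e:Entity {id: $id})-[:GOVERNED_BY]->(p:Policy) "
            "WHERE $action IN p.forbidden_actions "
            "RETURN count(p) AS blockers",
            id=entity_id, action=action,
        ).single()
        return record["blockers"] == 0

def typed_api_call(action: str, entity_id: str, args: dict) -> dict:
    """Stand-in for a typed API; a real system would dispatch to a
    schema-validated endpoint per action."""
    return {"status": "ok", "action": action, "entity": entity_id, **args}

def write_audit_event(entity_id: str, action: str, result: dict) -> None:
    """Write the action back as an event the graph can audit."""
    with driver.session() as session:
        session.run(
            "MATCH (e:Entity {id: $id}) "
            "CREATE (e)-[:HAS_EVENT]->(:Event "
            "{action: $action, result: $result, at: $at})",
            id=entity_id, action=action, result=str(result),
            at=datetime.now(timezone.utc).isoformat(),
        )

def run_step(step: dict) -> dict:
    """One agent step. `step` would come from an LLM planner (omitted),
    e.g. {"entity": "Acme invoice 1042", "action": "approve", "args": {}}."""
    entity_id = resolve_entity(step["entity"])
    if entity_id is None:
        return {"status": "unresolved", "entity": step["entity"]}
    if not policy_allows(entity_id, step["action"]):
        return {"status": "blocked_by_policy"}
    result = typed_api_call(step["action"], entity_id, step["args"])
    write_audit_event(entity_id, step["action"], result)
    return result
```

The division of labor is the point: the graph does the grounding and the enforcement, the typed API does the acting, and the event write-back leaves the audit trail that protocols alone don't provide.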
·linkedin.com·