Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents
Ontologies transcend their traditional role as static schema documentation and emerge as dynamic, executable metadata that actively controls and defines the capabilities of AI agents. They aren't just describing the data; they're storing the instructions agents use to operate on that data.

Traditional software architectures separate code from data, with logic hardcoded in application layers while data resides in storage layers. The ontology-based approach fundamentally challenges this separation by storing behavioral rules and tool definitions as graph data that agents actively query during execution. Ontologies in these systems operate as runtime-queryable metadata rather than compile-time specifications. This is meta-programming at the database level, and the technical implications are profound.

Traditional approach: your agent has hardcoded tools. Each tool is a Python function that knows exactly what query to run, which entity types to expect, and how to navigate relationships.

Ontology-as-meta-tool approach: your agent has THREE generic tools that query the ontology at runtime to figure out how to operate. Here's the technical breakdown:

Tool 1 does semantic search and returns mixed entity types (could be Artist nodes, Subject nodes, whatever matches the vector similarity).

Tool 2 queries the ontology: "For this entity type, what property serves as the unique identifier?" The ontology responds because properties are marked with "inverseFunctional" annotations. Now the agent knows how to retrieve specific instances.

Tool 3 queries the ontology again: "Which relationships from this entity type are marked as contextualizing?" The ontology returns relationship types, and the agent constructs a dynamic Cypher query using those relationship types as parameters.

The breakthrough: the same three tools work for ANY domain. Swap the art gallery ontology for a medical ontology, and the agent adapts instantly because it's reading navigation rules from the graph, not from code.

This is self-referential architecture. The system queries its own structure to determine its own behavior. The ontology becomes executable metadata: not documentation about the system, but instructions that drive the system.

The technical pattern:
- Store tool definitions as (:Tool) nodes with Cypher implementations as properties
- Mark relationships with custom annotations (contextualizing: true/false)
- Mark properties with OWL annotations (inverseFunctional for identifiers)
- The agent queries these annotations at runtime to construct dynamic queries

Result: you move from procedural logic (IF entity_type == "Artist" THEN...) to declarative logic (query the ontology to learn the rules). The system can now analyze its own schema, identify missing capabilities, and propose new tool definitions. It's not just configurable; it's introspective.

What technical patterns have you found for making agent capabilities declarative rather than hardcoded?
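A minimal sketch of what those three generic tools might look like, assuming a Neo4j graph in which the ontology itself is stored as (:Class), (:Property), and (:Relationship) nodes carrying the annotations the post describes (inverseFunctional, contextualizing). The node labels, property names, and full-text index name are illustrative assumptions, not taken from the post.

# Sketch: three generic tools that read navigation rules from the ontology at runtime.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def find_entities(search_text: str):
    """Tool 1: semantic search across all node types (stand-in: a full-text index)."""
    with driver.session() as s:
        return s.run(
            "CALL db.index.fulltext.queryNodes('entityIndex', $q) "
            "YIELD node RETURN labels(node) AS labels, node LIMIT 10",
            q=search_text,
        ).data()

def identifier_property(entity_type: str):
    """Tool 2: ask the ontology which property uniquely identifies this entity type."""
    with driver.session() as s:
        rec = s.run(
            "MATCH (:Class {name: $t})-[:HAS_PROPERTY]->(p:Property) "
            "WHERE p.inverseFunctional = true RETURN p.name AS prop",
            t=entity_type,
        ).single()
        return rec["prop"] if rec else None

def contextualizing_relationships(entity_type: str):
    """Tool 3 (part 1): ask the ontology which relationships are marked as contextualizing."""
    with driver.session() as s:
        return [r["rel"] for r in s.run(
            "MATCH (:Class {name: $t})-[:HAS_RELATIONSHIP]->(r:Relationship) "
            "WHERE r.contextualizing = true RETURN r.name AS rel",
            t=entity_type,
        )]

def expand_context(entity_type: str, key_value: str):
    """Tool 3 (part 2): build a dynamic Cypher query from whatever the ontology returned."""
    prop = identifier_property(entity_type)
    rels = contextualizing_relationships(entity_type)
    pattern = "|".join(rels) or "RELATED_TO"
    query = (
        f"MATCH (e:{entity_type} {{{prop}: $v}})-[r:{pattern}]-(ctx) "
        "RETURN type(r) AS rel, ctx"
    )
    with driver.session() as s:
        return s.run(query, v=key_value).data()

The point of the sketch is that expand_context assembles its Cypher from whatever the ontology returns, so swapping the ontology changes the agent's behavior without touching the Python.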
Ā·linkedin.comĀ·
The Knowledge Graph Talent Shortage: Why Companies Can't Find the Skills They Desperately Need
The Knowledge Graph Talent Shortage: Why Companies Can't Find the Skills They Desperately Need

In my previous posts, I showed how Google's Knowledge Graph gives them a major AI advantage (https://lnkd.in/d5ZpMYut), and how enterprises from IKEA to Siemens to AstraZeneca have been using knowledge graphs and now leverage them for GenAI applications (https://lnkd.in/dPhuUhFJ). But here's the problem: we don't have enough people who know how to build them.

šŸ“Š The numbers tell the story. Job boards show thousands of open positions globally for ontology engineers, semantic web developers, and knowledge graph specialists. Yet these positions remain unfilled for months. Salaries for this expertise are rising, and technology vendors report inbound client calls instead of chasing business.

šŸ¤” Why the shortage? The semantic web emerged in the early 2000s with technologies like RDF, OWL, and SPARQL. A small group of pioneers built this expertise. I was part of that early wave: I contributed to the POSC Caesar Association oil and gas ontology, was certified as an ontology modeller, and participated in the W3C workshop hosted by Chevron in Houston in 2008. Later I led the Integrated Operations in the High North (IOHN) program with 23 companies like ABB, Siemens, and Cisco to increase semantic web knowledge within Equinor's vendor ecosystem. After IOHN, I stepped away for over a decade. The Knowledge Graph Alliance (KGA) drew me back.

Companies need people who can design ontologies, write SPARQL queries, map enterprise data to semantic standards, and integrate knowledge graphs with LLMs. These aren't skills you pick up in a weekend bootcamp.

šŸ”„ What needs to change? Universities must integrate semantic knowledge graphs into core curriculum alongside AI and machine learning as requirements, not electives. Here's something many don't realize: philosophy matters. Some of the best ontologists have philosophy degrees. Understanding how to represent knowledge requires training in logic and formal reasoning. DAMA InternationalĀ®'s Data Management Body of Knowledge covers 11 knowledge areas, but knowledge graphs remain absent; adding them would legitimize the discipline. Industry-academia bridges are critical. Organizations like the KGA bring together industry leaders with research organizations and academia. We need more such collaborations.

šŸ’” The opportunity: If you're a data engineer or data scientist looking for a career differentiator, semantic web skills are your ticket.

šŸŽÆ The bottom line: Knowledge graphs aren't optional for industrial-scale GenAI. But you need the people who understand them. While reports document tech talent shortages, the semantic web skills gap remains largely undocumented as companies struggle to fill thousands of positions.

What's your experience with the shortage? Are you hiring? Upskilling? Teaching this?

#KnowledgeGraphs #SemanticWeb #AI #GenAI #TalentShortage #SkillsGap #Ontology #DataScience #Philosophy #DigitalTransformation
Ā·linkedin.comĀ·
ā€œShorting Ontologyā€ — Why Michael Burry Might Not Be Wrong | LinkedIn
ā€œThe idea that chips and ontology is what you want to short is batsh*t crazy.ā€ — Alex Karp, CNBC, November 2025 When Palantir’s CEO, Alex Karp, lashed out at Michael Burry — ā€œBig Shortā€ investor who bet against Palantir and Nvidia — he wasn’t just defending his balance sheet.
Ā·linkedin.comĀ·
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value.

The reports on agents are starting to sound samey: go vertical not horizontal; redesign workflows end-to-end; clean your data; stop doing pilots that automate inefficiencies; price for outcomes when the agent does the work. All true. All necessary. All needing repetition ad nauseam.

So it's refreshing to see a switch-up in Bain's Technology Report 2025: the real leverage now sits with semantics, a shared layer of meaning.

Bain notes that protocols are maturing. MCP and A2A let agents pass tool calls, tokens, and results between layers. Useful plumbing. But there's still no shared vocabulary that says what an invoice, policy, or work order is, how it moves through states, and how it maps to APIs, tables, and approvals. Without that, cross-vendor reliability will keep stalling.

They go further: whoever lands a pragmatic semantic layer first gets winner-takes-most network effects. Define the dictionary and you steer the value flow. This isn't just a feature. It's a control point.

Bain frames the stack clearly:
- Systems of record (data, rules, compliance)
- Agent operating systems (orchestration, planning, memory)
- Outcome interfaces (natural language requests, user-facing actions)
The bottleneck is semantics.

And there's a pricing twist. If agents do the work, semantics define what "done" means. That unlocks outcome-based pricing: charging for tasks completed or value delivered, not log-ons.

Bain is blunt: the open, any-to-any agent utopia will smash against vendor incentives, messy data, IP, and security. Translation: walled gardens lead first. Start where governance is clear and data is good enough, then use that traction to shape the semantics others will later adopt.

This is where I'm seeing convergence. In practice, a knowledge graph can provide that shared meaning: identity, relationships, and policy. One workable pattern: the agent plans with an LLM, resolves entities and checks rules in the graph, then acts through typed APIs, writing the results back as events the graph can audit. That's the missing vocabulary, and the enforcement, that protocols alone can't cover.

Tony Seale puts it well: "Neural and symbolic systems are not rivals; they are complements… a knowledge graph provides the symbolic backbone… to ground AI in shared semantics and enforce consistency."

To me, this is optimistic, because it moves the conversation from "make the model smarter" to "make the system understandable." Agents don't need perfection if they are predictable, composable, and auditable. Semantics deliver that. It's also how smaller players compete with hyperscalers: you don't need to win the model race to win the meaning race. With semantics, agents become infrastructure.

The next few years won't be won by whoever builds the biggest model. It'll be won by whoever defines the smallest shared meaning.
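To make that "workable pattern" concrete, here is a tiny self-contained sketch of the loop: plan (the LLM step is stubbed out), resolve the entity against a knowledge graph, check policy there, act through a typed call, and write an auditable event back. The in-memory graph, the invoice example, and every function name are invented for illustration, not a real API.

# Sketch: LLM plans, graph grounds and authorizes, typed API acts, graph audits.
from dataclasses import dataclass, field

@dataclass
class Graph:
    entities: dict = field(default_factory=lambda: {"INV-001": {"type": "Invoice", "state": "pending"}})
    policies: dict = field(default_factory=lambda: {"approve_invoice": lambda e: e["state"] == "pending"})
    events: list = field(default_factory=list)

    def resolve(self, mention: str) -> str:
        # Entity resolution: map a free-text mention to a canonical node id.
        return next(k for k in self.entities if mention.lower() in k.lower())

    def allowed(self, verb: str, entity_id: str) -> bool:
        # Policy lives in the graph, so "allowed" (and "done") have shared definitions.
        return self.policies[verb](self.entities[entity_id])

    def record(self, verb: str, entity_id: str, result: dict) -> None:
        # Audit trail the graph can query later.
        self.events.append({"verb": verb, "entity": entity_id, "result": result})

def plan(request: str) -> tuple[str, str]:
    # Placeholder for the LLM planning step: returns (verb, entity mention).
    return "approve_invoice", "inv-001"

def approve_invoice(graph: Graph, entity_id: str) -> dict:
    # Typed API: acts only on canonical ids, never on raw text.
    graph.entities[entity_id]["state"] = "approved"
    return {"status": "done", "entity": entity_id}

def handle(graph: Graph, request: str) -> dict:
    verb, mention = plan(request)
    entity_id = graph.resolve(mention)
    if not graph.allowed(verb, entity_id):
        return {"status": "rejected", "reason": "policy"}
    result = approve_invoice(graph, entity_id)
    graph.record(verb, entity_id, result)
    return result

print(handle(Graph(), "Please approve invoice INV-001"))

The ordering is the point: the LLM proposes, but the graph decides what the action refers to and whether it is allowed, and the graph keeps the record of what "done" meant.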
Ā·linkedin.comĀ·
The rise of Context Engineering
The field is evolving from Prompt Engineering, which treats context as a single, static string, to Contextual Engineering, which views context as a dynamic system of structured components (instructions, tools, memory, knowledge) orchestrated to solve complex tasks.

šŸ”Ž Nearly all innovation is a response to the primary limitation of Transformer models: the quadratic (O(n²)) computational cost of the self-attention mechanism as the context length (n) increases. The techniques for managing this challenge can be organized into three areas:

1. Context Generation & Retrieval (Sourcing Ingredients)
Advanced Reasoning: Chain-of-Thought (CoT), Tree-of-Thoughts (ToT).
External Knowledge: Advanced Retrieval-Augmented Generation (RAG) like GraphRAG, which uses knowledge graphs for more structured retrieval.

2. Context Processing (Cooking the Ingredients)
Refinement: Using the LLM to iterate on and improve its own output (Self-Refine).
Architectural Changes: Exploring models beyond Transformers (e.g., Mamba) to escape the quadratic bottleneck.

3. Context Management (The Pantry System)
Memory: Creating stateful interactions using hierarchical memory systems (e.g., MemGPT) that manage information between the active context window and external storage.
Key Distinction: RAG is stateless I/O to the world; Memory is the agent's stateful internal history.

The most advanced applications integrate these pillars to create sophisticated agents, with an added layer of dynamic adaptation:

Tool-Integrated Reasoning: Empowering LLMs to use external tools (APIs, databases, code interpreters) to interact with the real world.
Multi-Agent Systems: Designing "organizations" of specialized LLM agents that communicate and collaborate to solve multi-faceted problems, mirroring the structure of human teams.
Adaptive Context Optimization: Leveraging Reinforcement Learning (RL) to dynamically optimize context selection and construction for specific environments and tasks, ensuring efficient and effective performance.

Contextual Engineering is the emerging science of building robust, scalable, and stateful applications by systematically managing the flow of information to and from an LLM.
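As a toy illustration of the Context Management pillar, here is a MemGPT-style hierarchy in miniature: the active window holds only recent turns, older turns are paged out to external storage, and a query pulls relevant memories back in before the prompt is assembled. The word-count token budget and keyword-overlap recall are deliberate simplifications invented for this sketch, not how MemGPT itself is implemented.

# Sketch: hierarchical memory that pages turns between the context window and storage.
from collections import deque

class HierarchicalMemory:
    def __init__(self, window_budget: int = 200):
        self.window = deque()          # active context (what actually goes to the LLM)
        self.archive = []              # external storage (unbounded)
        self.window_budget = window_budget

    def _tokens(self, text: str) -> int:
        return len(text.split())       # crude stand-in for a real tokenizer

    def add(self, turn: str) -> None:
        self.window.append(turn)
        # Evict the oldest turns to the archive when the window exceeds its budget.
        while sum(self._tokens(t) for t in self.window) > self.window_budget:
            self.archive.append(self.window.popleft())

    def recall(self, query: str, k: int = 3) -> list:
        # Stateful retrieval from the agent's own history, scored by word overlap.
        words = set(query.lower().split())
        scored = sorted(self.archive, key=lambda t: -len(words & set(t.lower().split())))
        return scored[:k]

    def build_context(self, query: str) -> str:
        # The prompt the LLM sees = recalled memories + the active window + the new query.
        return "\n".join(self.recall(query) + list(self.window) + [query])

mem = HierarchicalMemory(window_budget=20)
for i in range(10):
    mem.add(f"turn {i}: user discussed project alpha milestone {i}")
print(mem.build_context("what was milestone 3 about?"))

This also makes the RAG-versus-memory distinction tangible: the archive here is the agent's own accumulated history, not a stateless lookup against external documents.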
Ā·linkedin.comĀ·
GoAI: Enhancing AI Students' Learning Paths and Idea Generation via Graph of AI Ideas
šŸ’” Graph of Ideas -- LLMs paired with knowledge graphs can be great partners for ideation, exploration, and research.

We've all seen the classic detective corkboard, with pinned notes and pictures, all strung together with red twine. šŸ•µļø The digital version could be a mind-map, but you still have to draw everything by hand. What if you could just build one from a giant pile of documents?

Enter GoAI - a fascinating approach that just dropped on arXiv combining knowledge graphs with LLMs for AI research idea generation. While the paper focuses on a graph of research papers, the approach is generalizable. Here's what caught my attention:

šŸ”— It builds knowledge graphs from AI papers where nodes are papers/concepts and edges capture semantic citation relationships - basically mapping how ideas actually connect and build on each other

šŸŽÆ The "Idea Studio" feature gives you feedback on the innovation, clarity, and feasibility of your research ideas - like having a research mentor in your pocket

šŸ“ˆ Experiments show it helps produce clearer, more novel, and more impactful research ideas compared to traditional LLM approaches

The key insight? Current LLMs miss the semantic structure and prerequisite relationships in academic knowledge. This framework bridges that gap by making the connections explicit.

As AI research accelerates, this approach can be used in any situation where you're looking for what's missing, rather than answering a question about what exists.

Read all the details in the paper... https://lnkd.in/ekGtCx9T
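As a rough sketch of the general idea (not the GoAI implementation): treat papers as nodes, label citation edges with a semantic relation rather than a bare "cites", then query the graph for structure an LLM alone tends to miss, such as a prerequisite chain for a learning path or concept pairs that are never connected. The example papers, relation names, and the "what's missing" heuristic below are invented for illustration.

# Sketch: a tiny graph of ideas queried for learning paths and missing connections.
import itertools
import networkx as nx

g = nx.DiGraph()
papers = {
    "Transformer": {"attention"},
    "BERT": {"attention", "pretraining"},
    "KG-BERT": {"knowledge graphs", "pretraining"},
    "RAG": {"retrieval", "pretraining"},
    "GraphRAG": {"retrieval", "knowledge graphs"},
}
for name, concepts in papers.items():
    g.add_node(name, concepts=concepts)

# Semantic citation relations: not just "cites", but how one idea builds on another.
g.add_edge("BERT", "Transformer", relation="extends")
g.add_edge("KG-BERT", "BERT", relation="extends")
g.add_edge("RAG", "BERT", relation="uses_as_component")
g.add_edge("GraphRAG", "RAG", relation="structures_retrieval_of")

# Learning path: walk the "builds on" edges back to prerequisites (postorder DFS).
print("Path to GraphRAG:", list(nx.dfs_postorder_nodes(g, "GraphRAG")))

# Crude gap heuristic: papers that share a concept but are not directly connected.
for a, b in itertools.combinations(g.nodes, 2):
    shared = g.nodes[a]["concepts"] & g.nodes[b]["concepts"]
    if shared and not (g.has_edge(a, b) or g.has_edge(b, a)):
        print(f"Related but unconnected: {a} <-> {b} via {shared}")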
Ā·linkedin.comĀ·