Found 2587 bookmarks
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
Inspired by the talented Jessica Talisman, here is a new infographic microsim that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph: https://lnkd.in/g66HRBhn You can include this interactive microsim in all of your semantics/ontology and agentic AI courses with just a single line of HTML.
a new infographic microsim that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
·linkedin.com·
An infographic that you can use when teaching the step-by-step progression from a simple list of business terms to a real-time agentic enterprise knowledge graph
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
For all the excitement around large language models, the latest research from Simona-Vasilica Oprea and Georgiana Stănescu (Electronics 14:1313, 2025) offers a reality check. Automatic ontology generation, even with novel prompting techniques like Memoryless CQ-by-CQ and Ontogenia, remains a partial
·linkedin.com·
Automatic Ontology Generation Still Falls Short & Why Applied Ontologists Deliver the ROI | LinkedIn
Ever heard of "knowledge engineering"?
Ever heard of "knowledge engineering"?
Ever heard of "knowledge engineering"? It’s what we called AI before AI was cool. I just pulled this out of the deep archives, Stanford University, 1980. Feigenbaum’s HPP report. The bones of modern context engineering were already there. ↳ What they did: ➤ Curated knowledge bases, not giant prompts ➤ Rule “evocation” to gate relevance ➤ Certainty factors to track confidence ➤ Shells + blackboards to orchestrate tools ➤ Traceable logic so humans could audit decisions ↳ What we do now: ➤ Trimmed RAG context instead of bloated prompts ➤ Retrieval + reranking + policy checks for gating ➤ Scores, evals, and guardrails to manage uncertainty ➤ Tool calling, MCPs, workflow engines for execution ➤ Logs + decision docs for explainability ↳ The through-line for UX: ➤Performance comes from shaping context, what to include, when to include it, and how to prove it worked. If you're building AI agents, you're standing on those shoulders. Start with context, not cleverness. Follow for human-centered AI + UX. Reshare if your team ships with context discipline. | 41 comments on LinkedIn
Ever heard of "knowledge engineering"?
·linkedin.com·
Ever heard of "knowledge engineering"?
Highlights of Mike Ferguson keynote at BDL - Ontology and Knowledge graph takeaways
Highlights of Mike Ferguson keynote at BDL - Ontology and Knowledge graph takeaways
Highlights of Mike Ferguson keynote at BDL: 1) AI agents are already everywhere so they need to be coordinated. 2) Structured/Unstructured data is converging. 3) Knowledge Graphs and Enterprise Ontology are the context for AI agents. 4) Data products to enable reuse and 5) governance must be active. Mike is not just a true veteran in the data space, but also someone who can see the big picture in times of hype. His keynotes are always eye-opening to see where the puck is heading. The following are the snippets from his slides which are packed with so much knowledge! 1) AI Agents - Challenges - There Has Been an Explosion of AI Agent Development Tools That is Leading to a Proliferation of AI Agents - A Federated Organisation Structure and Common AI Governance Are Needed to Coordinate Decentralised Development of AI Agents 2) Structured and Unstructured data. - Convergence of Structured Data and Analytics (data warehouse/lake) + Knowledge Management (content and record management systems) 3) Enterprise Ontology and Knowledge Graphs - Lots of Siloed Product Oriented Semantic Layers / Knowledge Graphs Are Now Emerging at All Levels to Provide Context to Solution AI Agents Before Using Tools to Get the Relevant Information. - Ideally Natural Language Queries Should Query an Enterprise Knowledge Graph First to Get Complete Context and Then Get the Relevant Information from Agents for Holistic Insights - Integrated metadata in an enterprise ontology to provide context for all AI will potentially become the most valuable capability of the AI era - AI Requirements - We Need to Create a Data Catalog & Vectorised Knowledge Graph to Know What Relevant Data Is Needed as Context to Answer Natural Language Queries - Issues With Providing Context for AI Over the Next Few Years - Chaos!! - We Need to Integrate Metadata to Provide Context for ALL AI Agents & the Amount of Metadata is Going to be BIG! - The BIG METADATA question: Will we use the same approach to create ontology subgraphs that can be connected to form an enterprise ontology and open it up to multiple Agents? - How Do You Enable Data for AI? - A New Metadata Layer Is Emerging In the Data and AI Stack Which Is Needed for the Agentic Enterprise! - The Enterprise Ontology Layer 4) Data Products - Foundational Data Products are Critical in Any Business and Can Be Reused to Create Others - Conversational Data Engineering Is Now Mainstream to Generate Pipelines to Produce Data Products for Each Entity Defined in the Business Glossary Within a Data Catalog 5) Active Governance - Data Governance Needs a Major Rethink - It Needs to be Active, Dynamic and Always On - The Agentic Unified Data Governance Platform - AI-Assisted Data Governance Services and AI-Agents Thanks Mike for sharing your knowledge!! What are your main takeaways? Also, he was a guest on Catalog & Cocktails Podcast so check that out (link in the comments). | 12 comments on LinkedIn
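A minimal sketch of one keynote takeaway: a natural-language question first pulls context from an enterprise knowledge graph, and only then is routed to an agent. The tiny in-memory "graph", the routing rule, and the agent stubs are hypothetical stand-ins, not anything from the keynote itself.

```python
# Ground the question in the enterprise KG first, then dispatch to the owning agent.

KG = {  # entity -> (related facts, owning data product / agent)
    "invoice": (["invoice is issued by billing", "invoice links to customer"], "finance_agent"),
    "customer": (["customer owns orders", "customer has a region"], "crm_agent"),
}

AGENTS = {
    "finance_agent": lambda q, ctx: f"[finance agent] {q} | context: {ctx}",
    "crm_agent": lambda q, ctx: f"[crm agent] {q} | context: {ctx}",
}

def answer(question: str) -> str:
    # 1) resolve the question against the enterprise ontology / knowledge graph
    hits = [(e, facts, owner) for e, (facts, owner) in KG.items() if e in question.lower()]
    if not hits:
        return "no grounding found in the enterprise ontology"
    entity, facts, owner = hits[0]
    # 2) only then hand off to the agent that owns the relevant data product
    return AGENTS[owner](question, facts)

print(answer("Which invoice totals changed last quarter?"))
```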
Highlights of Mike Ferguson keynote at BDL
·linkedin.com·
Highlights of Mike Ferguson keynote at BDL - Ontology and Knowledge graph takeaways
Some companies like Rippletide are getting agents to production using graphs as the orchestration layer and pushing LLMs to the edges
Some companies like Rippletide are getting agents to production using graphs as the orchestration layer and pushing LLMs to the edges
Most companies are building autonomous agents with LLMs at the center, making every decision. Here's the problem: each LLM call has a ~5% error rate. Chain 10 calls together and your reliability drops to 60%. ⁉️ Here's the math: Single LLM call = 95% accuracy. Chain 10 LLM calls for agentic workflow = 0.95^10 ≈ 60% reliability. This compounds exponentially with complexity. Enterprises can't ship that. Some companies like Rippletide (Yann Bilien) that are getting agents to production are doing something different. They're using graphs as the orchestration layer and pushing LLMs to the edges. The architectural solution is about removing LLMs from the orchestration loop entirely and using hypergraph-based reasoning substrates instead. Why hypergraphs specifically? Regular graphs connect two nodes per edge. Hyperedges connect multiple nodes simultaneously - critical for representing complex state transitions. A single sales conversation turn involves speaker, utterance, topic, customer state, sentiment, outcome, and timestamp. A hyperedge captures all these relationships atomically in the reasoning structure. The neurosymbolic integration is what makes this production-grade: Symbolic layer = business rules, ontologies, deterministic patterns. These are hard constraints that prevent policy violations (discount limits, required info collection, compliance rules). Neural layer = RL components that learn edge weights, validate patterns, update confidences. Operates within symbolic constraints. Together they enable the "crystallization mechanism" - patterns start probabilistic, validate through repeated success, then lock into deterministic rules at 95%+ confidence. The system becomes non-regressive: it learns and improves but validated patterns never degrade. Here's what this solves that LLM orchestration can't: Hallucinations with confidence - eliminated because reasoning follows deterministic graph traversal through verified data, not generative token prediction. Goal drift - impossible because goal hierarchies are encoded in graph topology and enforced mathematically by traversal algorithms. Data leakage across contexts - prevented through graph partitioning and structural access controls, not prompt instructions. Ignoring instructions - doesn't happen because business rules are executable constraints, not natural language hopes. The LLM's role reduces to exactly two functions: (1) helping structure ontologies during build phase, (2) optionally formatting final outputs to natural language. Zero involvement in decision-making or orchestration. Rippletide's architecture demonstrates this at scale: a hypergraph stores unified memory + reasoning (no RAG, no retrieval bottleneck); reasoning engines execute graph traversal algorithms for decisions; weighted edges encode relationship strength, recency, confidence, importance; temporal/spatial/causal relationships are explicit in the structure (what LLMs fundamentally lack). | 27 comments on LinkedIn
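Two small illustrations of claims in the post: the error-compounding arithmetic, worked out, and a toy hyperedge plus a hard business rule. Neither is Rippletide's actual code; the data structures and the discount policy are made up.

```python
# 1) Error compounding: chaining N independent LLM calls at 95% per-call accuracy.
per_call = 0.95
for n in (1, 5, 10, 20):
    print(n, "calls ->", round(per_call ** n, 3))   # 10 calls -> 0.599 (~60%)

# 2) A hyperedge connects many nodes at once, so one conversation turn is stored
#    as a single atomic relationship rather than several pairwise edges.
turn_hyperedge = {
    "nodes": {
        "speaker": "customer",
        "utterance": "Can you do 20% off?",
        "topic": "discount",
        "customer_state": "negotiating",
        "sentiment": "neutral",
        "outcome": "pending",
        "timestamp": "2025-01-15T10:32:00Z",
    },
    "weights": {"confidence": 0.8, "recency": 1.0},
}

# 3) A symbolic constraint enforced as code, not as a prompt instruction.
MAX_DISCOUNT = 0.15
requested = 0.20
if requested > MAX_DISCOUNT:
    print("blocked by symbolic rule: discount above policy limit")
```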
Some companies like Rippletide (Yann Bilien) that are getting agents to production are doing something different. They're using graphs as the orchestration layer and pushing LLMs to the edges.
·linkedin.com·
Some companies like Rippletide are getting agents to production using graphs as the orchestration layer and pushing LLMs to the edges
The document-to-knowledge-graph pipeline is fundamentally broken
The document-to-knowledge-graph pipeline is fundamentally broken
The market is obsessed with the sexy stuff: autonomous agents, reasoning engines, sophisticated orchestration. Meanwhile, the unsexy foundation layer is completely broken. ⭕ And that foundation layer? It's the only thing that determines whether your agent actually works. Here's the technical problem that kills agentic AI reliability, and that a great company like Lettria solves: The document-to-knowledge-graph pipeline is fundamentally broken: Layer 1: Document Parsing Hell You can't feed a 400-page PDF with mixed layouts into a vision-language model and expect consistent structure. Here's why: Reading order detection fails on multi-column layouts, nested tables, and floating elements Vision LLMs hallucinate cell boundaries on complex tables (financial statements, technical specs) You need bbox-level segmentation with preserved coordinate metadata for traceability Traditional CV models (Doctr, Detectron2, YOLO) outperform transformers on layout detection and run on CPU Optimal approach requires model routing: PDF Plumber for text extraction, specialized table parsers for structured data, VLMs only as fallback Without preserving document_id → page_num → bbox_coords → chunk_id mapping, you lose provenance permanently Layer 2: Ontology Generation Collapse RDF/OWL ontology creation isn't prompt engineering. It's semantic modeling: You need 5-6 levels of hierarchical abstraction (not flat entity lists) Object properties require explicit domain/range specifications (rdfs:domain, rdfs:range) Data properties need typed constraints (xsd:string, xsd:integer, xsd:date) Relationships must follow semantic web standards (owl:ObjectProperty, owl:DatatypeProperty) An LLM might output syntactically valid Turtle that violates semantic consistency Proper approach: 8-9 specialized LLM calls with constraint validation, reasoner checks, and ontologist-in-the-loop verification Without this, your knowledge graph has edges connecting semantically incompatible nodes Layer 3: Text-to-RDF Extraction Failure Converting natural language to structured triples while maintaining schema compliance is where frontier models crater: GPT-4/Claude achieve ~60-70% F1 on entity extraction, ~50-60% on relation extraction (measured on Text2KGBench) They hallucinate entities not in your ontology They create relations violating domain/range constraints Context window limitations force truncation (32K tokens = ~10-15 pages with full ontology) A specialized 600M parameter model fine-tuned on 14K annotated triples across 19 domain ontologies hits 85%+ F1 Why? Task-specific loss functions, schema-aware training, constrained decoding The compounding effect destroys reliability Your agent's reasoning is irrelevant when it's operating on a knowledge graph where 73% of nodes/edges are wrong, incomplete, or unverifiable. Without bidirectional traceability (SPARQL query → triple → chunk_id → bbox → source PDF), you can't deploy in regulated environments. Period. | 13 comments on LinkedIn
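A minimal sketch of two points in the post: preserving bbox-level provenance for every chunk so an extracted triple can be traced back to its source PDF region, and rejecting triples whose subject/object types violate domain/range constraints. The field names and the mini-ontology are illustrative, not Lettria's schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Chunk:
    document_id: str
    page_num: int
    bbox: tuple            # (x0, y0, x1, y1) in page coordinates
    chunk_id: str
    text: str

# Domain/range constraints, in the spirit of rdfs:domain / rdfs:range declarations.
PROPERTIES = {"hasSupplier": {"domain": "Product", "range": "Company"}}
ENTITY_TYPES = {"Battery": "Product", "Acme Corp": "Company", "Berlin": "Location"}

def valid_triple(s: str, p: str, o: str) -> bool:
    spec = PROPERTIES.get(p)
    if spec is None:
        return False
    return ENTITY_TYPES.get(s) == spec["domain"] and ENTITY_TYPES.get(o) == spec["range"]

chunk = Chunk("doc-001", 12, (56.0, 120.0, 540.0, 180.0), "doc-001-p12-c03",
              "The battery pack is supplied by Acme Corp.")
print(valid_triple("Battery", "hasSupplier", "Acme Corp"),
      "provenance:", chunk.document_id, chunk.page_num, chunk.bbox, chunk.chunk_id)
# ("Battery", "hasSupplier", "Berlin") would be rejected: range violation
```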
The document-to-knowledge-graph pipeline is fundamentally broken
·linkedin.com·
The document-to-knowledge-graph pipeline is fundamentally broken
Building Intelligent AI Memory Systems with Cognee: A Python Development Knowledge Graph
Building Intelligent AI Memory Systems with Cognee: A Python Development Knowledge Graph
Building AI agents that can synthesize scattered knowledge like expert developers 🧠 I have a tutorial about building intelligent AI memory systems with Cognee in my 'Agents Towards Production' repo that solves a critical problem - developers navigate between documentation, community practices, and personal experience, but traditional approaches treat these as isolated resources. This tutorial shows how to build a unified knowledge graph that connects Python's design philosophy, real-world implementations from its creator, and your specific development patterns. The tutorial covers 3 key capabilities: - Knowledge Graph Construction: Building interconnected networks from Guido van Rossum's actual commits, PEP guidelines, and personal conversations - Temporal Analysis: Understanding how solutions evolved over time with time-aware queries - Dynamic Memory Layer: Inferring implicit rules and discovering non-obvious connections across knowledge domains The cross-domain discovery is particularly impressive - it connects your validation issues from January 2024 with Guido van Rossum's actual solutions from mypy and CPython. Rather than keyword matching, it understands semantic relationships between your type hinting challenges and historical solutions, even when terminology differs. Tech stack: - Cognee for knowledge graph construction - OpenAI GPT-4o-mini for entity extraction - Graph algorithms for pattern recognition - Vector embeddings for semantic search The system uses semantic graph traversal with deep relationship understanding for contextually aware responses. Includes working Python code, complete Jupyter notebook with interactive visualizations, and production-ready patterns. Part of the collection of practical guides for building production-ready AI systems. Direct link to the tutorial: https://lnkd.in/eSsjwbuh Ever wish you could query all your development knowledge as one unified intelligent system? ♻️ Repost to let your network learn about this too!
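A minimal sketch of the add → cognify → search flow that Cognee's documentation describes. Argument forms vary across cognee versions, so treat the call shapes below as assumptions; the linked tutorial is the authoritative reference.

```python
import asyncio
import cognee  # pip install cognee; requires an LLM API key configured for entity extraction

async def main():
    # 1) load heterogeneous sources into the memory layer
    await cognee.add("PEP 484 introduced optional type hints for Python.")
    await cognee.add("Our January 2024 incident was caused by a missing Optional annotation.")

    # 2) build the knowledge graph (entity extraction, linking, embeddings)
    await cognee.cognify()

    # 3) query across domains; answers draw on graph relationships, not keyword overlap
    results = await cognee.search("How do our validation issues relate to Python typing guidance?")
    for r in results:
        print(r)

asyncio.run(main())
```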
·linkedin.com·
Building Intelligent AI Memory Systems with Cognee: A Python Development Knowledge Graph
Algorithmic vs. Symbolic Reasoning: Is Graph Data Science a critical, transformative layer for GraphRAG?
Algorithmic vs. Symbolic Reasoning: Is Graph Data Science a critical, transformative layer for GraphRAG?
Is Graph Data Science a critical, transformative layer for GraphRAG? The field of enterprise Artificial Intelligence (AI) is undergoing a significant architectural evolution. The initial enthusiasm for Large Language Models (LLMs) has matured into a pragmatic recognition of their limitations, partic
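A small illustration, under stated assumptions, of what "graph data science as a layer for GraphRAG" can look like: precompute centrality and community structure with standard graph algorithms and attach the scores to nodes so a retriever can prefer structurally important neighborhoods. Toy graph; not tied to any specific product.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph()
g.add_edges_from([
    ("Apple", "Lithium battery"), ("Lithium battery", "Cobalt"),
    ("Cobalt", "DRC mine"), ("Apple", "Display panel"), ("Display panel", "Glass supplier"),
])

pagerank = nx.pagerank(g)                              # global structural importance
communities = list(greedy_modularity_communities(g))   # clusters of related entities

# Attach scores so a GraphRAG retriever can rank candidate context by structure, not just similarity.
for node, score in pagerank.items():
    g.nodes[node]["pagerank"] = round(score, 3)

print(sorted(pagerank, key=pagerank.get, reverse=True)[:3])
print([sorted(c) for c in communities])
```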
·linkedin.com·
Algorithmic vs. Symbolic Reasoning: Is Graph Data Science a critical, transformative layer for GraphRAG?
Semantics in use part 5: an interview with Anikó Gerencsér, Team leader - Reference data team @ Publications Office of the European Union | LinkedIn
Semantics in use part 5: an interview with Anikó Gerencsér, Team leader - Reference data team @ Publications Office of the European Union | LinkedIn
What is your role? I am working in the Publications Office of the European Union as the team leader of the Reference data team. The Publications Office of the European Union is the official provider of publishing services to all EU institutions, bodies and agencies.
·linkedin.com·
Semantics in use part 5: an interview with Anikó Gerencsér, Team leader - Reference data team @ Publications Office of the European Union | LinkedIn
Recent Trends and Insights in Semantic Web and Ontology-Driven Knowledge Representation Across Disciplines Using Topic Modeling
Recent Trends and Insights in Semantic Web and Ontology-Driven Knowledge Representation Across Disciplines Using Topic Modeling
This research aims to investigate the roles of ontology and Semantic Web Technologies (SWT) in modern knowledge representation and data management. By analyzing a dataset of 10,037 academic articles from Web of Science (WoS) published in the last 6 years (2019–2024) across several fields, such as computer science, engineering, and telecommunications, our research identifies important trends in the use of ontologies and semantic frameworks. Through bibliometric and semantic analyses, Natural Language Processing (NLP), and topic modeling using Latent Dirichlet Allocation (LDA) and BERT-clustering approach, we map the evolution of semantic technologies, revealing core research themes such as ontology engineering, knowledge graphs, and linked data. Furthermore, we address existing research gaps, including challenges in the semantic web, dynamic ontology updates, and scalability in Big Data environments. By synthesizing insights from the literature, our research provides an overview of the current state of semantic web research and its prospects. With a 0.75 coherence score and perplexity = 48, the topic modeling analysis identifies three distinct thematic clusters: (1) Ontology-Driven Knowledge Representation and Intelligent Systems, which focuses on the use of ontologies for AI integration, machine interpretability, and structured knowledge representation; (2) Bioinformatics, Gene Expression and Biological Data Analysis, highlighting the role of ontologies and semantic frameworks in biomedical research, particularly in gene expression, protein interactions and biological network modeling; and (3) Advanced Bioinformatics, Systems Biology and Ethical-Legal Implications, addressing the intersection of biological data sciences with ethical, legal and regulatory challenges in emerging technologies. The clusters derived from BERT embeddings and clustering show thematic overlap with the LDA-derived topics but with some notable differences in emphasis and granularity. Our contributions extend beyond theoretical discussions, offering practical implications for enhancing data accessibility, semantic search, and automated knowledge discovery.
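A minimal sketch of the LDA-plus-coherence workflow the abstract describes, using gensim. The toy corpus stands in for the 10,037 WoS abstracts; scores on real data will differ from the paper's reported coherence of 0.75 and perplexity of 48.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

texts = [
    ["ontology", "knowledge", "graph", "semantic", "web"],
    ["gene", "expression", "bioinformatics", "protein", "network"],
    ["linked", "data", "sparql", "ontology", "engineering"],
    ["systems", "biology", "ethics", "regulation", "data"],
]

dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, passes=10, random_state=0)
coherence = CoherenceModel(model=lda, texts=texts, dictionary=dictionary, coherence="c_v")

print("topics:", lda.print_topics(num_words=4))
print("c_v coherence:", round(coherence.get_coherence(), 2))
```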
·mdpi.com·
Recent Trends and Insights in Semantic Web and Ontology-Driven Knowledge Representation Across Disciplines Using Topic Modeling
Generating Authentic Grounded Synthetic Maintenance Work Orders with Knowledge Graphs
Generating Authentic Grounded Synthetic Maintenance Work Orders with Knowledge Graphs
We are all beginning to appreciate how important #knowledgegraphs are to #RAG for robust #genAi apps. But did you know that KGs can make a significant improvement to #syntheticdata generation? Engineers need to generate synthetic technical data, particularly for industrial maintenance where real datasets (e.g. about failures) are often limited and unbalanced. This research offers a generic approach to extracting legitimate paths from a knowledge graph to ensure that synthetic maintenance/failure data generated are grounded in engineering knowledge while reflecting the style and language of the technicians who write the #maintenanceworkorders. Turing test experiments reveal that subject matter experts could distinguish real from synthetic data only 51% of the time while exhibiting near-zero agreement, indicating random guessing. Statistical hypothesis testing confirms the results from the Turing Test. Check out this paper which includes all code, data and documentation. https://lnkd.in/gmyiJKtj Huge congrats to our amazing students who did this work, Allison Lau and Jadeyn Feng, and to Caitlin Woods, Michael Stewart, Tony Seale, Vladimir Alexiev, Sarah Lukens, Tyler Bikaun, PhD, Mark Warrener, Piero Baraldi, Milenija Stojkovic Helgesen, Nils Martin Rugsveen, Chris McFarlane, Jean-Charles Leclerc, Adriano Polpo (de Campos) | 10 comments on LinkedIn
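A minimal sketch of the general idea: sample a legitimate path from a maintenance knowledge graph and use it to ground the prompt for generating a synthetic work order. The toy graph and prompt template are illustrative; the linked repository contains the paper's actual pipeline.

```python
import random
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("centrifugal pump", "mechanical seal", relation="has_part")
kg.add_edge("mechanical seal", "seal face wear", relation="has_failure_mode")
kg.add_edge("seal face wear", "replace seal", relation="has_corrective_action")

def sample_path(graph: nx.DiGraph, source: str, max_hops: int = 5):
    """Walk outgoing edges from an asset to collect a grounded (subject, relation, object) path."""
    path, node = [], source
    for _ in range(max_hops):
        edges = list(graph.out_edges(node, data="relation"))
        if not edges:
            break
        s, o, rel = random.choice(edges)
        path.append((s, rel, o))
        node = o
    return path

facts = sample_path(kg, "centrifugal pump")
prompt = ("Write a short maintenance work order in technician shorthand, "
          f"consistent with these facts: {facts}")
print(prompt)  # this grounded prompt would then be sent to an LLM
```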
·linkedin.com·
Generating Authentic Grounded Synthetic Maintenance Work Orders with Knowledge Graphs
Flexible-GraphRAG
Flexible-GraphRAG
𝗙𝗹𝗲𝘅𝗶𝗯𝗹𝗲 𝗚𝗿𝗮𝗽𝗵𝗥𝗔𝗚 𝗼𝗿 𝗥𝗔𝗚 is now flexing to the max using LlamaIndex, supports 𝟳 𝗴𝗿𝗮𝗽𝗵 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲𝘀, 𝟭𝟬 𝘃𝗲𝗰𝘁𝗼𝗿 𝗱𝗮𝘁𝗮𝗯𝗮𝘀𝗲𝘀, 𝟭𝟯 𝗱𝗮𝘁𝗮 𝘀𝗼𝘂𝗿𝗰𝗲𝘀, 𝗟𝗟𝗠𝘀, Docling 𝗱𝗼𝗰 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴, 𝗮𝘂𝘁𝗼 𝗰𝗿𝗲𝗮𝘁𝗲 𝗞𝗚𝘀, 𝗚𝗿𝗮𝗽𝗵𝗥𝗔𝗚, 𝗛𝘆𝗯𝗿𝗶𝗱 𝗦𝗲𝗮𝗿𝗰𝗵, 𝗔𝗜 𝗖𝗵𝗮𝘁 (shown Hyland products web page data src) 𝗔𝗽𝗮𝗰𝗵𝗲 𝟮.𝟬 𝗢𝗽𝗲𝗻 𝗦𝗼𝘂𝗿𝗰𝗲 𝗚𝗿𝗮𝗽𝗵: Neo4j ArcadeDB FalkorDB Kuzu NebulaGraph, powered by Vesoft (coming Memgraph and 𝗔𝗺𝗮𝘇𝗼𝗻 𝗡𝗲𝗽𝘁𝘂𝗻𝗲) 𝗩𝗲𝗰𝘁𝗼𝗿: Qdrant, Elastic, OpenSearch Project, Neo4j 𝘃𝗲𝗰𝘁𝗼𝗿, Milvus, created by Zilliz (coming Weaviate, Chroma, Pinecone, 𝗣𝗼𝘀𝘁𝗴𝗿𝗲𝗦𝗤𝗟 + 𝗽𝗴𝘃𝗲𝗰𝘁𝗼𝗿, LanceDB) Docling 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁 𝗽𝗿𝗼𝗰𝗲𝘀𝘀𝗶𝗻𝗴 𝗗𝗮𝘁𝗮 𝗦𝗼𝘂𝗿𝗰𝗲𝘀: using LlamaIndex readers: working: Web Pages, Wikipedia, Youtube, untested: Google Drive, Msft OneDrive, S3, Azure Blob, GCS, Box, SharePoint, previous: filesystem, Alfresco, CMIS. 𝗟𝗟𝗠𝘀: 𝗟𝗹𝗮𝗺𝗮𝗜𝗻𝗱𝗲𝘅 𝗟𝗟𝗠𝘀 (OpenAI, Ollama, Claude, Gemini, etc.) 𝗥𝗲𝗮𝗰𝘁, 𝗩𝘂𝗲, 𝗔𝗻𝗴𝘂𝗹𝗮𝗿 𝗨𝗜𝘀, 𝗠𝗖𝗣 𝘀𝗲𝗿𝘃𝗲𝗿, 𝗙𝗮𝘀𝘁𝗔𝗣𝗜 𝘀𝗲𝗿𝘃𝗲𝗿 𝗚𝗶𝘁𝗛𝘂𝗯 𝘀𝘁𝗲𝘃𝗲𝗿𝗲𝗶𝗻𝗲𝗿/𝗳𝗹𝗲𝘅𝗶𝗯𝗹𝗲-𝗴𝗿𝗮𝗽𝗵𝗿𝗮𝗴: https://lnkd.in/eUEeF2cN 𝗫.𝗰𝗼𝗺 𝗣𝗼𝘀𝘁 𝗼𝗻 𝗙𝗹𝗲𝘅𝗶𝗯𝗹𝗲 𝗚𝗿𝗮𝗽𝗵𝗥𝗔𝗚 𝗼𝗿 𝗥𝗔𝗚 𝗺𝗮𝘅 𝗳𝗹𝗲𝘅𝗶𝗻𝗴 https://lnkd.in/gHpTupAr 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗲𝗱 𝗦𝗲𝗺𝗮𝗻𝘁𝗶𝗰𝘀 𝗕𝗹𝗼𝗴: https://lnkd.in/ehpjTV7d
·linkedin.com·
Flexible-GraphRAG
Exploring Network-Knowledge Graph Duality: A Case Study in Agentic Supply Chain Risk Analysis
Exploring Network-Knowledge Graph Duality: A Case Study in Agentic Supply Chain Risk Analysis
Exploring Network-Knowledge Graph Duality: A Case Study in Agentic Supply Chain Risk Analysis ... What happens when you ask an AI about supply chain vulnerabilities and it misses the most critical dependencies? Most AI systems treat business relationships like isolated facts in a database. They might know Apple uses lithium batteries, but they miss the web of connections that create real risk. 👉 The Core Problem Standard AI retrieval treats every piece of information as a standalone point. But supply chain risk lives in the relationships between companies, products, and locations. When conflict minerals from the DRC affect smartphone production, it's not just about one supplier - it's about cascading effects through interconnected networks. Vector similarity search finds related documents but ignores the structural dependencies that matter most for risk assessment. 👉 A Different Approach New research from UC Berkeley and MSCI demonstrates how to solve this by treating supply chains as both networks and knowledge graphs simultaneously. The key insight: economic relationships like "Company A produces Product B" are both structural network links and semantic knowledge graph triples. This duality lets you use network science to find the most economically important paths. 👉 How It Works Instead of searching for similar text, the system: - Maps supply chains as networks with companies, products, and locations as nodes - Uses centrality measures to identify structurally important paths - Wraps quantitative data in descriptive language so AI can reason about what numbers actually mean - Retrieves specific relationship paths rather than generic similar content When asked about cobalt risks, it doesn't just find articles about cobalt. It traces the actual path from DRC mines through battery manufacturers to final products, revealing hidden dependencies. The system generates risk narratives that connect operational disruptions to financial impacts without requiring specialized training or expensive graph databases. This approach shows how understanding the structure of business relationships - not just their content - can make AI genuinely useful for complex domain problems.
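A minimal sketch of the network/knowledge-graph duality the post describes: each "Company produces Product" fact is both a triple and a network edge, so graph centrality can rank which dependency paths to verbalize for the LLM. Toy data only; the company and mine names are placeholders.

```python
import networkx as nx

triples = [
    ("DRC mine", "supplies", "Cobalt"),
    ("Cobalt", "used_in", "Battery cell"),
    ("Battery cell", "supplied_to", "Phone maker"),
    ("Bauxite mine", "supplies", "Aluminium"),
    ("Aluminium", "used_in", "Phone casing"),
    ("Phone casing", "supplied_to", "Phone maker"),
]

g = nx.DiGraph()
for s, rel, o in triples:
    g.add_edge(s, o, relation=rel)

# Structural importance: nodes that many dependency paths flow through.
centrality = nx.betweenness_centrality(g)

# Verbalize the upstream path so an LLM can reason over it in natural language.
path = nx.shortest_path(g, "DRC mine", "Phone maker")
narrative = " -> ".join(f"{u} {g[u][v]['relation']} {v}" for u, v in zip(path, path[1:]))

print("critical node ranking:", sorted(centrality, key=centrality.get, reverse=True)[:3])
print("risk path:", narrative)
```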
Exploring Network-Knowledge Graph Duality: A Case Study in Agentic Supply Chain Risk Analysis
·linkedin.com·
Exploring Network-Knowledge Graph Duality: A Case Study in Agentic Supply Chain Risk Analysis
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value.
Protocols move bits. Semantics move value. The reports on agents are starting to sound samey: go vertical not horizontal; redesign workflows end-to-end; clean your data; stop doing pilots that automate inefficiencies; price for outcomes when the agent does the work. All true. All necessary. All needing repetition ad nauseam. So it’s refreshing to see a switch-up in Bain’s Technology Report 2025: the real leverage now sits with semantics. A shared layer of meaning. Bain notes that protocols are maturing. MCP and A2A let agents pass tool calls, tokens, and results between layers. Useful plumbing. But there’s still no shared vocabulary that says what an invoice, policy, or work order is, how it moves through states, and how it maps to APIs, tables, and approvals. Without that, cross-vendor reliability will keep stalling. They go further: whoever lands a pragmatic semantic layer first gets winner-takes-most network effects. Define the dictionary and you steer the value flow. This isn’t just a feature. It’s a control point. Bain frames the stack clearly: - Systems of record (data, rules, compliance) - Agent operating systems (orchestration, planning, memory) - Outcome interfaces (natural language requests, user-facing actions) The bottleneck is semantics. And there’s a pricing twist. If agents do the work, semantics define what “done” means. That unlocks outcome-based pricing, charging for tasks completed or value delivered, not log-ons. Bain is blunt: the open, any-to-any agent utopia will smash against vendor incentives, messy data, IP, and security. Translation: walled gardens lead first. Start where governance is clear and data is good enough, then use that traction to shape the semantics others will later adopt. This is where I’m seeing convergence. In practice, a knowledge graph can provide that shared meaning, identity, relationships, and policy. One workable pattern: the agent plans with an LLM, resolves entities and checks rules in the graph, then acts through typed APIs, writing back as events the graph can audit. That’s the missing vocabulary and the enforcement that protocols alone can’t cover. Tony Seale puts it well: “Neural and symbolic systems are not rivals; they are complements… a knowledge graph provides the symbolic backbone… to ground AI in shared semantics and enforce consistency.” To me, this is optimistic, because it moves the conversation from “make the model smarter” to “make the system understandable.” Agents don’t need perfection if they are predictable, composable, and auditable. Semantics deliver that. It’s also how smaller players compete with hyperscalers: you don’t need to win the model race to win the meaning race. With semantics, agents become infrastructure. The next few years won’t be won by who builds the biggest model. It’ll be won by who defines the smallest shared meaning. | 27 comments on LinkedIn
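A minimal sketch of the loop quoted in the post: plan with an LLM, resolve entities and check rules against the knowledge graph, act through a typed API, and write back an auditable event. Every component here is a hypothetical stand-in, not a specific vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: str
    amount: float
    state: str   # shared semantics: what an "invoice" is and which states it may take

GRAPH_RULES = {"max_auto_approval": 10_000.0}   # policy the graph layer enforces
AUDIT_LOG = []                                   # events the graph can audit later

def plan_with_llm(request: str) -> dict:
    # stand-in for an LLM call that turns a natural-language request into a typed intent
    return {"action": "approve_invoice", "invoice_id": "INV-42"}

def act(request: str, store: dict) -> str:
    intent = plan_with_llm(request)
    invoice = store[intent["invoice_id"]]                      # entity resolution via the graph
    if invoice.amount > GRAPH_RULES["max_auto_approval"]:      # symbolic rule check
        outcome = "escalated_to_human"
    else:
        invoice.state = "approved"                             # typed API action
        outcome = "approved"
    AUDIT_LOG.append({"intent": intent, "outcome": outcome})   # write-back event
    return outcome

print(act("please approve invoice 42", {"INV-42": Invoice("INV-42", 8200.0, "submitted")}))
```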
Protocols move bits. Semantics move value.
·linkedin.com·
Protocols move bits. Semantics move value.
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
KG-R1, Why Knowledge Graph RAG Systems Are Too Expensive to Deploy (And How One Team Fixed It) ... What if I told you that most knowledge graph systems require multiple AI models just to answer a single question? That's exactly the problem plaguing current KG-RAG deployments. 👉 The Cost Problem Traditional knowledge graph retrieval systems use a pipeline approach: one model for planning, another for reasoning, a third for reviewing, and a fourth for responding. Each step burns through tokens and compute resources, making deployment prohibitively expensive for most organizations. Even worse? These systems are built for specific knowledge graphs. Change your data source, and you need to retrain everything. 👉 A Single-Agent Solution Researchers from MIT and IBM just published KG-R1, which replaces this entire multi-model pipeline with one lightweight agent that learns through reinforcement learning. Here's the clever part: instead of hardcoding domain-specific logic, the system uses four simple, universal operations: - Get relations from an entity - Get entities from a relation - Navigate forward or backward through connections These operations work on any knowledge graph without modification. 👉 The Results Are Striking Using just a 3B parameter model, KG-R1: - Matches accuracy of much larger foundation models - Uses 60% fewer tokens per query than existing methods - Transfers across different knowledge graphs without retraining - Processes queries in under 7 seconds on a single GPU The system learned to retrieve information strategically through multi-turn interactions, optimized end-to-end rather than stage-by-stage. This matters because knowledge graphs contain some of our most valuable structured data - from scientific databases to legal documents. Making them accessible and affordable could unlock entirely new applications.
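The four schema-agnostic operations the post attributes to KG-R1, sketched over a toy triple store. This illustrates the interface an RL-trained agent could call, not the paper's implementation.

```python
TRIPLES = [
    ("Marie Curie", "award_received", "Nobel Prize in Physics"),
    ("Marie Curie", "field_of_work", "Radioactivity"),
    ("Pierre Curie", "award_received", "Nobel Prize in Physics"),
]

def get_relations(entity: str) -> set:
    """Relations that touch an entity (incoming or outgoing)."""
    return {r for s, r, o in TRIPLES if entity in (s, o)}

def get_entities(relation: str) -> set:
    """Entities that participate in a given relation."""
    return {s for s, r, _ in TRIPLES if r == relation} | {o for _, r, o in TRIPLES if r == relation}

def step_forward(entity: str, relation: str) -> set:
    return {o for s, r, o in TRIPLES if s == entity and r == relation}

def step_backward(entity: str, relation: str) -> set:
    return {s for s, r, o in TRIPLES if o == entity and r == relation}

# e.g. "Who shares an award with Marie Curie?"
award = step_forward("Marie Curie", "award_received").pop()
print(step_backward(award, "award_received") - {"Marie Curie"})   # {'Pierre Curie'}
```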
https://arxiv.org/abs/2509.26383v1 Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
·linkedin.com·
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
Announcing the formation of a Data Façades W3C Community Group
Announcing the formation of a Data Façades W3C Community Group
I am excited to announce the formation of a Data Façades W3C Community Group. Façade-X, initially introduced at SEMANTICS 2021 and successfully implemented by the SPARQL Anything project, provides a simple yet powerful, homogeneous view over diverse and heterogeneous data sources (e.g., CSV, JSON, XML, and many others). With the recent v1.0.0 release of SPARQL Anything, the time was right to work on the long-term stability and widespread adoption of this approach by developing an open, vendor-neutral technology. The Façade-X concept was born to allow SPARQL users to query data in any structured format in plain SPARQL. Therefore, the choice of a W3C community group to lead efforts on specifications is just natural. Specifications will enhance its reliability, foster innovation, and encourage various vendors and projects — including graph database developers — to provide their own compatible implementations. The primary goals of the Data Façades Community Group are to: Define the core specification of the Façade-X method. Define Standard Mappings: Formalize the required mappings and profiles for connecting Façade-X to common data formats. Define the specification of the query dialect: Provide a reference for the SPARQL dialect, configuration conventions (like SERVICE IRIs), and the functions/magic properties used. Establish Governance: Create a monitored, robust process for adding support for new data formats. Foster Collaboration: Build connections with relevant W3C groups (e.g., RDF & SPARQL, Data Shapes) and encourage involvement from developers, businesses, and adopters. Join us! With Luigi Asprino, Ivo Velitchkov, Justin Dowdy, Paul Mulholland, Andy Seaborne, Ryan Shaw ... CG: https://lnkd.in/eSxuqsvn GitHub: https://lnkd.in/dkHGT8N3 SPARQL Anything #RDF #SPARQL #W3C #FX
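A minimal sketch of the Façade-X idea in practice: plain SPARQL over a non-RDF source via a SERVICE IRI, as SPARQL Anything implements it. The query shape and the facade-x data namespace follow SPARQL Anything's documented pattern; the local endpoint URL and the JSON source are assumptions about a particular deployment, not part of any specification.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
SELECT ?name ?price WHERE {
  SERVICE <x-sparql-anything:location=https://example.org/products.json> {
    ?product <http://sparql.xyz/facade-x/data/name>  ?name ;
             <http://sparql.xyz/facade-x/data/price> ?price .
  }
}
"""

# Hypothetical local SPARQL Anything server endpoint; adjust to your own setup.
endpoint = SPARQLWrapper("http://localhost:3000/sparql.anything")
endpoint.setQuery(QUERY)
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["name"]["value"], row["price"]["value"])
```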
announce the formation of a Data Façades W3C Community Group
·linkedin.com·
Announcing the formation of a Data Façades W3C Community Group