KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths over Knowledge Graphs
Breaking LLM Hallucinations in a Smarter Way!
(It’s not about feeding more data)
Large Language Models (LLMs) still struggle with factual inaccuracies, but…
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This Multi-Granular Graph Framework Combines PageRank with a Keyword-Chunk Bipartite Graph for the Best Cost-Quality Tradeoff
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions—like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system (a rough code sketch follows the two layers below):
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs—saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets—no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.
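To make the two layers concrete, here is a rough Python sketch of the indexing idea (my own toy, not the authors' code): it assumes networkx for PageRank, and the extract_keywords helper plus the LLM-extraction step are hypothetical stand-ins.
```python
# Minimal sketch of KET-RAG's two-layer indexing idea (not the authors' code).
# Assumptions: networkx for PageRank; extract_keywords() and the LLM-extraction
# placeholder below are invented stand-ins, not the paper's actual pipeline.
import networkx as nx

def extract_keywords(chunk: str) -> set[str]:
    # Hypothetical keyword extractor; a real system might use TF-IDF or RAKE.
    return {w.lower().strip(".,") for w in chunk.split() if len(w) > 6}

def build_index(chunks: list[str], skeleton_ratio: float = 0.2):
    # Layer 2: keyword-chunk bipartite graph, built without any LLM calls.
    bipartite = nx.Graph()
    for i, chunk in enumerate(chunks):
        for kw in extract_keywords(chunk):
            bipartite.add_edge(("chunk", i), ("kw", kw))

    # Layer 1: rank chunks with PageRank over the bipartite graph and keep only
    # the top slice as the "skeleton" that gets expensive LLM triple extraction.
    scores = nx.pagerank(bipartite, alpha=0.85)
    chunk_scores = {n[1]: s for n, s in scores.items() if n[0] == "chunk"}
    k = max(1, int(len(chunks) * skeleton_ratio))
    core = sorted(chunk_scores, key=chunk_scores.get, reverse=True)[:k]

    # Placeholder for the costly step: only core chunks are sent to the LLM.
    skeleton_kg = [f"LLM-extracted triples for chunk {i}" for i in core]
    return bipartite, skeleton_kg

chunks = [
    "COVID vaccines and reported myocarditis risks in young adults",
    "Clinical follow-up of myocarditis cases after vaccination",
    "Unrelated note on hospital parking arrangements",
]
bipartite, skeleton = build_index(chunks)
print(skeleton)
```
In this sketch, the skeleton_ratio knob is where the cost-quality tradeoff lives: more skeleton chunks mean more LLM calls but a richer knowledge graph.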
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft’s Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%—with 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren’t just chatbots—they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: Connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
✸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Paper in comments
≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣
》Want to Build Your Own Supercharged AI Agent?
🔮 Join My 𝐇𝐚𝐧𝐝𝐬-𝐎𝐧 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 TODAY!
and Learn to Build AI Agents with LangGraph/LangChain, CrewAI, and OpenAI Swarm + RAG Pipelines
𝐄𝐧𝐫𝐨𝐥𝐥 𝐍𝐎𝐖 [34% discount]:
👉 https://lnkd.in/eGuWr4CH
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs
LLMs that automatically fill knowledge gaps - too good to be true?
Large Language Models (LLMs) often stumble in logical tasks due to hallucinations, especially when relying on incomplete Knowledge Graphs (KGs).
Current methods naively trust KGs as exhaustive truth sources - a flawed assumption in real-world domains like healthcare or finance where gaps persist.
SymAgent is a new framework that approaches this problem by making KGs active collaborators, not passive databases.
Its dual-module design combines symbolic logic with neural flexibility:
1. Agent-Planner extracts implicit rules from KGs (e.g., "If drug X interacts with Y, avoid co-prescription") to decompose complex questions into structured steps.
2. Agent-Executor dynamically pulls external data when KG triples are missing, bypassing the "static repository" limitation.
Perhaps most impressively, SymAgent’s self-learning observes failed reasoning paths to iteratively refine its strategy and flag missing KG connections - achieving 20-30% accuracy gains over raw LLMs.
Equipped with SymAgent, even 7B models rival their much larger counterparts by leveraging this closed-loop system.
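To make the dual-module idea concrete, here is a toy planner/executor loop in Python; the KG triples, the plan format, and the external_search fallback are invented for illustration and are not SymAgent's actual interfaces.
```python
# Toy sketch of a planner/executor loop over an incomplete KG (illustrative only;
# the plan format, KG triples, and external_search() are invented, not SymAgent's APIs).

KG = {  # (head, relation) -> tail; deliberately missing one fact
    ("drug_X", "interacts_with"): "drug_Y",
}

def external_search(head: str, relation: str) -> str | None:
    # Stand-in for the executor's fallback to external data when a triple is missing.
    corpus = {("drug_Y", "treats"): "condition_Z"}
    return corpus.get((head, relation))

def plan(question: str) -> list[tuple[str, str]]:
    # Stand-in for the Agent-Planner: decompose the question into hops.
    # A real planner would derive these steps from rules mined out of the KG.
    return [("drug_X", "interacts_with"), ("?", "treats")]

def execute(steps: list[tuple[str, str]]) -> tuple[str, list[str]]:
    # Stand-in for the Agent-Executor: answer each hop from the KG,
    # falling back to external retrieval and logging the gap for curation.
    current, missing = None, []
    for head, relation in steps:
        head = current if head == "?" else head
        answer = KG.get((head, relation))
        if answer is None:
            answer = external_search(head, relation)
            missing.append(f"missing triple: ({head}, {relation}, {answer})")
        current = answer
    return current, missing

answer, gaps = execute(plan("What condition is relevant to co-prescribing drug X?"))
print(answer)  # condition_Z
print(gaps)    # the flagged KG gap, which a self-learning loop could feed back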
It would be great if LLMs were able to autonomously curate knowledge and adapt to domain shifts without costly retraining.
But are we there yet? Are hybrid architectures like SymAgent the future?
↓
Liked this post? Join my newsletter with 50k+ readers that breaks down all you need to know about the latest LLM research: llmwatch.com 💡
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented...
The recently developed retrieval-augmented generation (RAG) technology has enabled the efficient construction of domain-specific applications. However, it also has limitations, including the gap...
Ontologies as Conceptualizations by Nicola Guarino
Nicola Guarino Keynote Address for the Ontology Summit 2025 on 22 January 2025 "Ontologies as specifications of conceptualizations: correctness, precision, a...
Terminology Augmented Generation (TAG)? Recently some fellow terminologists have proposed the new term "Terminology-Augmented Generation (TAG)" to refer to…
What really is Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering That has been our position from the beginning when we started our research…
Graph contrastive learning (GCL) is a self-supervised learning technique for graphs that focuses on learning representations by contrasting different views of…
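Since the teaser above is cut off, here is only a generic NumPy toy of the contrastive idea (two edge-dropped views of one graph, an InfoNCE-style loss); it illustrates GCL in general, not any specific method from this feed.
```python
# Generic graph-contrastive-learning toy (GRACE-style setup, not tied to any post here).
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    # Symmetric normalization: D^-1/2 (A + I) D^-1/2
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def gcn_layer(A_hat, X, W):
    # One-layer graph encoder (manual GCN, no framework dependency).
    return np.tanh(A_hat @ X @ W)

def drop_edges(A, p):
    # Random edge dropping produces an augmented "view" of the graph.
    mask = rng.random(A.shape) > p
    mask = np.triu(mask, 1)
    return A * (mask + mask.T)

def info_nce(Z1, Z2, tau=0.5):
    # Same node across the two views is the positive pair; other nodes are negatives.
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    sim = np.exp(Z1 @ Z2.T / tau)
    return -np.mean(np.log(np.diag(sim) / sim.sum(1)))

# Toy graph: 6 nodes, random features and weights.
A = (rng.random((6, 6)) > 0.6).astype(float)
A = np.triu(A, 1); A = A + A.T
X, W = rng.random((6, 8)), rng.random((8, 4))

Z1 = gcn_layer(normalize_adj(drop_edges(A, 0.2)), X, W)
Z2 = gcn_layer(normalize_adj(drop_edges(A, 0.2)), X, W)
print("contrastive loss:", info_nce(Z1, Z2))
```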
OG-RAG: Ontology-Grounded Retrieval-Augmented Generation For Large...
This paper presents OG-RAG, an Ontology-Grounded Retrieval Augmented Generation method designed to enhance LLM-generated responses by anchoring retrieval processes in domain-specific ontologies....
Large Language Models, Knowledge Graphs and Search Engines: A...
Much has been discussed about how Large Language Models, Knowledge Graphs and Search Engines can be combined in a synergistic manner. A dimension largely absent from current academic discourse is...
Background: The field of Artificial Intelligence has undergone cyclical periods of growth and decline, known as AI summers and winters. Currently, we are in the third AI summer, characterized by...
PG-Schema: Schemas for Property Graphs | Proceedings of the ACM on Management of Data
Property graphs have reached a high level of maturity, witnessed by multiple robust graph database systems as well as the ongoing ISO standardization effort aiming at creating a new standard Graph Query Language (GQL). Yet, despite documented demand, ...
What if creating Linked Open Data was less like coding and more like writing? Could anyone extend the Semantic Web by sharing a document? Publish a knowledge…
SimGRAG is a novel method for knowledge-graph-driven RAG: it transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric.
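Here is a toy sketch of what "align query patterns with candidate subgraphs via a semantic distance" could look like; the string-similarity distance, the example pattern, and the candidate subgraphs are my simplifications, not SimGRAG's actual metric or code.
```python
# Toy illustration of pattern-to-subgraph alignment (not SimGRAG's code);
# node_distance(), the query pattern, and the candidates below are invented.
from difflib import SequenceMatcher

def node_distance(a: str, b: str) -> float:
    # Stand-in for a semantic distance; a real system would compare embeddings.
    return 1.0 - SequenceMatcher(None, a.lower(), b.lower()).ratio()

def graph_distance(pattern: list[tuple[str, str, str]],
                   subgraph: list[tuple[str, str, str]]) -> float:
    # Align each pattern triple with its closest subgraph triple and sum the gaps.
    total = 0.0
    for ph, pr, pt in pattern:
        total += min(
            node_distance(ph, sh) + node_distance(pr, sr) + node_distance(pt, st)
            for sh, sr, st in subgraph
        )
    return total

# Query "Which drug treats condition Z?" rewritten as a graph pattern.
pattern = [("?drug", "treats", "condition Z")]

candidates = [
    [("aspirin", "treats", "headache")],
    [("drug Y", "treats", "condition Z"), ("drug Y", "interacts_with", "drug X")],
]

best = min(candidates, key=lambda sg: graph_distance(pattern, sg))
print(best)  # the subgraph whose triples best match the query pattern
```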
Unlocking universal reasoning across knowledge graphs
Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
My takeaways from the International Semantic Web Conference #iswc2024
Ioana's keynote: a great example of data integration for journalism, highlighting the use of…
Beyond Vector Space : Knowledge Graphs and the New Frontier of Agentic System Accuracy
⛳ In the realm of agentic systems, a fundamental challenge emerges…
Understanding SPARQL Queries: Are We Already There?
👉 Our paper "Understanding SPARQL Queries: Are We Already There?", explores the potential of Large Language Models (#LLMs) to generate natural-language…
Knowledge Graph Enhanced Language Agents for Recommendation
Language agents have recently been used to simulate human behavior and user-item interactions for recommendation systems. However, current language agent simulations do not understand the...
TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs
🌟 TGB 2.0 @NeurIPS 2024 🌟 We are very happy to share that our paper TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs…
Recently, knowledge-graph-enhanced recommendation systems have attracted much attention, since a knowledge graph (KG) can help improve dataset quality and offer rich semantics for explainable recommendation. However, current KG-enhanced solutions focus on analyzing user behavior at the product level and lack effective approaches for extracting user preferences towards product categories, which is essential for better recommendation because, according to various user studies, users shopping online normally have strong preferences towards distinctive product categories, not merely individual products. Moreover, existing pure embedding-based recommendation methods can only utilize KGs of limited size, which is not adaptable to many real-world applications. In this paper, we generalize the recommendation problem with preference mining as a compound knowledge reasoning task and propose a novel multi-agent system, called Mcore, which can improve model performance by mining users' high-level interests and is adaptable to large KGs. Specifically, we split the overall problem and allocate a sub-task to each agent: the Coordinate Agent takes charge of recognizing the product-category preference of the current user, while the Relation Agent and Entity Agent perform KG reasoning cooperatively from a user node towards the preferred categories, terminating at a product node as the recommendation. To train this heterogeneous multi-agent system, where agents have various functionalities, we propose an asynchronous reinforcement training pipeline called Multi-agent Collaborative Learning. Extensive experiments on real datasets demonstrate the effectiveness and adaptability of Mcore on recommendation tasks.
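To visualize the three-agent split described in the abstract, here is a non-RL Python toy; the KG, the agent policies, and the category heuristic are all invented for illustration, whereas the real Mcore trains its agents with asynchronous reinforcement learning.
```python
# Illustrative toy of the three-agent split (no RL; KG, policies, and data are all invented).
from collections import Counter

KG = {
    "user_1": [("bought", "running_shoes"), ("viewed", "trail_map"), ("viewed", "running_socks")],
    "running_shoes": [("category", "sports"), ("brand", "acme")],
    "running_socks": [("category", "sports")],
    "trail_map": [("category", "outdoors")],
    "sports": [("contains", "hydration_pack"), ("contains", "running_socks")],
}
PRODUCTS = {"hydration_pack", "running_socks"}

def coordinate_agent(user: str) -> str:
    # Coordinate Agent: recognize the user's product-category preference from their history.
    cats = [t for _, item in KG.get(user, []) for r, t in KG.get(item, []) if r == "category"]
    return Counter(cats).most_common(1)[0][0] if cats else "sports"

def relation_agent(node: str, target_cat: str) -> str | None:
    # Relation Agent: choose which edge type to follow next
    # (a trained policy would condition on target_cat; here it is a fixed heuristic).
    rels = [r for r, _ in KG.get(node, [])]
    return "contains" if "contains" in rels else (rels[0] if rels else None)

def entity_agent(node: str, relation: str) -> str | None:
    # Entity Agent: choose which tail entity to move to along that relation.
    tails = [t for r, t in KG.get(node, []) if r == relation]
    return tails[0] if tails else None

def recommend(user: str, max_hops: int = 3) -> str | None:
    cat = coordinate_agent(user)   # preferred category, e.g. "sports"
    node = cat                     # start KG reasoning from that category node
    for _ in range(max_hops):
        rel = relation_agent(node, cat)
        if rel is None:
            return None
        node = entity_agent(node, rel)
        if node is None:
            return None
        if node in PRODUCTS:       # terminate at a product node as the recommendation
            return node
    return None

print(recommend("user_1"))  # hydration_pack
```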