KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This multi-granular graph framework combines PageRank with a keyword-chunk graph to achieve the best cost-quality tradeoff.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions—like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs—saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets—no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.
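The two-layer design above can be sketched in a few lines of plain Python. Everything here is illustrative, not KET-RAG's actual implementation: the chunk texts, the word-overlap similarity graph, the budget of 1 chunk, and the function names are all assumptions standing in for the paper's machinery.

```python
# Illustrative sketch of a two-layer index in the KET-RAG style (assumed names/data).
# Layer 1: PageRank over a chunk-similarity graph picks "skeleton" chunks that
#          would get expensive LLM triple extraction.
# Layer 2: a keyword-chunk bipartite map, built with no LLM calls, serves as
#          the cheap retrieval lane.
from collections import defaultdict

chunks = {
    "c1": "covid vaccine side effects include myocarditis in rare cases",
    "c2": "myocarditis is inflammation of the heart muscle",
    "c3": "the heart muscle pumps blood through the body",
}
words = {cid: set(t.lower().split()) for cid, t in chunks.items()}

# Similarity graph: chunks sharing at least one word are linked (toy heuristic).
neighbors = defaultdict(set)
ids = list(chunks)
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        if words[a] & words[b]:
            neighbors[a].add(b)
            neighbors[b].add(a)

# Plain power-iteration PageRank (damping 0.85).
rank = {c: 1 / len(ids) for c in ids}
for _ in range(50):
    rank = {
        c: 0.15 / len(ids)
           + 0.85 * sum(rank[n] / len(neighbors[n]) for n in neighbors[c])
        for c in ids
    }

# Layer 1: spend the LLM budget only on the top-ranked chunks.
budget = 1  # e.g. extract triples for just 1 of 3 chunks
skeleton = sorted(ids, key=rank.get, reverse=True)[:budget]

# Layer 2: keyword -> chunks bipartite map, no LLM involved.
kw_to_chunks = defaultdict(set)
for cid, ws in words.items():
    for w in ws:
        kw_to_chunks[w].add(cid)

def retrieve(query, top_n=2):
    """Score chunks by how many query keywords link to them."""
    hits = defaultdict(int)
    for w in query.lower().split():
        for cid in kw_to_chunks.get(w, ()):
            hits[cid] += 1
    return sorted(hits, key=hits.get, reverse=True)[:top_n]
```

In this toy corpus the central chunk "c2" wins the PageRank budget, and a query like "myocarditis side effects" is answered from the bipartite map alone, which is the cost-saving point: most retrieval never touches the LLM-built skeleton.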
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft’s Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%—with 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren’t just chatbots—they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: Connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
✸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣
》Build Your Own Supercharged AI Agent?
🔮 Join my Hands-On AI Agents Training today!
Learn to build AI agents with LangGraph/LangChain, CrewAI, and OpenAI Swarm, plus RAG pipelines.
Enroll now [34% discount]:
👉 https://lnkd.in/eGuWr4CH
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
That has been our position from the beginning when we started our research…
SimGRAG is a novel method for knowledge-graph-driven RAG that transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric.
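That query-pattern-to-subgraph alignment can be illustrated with a toy sketch. All data here is invented, and exact-match cost is a crude stand-in for the embedding-based semantic distance the method actually uses:

```python
# Toy illustration of SimGRAG-style alignment (invented data and distance).
def term_dist(p, g):
    """Distance between a pattern term and a graph term; variables are free."""
    if p.startswith("?"):
        return 0.0
    return 0.0 if p == g else 1.0

def triple_dist(pt, gt):
    return sum(term_dist(p, g) for p, g in zip(pt, gt))

def subgraph_dist(pattern, subgraph):
    # Greedy alignment: each pattern triple pays for its closest graph triple.
    return sum(min(triple_dist(pt, gt) for gt in subgraph) for pt in pattern)

# Query "which drug treats myocarditis?" rendered as a graph pattern.
pattern = [("?drug", "treats", "myocarditis")]
candidates = {
    "sg1": [("aspirin", "treats", "headache")],
    "sg2": [("colchicine", "treats", "myocarditis")],
}
best = min(candidates, key=lambda k: subgraph_dist(pattern, candidates[k]))
```

The subgraph with the smallest semantic distance to the pattern ("sg2" here) is the one handed to the LLM as retrieved context.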
Takeaways from the International Semantic Web Conference #iswc2024
Ioana keynote: Great example of data integration for journalism, highlighting the use of…
Beyond Vector Space: Knowledge Graphs and the New Frontier of Agentic System Accuracy
⛳ In the realm of agentic systems, a fundamental challenge emerges…
The Mindful-RAG approach is a framework tailored for intent-based and contextually aligned knowledge retrieval.
RAG Implementations Fail Due To Insufficient Focus On Question Intent
GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models
This is something very cool! "GraphReader addresses the…
GitHub - SynaLinks/HybridAGI: The Programmable Neuro-Symbolic AGI that lets you program its behavior using Graph-based Prompt Programming: for people who want AI to behave as expected
Open Research Knowledge Graph (ORKG) ASK (Assistant for Scientific Knowledge) uses vector #embeddings to find the most relevant papers and an open-source #LLM to synthesize the answer for you
Ask your (research) question against 76 million scientific articles: https://ask.orkg.org
Synergizing LLMs and KGs in the GenAI Landscape
Our paper "Are Large Language Models a Good Replacement of Taxonomies?" was just accepted to VLDB'2024! This completes our line of study on how knowledgeable LLMs are and confirms our recommendation for the next generation of KGs. How knowledgeable are LLMs?
GraCoRe: Benchmarking Graph Comprehension and Complex Reasoning in Large Language Models
Can LLMs understand graphs? The results might surprise you. Graphs are everywhere, from social networks to biological pathways. As AI systems become more…
[2310.01061v1] Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning
Large language models (LLMs) have demonstrated impressive reasoning abilities in complex tasks. However, they lack up-to-date knowledge and experience hallucinations during reasoning, which can...