KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval

This multi-granular graph framework uses PageRank and a keyword-chunk graph to strike the best cost-quality tradeoff.

》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

》The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system (see the sketch after this entry):
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed.
☆ Acts as a "fast lane" for retrieving context without expensive entity extraction.

》Results: Beating Microsoft's Graph-RAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.

》Why AI Agents Need This
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).

Paper in comments.

》Build Your Own Supercharged AI Agent?
🔮 Join my Hands-On AI Agents Training today and learn to build AI agents with LangGraph/LangChain, CrewAI, and OpenAI Swarm, plus RAG pipelines.
Enroll now [34% discount]: 👉 https://lnkd.in/eGuWr4CH
·linkedin.com·
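A minimal sketch of KET-RAG's two-layer index, assuming plain-Python text chunks and a hypothetical llm_extract_triples() helper for the LLM extraction step; the keyword-sharing chunk graph below is a cheap stand-in for the paper's KNN graph, and none of this is the authors' code.

```python
from collections import defaultdict
import networkx as nx

def keywords(text):
    # Toy keyword extractor: lowercase tokens longer than three characters.
    return {w.strip('.,!?()"') for w in text.lower().split() if len(w) > 3}

def build_index(chunks, llm_extract_triples, skeleton_frac=0.2):
    # Layer 2: keyword -> chunk bipartite map, built with no LLM calls at all.
    bipartite = defaultdict(set)
    for i, chunk in enumerate(chunks):
        for kw in keywords(chunk):
            bipartite[kw].add(i)

    # Rank chunks with PageRank over a chunk graph whose edges link chunks
    # that share a keyword (standing in for the paper's KNN graph).
    g = nx.Graph()
    g.add_nodes_from(range(len(chunks)))
    for ids in bipartite.values():
        ids = sorted(ids)
        g.add_edges_from((a, b) for a in ids for b in ids if a < b)
    scores = nx.pagerank(g)

    # Layer 1: spend the LLM budget only on top-ranked "skeleton" chunks.
    top = sorted(scores, key=scores.get, reverse=True)
    skeleton_ids = top[: max(1, int(skeleton_frac * len(chunks)))]
    triples = [t for i in skeleton_ids for t in llm_extract_triples(chunks[i])]
    return triples, bipartite
```

At query time, keywords from the question fan out through the bipartite map for cheap context, while the skeleton triples support the expensive multi-hop reasoning.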
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs

LLMs that automatically fill knowledge gaps - too good to be true? Large language models often stumble in logical tasks due to hallucinations, especially when relying on incomplete knowledge graphs (KGs). Current methods naively trust KGs as exhaustive truth sources, a flawed assumption in real-world domains like healthcare or finance, where gaps persist.

SymAgent is a new framework that approaches this problem by making KGs active collaborators, not passive databases. Its dual-module design combines symbolic logic with neural flexibility (a sketch follows this entry):
1. Agent-Planner extracts implicit rules from KGs (e.g., "if drug X interacts with Y, avoid co-prescription") to decompose complex questions into structured steps.
2. Agent-Executor dynamically pulls external data when KG triples are missing, bypassing the "static repository" limitation.

Perhaps most impressively, SymAgent's self-learning observes failed reasoning paths to iteratively refine its strategy and flag missing KG connections, achieving 20-30% accuracy gains over raw LLMs. Equipped with SymAgent, even 7B models rival much larger counterparts by leveraging this closed-loop system.

It would be great if LLMs could autonomously curate knowledge and adapt to domain shifts without costly retraining. But are we there yet? Are hybrid architectures like SymAgent the future?

↓ Liked this post? Join my newsletter with 50k+ readers that breaks down all you need to know about the latest LLM research: llmwatch.com 💡
·linkedin.com·
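A hedged sketch of the planner/executor loop described above, not the authors' implementation. The callables plan_steps, kg_lookup, and external_search are hypothetical, standing in for the LLM planner, the KG store, and external retrieval.

```python
def answer(question, plan_steps, kg_lookup, external_search, max_retries=2):
    failures = []  # failed reasoning paths, kept for self-refinement
    for attempt in range(max_retries + 1):
        facts, path = [], []
        # Agent-Planner: decompose the question, conditioned on past failures.
        for step in plan_steps(question, failures):
            triples = kg_lookup(step)  # try the KG first
            if not triples:
                # Agent-Executor: the KG is incomplete, so pull external
                # evidence instead of treating it as an exhaustive source.
                triples = external_search(step)
            if not triples:
                failures.append(path + [step])  # flag the missing KG link
                break  # replan with the failure in context
            facts.extend(triples)
            path.append(step)
        else:
            return facts  # every step grounded; compose the final answer
    return None  # unanswerable with the current KG and tools
```

The closed loop lives in the failures list: each failed path both steers the next planning attempt and marks where the KG needs curation.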
Pathway to Artificial General Intelligence (AGI)

🌟 Pathway to Artificial General Intelligence (AGI) 🌟 This is my view on the evolutionary steps toward AGI: 1️⃣ Large Language Models (LLMs): Language models…
·linkedin.com·
Knowledge graphs are shaping the future of data and AI, and I’m excited to see them featured in the Data Gang’s predictions for 2025!

🚀 Knowledge graphs are shaping the future of data and AI, and I’m excited to see them featured in the Data Gang’s predictions for 2025! 🚀 Every year I enjoy…
·linkedin.com·
A zero-hallucination AI chatbot that answered over 10,000 questions from students at the University of Chicago using GraphRAG

UChicago Genie is now open source! How we built a zero-hallucination AI chatbot that answered over 10,000 questions of students at the University of…
·linkedin.com·
Graph contrastive learning

Graph contrastive learning (GCL) is a self-supervised learning technique for graphs that focuses on learning representations by contrasting different views of…
·linkedin.com·
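The truncated excerpt names the core recipe: encode two corrupted views of the same graph and contrast them. Below is a minimal PyTorch sketch of that idea in the style of methods such as GRACE; the framework choice and all specifics are assumptions, since the post names none.

```python
import torch
import torch.nn.functional as F

def augment(x, adj, drop_edge=0.2, mask_feat=0.2):
    # Two cheap corruptions: randomly drop edges and mask node features.
    adj_v = adj * (torch.rand_like(adj) > drop_edge)
    x_v = x * (torch.rand_like(x) > mask_feat)
    return x_v, adj_v

def encode(x, adj, w):
    # One-layer GCN-style encoder: mean-aggregate neighbors, then project.
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return F.relu((adj @ x) / deg @ w)

def info_nce(z1, z2, tau=0.5):
    # The same node across the two views is the positive pair;
    # every other node in the batch serves as a negative.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau
    return F.cross_entropy(sim, torch.arange(z1.size(0)))

x = torch.randn(8, 16)                  # 8 nodes, 16 input features
adj = (torch.rand(8, 8) > 0.7).float()  # toy adjacency matrix
w = torch.randn(16, 32, requires_grad=True)
z1 = encode(*augment(x, adj), w)        # view 1
z2 = encode(*augment(x, adj), w)        # view 2
info_nce(z1, z2).backward()             # pull views together, push others apart
```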
LightRAG

🚀 Breaking Boundaries in Graph + Retrieval-Augmented Generation (RAG)! 🌐🤖 The rapid pace of innovation in combining graphs with RAG is absolutely…
·linkedin.com·
Improving Retrieval Augmented Generation accuracy with GraphRAG | Amazon Web Services

Lettria, an AWS Partner, demonstrated that integrating graph-based structures into RAG workflows improves answer precision by up to 35% compared to vector-only retrieval methods. In this post, we explore why GraphRAG is more comprehensive and explainable than vector RAG alone, and how you can apply this approach using AWS services and Lettria.
·aws.amazon.com·
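A toy illustration of where the precision gain over vector-only retrieval comes from: seed with vector hits, then expand along explicit graph edges that the embedding space cannot express. The helpers vector_search and graph_neighbors are assumptions (e.g., backed by a vector index and a graph store such as Amazon Neptune); this is not Lettria's or AWS's actual pipeline.

```python
def graphrag_retrieve(query, vector_search, graph_neighbors, k=5, hops=1):
    # Step 1: vector RAG baseline - semantically similar items for the query.
    seeds = vector_search(query, k=k)
    # Step 2: graph expansion - follow explicit relationships outward from
    # the seeds to pull in connected, relevant context.
    context, frontier = list(seeds), list(seeds)
    for _ in range(hops):
        frontier = [n for node in frontier for n in graph_neighbors(node)]
        context.extend(frontier)
    # Deduplicate while preserving retrieval order for the prompt.
    seen, ordered = set(), []
    for item in context:
        if item not in seen:
            seen.add(item)
            ordered.append(item)
    return ordered
```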
Ontologies and knowledge graphs are the secret sauce for AI

My bold and only prediction for 2025: By December, everyone, their chatbot, and their agents will finally agree that ontologies…
·linkedin.com·