Found 214 bookmarks
Newest
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This Multi-Granular Graph Framework uses PageRank and a keyword-chunk graph to get the best cost-quality trade-off.
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5 GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
✸ Layer 1: Knowledge-Graph Skeleton. Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs) and builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph. Links keywords (e.g., “myocarditis”) to all related text snippets with no LLM needed, acting as a “fast lane” for retrieving context without expensive entity extraction.
》Results: Beating Microsoft’s GraphRAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at a tenth of the price.
》Why AI Agents Need This
AI agents aren’t just chatbots; they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Paper in comments. A minimal sketch of the two-layer index follows this entry.
·linkedin.com·
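The post above describes KET-RAG’s two-layer index: PageRank picks a small set of core chunks that receive full LLM-based triple extraction (the knowledge-graph skeleton), while a keyword-chunk bipartite graph is built over all chunks with no LLM calls. Below is a minimal sketch of that idea, assuming Python with networkx; the naive keyword extraction, the `budget` parameter, and the `extract_triples_with_llm` stub are illustrative assumptions, not the paper’s implementation.

```python
import networkx as nx

def build_keyword_chunk_graph(chunks):
    """Layer 2: bipartite keyword <-> chunk graph, built without LLM calls.
    Keyword extraction here is a naive lowercase tokenizer (an assumption)."""
    g = nx.Graph()
    for i, text in enumerate(chunks):
        chunk_node = ("chunk", i)
        g.add_node(chunk_node, text=text)
        for kw in set(text.lower().split()):
            g.add_edge(("kw", kw), chunk_node)
    return g

def select_skeleton_chunks(bipartite, budget=0.2):
    """Layer 1 selection: rank chunks with PageRank over the bipartite graph
    and keep only the top fraction for expensive LLM triple extraction."""
    scores = nx.pagerank(bipartite)
    chunk_scores = {n: s for n, s in scores.items() if n[0] == "chunk"}
    k = max(1, int(len(chunk_scores) * budget))
    return [n for n, _ in sorted(chunk_scores.items(), key=lambda x: -x[1])[:k]]

def extract_triples_with_llm(text):
    """Hypothetical stub: a real system would call an LLM here to extract
    (head, relation, tail) triples from the chunk."""
    return []

def build_ket_rag_index(chunks, budget=0.2):
    bipartite = build_keyword_chunk_graph(chunks)
    skeleton = nx.DiGraph()
    for node in select_skeleton_chunks(bipartite, budget):
        for h, r, t in extract_triples_with_llm(bipartite.nodes[node]["text"]):
            skeleton.add_edge(h, t, relation=r, source=node)
    return bipartite, skeleton
```

At query time, retrieval would combine neighbors from both layers (skeleton entities for precision, keyword-linked chunks for recall); that part is omitted here.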
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning. What if they could dynamically restructure their thought process like humans do? A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs). Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways. This is crucial for domains like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly. This mirrors how experts allocate mental effort, drilling into uncertainties while streamlining obvious steps. A minimal sketch of this control flow follows this entry.
The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning. By unifying the chain, tree, and graph paradigms, AGoT retains CoT’s clarity, ToT’s exploration, and GoT’s flexibility without manual tuning. The result? LLMs that self-adapt their reasoning depth to problem complexity, with no architectural changes needed. For AI practitioners, AGoT’s DAG structure offers a principled interface for scaling reasoning modularly.
·linkedin.com·
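The post above explains AGoT’s control flow: check each reasoning node’s complexity, expand complex nodes into a sub-graph of subtasks, and resolve simple ones directly. Here is a minimal recursive sketch of that loop; `llm`, `is_complex`, and `decompose` are hypothetical stand-ins rather than the paper’s API, and a real implementation would also prune to the most critical pathways instead of expanding every subtask.

```python
import networkx as nx

def llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client."""
    raise NotImplementedError

def is_complex(task: str) -> bool:
    """Hypothetical complexity check, e.g. an LLM judgment or a heuristic."""
    return len(task.split()) > 30

def decompose(task: str) -> list[str]:
    """Hypothetical decomposition of a task into ordered subtasks."""
    lines = llm(f"Break this task into subtasks, one per line:\n{task}").split("\n")
    return [s.strip() for s in lines if s.strip()]

def agot_solve(task: str, graph: nx.DiGraph, depth: int = 0, max_depth: int = 3) -> str:
    """Recursively grow a DAG of reasoning nodes, expanding only complex ones."""
    graph.add_node(task)
    if depth >= max_depth or not is_complex(task):
        return llm(f"Answer directly:\n{task}")           # simple node: resolve in place
    partial_answers = []
    for sub in decompose(task):                            # complex node: spawn a sub-graph
        graph.add_edge(task, sub)
        partial_answers.append(agot_solve(sub, graph, depth + 1, max_depth))
    context = "\n".join(partial_answers)
    return llm(f"Combine these subtask answers:\n{context}\n\nOriginal task: {task}")
```

The `max_depth` cap stands in for the paper’s budget on how far the graph may grow; the point of the sketch is that reasoning depth is decided per node at test time, not fixed in advance.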
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs
LLMs that automatically fill knowledge gaps: too good to be true?
Large Language Models (LLMs) often stumble in logical tasks due to hallucinations, especially when relying on incomplete Knowledge Graphs (KGs). Current methods naively trust KGs as exhaustive truth sources, a flawed assumption in real-world domains like healthcare or finance where gaps persist.
SymAgent is a new framework that approaches this problem by making KGs active collaborators, not passive databases. Its dual-module design combines symbolic logic with neural flexibility:
1. The Agent-Planner extracts implicit rules from the KG (e.g., “if drug X interacts with Y, avoid co-prescription”) to decompose complex questions into structured steps.
2. The Agent-Executor dynamically pulls external data when KG triples are missing, bypassing the “static repository” limitation.
A minimal sketch of this planner/executor loop follows this entry.
Perhaps most impressively, SymAgent’s self-learning component observes failed reasoning paths to iteratively refine its strategy and flag missing KG connections, achieving 20-30% accuracy gains over raw LLMs. Equipped with SymAgent, even 7B models rival much larger counterparts by leveraging this closed-loop system.
It would be great if LLMs could autonomously curate knowledge and adapt to domain shifts without costly retraining. But are we there yet? Are hybrid architectures like SymAgent the future?
·linkedin.com·
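A minimal sketch of the planner/executor loop described above: the planner turns a question into KG lookups, and the executor falls back to external retrieval when a triple is missing, flagging the gap so the KG can later be patched. The toy KG and all function names here are illustrative assumptions, not SymAgent’s actual interfaces.

```python
# Toy KG as a set of (head, relation, tail) triples (illustrative only).
KG = {
    ("drug_x", "interacts_with", "drug_y"),
    ("drug_y", "treats", "condition_z"),
}

def plan_steps(question: str) -> list[tuple[str, str]]:
    """Hypothetical Agent-Planner: turn a question into (entity, relation) lookups.
    In SymAgent this is driven by an LLM guided by rules mined from the KG."""
    return [("drug_x", "interacts_with"), ("drug_y", "treats")]

def kg_lookup(head: str, relation: str) -> list[str]:
    return [t for h, r, t in KG if h == head and r == relation]

def search_external(head: str, relation: str) -> list[str]:
    """Hypothetical Agent-Executor fallback: fetch missing facts from text or the web,
    so the gap can be filled and the KG patched (the self-learning signal)."""
    return []

def answer(question: str):
    facts, missing = [], []
    for head, relation in plan_steps(question):
        hits = kg_lookup(head, relation) or search_external(head, relation)
        if hits:
            facts.extend((head, relation, t) for t in hits)
        else:
            missing.append((head, relation))   # flagged gap for later KG curation
    return facts, missing

# Usage: facts, gaps = answer("Can drug_x be co-prescribed for condition_z?")
```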
feedforward graphs (i.e. graphs w/o back edges)
And so we set out to understand _feedforward_ graphs (i.e. graphs w/o back edges) ⏩ Turns out these graphs are rather understudied for how often they are…
·linkedin.com·
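Taking the definition above at face value (a feedforward graph is a directed graph with no back edges, i.e. it is acyclic), a quick way to test the property is a depth-first search that looks for an edge pointing back into a node still on the DFS stack; a small sketch:

```python
def is_feedforward(adj: dict[str, list[str]]) -> bool:
    """Return True if the directed graph has no back edges, i.e. it is acyclic.
    adj maps each node to a list of its out-neighbors."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS stack / finished
    color = {u: WHITE for u in adj}

    def dfs(u: str) -> bool:
        color[u] = GRAY
        for v in adj.get(u, []):
            if color.get(v, WHITE) == GRAY:            # edge into the active stack: back edge
                return False
            if color.get(v, WHITE) == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    return all(dfs(u) for u in list(adj) if color[u] == WHITE)

# is_feedforward({"a": ["b"], "b": ["c"], "c": []})  -> True
# is_feedforward({"a": ["b"], "b": ["a"]})           -> False
```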
Enterprise Ontology: A Human-Centric Approach to Understanding the Essence of Organisation : Dietz, Jan L. G., Mulder, Hans B. F.: Amazon.nl: Books
·amazon.nl·
Graph contrastive learning
Graph contrastive learning (GCL) is a self-supervised learning technique for graphs that focuses on learning representations by contrasting different views of…
·linkedin.com·
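As a generic illustration of the idea described above (not the specific method in the post), the sketch below computes an InfoNCE-style contrastive loss between node embeddings produced from two augmented views of the same graph; the GNN encoder and the augmentations are assumed to exist elsewhere.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss between two graph views: embeddings of the same node are
    positives, all other nodes in the batch act as negatives.
    z1, z2: [num_nodes, dim] node embeddings from two augmented views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                       # pairwise cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)     # node i matches node i
    # symmetric cross-entropy: view 1 against view 2, and view 2 against view 1
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```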
LightRAG
🚀 Breaking Boundaries in Graph + Retrieval-Augmented Generation (RAG)! 🌐🤖 The rapid pace of innovation in combining graphs with RAG is absolutely…
·linkedin.com·
GraphAgent — An innovative AI agent that efficiently integrates structured and unstructured data
🚀 Excited to Share Our Recent Work! 🌟 GraphAgent — An innovative AI agent that efficiently integrates structured and unstructured data! 📚 👉 Paper link:…
·linkedin.com·
Introduction to Graph Neural Networks
Want to catch up on Graph Neural Networks? Now's the time! Graph Neural Networks (GNNs) have become a popular solution for problems that include network data,…
·linkedin.com·
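For readers catching up, here is a minimal mean-aggregation message-passing layer, the basic building block behind most GNN variants; this is a generic textbook-style sketch, not code from the linked introduction.

```python
import numpy as np

def message_passing_layer(x: np.ndarray, edges: list[tuple[int, int]],
                          w_self: np.ndarray, w_neigh: np.ndarray) -> np.ndarray:
    """One GNN layer: each node averages its incoming neighbors' features,
    applies linear transforms to the aggregate and to its own features, then a ReLU.
    x: [num_nodes, d_in] features; edges: directed (src, dst) pairs;
    w_self, w_neigh: [d_in, d_out] weight matrices."""
    agg = np.zeros_like(x)
    deg = np.zeros(x.shape[0])
    for src, dst in edges:                      # mean-aggregate incoming messages
        agg[dst] += x[src]
        deg[dst] += 1
    agg /= np.maximum(deg, 1)[:, None]
    return np.maximum(0.0, x @ w_self + agg @ w_neigh)   # ReLU
```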
Context-based Graph Neural Network
❓How Can Graph Neural Networks Enhance Recommendation Systems by Incorporating Contextual Information? Traditional recommendation systems often leverage a…
·linkedin.com·
SimGRAG is a novel method for knowledge graph driven RAG, transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
SimGRAG is a novel method for knowledge graph driven RAG, transforms queries into graph patterns and aligns them with candidate subgraphs using a graph…
·linkedin.com·
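The entry above mentions aligning query graph patterns with candidate subgraphs via a graph semantic distance. As a rough, hypothetical illustration of that general idea (not the paper’s actual metric), the sketch below scores a candidate subgraph by the average nearest-neighbor embedding distance between query-pattern triples and candidate triples; `embed` is an assumed sentence-embedding function.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Assumed text-embedding function (e.g. a sentence encoder); stubbed here."""
    raise NotImplementedError

def triple_text(triple: tuple[str, str, str]) -> str:
    return " ".join(triple)

def graph_semantic_distance(query_pattern, candidate_subgraph) -> float:
    """Toy distance: for each query triple, find the closest candidate triple in
    embedding space and average those cosine distances over the whole pattern."""
    def unit(v: np.ndarray) -> np.ndarray:
        return v / (np.linalg.norm(v) + 1e-12)
    q = np.stack([unit(embed(triple_text(t))) for t in query_pattern])
    c = np.stack([unit(embed(triple_text(t))) for t in candidate_subgraph])
    sims = q @ c.T                               # pairwise cosine similarities
    return float(np.mean(1.0 - sims.max(axis=1)))
```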
Graph reasoning in Large Language Models
ICYMI, here are the slides from our standing-room-only talk at NeurIPS yesterday! Concepts we discuss include: ➡️ Quantifying how much Transformer you need to…
·linkedin.com·