GraphNews

#research #llm
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
MiniRAG introduces near-LLM-accurate RAG for small language models with just 25% of the storage, achieving that with a semantic-aware heterogeneous graph…
·linkedin.com·
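The excerpt only names the core ingredient (a semantic-aware heterogeneous graph that a small model can query cheaply), so here is a minimal, hypothetical sketch of what such an index could look like, assuming networkx; extract_entities() and the distance-based scoring are illustrative stand-ins, not MiniRAG's actual pipeline.

```python
# Hypothetical sketch (not MiniRAG's actual pipeline): a heterogeneous graph with
# entity nodes and chunk nodes, queried by walking from query entities to chunks.
import networkx as nx

def extract_entities(text):
    # Stand-in entity extractor: capitalized tokens; a real system would use NER.
    return {t.strip('.,!?') for t in text.split() if t[:1].isupper()}

def build_index(chunks):
    g = nx.Graph()
    for cid, chunk in enumerate(chunks):
        g.add_node(("chunk", cid), text=chunk)
        for ent in extract_entities(chunk):
            g.add_edge(("entity", ent), ("chunk", cid))  # entity-chunk edge
    return g

def retrieve(g, query, hops=2, top_k=3):
    # Seed on entities mentioned in the query, then score nearby chunk nodes.
    seeds = [("entity", e) for e in extract_entities(query) if ("entity", e) in g]
    scores = {}
    for seed in seeds:
        dists = nx.single_source_shortest_path_length(g, seed, cutoff=hops)
        for node, dist in dists.items():
            if node[0] == "chunk":
                scores[node] = scores.get(node, 0) + 1.0 / (1 + dist)
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [g.nodes[n]["text"] for n in best]
```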
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval. This multi-granular graph framework uses PageRank and a keyword-chunk graph to strike the best cost-quality tradeoff.
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
》The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system; a sketch of both layers follows after this entry.
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed.
☆ Acts as a "fast lane" for retrieving context without expensive entity extraction.
》Results: Beating Microsoft's Graph-RAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
》Why AI Agents Need This
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
·linkedin.com·
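A minimal sketch of the two layers described above, assuming networkx; extract_keywords() and llm_extract_triples() are hypothetical stand-ins for the paper's keyword extraction and LLM-based triple extraction, so this is an illustration of the idea rather than the reference implementation.

```python
# Hedged sketch of KET-RAG's two-layer index as described in the post: a
# PageRank-selected "skeleton" of core chunks that gets LLM-built triples, plus
# a keyword-chunk bipartite graph built with no LLM at all.
from collections import defaultdict
import networkx as nx

def extract_keywords(text):
    # Stand-in keyword extractor: lowercase tokens longer than three characters.
    return {t.strip('.,;:"()').lower() for t in text.split() if len(t) > 3}

def llm_extract_triples(chunk):
    # Stand-in for the expensive LLM call returning (head, relation, tail) triples.
    return []

def build_index(chunks, skeleton_budget=0.2):
    # Layer 2: keyword-chunk bipartite graph (cheap, covers every chunk).
    bipartite = nx.Graph()
    for cid, chunk in enumerate(chunks):
        for kw in extract_keywords(chunk):
            bipartite.add_edge(("kw", kw), ("chunk", cid))

    # PageRank over the bipartite graph picks the "core" chunks worth the LLM cost.
    scores = nx.pagerank(bipartite)
    chunk_rank = {n[1]: s for n, s in scores.items() if n[0] == "chunk"}
    k = max(1, int(len(chunks) * skeleton_budget))
    core = sorted(chunk_rank, key=chunk_rank.get, reverse=True)[:k]

    # Layer 1: sparse knowledge-graph skeleton built only over the core chunks.
    skeleton = nx.MultiDiGraph()
    for cid in core:
        for head, rel, tail in llm_extract_triples(chunks[cid]):
            skeleton.add_edge(head, tail, relation=rel, chunk=cid)
    return bipartite, skeleton

def retrieve(query, bipartite, chunks, top_k=3):
    # "Fast lane": route query keywords through the bipartite layer to chunks.
    hits = defaultdict(int)
    for kw in extract_keywords(query):
        node = ("kw", kw)
        if node in bipartite:
            for _, cid in bipartite.neighbors(node):
                hits[cid] += 1
    best = sorted(hits, key=hits.get, reverse=True)[:top_k]
    return [chunks[cid] for cid in best]
```

Here skeleton_budget caps how many chunks receive LLM-built triples, which is the kind of knob the post's claimed indexing savings would plausibly hinge on.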
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning. What if they could dynamically restructure their thought process like humans? A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs). Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways. This is crucial for industries like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly (see the sketch after this entry). This mirrors how experts allocate mental effort, drilling into uncertainties while streamlining obvious steps. The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.
By unifying the chain, tree, and graph paradigms, AGoT retains CoT's clarity, ToT's exploration, and GoT's flexibility without manual tuning. The result? LLMs that self-adapt their reasoning depth based on problem complexity, with no architectural changes needed. For AI practitioners, AGoT's DAG structure offers a principled interface to scale reasoning modularly.
·linkedin.com·
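A hedged sketch of the adaptive expansion described above: each node gets a complexity check and is either answered directly or decomposed into sub-tasks. For brevity it builds a recursion tree rather than a full DAG, `llm` is any text-in/text-out callable, and the prompts are illustrative assumptions, not the paper's.

```python
# Hedged sketch of AGoT-style adaptive reasoning (not the paper's implementation):
# a complexity check decides whether a node is answered directly or expanded.
from dataclasses import dataclass, field

@dataclass
class Node:
    task: str
    answer: str = ""
    children: list["Node"] = field(default_factory=list)

def solve(task, llm, depth=0, max_depth=3):
    node = Node(task)
    verdict = llm(f"Is this task complex enough to split? Answer yes/no: {task}")
    if depth < max_depth and verdict.strip().lower().startswith("yes"):
        # Spawn a sub-graph: decompose, solve the sub-tasks, then combine answers.
        subs = [s for s in llm(f"List sub-tasks, one per line: {task}").splitlines() if s.strip()]
        for sub in subs:
            node.children.append(solve(sub, llm, depth + 1, max_depth))
        context = "\n".join(f"- {c.task}: {c.answer}" for c in node.children)
        node.answer = llm(f"Using these sub-results:\n{context}\nNow answer: {task}")
    else:
        # Simple enough: resolve the node directly.
        node.answer = llm(f"Answer directly: {task}")
    return node
```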
What is really Graph RAG?
What is really Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
·linkedin.com·
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering. That has been our position from the beginning when we started our research…
·linkedin.com·
SimGRAG is a novel method for knowledge-graph-driven RAG that transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
·linkedin.com·
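The one-line summary is enough to sketch the general shape of the idea: embed the components of a query-derived graph pattern and of candidate subgraphs, then score candidates by a semantic distance under a simple alignment. embed(), the greedy alignment, and the triple representation are assumptions for illustration, not SimGRAG's actual metric.

```python
# Hedged sketch of pattern-vs-subgraph alignment with a semantic distance.
# Patterns and subgraphs are lists of (head, relation, tail) string triples.
import numpy as np

def embed(text):
    # Toy embedding derived from the string hash so the sketch runs end to end;
    # swap in a real sentence embedder for meaningful distances.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

def triple_distance(t1, t2):
    # Average cosine distance over the aligned head/relation/tail components.
    return float(np.mean([1 - embed(a) @ embed(b) for a, b in zip(t1, t2)]))

def graph_semantic_distance(pattern, subgraph):
    # Greedily align each pattern triple to its closest subgraph triple and sum.
    return sum(min(triple_distance(p, s) for s in subgraph) for p in pattern)

def best_subgraph(pattern, candidates):
    # Pick the candidate subgraph semantically closest to the query pattern.
    return min(candidates, key=lambda sg: graph_semantic_distance(pattern, sg))
```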
Graphs + Transformers = the best of both worlds
Graphs + Transformers = the best of both worlds 🤝 The same models powering breakthroughs in natural language processing are now being adapted for graphs…
·linkedin.com·
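The post is a teaser, but one common way transformers are adapted to graphs is by biasing self-attention with graph structure; the numpy sketch below shows that general pattern (an adjacency-based attention bias) and is not tied to any specific model from the post.

```python
# Illustrative sketch: one self-attention head whose scores are biased so nodes
# attend preferentially to their graph neighbors.
import numpy as np

def graph_attention(X, A, Wq, Wk, Wv, neighbor_bias=2.0):
    """X: node features (n x d), A: adjacency matrix (n x n)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores = scores + neighbor_bias * A   # structural bias: favor graph neighbors
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                     # updated node representations

# Tiny usage example: 4 nodes on a path graph, 8-dimensional features.
rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.standard_normal((n, d))
A = np.zeros((n, n))
A[[0, 1, 2], [1, 2, 3]] = 1
A += A.T
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
print(graph_attention(X, A, Wq, Wk, Wv).shape)  # (4, 8)
```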
Knowledge Graph In-Context Learning
Unlocking universal reasoning across knowledge graphs. Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
·linkedin.com·
Graph-constrained Reasoning
🚀 Exciting new research: "Graph-constrained Reasoning (GCR)", enabling faithful KG-grounded LLM reasoning with zero hallucination! 🧠 🎉 Proud to share our…
·linkedin.com·
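The excerpt only states the goal (faithful, KG-grounded reasoning without hallucinated links). One generic way to get that property, shown purely as an illustration and not as the paper's algorithm, is to restrict every reasoning hop to edges that actually exist in the knowledge graph; `llm` is any text-in/text-out callable and the prompt format is an assumption.

```python
# Generic illustration (not GCR's algorithm): constrain each reasoning hop to
# edges present in the KG, so the model cannot assert a relation that is not there.
def constrained_walk(kg, start, question, llm, max_hops=3):
    """kg: dict mapping entity -> list of (relation, target) edges."""
    path, current = [], start
    for _ in range(max_hops):
        options = kg.get(current, [])
        if not options:
            break
        menu = "\n".join(f"{i}: {rel} -> {tgt}" for i, (rel, tgt) in enumerate(options))
        choice = llm(
            f"Question: {question}\nCurrent entity: {current}\n"
            f"Pick ONE edge by number (or say stop):\n{menu}"
        ).strip()
        if not choice.isdigit() or int(choice) >= len(options):
            break  # model stopped or answered out of range: stay grounded
        rel, tgt = options[int(choice)]
        path.append((current, rel, tgt))  # every hop is a real KG edge
        current = tgt
    return path
```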
Medical Graph RAG
LLMs and Knowledge Graphs: A love story 💓 Researchers from the University of Oxford recently released MedGraphRAG. At its core, MedGraphRAG is a framework…
·linkedin.com·