MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage.
It achieves this with a Semantic-Aware Heterogeneous Graph…
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
I love Markus J. Buehler's work, and his latest paper "Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks" does not disappoint, revealing…
KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths over Knowledge Graphs
Breaking LLM Hallucinations in a Smarter Way!
(It's not about feeding more data.)
Large Language Models (LLMs) still struggle with factual inaccuracies, but…
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This multi-granular graph framework uses PageRank and a keyword-chunk bipartite graph to achieve the best cost-quality tradeoff.
﹉﹉﹉﹉﹉﹉﹉﹉﹉
The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
▸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
▸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
→ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹉﹉﹉﹉﹉﹉﹉﹉﹉
The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
▸ Layer 1: Knowledge Graph Skeleton
↳ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
↳ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
▸ Layer 2: Keyword-Chunk Bipartite Graph
↳ Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed.
↳ Acts as a "fast lane" for retrieving context without expensive entity extraction (see the sketch below).
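To make the two layers concrete, here is a minimal Python sketch of that indexing scheme, based only on the description above. The helpers extract_keywords and llm_extract_triples, and the 20% skeleton budget, are illustrative assumptions, not KET-RAG's actual API.

```python
# Hypothetical sketch of KET-RAG-style two-layer indexing (not the paper's code).
import re
import networkx as nx

def extract_keywords(text: str) -> set[str]:
    # Toy stand-in: capitalized tokens; a real system would use TF-IDF or RAKE.
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def build_keyword_chunk_graph(chunks: dict[str, str]) -> nx.Graph:
    """Layer 2: bipartite keyword-chunk graph, built with no LLM calls."""
    g = nx.Graph()
    for chunk_id, text in chunks.items():
        g.add_node(chunk_id, kind="chunk")
        for kw in extract_keywords(text):
            g.add_node(kw, kind="keyword")
            g.add_edge(kw, chunk_id)
    return g

def build_skeleton(g, chunks, llm_extract_triples, budget=0.2):
    """Layer 1: PageRank picks core chunks; only those are sent to the LLM."""
    scores = nx.pagerank(g)
    chunk_ids = [n for n, d in g.nodes(data=True) if d["kind"] == "chunk"]
    core = sorted(chunk_ids, key=scores.get, reverse=True)
    core = core[: max(1, int(budget * len(chunk_ids)))]
    triples = []
    for chunk_id in core:
        triples += llm_extract_triples(chunks[chunk_id])  # the only expensive step
    return triples
```

The design point: Layer 2 is cheap and exhaustive, Layer 1 is expensive and selective, and PageRank decides where the LLM budget goes.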
﹉﹉﹉﹉﹉﹉﹉﹉﹉
Results: Beating Microsoft's Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
▸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, with 10x lower cost.
▸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
▸ Scales to terabytes of data without melting budgets.
→ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹉﹉﹉﹉﹉﹉﹉﹉﹉
Why AI Agents Need This
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
▸ Real-time, multi-hop reasoning: Connecting "drug A → gene B → side effect C" in milliseconds (see the query-time sketch after this list).
▸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
▸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
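Here is a hypothetical query-time companion to the indexing sketch above, reusing its extract_keywords helper and graph conventions: the bipartite graph answers keyword lookups with no LLM, while the skeleton triples are walked hop by hop. All names and the hop logic are illustrative assumptions.

```python
# Hypothetical query-time sketch; not KET-RAG's actual retrieval code.
def retrieve(query: str, g, chunks: dict[str, str], triples, hops: int = 2):
    # Fast lane: keyword -> chunk lookups in the bipartite graph (no LLM).
    kw_nodes = {n for n, d in g.nodes(data=True) if d["kind"] == "keyword"}
    kws = extract_keywords(query) & kw_nodes
    context = {c for kw in kws for c in g.neighbors(kw)}
    # Multi-hop lane: follow skeleton triples, e.g. drug A -> gene B -> side effect C.
    # (Toy assumption: triple heads/tails share surface forms with keywords.)
    frontier, paths = set(kws), []
    for _ in range(hops):
        step = [(h, r, t) for (h, r, t) in triples if h in frontier]
        paths += step
        frontier = {t for (_, _, t) in step}
    return {"chunks": [chunks[c] for c in context], "paths": paths}
```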
Paper in comments
‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣‣
Build Your Own Supercharged AI Agent?
Join my Hands-On AI Agents Training TODAY and learn to build AI agents with LangGraph/LangChain, CrewAI, and OpenAI Swarm, plus RAG pipelines.
Enroll NOW [34% discount]: https://lnkd.in/eGuWr4CH
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning.
What if they could dynamically restructure their thought process like humans?
A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs).
Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways.
This is crucial for industries like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly.
This mirrors how experts allocate mental effort: drilling into uncertainties while streamlining obvious steps.
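As a rough illustration of that expand-or-resolve loop, here is a minimal Python sketch. The prompts, the llm callable, and the depth cap are assumptions for illustration, not the paper's interface, and the tree here is a simplification of AGoT's DAG.

```python
# Toy sketch of AGoT-style adaptive expansion; not the authors' implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    task: str
    children: list["Node"] = field(default_factory=list)
    answer: str | None = None

def solve(node: Node, llm, depth: int = 0, max_depth: int = 3) -> str:
    # Complexity check: simple subtasks are resolved directly.
    verdict = llm(f"Answer yes or no: is this a single-step task? {node.task}")
    if depth == max_depth or verdict.strip().lower().startswith("yes"):
        node.answer = llm(f"Answer directly: {node.task}")
        return node.answer
    # Otherwise spawn a sub-graph: decompose, recurse on each subtask, aggregate.
    subtasks = llm(f"List the subtasks, one per line: {node.task}").splitlines()
    node.children = [Node(t.strip()) for t in subtasks if t.strip()]
    partials = [solve(c, llm, depth + 1, max_depth) for c in node.children]
    node.answer = llm(f"Given sub-answers {partials}, answer: {node.task}")
    return node.answer
```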
The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.
By unifying chain, tree, and graph paradigms, AGoT retains CoT's clarity, ToT's exploration, and GoT's flexibility without manual tuning.
The result? LLMs that self-adapt their reasoning depth based on problem complexity, with no architectural changes needed.
For AI practitioners, AGoT's DAG structure offers a principled interface to scale reasoning modularly.
Wanna know what you missed? Join my newsletter with 50k+ readers that breaks down all you need to know about the latest LLM research: llmwatch.com
GFM-RAG: The First Graph Foundation Model for Retrieval-Augmented Generation
Introducing GFM-RAG: The First Graph Foundation Model for Retrieval-Augmented Generation!
We're excited to share our latest research: GFM-RAG: Graph…
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented...
The recently developed retrieval-augmented generation (RAG) technology has enabled the efficient construction of domain-specific applications. However, it also has limitations, including the gap...
Terminology-Augmented Generation (TAG)? Recently some fellow terminologists have proposed the new term "Terminology-Augmented Generation (TAG)" to refer to…
What really is Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering. That has been our position from the beginning, when we started our research…
OG-RAG: Ontology-Grounded Retrieval-Augmented Generation For Large...
This paper presents OG-RAG, an Ontology-Grounded Retrieval Augmented Generation method designed to enhance LLM-generated responses by anchoring retrieval processes in domain-specific ontologies...
Large Language Models, Knowledge Graphs and Search Engines: A...
Much has been discussed about how Large Language Models, Knowledge Graphs and Search Engines can be combined in a synergistic manner. A dimension largely absent from current academic discourse is...
Can Graph Learning Improve Planning in LLM-based Agents?
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs). It aims to break down complex user requests in natural...
SimGRAG, a novel method for knowledge-graph-driven RAG, transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric.
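To give a feel for what such a metric could look like, here is a small Python sketch: triples are compared element-wise in embedding space and greedily aligned. The embed callable and the greedy alignment are assumptions for illustration, not SimGRAG's actual formulation.

```python
# Illustrative graph semantic distance in the spirit of SimGRAG (see assumptions above).
import numpy as np

def triple_distance(t1, t2, embed) -> float:
    # Distance between aligned (head, relation, tail) elements of two triples.
    return sum(float(np.linalg.norm(embed(a) - embed(b))) for a, b in zip(t1, t2))

def graph_semantic_distance(pattern, subgraph, embed) -> float:
    # Greedily match each query-pattern triple to its nearest candidate triple.
    return sum(min(triple_distance(p, s, embed) for s in subgraph) for p in pattern)

# Usage: retrieve the candidate subgraph closest to the query's graph pattern.
# best = min(candidates, key=lambda sg: graph_semantic_distance(pattern, sg, embed))
```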
Graphs + Transformers = the best of both worlds 🤝 The same models powering breakthroughs in natural language processing are now being adapted for graphs…
Unlocking universal reasoning across knowledge graphs
Unlocking universal reasoning across knowledge graphs. Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
Takeaways from the International Semantic Web Conference #iswc2024
My takeaways from the International Semantic Web Conference #iswc2024. Ioana's keynote: a great example of data integration for journalism, highlighting the use of…
Beyond Vector Space: Knowledge Graphs and the New Frontier of Agentic System Accuracy
Beyond Vector Space: Knowledge Graphs and the New Frontier of Agentic System Accuracy ⏳ In the realm of agentic systems, a fundamental challenge emerges…
Understanding SPARQL Queries: Are We Already There?
Our paper "Understanding SPARQL Queries: Are We Already There?" explores the potential of Large Language Models (#LLMs) to generate natural-language…
Knowledge Graph Enhanced Language Agents for Recommendation
Language agents have recently been used to simulate human behavior and user-item interactions for recommendation systems. However, current language agent simulations do not understand the...
More Graph, More Agents: Scaling Graph Reasoning with LLMs
More Graph, More Agents: Scaling Graph Reasoning with LLMs. Graph reasoning tasks have proven to be a tough nut to crack for Large Language Models (LLMs)…
Fact Finder -- Enhancing Domain Expertise of Large Language Models...
Recent advancements in Large Language Models (LLMs) have showcased their proficiency in answering natural language queries. However, their effectiveness is hindered by limited domain-specific...
Recently, Retrieval-Augmented Generation (RAG) has achieved remarkable success in addressing the challenges of Large Language Models (LLMs) without necessitating retraining. By referencing an...
LLMs and Knowledge Graphs: A love story. Researchers from the University of Oxford recently released MedGraphRAG. At its core, MedGraphRAG is a framework…
Think-on-Graph 2.0: Deep and Interpretable Large Language Model...
Retrieval-augmented generation (RAG) has significantly advanced large language models (LLMs) by enabling dynamic information retrieval to mitigate knowledge gaps and hallucinations in generated...