GraphNews

#LLM
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
šŸ†šŸš£MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage. Achieving that by Semantic-Aware Heterogeneous Graphā€¦
·linkedin.com·
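The snippet cuts off at the mechanism, but the MiniRAG paper describes indexing text chunks and named entities together in one heterogeneous graph, so a small model can retrieve by topology instead of deep semantic matching. A minimal networkx sketch of that idea, with the entity extractor left as a caller-supplied assumption (names and signatures here are illustrative, not the paper's API):

```python
import networkx as nx

def build_hetero_index(chunks: dict[str, str], entities_of) -> nx.Graph:
    """One graph, two node kinds: text chunks plus the entities they mention.

    chunks maps chunk_id -> text; entities_of(text) is any lightweight
    entity extractor returning entity names (an assumption, not MiniRAG's).
    """
    g = nx.Graph()
    for chunk_id, text in chunks.items():
        g.add_node(chunk_id, kind="chunk", text=text)
        names = sorted(set(entities_of(text)))
        for name in names:
            g.add_node(name, kind="entity")
            g.add_edge(name, chunk_id)        # entity appears in this chunk
        for i, a in enumerate(names):         # entities co-occurring in a chunk
            for b in names[i + 1:]:
                g.add_edge(a, b)
    return g

def retrieve_chunks(g: nx.Graph, query_entities: list[str], hops: int = 2) -> list[str]:
    """Topology-based retrieval: chunks within `hops` of any query entity."""
    reached = set()
    for e in query_entities:
        if e in g:
            reached |= set(nx.ego_graph(g, e, radius=hops).nodes)
    return [n for n in reached if g.nodes[n].get("kind") == "chunk"]
```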
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This multi-granular graph framework uses PageRank and a keyword-chunk graph to get the best cost-quality tradeoff.

》The problem: knowledge graphs are expensive (and clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
- Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
- Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

》The fix: KET-RAG's two-layer brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system; a code sketch follows this entry.
- Layer 1: knowledge graph skeleton. Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs), then builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
- Layer 2: keyword-chunk bipartite graph. Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed. Acts as a "fast lane" for retrieving context without expensive entity extraction.

》Results: beating Microsoft's Graph-RAG with pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
- Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
- Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
- Scales to terabytes of data without melting budgets.
Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.

》Why AI agents need this
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
- Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
- Cost-effective scalability: deploying agents across millions of documents without going broke.
- Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).

Paper in comments.
·linkedin.com·
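A rough sketch of the two layers in Python, assuming networkx and a naive keyword extractor; the function names and skeleton fraction are illustrative choices of mine, not the paper's code. Layer 1 ranks chunks with PageRank over a keyword-overlap graph and reserves costly LLM triple extraction for the top slice; Layer 2 serves retrieval from a bipartite keyword-chunk graph with no LLM at all.

```python
import re
from collections import Counter

import networkx as nx

STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def keywords(text: str) -> set[str]:
    """Naive keyword extractor: lowercase words minus stopwords."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP}

def build_index(chunks: dict[str, str], skeleton_frac: float = 0.2):
    kws = {cid: keywords(t) for cid, t in chunks.items()}

    # Layer 1: PageRank over a cheap chunk-similarity graph picks the
    # "skeleton" chunks that merit expensive LLM triple extraction.
    sim = nx.Graph()
    sim.add_nodes_from(chunks)
    ids = list(chunks)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            shared = kws[a] & kws[b]
            if shared:
                sim.add_edge(a, b, weight=len(shared))
    rank = nx.pagerank(sim, weight="weight")
    k = max(1, int(len(ids) * skeleton_frac))
    skeleton = sorted(rank, key=rank.get, reverse=True)[:k]
    # An LLM extractor would now build triples from chunks[cid] for each
    # cid in skeleton; stubbed out here since it is model-specific.

    # Layer 2: bipartite keyword-chunk graph, built with no LLM calls.
    bip = nx.Graph()
    for cid, ws in kws.items():
        for w in ws:
            bip.add_edge(("kw", w), ("chunk", cid))
    return skeleton, bip

def retrieve(bip: nx.Graph, query: str, top_n: int = 3) -> list[str]:
    """Fast lane: hop from query keywords to the chunks they touch."""
    hits = Counter()
    for w in keywords(query):
        if ("kw", w) in bip:
            for _, (_, cid) in bip.edges(("kw", w)):
                hits[cid] += 1
    return [cid for cid, _ in hits.most_common(top_n)]
```

Retrieval then mixes both layers: precise triples from the skeleton where they exist, bipartite keyword hits everywhere else, which is where the cost-quality tradeoff comes from.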
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic reasoning graphs + LLMs = 🤝

Large language models (LLMs) often stumble on complex tasks when confined to linear reasoning. What if they could dynamically restructure their thought process like humans? A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs). Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways. This is crucial for fields like scientific research or legal analysis, where problems demand non-linear, nested reasoning.

The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly. This mirrors how experts allocate mental effort: drilling into uncertainties while streamlining obvious steps. The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.

By unifying chain, tree, and graph paradigms, AGoT retains CoT's clarity, ToT's exploration, and GoT's flexibility without manual tuning. The result? LLMs that self-adapt their reasoning depth based on problem complexity, with no architectural changes needed. For AI practitioners, AGoT's DAG structure offers a principled interface to scale reasoning modularly (see the sketch below).
·linkedin.com·
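A toy sketch of that control loop in Python: grow a DAG of reasoning steps, expand a node into sub-tasks only when a complexity check fires, and resolve simple nodes directly. The llm(), is_complex(), and decompose() callables stand in for model calls; they are assumptions, not the paper's interface.

```python
import networkx as nx

def agot(task: str, llm, is_complex, decompose, max_depth: int = 3) -> str:
    """Adaptive Graph of Thoughts, reduced to its control flow."""
    dag = nx.DiGraph()          # records the reasoning structure as it grows
    dag.add_node(task)

    def solve(node: str, depth: int) -> str:
        # Complexity check: simple (or depth-capped) nodes are answered directly.
        if depth >= max_depth or not is_complex(node):
            return llm(node)
        # Complex nodes spawn a sub-graph of sub-tasks, solved recursively.
        answers = []
        for sub in decompose(node):
            dag.add_edge(node, sub)
            answers.append(solve(sub, depth + 1))
        # Aggregate child results back into the parent node.
        return llm(node + "\nGiven sub-results:\n" + "\n".join(answers))

    return solve(task, 0)
```

Because expansion is decided per node, the same loop degenerates to plain chain-of-thought on easy problems and only pays for graph-shaped reasoning when the complexity check demands it.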
Pathway to Artificial General Intelligence (AGI)
🌟 Pathway to Artificial General Intelligence (AGI) 🌟 This is my view on the evolutionary steps toward AGI: 1️⃣ Large Language Models (LLMs): Language models…
·linkedin.com·
A comparison between ChatGPT and DeepSeek capabilities writing a valid Cypher query
Today, I compared the chat capabilities of ChatGPT and DeepSeek by providing each with a graph schema and a natural language question. I tasked them with writing a valid Cypher query to answer the question.
·linkedin.com·
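For anyone wanting to rerun the experiment, a minimal Python sketch of the setup; the schema, question, and prompt wording below are invented stand-ins, not the ones from the post.

```python
# Hypothetical property-graph schema and question; swap in your own.
SCHEMA = """\
(:Person {name: STRING})-[:ACTED_IN]->(:Movie {title: STRING, year: INT})
(:Person)-[:DIRECTED]->(:Movie)
"""

QUESTION = "Which actors appeared in movies released after 2015?"

# The same prompt goes to both models; the returned Cypher queries are
# then checked for validity against the schema (e.g., by running EXPLAIN
# on a Neo4j instance).
prompt = (
    "You are given this Neo4j graph schema:\n"
    f"{SCHEMA}\n"
    "Write one valid Cypher query that answers the question below. "
    "Return only the query.\n\n"
    f"Question: {QUESTION}"
)
print(prompt)
```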
What is really Graph RAG?
What is really Graph RAG? Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
·linkedin.com·
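The snippet cuts off, but the Microsoft paper's core move is: detect communities in an entity graph, pre-summarize each community ("local"), then reduce those summaries into one query-focused answer ("global"). A hedged networkx sketch of that pipeline shape, with llm() as a placeholder for the summarization model:

```python
import networkx as nx

def graph_rag_answer(entity_graph: nx.Graph, query: str, llm) -> str:
    """Map-reduce over graph communities, per the local-to-global recipe."""
    # Partition the entity graph into communities of related entities.
    communities = nx.community.louvain_communities(entity_graph, seed=42)

    # "Local": summarize each community's relations with the query in mind.
    partials = []
    for nodes in communities:
        edges = entity_graph.subgraph(nodes).edges
        facts = "\n".join(f"{u} -- {v}" for u, v in edges)
        partials.append(llm(f"Summarize these relations, focusing on: {query}\n{facts}"))

    # "Global": reduce the partial summaries into one final answer.
    return llm(f"Combine into a final answer to '{query}':\n" + "\n".join(partials))
```

Note that the paper precomputes community summaries at indexing time rather than per query; doing it lazily as above keeps the sketch short but changes the cost profile.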
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
Knowledge graphs as a source of trust for LLM-powered enterprise question answering: that has been our position from the beginning, when we started our research…
·linkedin.com·
A zero-hallucination AI chatbot that answered over 10000 questions of students at the University of Chicago using GraphRAG
UChicago Genie is now open source! How we built a zero-hallucination AI chatbot that answered over 10,000 questions from students at the University of Chicago…
·linkedin.com·
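The post doesn't spell out the mechanism, but a common way GraphRAG systems chase "zero hallucination" is to answer strictly from retrieved graph facts and to refuse when none match. A speculative sketch of that guard; retrieve_subgraph() and llm() are assumed components, not Genie's actual code.

```python
def grounded_answer(query: str, retrieve_subgraph, llm) -> str:
    """Refuse-unless-grounded: never let the model answer from thin air."""
    facts = retrieve_subgraph(query)  # assumed to return (subject, relation, object) triples
    if not facts:
        # No supporting subgraph: refuse instead of letting the LLM guess.
        return "I don't have that information."
    context = "\n".join(f"{s} -[{r}]-> {o}" for s, r, o in facts)
    return llm(
        "Answer using only these facts; if they are insufficient, say so.\n"
        f"{context}\n\nQuestion: {query}"
    )
```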