Found 104 bookmarks
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role
NodeRAG restructures knowledge into a heterograph: a rich, layered, musical graph where each node plays a different role. It's not just smarter retrieval; it's structured memory for AI agents.

》 Why NodeRAG?
Most Retrieval-Augmented Generation (RAG) methods retrieve chunks of text. That is good enough until you need reasoning, precision, and multi-hop understanding. Here is how NodeRAG solves these problems.

》 Step 1: Graph Decomposition
NodeRAG begins by decomposing raw text into building blocks:
✸ Semantic Units (S): small event nuggets ("Hinton won the Nobel Prize.")
✸ Entities (N): key names or concepts ("Hinton", "Nobel Prize")
✸ Relationships (R): links between entities ("awarded to")
✩ This is like teaching your AI to recognize the actors, actions, and scenes inside any document.

》 Step 2: Graph Augmentation
Decomposition alone isn't enough, so NodeRAG augments the graph by identifying important hubs:
✸ Node importance: K-Core and Betweenness Centrality find critical nodes.
✩ Important entities get special attention: their attributes are summarized into new attribute nodes (A).
✸ Community detection: related nodes are grouped into communities and summarized into high-level insight nodes (H).
✩ Each community also gets a "headline" overview node (O) for quick retrieval. It's like adding context and intuition to raw facts.

》 Step 3: Graph Enrichment
Knowledge without detail is brittle, so NodeRAG enriches the graph:
✸ Original text: full chunks are linked back into the graph as text nodes (T).
✸ Semantic edges: HNSW adds fast, meaningful similarity connections.
✩ Only selected node types are embedded (not everything), which saves substantial storage.
✩ Dual search (exact match + vector similarity) makes retrieval sharp. It's like turning a 2D map into a 3D living world.

》 Step 4: Graph Searching
Now comes the retrieval itself (see the sketch below):
✸ Dual search: first find strong entry points, by name or by meaning.
✸ Shallow Personalized PageRank (PPR): expand carefully from the entry points to nearby relevant nodes.
✩ No wandering into irrelevant parts of the graph; the search is surgical.
✩ Retrieval returns fine-grained semantic units, attributes, and high-level elements: everything you need, nothing you don't. It's like sending agents into a city; they return not with everything they saw, but with exactly what you asked for, summarized and structured.

》 Results: NodeRAG's Performance
Compared to GraphRAG, LightRAG, NaiveRAG, and HyDE, NodeRAG wins across every major domain tested: Tech, Science, Writing, Recreation, and Finance. NodeRAG isn't just a better graph. It is a new operating system for memory.
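A minimal sketch of the Step 4 idea, entry points plus shallow Personalized PageRank over a toy heterograph, using networkx. The node labels, edges, hard-coded entry point, and alpha value are illustrative assumptions, not NodeRAG's actual code.

```python
# Toy heterograph: S = semantic unit, N = entity, R = relationship,
# A = attribute summary, T = original text chunk.
import networkx as nx

G = nx.Graph()
G.add_node("S1", kind="S", text="Hinton won the Nobel Prize.")
G.add_node("N1", kind="N", name="Hinton")
G.add_node("N2", kind="N", name="Nobel Prize")
G.add_node("R1", kind="R", label="awarded to")
G.add_node("A1", kind="A", text="Hinton: deep learning pioneer, Nobel laureate.")
G.add_node("T1", kind="T", text="...full source chunk...")
G.add_edges_from([
    ("S1", "N1"), ("S1", "N2"),  # semantic unit mentions both entities
    ("R1", "N1"), ("R1", "N2"),  # relationship node links the entities
    ("A1", "N1"),                # attribute summary attached to its entity
    ("T1", "S1"),                # original chunk behind the semantic unit
])

# Dual search would pick entry points by exact name match and vector
# similarity; here one entry point is hard-coded for illustration.
entry_points = {"N1": 1.0}

# Shallow PPR: a low damping factor keeps probability mass near the entry
# points, so expansion stays local instead of wandering the whole graph.
scores = nx.pagerank(G, alpha=0.5, personalization=entry_points)
top = sorted(scores, key=scores.get, reverse=True)[:5]
print([(n, G.nodes[n]["kind"], round(scores[n], 3)) for n in top])
```

Note how the retrieved set mixes node types (units, attributes, text), which is the point: one query pulls back facts at several granularities.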
·linkedin.com·
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning

👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.

👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle (see the sketch below).

👉 Key Insights
1. Format matters more than assumed:
   - Structured JSON and edge lists performed best overall, but results varied by task.
   - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, suggesting models rely on the provided context rather than memorized knowledge.
3. Token efficiency:
   - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
   - But concise is not always better: structured formats improved accuracy on tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges (e.g., "Which countries does France border?") is easier than incoming ones ("Which countries border France?"), likely due to formatting biases.

👉 Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.

The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.

Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
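To make the textualization variable concrete, here is a small sketch of two formats of the kind the benchmark compares: a flat edge list and subject-grouped JSON. The triples and serialization details are illustrative assumptions; the benchmark's own serializers may differ.

```python
import json

triples = [
    ("France", "borders", "Belgium"),
    ("France", "capital", "Paris"),
    ("Belgium", "capital", "Brussels"),
]

# Edge list: compact (few tokens); hub nodes stand out via repeated mentions.
edge_list = "\n".join(f"{s} | {p} | {o}" for s, p, o in triples)

# Subject-grouped JSON: more tokens, but grouping by entity helps
# aggregation-style questions ("how many X does France have?").
grouped: dict = {}
for s, p, o in triples:
    grouped.setdefault(s, {}).setdefault(p, []).append(o)
json_text = json.dumps(grouped, indent=2)

prompt = f"Knowledge graph:\n{edge_list}\n\nQuestion: Which countries border France?"
print(prompt)
print(json_text)
```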
·linkedin.com·
Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storage—Outperforming MemGPT with 94.8% Accuracy
🎁⏳ Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storage—Outperforming MemGPT with 94.8% Accuracy. Build Personalized AI…
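The temporal-knowledge-graph idea at the heart of this: facts are edges carrying validity intervals, so the store can answer what was true at a given time. Below is a generic Neo4j-style Cypher sketch held in Python strings; the schema, labels, and parameters are assumptions for illustration, not Zep's actual data model or API.

```python
# Generic temporal-KG sketch: relationships carry validity intervals so
# point-in-time queries are possible. Assumed schema, NOT Zep's actual model.
# Executing these requires a Neo4j instance and driver; shown here as strings.

UPSERT_FACT = """
MERGE (u:User {name: $user})
MERGE (c:Company {name: $company})
CREATE (u)-[:WORKS_AT {valid_from: $start, valid_to: $end}]->(c)
"""

# "Where did this user work at time $t?" -- still-open facts have valid_to = null.
POINT_IN_TIME = """
MATCH (u:User {name: $user})-[r:WORKS_AT]->(c:Company)
WHERE r.valid_from <= $t AND ($t <= r.valid_to OR r.valid_to IS NULL)
RETURN c.name AS company
"""
```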
·linkedin.com·
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval. This multi-granular graph framework uses PageRank and a keyword-chunk graph to hit the best cost-quality tradeoff.

》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

》The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system (see the sketch below):
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed.
☆ Acts as a "fast lane" for retrieving context without expensive entity extraction.

》Results: Beating Microsoft's GraphRAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.

》Why AI Agents Need This
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).

Paper in comments
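A hedged sketch of the two layers as the post describes them: PageRank gates the expensive LLM extraction to a skeleton of core chunks, while the keyword-chunk bipartite map is built for free. The chunk texts, the similarity graph, and the llm_extract_triples helper are hypothetical.

```python
from collections import defaultdict
import networkx as nx

chunks = {
    "c1": "COVID vaccines and myocarditis risk in young adults.",
    "c2": "Myocarditis case reports after a second vaccine dose.",
    "c3": "Unrelated procedural notes.",
}

# Layer 1: rank chunks on a (toy) similarity graph; spend LLM-based entity and
# relation extraction only on the top-ranked ones -- the sparse "skeleton".
sim_graph = nx.Graph([("c1", "c2"), ("c2", "c3")])
rank = nx.pagerank(sim_graph)
skeleton = sorted(rank, key=rank.get, reverse=True)[:2]
# for cid in skeleton:
#     llm_extract_triples(chunks[cid])  # hypothetical expensive step, now gated

# Layer 2: keyword -> chunk bipartite map, built with no LLM calls at all.
keyword_to_chunks = defaultdict(set)
for cid, text in chunks.items():
    for token in text.lower().split():
        key = token.strip(".,")
        if key:
            keyword_to_chunks[key].add(cid)

print(keyword_to_chunks["myocarditis"])  # fast lane: {'c1', 'c2'} (order varies)
```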
·linkedin.com·
SimGRAG is a novel method for knowledge-graph-driven RAG: it transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
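A minimal sketch of the idea named in the title: score candidate subgraphs by a semantic distance to the query pattern. The stand-in embedding, the fixed one-to-one triple alignment, and the example triples are simplifying assumptions, not SimGRAG's exact algorithm.

```python
import zlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Deterministic stand-in embedding: only identical strings map to identical
    # vectors. A real system would use a sentence encoder that also brings
    # paraphrases ("authored_by" vs "author") close together.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(8)
    return v / np.linalg.norm(v)

def graph_semantic_distance(pattern, candidate):
    # pattern/candidate: equal-length lists of (head, relation, tail) triples,
    # assumed already aligned one-to-one (a strong simplification).
    total = 0.0
    for (h1, r1, t1), (h2, r2, t2) in zip(pattern, candidate):
        for a, b in ((h1, h2), (r1, r2), (t1, t2)):
            total += 1.0 - float(embed(a) @ embed(b))  # cosine distance
    return total

query_pattern = [("?paper", "author", "Geoffrey Hinton")]
candidates = [
    [("Distilling the Knowledge in a Neural Network", "author", "Geoffrey Hinton")],
    [("Attention Is All You Need", "author", "Ashish Vaswani")],
]
for cand in candidates:  # the matching subgraph scores a much lower distance
    print(round(graph_semantic_distance(query_pattern, cand), 3))
```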
·linkedin.com·
Knowledge Graph In-Context Learning
Unlocking universal reasoning across knowledge graphs. Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
·linkedin.com·
Graph-constrained Reasoning
🚀 Exciting New Research: "Graph-constrained Reasoning (GCR)" - Enabling Faithful KG-grounded LLM Reasoning with Zero Hallucination! 🧠 🎉 Proud to share our…
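The title's idea, decoding constrained to edges that actually exist in the KG, can be illustrated with a toy walk. The graph, the hop budget, and the pick-first scoring below stand in for the paper's actual LLM-guided decoding; they are assumptions for illustration only.

```python
# Toy KG: (subject, relation) -> list of objects.
kg = {
    ("Paris", "capital_of"): ["France"],
    ("France", "member_of"): ["EU"],
}

def neighbors(entity):
    # Only edges present in the KG are ever offered as next steps.
    return [(rel, obj) for (subj, rel), objs in kg.items()
            if subj == entity for obj in objs]

def constrained_walk(start, hops):
    path, current = [start], start
    for _ in range(hops):
        options = neighbors(current)
        if not options:
            break
        # A real system would let the LLM score `options`; we take the first.
        rel, nxt = options[0]
        path += [rel, nxt]
        current = nxt
    return path

# Because every step comes from the KG, the path cannot be hallucinated.
print(constrained_walk("Paris", 2))
# ['Paris', 'capital_of', 'France', 'member_of', 'EU']
```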
·linkedin.com·
Medical Graph RAG
LLMs and Knowledge Graphs: A love story 💓 Researchers from University of Oxford recently released MedGraphRAG. At its core, MedGraphRAG is a framework…
·linkedin.com·
The Mindful-RAG approach is a framework tailored for intent-based and contextually aligned knowledge retrieval.
RAG implementations fail due to insufficient focus on question intent. The Mindful-RAG…
·linkedin.com·
GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models
This is something very cool! GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models. "GraphReader addresses the…
·linkedin.com·
GitHub - SynaLinks/HybridAGI: The Programmable Neuro-Symbolic AGI that lets you program its behavior using Graph-based Prompt Programming: for people who want AI to behave as expected
The Programmable Neuro-Symbolic AGI that lets you program its behavior using Graph-based Prompt Programming: for people who want AI to behave as expected - SynaLinks/HybridAGI
·github.com·