Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
🔎 Lessons Learned from Evaluating NodeRAG vs Other RAG Systems

I recently dug into the NodeRAG paper (https://lnkd.in/gwaJHP94), and it was eye-opening not just for how it performed, but for what it revealed about the evolution of RAG (Retrieval-Augmented Generation) systems. Some key takeaways for me:

👉 NaiveRAG is stronger than you think. Brute-force retrieval using simple vector search sometimes beats graph-based methods, especially when graph structures are too coarse or noisy.

👉 GraphRAG was an important step, but not the final answer. While it introduced knowledge graphs and community-based retrieval, GraphRAG sometimes underperformed NaiveRAG because its communities could be too coarse, leading to irrelevant retrieval.

👉 LightRAG reduced token cost, but at the expense of accuracy. By retrieving only 1-hop neighbors instead of traversing globally, LightRAG made retrieval cheaper, but it often missed important multi-hop reasoning paths and lost precision.

👉 NodeRAG shows what mature RAG looks like. NodeRAG redesigned the graph structure itself: instead of homogeneous graphs, it uses heterogeneous graphs with fine-grained semantic units, entities, relationships, and high-level summaries, all as nodes. It combines dual search (exact match + semantic search) with shallow Personalized PageRank to precisely retrieve the most relevant context.

The result?
🚀 Highest accuracy across multi-hop and open-ended benchmarks
🚀 Lowest retrieval token counts (i.e., lower inference costs)
🚀 Faster indexing and querying

🧠 Key takeaway: in the RAG world, it's no longer about retrieving more; it's about retrieving better. Fine-grained, explainable, efficient retrieval will define the next generation of RAG systems.

If you're working on RAG architectures, NodeRAG's design principles are well worth studying! Would love to hear how others are thinking about the future of RAG systems. 🚀📚

#RAG #KnowledgeGraphs #AI #LLM #NodeRAG #GraphRAG #LightRAG #MachineLearning #GenAI
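To make the retrieval idea concrete, here is a minimal sketch of dual-search seeding plus shallow Personalized PageRank over a heterogeneous graph, in the spirit of NodeRAG. The toy graph, node naming scheme, and two-step iteration count are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: shallow Personalized PageRank over a heterogeneous node graph.
import networkx as nx

# Heterogeneous graph: every node carries a "kind" attribute
# (semantic unit, entity, summary), all living in one graph.
G = nx.Graph()
G.add_node("ent:marie_curie", kind="entity")
G.add_node("sem:radium_discovery", kind="semantic_unit")
G.add_node("sum:nobel_laureates", kind="summary")
G.add_edge("ent:marie_curie", "sem:radium_discovery")
G.add_edge("ent:marie_curie", "sum:nobel_laureates")

def shallow_ppr(G, seeds, alpha=0.5, steps=2):
    """A few power-iteration steps of Personalized PageRank.

    Keeping `steps` small ("shallow") confines probability mass to the
    neighborhood of the seed nodes instead of diffusing graph-wide.
    """
    p = {n: 0.0 for n in G}
    for s in seeds:
        p[s] = 1.0 / len(seeds)
    restart = dict(p)
    for _ in range(steps):
        nxt = {n: (1 - alpha) * restart[n] for n in G}
        for u in G:
            share = alpha * p[u] / max(G.degree(u), 1)
            for v in G.neighbors(u):
                nxt[v] += share
        p = nxt
    return p

# In practice the seeds would come from dual search: exact keyword
# matches plus vector-similarity hits. Here they are hard-coded.
scores = shallow_ppr(G, seeds=["ent:marie_curie"])
context_nodes = sorted(scores, key=scores.get, reverse=True)[:2]
print(context_nodes)
```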
·linkedin.com·
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
We’re thrilled to announce new Text2Cypher models and Google’s MCP Toolbox for Databases from the collaboration between Google Cloud and Neo4j.
·neo4j.com·
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning

👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: "how" you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.

👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.

👉 Key Insights
1. Format matters more than assumed:
  - Structured JSON and edge lists performed best overall, but results varied by task.
  - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, showing that models rely on the provided context, not memorized knowledge.
3. Token efficiency:
  - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
  - But concise ≠ always better: structured formats improved accuracy on tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges (e.g., "Which countries does France border?") is easier than incoming ones ("Which countries border France?"), likely due to formatting biases.

👉 Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.

The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.

Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
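As a rough illustration of what "textualization" means in practice, the sketch below serializes the same three triples as a compact edge list and as entity-grouped JSON. The serializations are my own stand-ins; the exact formats in KG-LLM-Bench may differ.

```python
# Sketch: two ways to textualize the same knowledge-graph triples.
import json
from collections import defaultdict

triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("France", "capital", "Paris"),
]

def as_edge_list(triples):
    # Compact: one line per fact. Cheap in tokens; repeated subject
    # mentions make central nodes easy to spot.
    return "\n".join(f"{s} -[{p}]-> {o}" for s, p, o in triples)

def as_grouped_json(triples):
    # Verbose but grouped by entity, which helps aggregation-style
    # questions ("how many X does France have?").
    grouped = defaultdict(lambda: defaultdict(list))
    for s, p, o in triples:
        grouped[s][p].append(o)
    return json.dumps(grouped, indent=2)

# Either string becomes the graph context placed in the LLM prompt.
print(as_edge_list(triples))
print(as_grouped_json(triples))
```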
·linkedin.com·
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM could replace the whole text-to-ontology pipeline is a resounding 'no'. If you're one of those who think it should be (or even is?) a 'yes': why, and did you run the experiments showing it's as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints. For a few gentle considerations (and pointers to longer arguments) and a summary figure of the processes the LLM would supposedly be replacing, see https://lnkd.in/dG_Xsv_6
Maria Keet
·linkedin.com·
Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAP’s approach to Generative AI, and its technical underpinnings. In previous blog posts of this series, you learned about how to use large language models (LLMs) for developing AI applications in a trustworthy and reliable manner...
·community.sap.com·
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
🎉🎉🎉 "Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"

Four years ago, we embarked on writing "Knowledge Graphs Applied" with a clear mission: to guide practitioners in implementing production-ready knowledge graph solutions. Drawing from our extensive field experience across multiple domains, we aimed to share battle-tested best practices that transcend basic use cases.

Like fine wine, ideas and concepts need time to mature. During these four years of careful development, we witnessed a seismic shift in the technological landscape. Large Language Models (LLMs) emerged not just as a buzzword, but as a transformative force that naturally converged with knowledge graphs. This synergy unlocked new possibilities, particularly in simplifying complex tasks like unstructured data ingestion and knowledge graph-based question answering.

We couldn't ignore this technological disruption. Instead, we embraced it, incorporating our hands-on experience in combining LLMs with graph technologies. The result is "Knowledge Graphs and LLMs in Action", a thoroughly revised work with new chapters and an expanded scope. Yet our fundamental goal remains unchanged: to empower you to harness the full potential of knowledge graphs, now enhanced by their increasingly natural companion, LLMs.

This book represents the culmination of a journey that evolved alongside the technology itself. It delivers practical, production-focused guidance for the modern era, in which knowledge graphs and LLMs work in concert. Now available in MEAP, with new LLM-focused chapters ready to be published.

#llms #knowledgegraph #graphdatascience
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
·linkedin.com·
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems 🛜 At the most fundamental level, all approaches rely…
·linkedin.com·
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP
Build a graph for a RAG application for the price of a chocolate bar! What is GraphRAG? What does GraphRAG mean from your perspective? What if you could have a standard RAG and a GraphRAG as a combi-package, with just a query switch? The fact is, there is no concrete, universal
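A minimal sketch of the "combi-package" idea as I read it: one retrieval entry point with a query-time switch between plain vector RAG and GraphRAG. The backends here are stand-in lambdas, not the author's pipeline.

```python
# Sketch: one retriever, two strategies, selected per query.
from typing import Callable, List

def make_hybrid_retriever(
    vector_search: Callable[[str, int], List[str]],
    graph_search: Callable[[str, int], List[str]],
):
    def retrieve(query: str, k: int = 5, mode: str = "vector") -> List[str]:
        # A single switch decides the strategy; both strategies share
        # the same underlying index build.
        if mode == "graph":
            return graph_search(query, k)
        return vector_search(query, k)
    return retrieve

# Usage with dummy backends standing in for real ones:
retrieve = make_hybrid_retriever(
    vector_search=lambda q, k: [f"chunk for '{q}'"] * k,
    graph_search=lambda q, k: [f"subgraph context for '{q}'"] * k,
)
print(retrieve("what is GraphRAG?", k=2, mode="graph"))
```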
·linkedin.com·
Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storage—Outperforming MemGPT with 94.8% Accuracy
šŸŽā³ Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storage—Outperforming MemGPT with 94.8% Accuracy. Build Personalized AI… | 46 comments on LinkedIn
·linkedin.com·
Synalinks is an open-source framework designed to streamline the creation, evaluation, training, and deployment of industry-standard Language Models (LMs) applications
🎉 We're thrilled to unveil Synalinks (🧠🔗), an open-source framework designed to streamline the creation, evaluation, training, and deployment of…
·linkedin.com·
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
šŸ†šŸš£MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage. Achieving that by Semantic-Aware Heterogeneous Graph…
·linkedin.com·
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval

This Multi-Granular Graph Framework uses PageRank and a Keyword-Chunk Graph to achieve the best cost-quality tradeoff.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., "myocarditis") to all related text snippets, with no LLM needed.
☆ Acts as a "fast lane" for retrieving context without expensive entity extraction.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft's Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren't just chatbots; they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Paper in comments
≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣
》Build Your Own Supercharged AI Agent?
🔮 Join my Hands-On AI Agents Training TODAY! Learn building AI agents with Langgraph/Langchain, CrewAI and OpenAI Swarm + RAG pipelines.
Enroll NOW [34% discount]: 👉 https://lnkd.in/eGuWr4CH
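A condensed sketch of the two layers described above: PageRank picks the "skeleton" chunks that justify expensive LLM graph extraction, while a keyword-chunk bipartite index is built with no LLM calls at all. The toy chunks, word-overlap similarity graph, and top-k cutoff are illustrative assumptions, not the paper's settings.

```python
# Sketch: KET-RAG's two indexing layers over a toy corpus.
import networkx as nx
from collections import defaultdict

chunks = {
    "c1": "COVID vaccines and reported myocarditis risks in young adults",
    "c2": "myocarditis treatment guidelines",
    "c3": "vaccine distribution logistics",
}
words = {cid: set(text.lower().split()) for cid, text in chunks.items()}

# Layer 1: rank chunks by PageRank on a word-overlap similarity graph,
# then send only the top-ranked slice to LLM entity extraction.
sim = nx.Graph()
for a in chunks:
    for b in chunks:
        if a < b and words[a] & words[b]:
            sim.add_edge(a, b, weight=len(words[a] & words[b]))
rank = nx.pagerank(sim, weight="weight")
skeleton_chunks = sorted(rank, key=rank.get, reverse=True)[:1]
# build_llm_knowledge_graph(skeleton_chunks)  # the expensive step, elided

# Layer 2: keyword -> chunk bipartite index over *all* chunks,
# giving a cheap "fast lane" for retrieval.
keyword_to_chunks = defaultdict(set)
for cid, ws in words.items():
    for w in ws:
        keyword_to_chunks[w].add(cid)

print(skeleton_chunks, sorted(keyword_to_chunks["myocarditis"]))
```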
·linkedin.com·
Pathway to Artificial General Intelligence (AGI)
🌟 Pathway to Artificial General Intelligence (AGI) 🌟 This is my view on the evolutionary steps toward AGI: 1️⃣ Large Language Models (LLMs): Language models…
·linkedin.com·