Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
🔎 Lessons Learned from Evaluating NodeRAG vs Other RAG Systems

I recently dug into the NodeRAG paper (https://lnkd.in/gwaJHP94) and it was eye-opening, not just for how it performed, but for what it revealed about the evolution of RAG (Retrieval-Augmented Generation) systems. Some key takeaways for me:

👉 NaiveRAG is stronger than you think. Brute-force retrieval using simple vector search sometimes beats graph-based methods, especially when graph structures are too coarse or noisy.

👉 GraphRAG was an important step, but not the final answer. While it introduced knowledge graphs and community-based retrieval, GraphRAG sometimes underperformed NaiveRAG because its communities could be too coarse, leading to irrelevant retrieval.

👉 LightRAG reduced token cost, but at the expense of accuracy. By retrieving just 1-hop neighbors instead of traversing globally, LightRAG made retrieval cheaper, but it often missed important multi-hop reasoning paths and lost precision.

👉 NodeRAG shows what mature RAG looks like. NodeRAG redesigned the graph structure itself: instead of homogeneous graphs, it uses heterogeneous graphs with fine-grained semantic units, entities, relationships, and high-level summaries, all as nodes. It combines dual search (exact match + semantic search) with shallow Personalized PageRank to precisely retrieve the most relevant context.

The result?
🚀 Highest accuracy across multi-hop and open-ended benchmarks
🚀 Lowest retrieval token counts (i.e., lower inference costs)
🚀 Faster indexing and querying

🧠 Key takeaway: in the RAG world, it's no longer about retrieving more; it's about retrieving better. Fine-grained, explainable, efficient retrieval will define the next generation of RAG systems.

If you're working on RAG architectures, NodeRAG's design principles are well worth studying! Would love to hear how others are thinking about the future of RAG systems. 🚀📚

#RAG #KnowledgeGraphs #AI #LLM #NodeRAG #GraphRAG #LightRAG #MachineLearning #GenAI
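To make the retrieval step concrete, here is a minimal sketch of shallow Personalized PageRank over a heterogeneous graph, assuming networkx; the graph, node kinds, and seed selection below are illustrative stand-ins, not NodeRAG's actual implementation.

```python
# Minimal sketch: Personalized PageRank retrieval over a heterogeneous graph.
# Node names, 'kind' labels, and the toy graph are invented for illustration.
import networkx as nx

# Heterogeneous graph: every node carries a 'kind' attribute.
G = nx.Graph()
G.add_node("e:marie_curie", kind="entity")
G.add_node("s:curie_won_nobel", kind="semantic_unit",
           text="Marie Curie won two Nobel Prizes.")
G.add_node("r:won", kind="relationship")
G.add_node("h:radioactivity_summary", kind="summary",
           text="High-level summary of early radioactivity research.")
G.add_edges_from([
    ("e:marie_curie", "s:curie_won_nobel"),
    ("s:curie_won_nobel", "r:won"),
    ("e:marie_curie", "h:radioactivity_summary"),
])

def retrieve(seeds: list[str], k: int = 3) -> list[str]:
    """Rank nodes by Personalized PageRank restarted from the seed nodes."""
    scores = nx.pagerank(G, alpha=0.85, personalization={s: 1.0 for s in seeds})
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [n for n in ranked if n not in seeds][:k]

# In a real system, seeds would come from dual search (exact + semantic match).
print(retrieve(["e:marie_curie"]))
```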
·linkedin.com·
RDF-specific functionality for VS Code
A little peek into our developments of RDF-specific functionality for VS Code:

1️⃣ Autocompletion and hover help for RDF vocabularies. Some are stored within the VS Code plugin; the rest are queried from LOV, giving IntelliSense for the most prominent ontologies.

2️⃣ We can use the ontology of the vocabularies to show when something is not typed correctly.

3️⃣ SHACL has a SHACL meta-model. Since we built a SHACL engine into VS Code, we can use this meta-model to hint when something is not done correctly (e.g., a string given where a different datatype is expected).

We plan to release the plugin to the marketplace in due time (we are still building more functionality). To not take too much credit: https://lnkd.in/eFB2wKdz delivers important features like most of the syntax highlighting and auto-import of prefixes.
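For a flavor of the kind of check point 3️⃣ describes, here is a minimal sketch of SHACL datatype validation using pySHACL; the plugin has its own engine, and the shapes and data below are invented examples.

```python
# Minimal sketch: flag a plain string where an integer-typed value is expected,
# the kind of datatype hint a SHACL-aware editor could surface inline.
from pyshacl import validate

data = """
@prefix ex: <http://example.org/> .
ex:alice ex:age "forty-two" .   # plain string where an integer is expected
"""

shapes = """
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
ex:AgeShape a sh:NodeShape ;
    sh:targetSubjectsOf ex:age ;
    sh:property [ sh:path ex:age ; sh:datatype xsd:integer ] .
"""

conforms, _, report = validate(data, shacl_graph=shapes,
                               data_graph_format="turtle",
                               shacl_graph_format="turtle")
print(conforms)   # False: an editor could surface `report` as a diagnostic
```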
·linkedin.com·
The Dataverse Project: 750K FAIR Datasets and a Living Knowledge Graph
"I'm Ukrainian and I'm wearing a suit, so no complaints about me from the Oval Office" - that's the start of my lecture about building Artificial Intelligence with Croissant ML in the Dataverse data platform, for the Bio x AI Hackathon kick-off event in Berlin. https://lnkd.in/ePYHCfJt * 750,000+ FAIR datasets across the world forcing the innovation of the whole data landscape. * A knowledge graph with 50M+ triples. * AI-ready metadata exports. * Qdrant as a vector storage, Google Meta Mistral AI as LLM model providers. * Adrian Gschwend Qlever as fastest triple store for Dataverse knowledge graphs Multilingual, machine-readable, queryable scientific data at scale. If you're interested, you can also apply for the 2-month #BioAgentHack online hackathon: • $125K+ prizes • Mentorship from Biotech and AI leaders • Build alongside top open-science researchers & devs More info: https://lnkd.in/eGhvaKdH
·linkedin.com·
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
We’re thrilled to announce new Text2Cypher models and Google’s MCP Toolbox for Databases, the result of a collaboration between Google Cloud and Neo4j.
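For context on the pattern: a minimal sketch of the Text2Cypher loop, assuming a local Neo4j instance; `ask_llm` is a placeholder for whichever model you plug in, not the API of the announced models.

```python
# Minimal sketch: natural-language question -> LLM-generated Cypher -> Neo4j.
# The schema string, credentials, and ask_llm stub are hypothetical.
from neo4j import GraphDatabase

SCHEMA = "(:Person {name})-[:ACTED_IN]->(:Movie {title, released})"
PROMPT = ("Translate the question into a single Cypher query for this schema.\n"
          f"Schema: {SCHEMA}\nQuestion: {{question}}\nCypher:")

def ask_llm(prompt: str) -> str:
    raise NotImplementedError  # plug in your Text2Cypher model here

def answer(question: str, uri="bolt://localhost:7687", auth=("neo4j", "secret")):
    cypher = ask_llm(PROMPT.format(question=question)).strip()
    with GraphDatabase.driver(uri, auth=auth) as driver:
        records, _, _ = driver.execute_query(cypher)
        return [r.data() for r in records]

# answer("Which movies did Tom Hanks act in after 2000?")
```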
·neo4j.com·
Google Cloud & Neo4j: Teaming Up at the Intersection of Knowledge Graphs, Agents, MCP, and Natural Language Interfaces - Graph Database & Analytics
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning

👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving 1 in 5 more problems correctly just by adjusting how you present data.

👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason over knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")

The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.

👉 Key Insights
1. Format matters more than assumed:
   - Structured JSON and edge lists performed best overall, but results varied by task.
   - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, showing models rely on the provided context, not memorized knowledge.
3. Token efficiency:
   - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
   - But concise ≠ always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges (e.g., "Which countries does France border?") is easier than incoming ones ("Which countries border France?"), likely due to formatting biases.

👉 Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.

The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.

Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
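To illustrate the formatting axis with a toy example of my own (not the benchmark's code), here are the same three triples rendered as a flat edge list and as entity-grouped JSON:

```python
# Two textualization strategies for the same triples: a compact edge list
# vs. entity-grouped JSON, echoing the trade-off the benchmark measures.
import json
from collections import defaultdict

triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("France", "capital", "Paris"),
]

def as_edge_list(ts):
    # Compact; repeated subjects make central nodes easy to spot.
    return "\n".join(f"{s} -[{p}]-> {o}" for s, p, o in ts)

def as_grouped_json(ts):
    # Verbose; grouping by entity helps aggregation-style questions.
    grouped = defaultdict(lambda: defaultdict(list))
    for s, p, o in ts:
        grouped[s][p].append(o)
    return json.dumps(grouped, indent=2)

print(as_edge_list(triples))
print(as_grouped_json(triples))
```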
·linkedin.com·
Is developing an ontology from an LLM really feasible?
It seems the answer to whether an LLM would be able to replace the whole text-to-ontology pipeline is a resounding ‘no’. If you’re one of those who think it should be (or even is?) a ‘yes’: why, and did you do the experiments that show it’s as good as the alternatives (with the results available)? And I mean a proper ontology, not a knowledge graph with numerous duplications and contradictions and lacking constraints. For a few gentle considerations (and pointers to longer arguments) and a summary figure of the processes the LLM would supposedly be replacing, see https://lnkd.in/dG_Xsv_6
Maria Keet
·linkedin.com·
What are the Different Types of Graphs? The Most Common Misconceptions and Understanding Their Applications - Enterprise Knowledge
Learn about different types of graphs and their applications in data management and AI, as well as common misconceptions, in this article by Lulit Tesfaye.
·enterprise-knowledge.com·
Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAP’s approach to Generative AI and its technical underpinnings. In previous posts in this series, you learned how to use large language models (LLMs) to develop AI applications in a trustworthy and reliable manner...
·community.sap.com·
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
🎉🎉 🎉 "Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action" Four years ago, we embarked on writing "Knowledge Graphs Applied" with a clear mission: to guide practitioners in implementing production-ready knowledge graph solutions. Drawing from our extensive field experience across multiple domains, we aimed to share battle-tested best practices that transcend basic use cases. Like fine wine, ideas, and concepts need time to mature. During these four years of careful development, we witnessed a seismic shift in the technological landscape. Large Language Models (LLMs) emerged not just as a buzzword, but as a transformative force that naturally converged with knowledge graphs.  This synergy unlocked new possibilities, particularly in simplifying complex tasks like unstructured data ingestion and knowledge graph-based question-answering. We couldn't ignore this technological disruption. Instead, we embraced it, incorporating our hands-on experience in combining LLMs with graph technologies. The result is "Knowledge Graphs and LLMs in Action" – a thoroughly revised work with new chapters and an expanded scope. Yet our fundamental goal remains unchanged: to empower you to harness the full potential of knowledge graphs, now enhanced by their increasingly natural companion, LLMs. This book represents the culmination of a journey that evolved alongside the technology itself. It delivers practical, production-focused guidance for the modern era, in which knowledge graphs and LLMs work in concert. Now available in MEAP, with new LLMs-focused chapters ready to be published. #llms #knowledgegraph #graphdatascience
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
·linkedin.com·
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP | LinkedIn
Build a graph for a RAG application for the price of a chocolate bar! What is GraphRAG? What does GraphRAG mean from your perspective? What if you could have standard RAG and GraphRAG as a combi-package, with just a query switch? The fact is, there is no concrete, universal...
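Here is a hedged sketch of that combi-package idea: one retriever interface with a query-time switch between plain vector RAG and graph-based retrieval. Both backends are stubs and every name is illustrative, not the article's actual code.

```python
# Minimal sketch: a single retrieve() entry point that dispatches between a
# standard vector-RAG backend and a GraphRAG backend based on a mode flag.
from typing import Callable, Literal

Mode = Literal["rag", "graphrag"]

def vector_retrieve(query: str) -> list[str]:
    return ["chunk about X"]          # stub: nearest-neighbor chunk search

def graph_retrieve(query: str) -> list[str]:
    return ["path A -[rel]-> B"]      # stub: entity linking + graph traversal

def retrieve(query: str, mode: Mode = "rag") -> list[str]:
    backend: Callable[[str], list[str]] = (
        graph_retrieve if mode == "graphrag" else vector_retrieve)
    return backend(query)

print(retrieve("How is A related to B?", mode="graphrag"))
```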
·linkedin.com·