Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAP's approach to Generative AI and its technical underpinnings. In previous posts of this series, you learned how to use large language models (LLMs) to develop AI applications in a trustworthy and reliable manner...
Enabling LLM development through knowledge graph visualization
Discover how to empower LLM development through effective knowledge graph visualization. Learn to leverage yFiles for intuitive, interactive diagrams that simplify debugging and optimization in AI applications.
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems
At the most fundamental level, all approaches rely…
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP
Build a graph for a RAG application for the price of a chocolate bar! What is GraphRAG? What does GraphRAG mean from your perspective? What if you could have a standard RAG and a GraphRAG as a combined package, with just a query switch? The fact is, there is no concrete, universal
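The "query switch" idea above can be sketched in a few lines: one retrieval entry point that routes a question either to a standard vector RAG retriever or to a graph retriever. The class names, the `mode` flag, and the placeholder return values below are illustrative assumptions, not the author's actual implementation.

```python
# Hedged sketch of routing between standard RAG and GraphRAG behind one interface.
from typing import Protocol


class Retriever(Protocol):
    def retrieve(self, question: str, k: int = 5) -> list[str]: ...


class VectorRetriever:
    def retrieve(self, question: str, k: int = 5) -> list[str]:
        # Standard RAG: embed the question and return the top-k similar text chunks.
        return ["<chunk from vector index>"] * k


class GraphRetriever:
    def retrieve(self, question: str, k: int = 5) -> list[str]:
        # GraphRAG: expand entities from the question over the knowledge graph
        # and serialize the resulting subgraph as context.
        return ["<subgraph facts serialized as text>"] * k


def retrieve(question: str, mode: str = "vector") -> list[str]:
    """Single entry point; the `mode` switch decides which retriever answers."""
    retriever: Retriever = GraphRetriever() if mode == "graph" else VectorRetriever()
    return retriever.retrieve(question)


context = retrieve("How are supplier contracts linked to delayed shipments?", mode="graph")
```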
Knowledge graphs: the missing link in enterprise AI
To gain competitive advantage from gen AI, enterprises need to be able to add their own expertise to off-the-shelf systems. Yet standard enterprise data stores aren't a good fit to train large language models.
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
Achieving that by Semantic-Aware Heterogeneous Graph…
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
I love Markus J. Buehler's work, and his latest paper "Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks" does not disappoint, revealing…
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning.
What if they could dynamically restructure their thought process like humans?
A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs).
Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways.
This is crucial for industries like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly.
This mirrors how experts allocate mental effort—drilling into uncertainties while streamlining obvious steps.
The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.
By unifying chain, tree, and graph paradigms, AGoT retains CoT’s clarity, ToT’s exploration, and GoT’s flexibility without manual tuning.
The result? LLMs that self-adapt their reasoning depth based on problem complexity—no architectural changes needed.
For AI practitioners, AGoT’s DAG structure offers a principled interface to scale reasoning modularly.
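The description above is enough to sketch the control flow. The snippet below illustrates the recursive decompose-or-resolve loop with a complexity check at each node; it is simplified to a tree rather than a full DAG, and `call_llm`, the prompts, and the depth limit are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch of an AGoT-style loop: expand a reasoning graph recursively,
# spawning sub-graphs only for nodes the model judges to be complex.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion client."""
    raise NotImplementedError


@dataclass
class Node:
    task: str
    answer: str = ""
    children: list["Node"] = field(default_factory=list)


def is_complex(task: str) -> bool:
    # Complexity check: ask the model whether the task needs decomposition.
    verdict = call_llm(f"Does this task need to be split into sub-tasks? Answer yes or no.\n\nTask: {task}")
    return verdict.strip().lower().startswith("yes")


def solve(task: str, depth: int = 0, max_depth: int = 3) -> Node:
    node = Node(task)
    if depth < max_depth and is_complex(task):
        # Spawn a sub-graph: decompose into sub-tasks and solve each recursively.
        subtasks = call_llm(f"List the minimal sub-tasks (one per line) needed to solve:\n{task}").splitlines()
        node.children = [solve(s, depth + 1, max_depth) for s in subtasks if s.strip()]
        partials = "\n".join(f"- {c.task}: {c.answer}" for c in node.children)
        node.answer = call_llm(f"Combine these partial results to answer '{task}':\n{partials}")
    else:
        # Simple enough: resolve the node directly, like a single chain-of-thought step.
        node.answer = call_llm(f"Answer directly and concisely:\n{task}")
    return node
```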
A comparison of ChatGPT and DeepSeek capabilities in writing a valid Cypher query
Today, I conducted a comparison between ChatGPT and DeepSeek chat capabilities by providing them with a schema and a natural language question. I tasked them with writing a valid Cypher query to answer the question.
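For readers who want to reproduce the setup, the sketch below shows what such a test looks like: a graph schema and a natural-language question are placed in a prompt and the model is asked to return only Cypher. The toy movie schema, the question, and the `call_llm` helper are hypothetical; the post does not share its actual inputs.

```python
# Hedged sketch of schema-plus-question prompting for Cypher generation.
def call_llm(prompt: str) -> str:
    """Placeholder for a ChatGPT or DeepSeek chat-completion call."""
    raise NotImplementedError


schema = """
(:Person {name: STRING})-[:ACTED_IN]->(:Movie {title: STRING, released: INTEGER})
"""
question = "Which actors appeared in movies released after 2015?"

prompt = (
    "You are a Cypher expert. Using only this schema:\n"
    f"{schema}\n"
    f"Write a single valid Cypher query that answers: {question}\n"
    "Return only the query."
)

cypher = call_llm(prompt)
print(cypher)
# A correct answer would resemble:
# MATCH (p:Person)-[:ACTED_IN]->(m:Movie) WHERE m.released > 2015 RETURN DISTINCT p.name
```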
The journey towards a knowledge graph for generative AI
While retrieval-augmented generation is effective for simpler queries, advanced reasoning questions require deeper connections between information that exists across documents. They require a knowledge graph.
Building Knowledge Graphs with LLM Graph Transformer
A deep dive into LangChain's implementation of graph construction with LLMs. If you want to try out…
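The post walks through LangChain's LLM Graph Transformer. A minimal sketch of that documented API is below, assuming the `langchain-experimental` and `langchain-openai` packages are installed; the model choice and the example sentence are placeholders.

```python
# Hedged sketch: extract a small knowledge graph from text with LLMGraphTransformer.
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)  # model name is an assumption
transformer = LLMGraphTransformer(llm=llm)

docs = [Document(page_content="Marie Curie won the Nobel Prize in Physics in 1903.")]
graph_documents = transformer.convert_to_graph_documents(docs)

# Each GraphDocument carries the extracted nodes and relationships.
print(graph_documents[0].nodes)
print(graph_documents[0].relationships)
```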
Paco Nathan's Graph Power Hour: Understanding Graph RAG
Watch the first podcast of Paco Nathan's Graph Power Hour. This week's topic: Understanding Graph RAG: Enhancing LLM Applications Through Knowledge Graphs.
The Power of Graph-Native Intelligence for Agentic AI Systems
How Entity Resolution, Knowledge Fusion, and Extension Frameworks Transform Enterprise AI…
Benchmarks to prove the value of GraphRAG for question answering on complex documents
We are launching a series of benchmarks to prove the value of GraphRAG for question answering on complex documents. The process is simple: we ingest the…
LightRAG: A More Efficient Solution than GraphRAG for RAG Systems?
In this video, I introduce LightRAG, a new, cost-effective retrieval-augmented generation (RAG) method that combines knowledge graphs and embedding-based retrieval...
Graph Neural Networks (GNNs) and LLMs are colliding in exciting ways
This survey introduces a novel taxonomy for categorizing existing methods that…