Knowledge graphs to teach LLMs how to reason like doctors
Many medical LLMs can give you the right answer but not the right reasoning, and that gap is a problem for clinical trust.
MedReason is the first factually-guided dataset to teach LLMs clinical Chain-of-Thought (CoT) reasoning using medical knowledge graphs.
1. Created 32,682 clinically validated QA explanations by linking symptoms, findings, and diagnoses through PrimeKG.
2. Generated CoT reasoning paths using GPT-4o, but retained only those that produced correct answers during post-hoc verification (see the sketch after this list).
3. Validated with physicians across 7 specialties, with expert preference for MedReason's reasoning in 80-100% of cases.
4. Enabled interpretable, step-by-step answers, such as linking difficulty walking to medulloblastoma via ataxia, preserving clinical fidelity throughout.
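Conceptually, the quality filter in step 2 is simple: keep a generated explanation only when its final answer matches the gold label. Here is a minimal sketch of that post-hoc verification loop, assuming an OpenAI-style client; the prompt wording and the kg_paths input (reasoning paths pre-extracted from PrimeKG) are illustrative stand-ins, not the paper's actual pipeline.

```python
# Sketch of MedReason-style post-hoc verification (illustrative, not the
# paper's code): generate a KG-guided CoT, keep it only if it is correct.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_verified_cot(question: str, options: list[str],
                          gold_answer: str, kg_paths: list[str]) -> str | None:
    """Return a CoT explanation only when it leads to the gold answer."""
    prompt = (
        "Reason step by step, grounding each step in these knowledge-graph "
        f"paths: {'; '.join(kg_paths)}\n"
        f"Question: {question}\nOptions: {', '.join(options)}\n"
        "Finish with a line of the form 'Answer: <option>'."
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Post-hoc verification: discard reasoning that lands on a wrong answer.
    predicted = reply.rsplit("Answer:", 1)[-1].strip()
    return reply if predicted.lower().startswith(gold_answer.lower()) else None
```

Only explanations that survive this filter enter the dataset, which keeps generation cheap while still guaranteeing answer-level correctness.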
A couple of thoughts:
• Introducing dynamic KG updates (e.g., weekly ingestion of new clinical trial data) could keep reasoning current with evolving medical knowledge.
• Could integrating visual KGs derived from DICOM metadata also help produce coherent reasoning across text and imaging inputs? We don't use DICOM metadata enough, tbh.
• Adding adversarial probing (like edge-case clinical scenarios) and continuous alignment checks against updated evidence-based guidelines might improve model performance.
Here's the awesome work: https://lnkd.in/g42-PKMG
Congrats to Juncheng Wu, Wenlong Deng, Xiaoxiao Li, Yuyin Zhou and co!
I post my takes on the latest developments in health AI, so connect with me to stay updated!
Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
Lessons Learned from Evaluating NodeRAG vs Other RAG Systems
I recently dug into the NodeRAG paper (https://lnkd.in/gwaJHP94) and it was eye-opening not just for how it performed, but for what it revealed about the evolution of RAG (Retrieval-Augmented Generation) systems.
Some key takeaways for me:
NaiveRAG is stronger than you think.
Brute-force retrieval using simple vector search sometimes beats graph-based methods, especially when graph structures are too coarse or noisy.
GraphRAG was an important step, but not the final answer.
While it introduced knowledge graphs and community-based retrieval, GraphRAG sometimes underperformed NaiveRAG because its communities could be too coarse, leading to irrelevant retrieval.
LightRAG reduced token cost, but at the expense of accuracy.
By focusing on retrieving just 1-hop neighbors instead of traversing globally, LightRAG made retrieval cheaper, but it often missed important multi-hop reasoning paths, losing precision.
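To make that trade-off concrete, here is a toy illustration (mine, not LightRAG's actual code) of how 1-hop expansion leaves multi-hop evidence unreached:

```python
# Toy example: 1-hop neighbor retrieval cannot surface facts that sit two
# or more hops from the query entity. Edges are invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("aspirin", "COX-1"),                        # drug inhibits enzyme
    ("COX-1", "thromboxane A2"),                 # enzyme produces metabolite
    ("thromboxane A2", "platelet aggregation"),  # metabolite drives process
])

def one_hop(graph: nx.Graph, seeds: set[str]) -> set[str]:
    """Cheap 1-hop expansion: seeds plus their direct neighbors."""
    return seeds | {n for s in seeds for n in graph.neighbors(s)}

print(one_hop(g, {"aspirin"}))
# {'aspirin', 'COX-1'}: 'platelet aggregation' sits 3 hops away, so a question
# linking aspirin to platelet aggregation never sees its supporting path.
```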
NodeRAG shows what mature RAG looks like.
NodeRAG redesigned the graph structure itself:
Instead of homogeneous graphs, it uses heterogeneous graphs with fine-grained semantic units, entities, relationships, and high-level summaries, all as nodes.
It combines dual search (exact match + semantic search) and shallow Personalized PageRank to precisely retrieve the most relevant context.
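As a rough sketch of how those pieces could fit together (node names, edges, and parameters are all illustrative, not the paper's implementation), exact-match entry points can seed a Personalized PageRank over the typed graph, with a low damping factor standing in for the "shallow" propagation:

```python
# Illustrative NodeRAG-flavored retrieval: a heterogeneous graph whose nodes
# carry a 'type', queried by seeding Personalized PageRank at entry points.
import networkx as nx

g = nx.DiGraph()
g.add_node("ent:France", type="entity")
g.add_node("sem:France borders Spain", type="semantic_unit")
g.add_node("sum:Western Europe geography", type="summary")
g.add_edges_from([
    ("ent:France", "sem:France borders Spain"),
    ("sem:France borders Spain", "sum:Western Europe geography"),
])

def retrieve(graph: nx.DiGraph, entry_points: set[str], k: int = 2) -> list[str]:
    """Rank nodes by how much PageRank mass concentrates near the seeds."""
    personalization = {n: (1.0 if n in entry_points else 0.0) for n in graph}
    # A low alpha keeps the random walk close to the seeds ("shallow" PPR).
    scores = nx.pagerank(graph, alpha=0.5, personalization=personalization)
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(retrieve(g, {"ent:France"}))
```

In the real system, the entry points come from the dual search (exact match on entity names plus semantic search over embeddings); the sketch hard-codes them for brevity.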
The result?
- Highest accuracy across multi-hop and open-ended benchmarks
- Lowest retrieval token usage (i.e., lower inference costs)
- Faster indexing and querying
Key takeaway:
In the RAG world, it's no longer about retrieving more; it's about retrieving better.
Fine-grained, explainable, efficient retrieval will define the next generation of RAG systems.
If youโre working on RAG architectures, NodeRAGโs design principles are well worth studying!
Would love to hear how others are thinking about the future of RAG systems.
#RAG #KnowledgeGraphs #AI #LLM #NodeRAG #GraphRAG #LightRAG #MachineLearning #GenAI
Affordable AI Assistants with Knowledge Graph of Thoughts
Large Language Models (LLMs) are revolutionizing the development of AI assistants capable of performing diverse tasks across domains. However, current state-of-the-art LLM-driven agents face...
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving nearly 1 in 5 more problems correctly just by adjusting how you present the data.
What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs.
It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet X condition?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.
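To see what "textualizing" a graph means in practice, here is the same pair of triples rendered in three of the format families tested (illustrative layouts; the benchmark's exact serializations may differ):

```python
# The same two triples, textualized three ways. Layouts are illustrative.
import json

triples = [("France", "borders", "Spain"), ("France", "capital", "Paris")]

# 1. Edge list: compact; repeated subjects make hub nodes easy to spot.
edge_list = "\n".join(f"{s} -> {p} -> {o}" for s, p, o in triples)

# 2. Structured JSON: facts grouped by subject, which suits aggregation.
grouped: dict = {}
for s, p, o in triples:
    grouped.setdefault(s, {}).setdefault(p, []).append(o)
structured_json = json.dumps(grouped, indent=2)

# 3. RDF Turtle: semantic-web syntax, standard but more verbose.
turtle = "\n".join(f":{s} :{p} :{o} ." for s, p, o in triples)

print(edge_list, structured_json, turtle, sep="\n\n")
```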
Key Insights
1. Format matters more than assumed:
   - Structured JSON and edge lists performed best overall, but results varied by task.
   - For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat:
Replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, proving models rely on context, not memorized knowledge.
3. Token efficiency:
   - Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
   - But concise ≠ always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality:
   - Counting outgoing edges (e.g., "Which countries does France border?") is easier than incoming ones ("Which countries border France?"), likely due to formatting biases.
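A toy illustration of that asymmetry (mine, not from the benchmark): when facts are keyed by subject, outgoing edges are a single lookup, while incoming edges force a scan over every subject.

```python
# Directed 'borders' facts grouped by subject, as a subject-keyed format
# (like JSON grouped by entity) would present them. Data is illustrative.
graph = {
    "France": ["Spain", "Belgium"],
    "Spain": ["France", "Portugal"],
    "Belgium": ["France"],
}

# Outgoing edges of France: read straight off the entry.
print(graph["France"])  # ['Spain', 'Belgium']

# Incoming edges of France: must inspect every subject's object list.
print([s for s, objs in graph.items() if "France" in objs])  # ['Spain', 'Belgium']
```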
Practical Takeaways
- Optimize for your task: Use JSON for aggregation, edge lists for centrality.
- Test your model: The best format depends on the LLM; Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: Masking real names minimally impacts performance, which is useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.
Paper: [KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs]
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAPโs approach to Generative AI, and its technical underpinnings. In previous blog posts of this series, you learned about how to use large language models (LLMs) for developing AI applications in a trustworthy and reliable manner...
Enabling LLM development through knowledge graph visualization
Discover how to empower LLM development through effective knowledge graph visualization. Learn to leverage yFiles for intuitive, interactive diagrams that simplify debugging and optimization in AI applications.
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems
At the most fundamental level, all approaches rely…
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP
Build a graph for a RAG application for the price of a chocolate bar! What is GraphRAG for you? What does GraphRAG mean from your perspective? What if you could have a standard RAG and a GraphRAG as a combi-package, with just a query switch? The fact is, there is no concrete, universal
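The "query switch" can be pictured as a single entry point that routes between a vector retriever and a graph retriever. A minimal sketch with hypothetical retriever interfaces (not any particular library's API):

```python
# Hypothetical combi-package: standard RAG and GraphRAG behind one switch.
from typing import Protocol

class Retriever(Protocol):
    def retrieve(self, query: str, k: int) -> list[str]: ...

def answer_context(query: str, vector_rag: Retriever, graph_rag: Retriever,
                   use_graph: bool = False, k: int = 5) -> list[str]:
    """Route the same query to vector search or graph search via one flag."""
    retriever = graph_rag if use_graph else vector_rag
    return retriever.retrieve(query, k)
```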
Knowledge graphs: the missing link in enterprise AI
To gain competitive advantage from gen AI, enterprises need to be able to add their own expertise to off-the-shelf systems. Yet standard enterprise data stores aren't a good fit to train large language models.
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
Achieving that by Semantic-Aware Heterogeneous Graph…
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
I love Markus J. Buehler's work, and his latest paper "Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks" does not disappoint, revealing…
The journey towards a knowledge graph for generative AI
While retrieval-augmented generation is effective for simpler queries, advanced reasoning questions require deeper connections between information that exist across documents. They require a knowledge graph.
Building Knowledge Graphs with LLM Graph Transformer
A deep dive into LangChain's implementation of graph construction with LLMs. If you want to try out…
Paco Nathan's Graph Power Hour: Understanding Graph RAG
Watch the first episode of Paco Nathan's Graph Power Hour. This week's topic: Understanding Graph RAG: Enhancing LLM Applications Through Knowledge Graphs.
The Power of Graph-Native Intelligence for Agentic AI Systems
How Entity Resolution, Knowledge Fusion, and Extension Frameworks Transform Enterprise AI…
Benchmarks to prove the value of GraphRAG for question answering on complex documents
We are launching a series of benchmarks to prove the value of GraphRAG for question answering on complex documents. The process is simple: we ingest the…
LightRAG: A More Efficient Solution than GraphRAG for RAG Systems?
In this video, I introduce LightRAG, a new, cost-effective retrieval augmented generation (RAG) method that combines knowledge graphs and embedding-based ret...