Knowledge graphs to teach LLMs how to reason like doctors
Many medical LLMs can give you the right answer but not the right reasoning, which is a problem for clinical trust.
𝗠𝗲𝗱𝗥𝗲𝗮𝘀𝗼𝗻 𝗶𝘀 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝗳𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆-𝗴𝘂𝗶𝗱𝗲𝗱 𝗱𝗮𝘁𝗮𝘀𝗲𝘁 𝘁𝗼 𝘁𝗲𝗮𝗰𝗵 𝗟𝗟𝗠𝘀 𝗰𝗹𝗶𝗻𝗶𝗰𝗮𝗹 𝗖𝗵𝗮𝗶𝗻-𝗼𝗳-𝗧𝗵𝗼𝘂𝗴𝗵𝘁 (𝗖𝗼𝗧) 𝗿𝗲𝗮𝘀𝗼𝗻𝗶𝗻𝗴 𝘂𝘀𝗶𝗻𝗴 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗴𝗿𝗮𝗽𝗵𝘀.
1. Created 32,682 clinically validated QA explanations by linking symptoms, findings, and diagnoses through PrimeKG.
2. Generated CoT reasoning paths using GPT-4o, but retained only those that produced correct answers during post-hoc verification (see the sketch after this list).
3. Validated with physicians across 7 specialties, with expert preference for MedReason’s reasoning in 80–100% of cases.
4. Enabled interpretable, step-by-step answers like linking difficulty walking to medulloblastoma via ataxia, preserving clinical fidelity throughout.
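For intuition on step 2, here's a minimal sketch of the generate-then-verify filter in Python. Everything in it (the `call_gpt4o` wrapper, the QA record layout, the prompt format) is a hypothetical stand-in for illustration, not the authors' actual pipeline code:

```python
from dataclasses import dataclass

@dataclass
class QARecord:
    question: str
    options: dict[str, str]   # e.g. {"A": "ataxia", "B": "aphasia", ...}
    answer: str               # gold label, e.g. "A"
    kg_paths: list[str]       # PrimeKG paths linking question and answer entities

def call_gpt4o(prompt: str) -> str:
    """Hypothetical LLM call; returns a CoT ending in 'ANSWER: <letter>'."""
    raise NotImplementedError

def build_prompt(rec: QARecord) -> str:
    paths = "\n".join(rec.kg_paths)
    return (
        f"Question: {rec.question}\nOptions: {rec.options}\n"
        f"Relevant knowledge-graph paths:\n{paths}\n"
        "Reason step by step, then end with 'ANSWER: <letter>'."
    )

def keep_verified(records: list[QARecord]) -> list[tuple[QARecord, str]]:
    """Keep only (record, CoT) pairs whose chain reaches the gold answer."""
    kept = []
    for rec in records:
        cot = call_gpt4o(build_prompt(rec))
        predicted = cot.rsplit("ANSWER:", 1)[-1].strip()[:1]
        if predicted == rec.answer:   # post-hoc check: keep only correct chains
            kept.append((rec, cot))
    return kept
```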
A couple of thoughts:
• Introducing dynamic KG updates (e.g., weekly ingests of new clinical trial data) could keep reasoning current with evolving medical knowledge.
• Could integrating visual KGs derived from DICOM metadata also support coherent reasoning across text and imaging inputs? We don't use DICOM metadata enough, tbh.
• Adding adversarial probing (e.g., edge-case clinical scenarios) and continuous alignment checks against updated evidence-based guidelines might further improve model performance.
Here's the awesome work: https://lnkd.in/g42-PKMG
Congrats to Juncheng Wu, Wenlong Deng, Xiaoxiao Li, Yuyin Zhou and co!
I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱!
Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
Affordable AI Assistants with Knowledge Graph of Thoughts
Large Language Models (LLMs) are revolutionizing the development of AI assistants capable of performing diverse tasks across domains. However, current state-of-the-art LLM-driven agents face...
Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving roughly one in six more problems correctly just by adjusting how you present the data.
👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs.
It includes five tasks:
- Triple verification (“Does this fact exist?”)
- Shortest path finding (“How are two concepts connected?”)
- Aggregation (“How many entities meet X condition?”)
- Multi-hop reasoning (“Which entities linked to A also have property B?”)
- Global analysis (“Which node is most central?”)
The team tested seven models (Claude, GPT-4o, Gemini, Llama, and Nova variants) with five ways to “textualize” graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.
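To make the "textualization" point concrete, here is a small sketch of two of the five strategies: a flat edge list versus entity-grouped JSON. The exact serializations in KG-LLM-Bench may differ; this just shows how the same triples can read very differently to a model:

```python
import json

triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("Spain", "capital", "Madrid"),
]

def to_edge_list(triples):
    # One triple per line: compact (fewer tokens), and repeated subjects
    # make central nodes visually obvious.
    return "\n".join(f"{s} {p} {o}" for s, p, o in triples)

def to_grouped_json(triples):
    # Facts grouped by subject: verbose, but aggregation questions
    # ("how many countries does France border?") map directly onto it.
    grouped = {}
    for s, p, o in triples:
        grouped.setdefault(s, {}).setdefault(p, []).append(o)
    return json.dumps(grouped, indent=2)

print(to_edge_list(triples))
print(to_grouped_json(triples))
```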
👉 Key Insights
1. Format matters more than assumed:
- Structured JSON and edge lists performed best overall, but results varied by task.
- For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don’t cheat:
Replacing real entity names with fake ones (e.g., “France” → “Verdania”) caused only a 0.2% performance drop, suggesting models rely on the supplied context rather than memorized knowledge (see the sketch after this list).
3. Token efficiency:
- Edge lists used ~2,600 tokens vs. JSON-LD’s ~13,500. Shorter formats free up context space for complex reasoning.
- But concise ≠ always better: structured formats improved accuracy for tasks requiring grouped data.
4. Models struggle with directionality:
Counting outgoing edges (e.g., “Which countries does France border?”) is easier than incoming ones (“Which countries border France?”), likely due to formatting biases.
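Insights 2 and 4 are easy to picture in code. A hedged sketch with illustrative data, not the benchmark's implementation:

```python
triples = [
    ("France", "borders", "Spain"),
    ("Belgium", "borders", "France"),
]

# Insight 2: swap real entities for invented ones and re-run the tasks.
# If accuracy barely moves, the model is reading the supplied graph,
# not reciting memorized facts.
pseudonyms = {"France": "Verdania", "Spain": "Soltara", "Belgium": "Brellin"}
masked = [(pseudonyms.get(s, s), p, pseudonyms.get(o, o)) for s, p, o in triples]

# Insight 4: in a subject-grouped serialization, the outgoing edges of
# "France" sit together, while incoming edges are scattered across the text.
# The `incoming` scan below is what the model must do implicitly.
outgoing = [o for s, p, o in triples if s == "France"]   # France -> ?
incoming = [s for s, p, o in triples if o == "France"]   # ? -> France
print(masked, outgoing, incoming)
```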
👉 Practical Takeaways
- Optimize for your task: Use JSON for aggregation, edge lists for centrality.
- Test your model: The best format depends on the LLM—Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don’t fear pseudonyms: Masking real names minimally impacts performance, useful for sensitive data.
The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right “data language” becomes as critical as the reasoning logic itself.
Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
🏆🚣MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage.
Achieving that by Semantic-Aware Heterogeneous Graph…
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
I love Markus J. Buehler's work, and his latest paper "Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks" does not disappoint, revealing…
KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths over Knowledge Graphs
Breaking LLM Hallucinations in a Smarter Way!
(It’s not about feeding more data)
Large Language Models (LLMs) still struggle with factual inaccuracies, but…
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This Multi-Granular Graph Framework uses PageRank and Keyword-Chunk Graph to have the Best Cost-Quality Tradeoff
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions—like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB collection of legal cases.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system; a rough code sketch follows this list:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs—saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets—no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.
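A rough sketch of the two layers in Python, assuming networkx, toy chunks, and whitespace "keyword extraction"; the paper's actual chunking, ranking, and extraction details will differ:

```python
import networkx as nx

chunks = {
    "c1": "mRNA vaccine side effects include myocarditis in rare cases",
    "c2": "myocarditis risk appears higher after the second dose",
    "c3": "clinic opening hours and parking information",
}

# Layer 1: rank chunks with PageRank over a chunk-similarity graph, then send
# only the top slice to the LLM for expensive entity/relation extraction.
sim = nx.Graph()
sim.add_edge("c1", "c2", weight=2.0)   # shared term: myocarditis
sim.add_edge("c1", "c3", weight=0.1)
scores = nx.pagerank(sim, weight="weight")
skeleton = sorted(scores, key=scores.get, reverse=True)[:2]  # LLM budget: 2 chunks

# Layer 2: a keyword -> chunk bipartite graph over *all* chunks, built with
# cheap string matching instead of LLM calls.
bipartite = nx.Graph()
for cid, text in chunks.items():
    for kw in set(text.lower().split()):
        bipartite.add_edge(f"kw:{kw}", cid)

def fast_lane(keyword: str) -> list[str]:
    """Retrieve every chunk mentioning a keyword, with no LLM involved."""
    node = f"kw:{keyword.lower()}"
    return list(bipartite.neighbors(node)) if node in bipartite else []

print(skeleton, fast_lane("myocarditis"))
```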
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft’s Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%—with 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren’t just chatbots—they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: Connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
✸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Paper in comments
≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣≣
》Build Your Own Supercharged AI Agent?
🔮 Join My 𝐇𝐚𝐧𝐝𝐬-𝐎𝐧 𝐀𝐈 𝐀𝐠𝐞𝐧𝐭𝐬 𝐓𝐫𝐚𝐢𝐧𝐢𝐧𝐠 TODAY!
and Learn Building AI Agent with Langgraph/Langchain, CrewAI and OpenAI Swarm + RAG Pipelines
𝐄𝐧𝐫𝐨𝐥𝐥 𝐍𝐎𝐖 [34% discount]:
👉 https://lnkd.in/eGuWr4CH
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented...
The recently developed retrieval-augmented generation (RAG) technology has enabled the efficient construction of domain-specific applications. However, it also has limitations, including the gap...
Terminology-Augmented Generation (TAG)?
Recently some fellow terminologists have proposed the new term "Terminology-Augmented Generation (TAG)" to refer to…
What really is Graph RAG?
Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
That has been our position from the beginning when we started our research…
OG-RAG: Ontology-Grounded Retrieval-Augmented Generation For Large...
This paper presents OG-RAG, an Ontology-Grounded Retrieval Augmented Generation method designed to enhance LLM-generated responses by anchoring retrieval processes in domain-specific ontologies....
Large Language Models, Knowledge Graphs and Search Engines: A...
Much has been discussed about how Large Language Models, Knowledge Graphs and Search Engines can be combined in a synergistic manner. A dimension largely absent from current academic discourse is...
SimGRAG is a novel method for knowledge graph driven RAG, transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
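As a rough illustration of the idea (not the paper's actual metric), a graph semantic distance can be sketched as summed embedding distances between aligned pattern and subgraph elements; `embed` below is a stand-in for a real text-embedding model:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a text-embedding model (toy vectors, stable within one run)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=16)
    return v / np.linalg.norm(v)

def distance(a: str, b: str) -> float:
    return 1.0 - float(embed(a) @ embed(b))   # cosine distance

def graph_semantic_distance(pattern, candidate):
    """Sum element-wise distances between two aligned lists of triples."""
    return sum(
        distance(p, c)
        for p_tri, c_tri in zip(pattern, candidate)
        for p, c in zip(p_tri, c_tri)
    )

# The query pattern and a candidate subgraph use different surface forms;
# a low distance means the candidate plausibly instantiates the pattern.
pattern   = [("film", "directed_by", "Christopher Nolan")]
candidate = [("Inception", "director", "Christopher Nolan")]
print(graph_semantic_distance(pattern, candidate))
```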
Unlocking universal reasoning across knowledge graphs
Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
Takeaways from the International Semantic Web Conference #iswc2024
My takeaways from the International Semantic Web Conference #iswc2024. Ioana keynote: Great example of data integration for journalism, highlighting the use of…
Beyond Vector Space: Knowledge Graphs and the New Frontier of Agentic System Accuracy
⛳ In the realm of agentic systems, a fundamental challenge emerges…
Understanding SPARQL Queries: Are We Already There?
👉 Our paper "Understanding SPARQL Queries: Are We Already There?" explores the potential of Large Language Models (#LLMs) to generate natural-language…
Knowledge Graph Enhanced Language Agents for Recommendation
Language agents have recently been used to simulate human behavior and user-item interactions for recommendation systems. However, current language agent simulations do not understand the...
Fact Finder -- Enhancing Domain Expertise of Large Language Models...
Recent advancements in Large Language Models (LLMs) have showcased their proficiency in answering natural language queries. However, their effectiveness is hindered by limited domain-specific...
Recently, Retrieval-Augmented Generation (RAG) has achieved remarkable success in addressing the challenges of Large Language Models (LLMs) without necessitating retraining. By referencing an...
LLMs and Knowledge Graphs: A love story 💓 Researchers from the University of Oxford recently released MedGraphRAG. At its core, MedGraphRAG is a framework…
Think-on-Graph 2.0: Deep and Interpretable Large Language Model...
Retrieval-augmented generation (RAG) has significantly advanced large language models (LLMs) by enabling dynamic information retrieval to mitigate knowledge gaps and hallucinations in generated...