Unifying Text Semantics and Graph Structures for Temporal Text-attributed Graphs with Large Language Models
LLMs are taking Graph Neural Networks to the next level:
While we've been discussing LLMs for natural language, they're quietly changing how we represent…
Knowledge graphs for LLM grounding and avoiding hallucination
This blog post is part of a series that dives into various aspects of SAP's approach to Generative AI and its technical underpinnings. In previous posts of this series, you learned how to use large language models (LLMs) to develop AI applications in a trustworthy and reliable manner...
Enabling LLM development through knowledge graph visualization
Discover how to empower LLM development through effective knowledge graph visualization. Learn to leverage yFiles for intuitive, interactive diagrams that simplify debugging and optimization in AI applications.
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
Four years ago, we embarked on writing "Knowledge Graphs Applied" with a clear mission: to guide practitioners in implementing production-ready knowledge graph solutions. Drawing from our extensive field experience across multiple domains, we aimed to share battle-tested best practices that transcend basic use cases.
Like fine wine, ideas and concepts need time to mature. During these four years of careful development, we witnessed a seismic shift in the technological landscape. Large Language Models (LLMs) emerged not just as a buzzword, but as a transformative force that naturally converged with knowledge graphs.
This synergy unlocked new possibilities, particularly in simplifying complex tasks like unstructured data ingestion and knowledge graph-based question-answering.
We couldn't ignore this technological disruption. Instead, we embraced it, incorporating our hands-on experience in combining LLMs with graph technologies. The result is "Knowledge Graphs and LLMs in Action" – a thoroughly revised work with new chapters and an expanded scope.
Yet our fundamental goal remains unchanged: to empower you to harness the full potential of knowledge graphs, now enhanced by their increasingly natural companion, LLMs. This book represents the culmination of a journey that evolved alongside the technology itself. It delivers practical, production-focused guidance for the modern era, in which knowledge graphs and LLMs work in concert.
Now available in MEAP, with new LLM-focused chapters ready to be published.
#llms #knowledgegraph #graphdatascience
"Knowledge Graphs Applied" becomes "Knowledge Graphs and LLMs in Action"
Multi-Layer Agentic Reasoning: Connecting Complex Data and Dynamic Insights in Graph-Based RAG Systems
At the most fundamental level, all approaches rely…
Build your hybrid-Graph for RAG & GraphRAG applications using the power of NLP
Build a graph for a RAG application for the price of a chocolate bar! What does GraphRAG mean from your perspective? What if you could have standard RAG and GraphRAG as a combi-package, with just a query switch (sketched below)? The fact is, there is no concrete, universal
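The combi-package idea, one retrieve() call that runs as plain RAG or as GraphRAG behind a single switch, can be sketched roughly as follows. Everything in this snippet (data, function names, the scoring) is an invented stand-in, not the author's implementation:

```python
# Toy sketch of a RAG/GraphRAG "combi-package" with a query switch.
# All data and names are illustrative, not from the original post.

CHUNKS = {
    "c1": "GraphRAG builds a knowledge graph over the corpus.",
    "c2": "Standard RAG retrieves chunks by vector similarity.",
    "c3": "A hybrid system can expose both behind one query switch.",
}
ENTITY_TO_CHUNKS = {"graphrag": ["c1", "c3"], "rag": ["c2", "c3"]}  # tiny entity index

def vector_search(query: str, k: int = 2) -> list[str]:
    """Stand-in for embedding search: rank chunks by word overlap."""
    q = set(query.lower().split())
    ranked = sorted(CHUNKS, key=lambda c: -len(q & set(CHUNKS[c].lower().split())))
    return [CHUNKS[c] for c in ranked[:k]]

def graph_search(query: str) -> list[str]:
    """Stand-in for graph retrieval: expand entities mentioned in the query."""
    words = query.lower().split()
    hits = {cid for ent, cids in ENTITY_TO_CHUNKS.items() if ent in words for cid in cids}
    return [CHUNKS[c] for c in sorted(hits)]

def retrieve(query: str, mode: str = "vector") -> list[str]:
    """The 'query switch': same call, two retrieval strategies."""
    return graph_search(query) if mode == "graph" else vector_search(query)

print(retrieve("how does graphrag work", mode="graph"))
```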
Knowledge graphs: the missing link in enterprise AI
To gain competitive advantage from gen AI, enterprises need to be able to add their own expertise to off-the-shelf systems. Yet standard enterprise data stores aren't a good fit to train large language models.
Synalinks (🧠🔗) is an open-source framework designed to streamline the creation, evaluation, training, and deployment of industry-standard Language Model (LM) applications.
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
It achieves this with a semantic-aware heterogeneous graph…
Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks
I love Markus J. Buehler's work, and his latest paper "Agentic Deep Graph Reasoning Yields Self-Organizing Knowledge Networks" does not disappoint, revealing…
KnowPath: Knowledge-enhanced Reasoning via LLM-generated Inference Paths over Knowledge Graphs
Breaking LLM Hallucinations in a Smarter Way!
(It’s not about feeding more data)
Large Language Models (LLMs) still struggle with factual inaccuracies, but…
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This Multi-Granular Graph Framework uses PageRank and Keyword-Chunk Graph to have the Best Cost-Quality Tradeoff
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions—like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system, sketched in code after this list:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs—saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets—no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.
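A minimal sketch of those two layers, assuming networkx and toy chunk data. The similarity scoring and keywords are crude stand-ins; the real system reserves LLM calls for entity extraction on the skeleton chunks only:

```python
import networkx as nx

chunks = {
    "c1": "vaccine side effects include myocarditis in rare cases",
    "c2": "myocarditis risk was studied across research papers",
    "c3": "the trial was scheduled for early spring",
}

# Layer 1: PageRank over a chunk-similarity graph picks the "skeleton";
# LLM entity extraction would then run only on these top chunks.
sim = nx.Graph()
sim.add_nodes_from(chunks)
for a in chunks:
    for b in chunks:
        if a < b:
            w = len(set(chunks[a].split()) & set(chunks[b].split()))
            if w:
                sim.add_edge(a, b, weight=w)
rank = nx.pagerank(sim, weight="weight")
core = sorted(rank, key=rank.get, reverse=True)[:2]
print("skeleton chunks for LLM extraction:", core)

# Layer 2: keyword -> chunk bipartite graph, built with no LLM calls.
bip = nx.Graph()
for cid, text in chunks.items():
    for word in set(text.split()):
        bip.add_edge(("kw", word), ("chunk", cid))
print("chunks mentioning 'myocarditis':",
      [cid for kind, cid in bip[("kw", "myocarditis")]])
```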
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft’s Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%—with 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren’t just chatbots—they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds (see the toy path query after this list).
✸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
✸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
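The multi-hop bullet above amounts to a short path query over the extracted graph. A toy version with invented triples (real systems extract these edges from documents):

```python
import networkx as nx

# Invented triples for illustration only.
kg = nx.DiGraph()
kg.add_edge("drug_A", "gene_B", rel="targets")
kg.add_edge("gene_B", "side_effect_C", rel="associated_with")

# Multi-hop reasoning = walking the path and collecting the relations.
path = nx.shortest_path(kg, "drug_A", "side_effect_C")
print([(u, kg[u][v]["rel"], v) for u, v in zip(path, path[1:])])
```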
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning.
What if they could dynamically restructure their thought process like humans?
A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs).
Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways.
This is crucial for industries like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly.
This mirrors how experts allocate mental effort—drilling into uncertainties while streamlining obvious steps.
The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.
By unifying chain, tree, and graph paradigms, AGoT retains CoT’s clarity, ToT’s exploration, and GoT’s flexibility without manual tuning.
The result? LLMs that self-adapt their reasoning depth based on problem complexity—no architectural changes needed.
For AI practitioners, AGoT’s DAG structure offers a principled interface to scale reasoning modularly.
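A rough shape of that recursion as runnable pseudocode. The llm_* helpers are toy stand-ins for prompted model calls; this sketches the idea, not the paper's implementation:

```python
# AGoT-style adaptive reasoning: each node is answered directly or
# decomposed into a sub-graph, based on a complexity check.

def llm_is_complex(task: str) -> bool:
    return " and " in task                 # toy complexity check

def llm_decompose(task: str) -> list[str]:
    return task.split(" and ")             # toy decomposition into subtasks

def llm_answer(task: str) -> str:
    return f"answer({task})"               # toy direct resolution

def llm_combine(task: str, partials: list[str]) -> str:
    return " + ".join(partials)            # toy aggregation of child results

def agot_solve(task: str, depth: int = 0, max_depth: int = 3) -> str:
    # Simple nodes are resolved directly; recursion depth is bounded.
    if depth >= max_depth or not llm_is_complex(task):
        return llm_answer(task)
    # Intricate nodes spawn a sub-graph of subtasks (a new DAG layer).
    partials = [agot_solve(t, depth + 1, max_depth) for t in llm_decompose(task)]
    return llm_combine(task, partials)

print(agot_solve("derive the rate law and estimate the constant"))
```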
GFM-RAG: The First Graph Foundation Model for Retrieval-Augmented Generation
We're excited to share our latest research: GFM-RAG: Graph…
🌟 Pathway to Artificial General Intelligence (AGI) 🌟 This is my view on the evolutionary steps toward AGI: 1️⃣ Large Language Models (LLMs): Language models…
KAG: Boosting LLMs in Professional Domains via Knowledge Augmented...
The recently developed retrieval-augmented generation (RAG) technology has enabled the efficient construction of domain-specific applications. However, it also has limitations, including the gap...
A comparison between ChatGPT and DeepSeek capabilities when writing a valid Cypher query
Today, I conducted a comparison between ChatGPT's and DeepSeek's chat capabilities by providing them with a graph schema and a natural language question, and tasking them with writing a valid Cypher query to answer it.
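As an illustration of the setup, not the author's actual schema or question, the task pairs a graph schema with a question and expects raw Cypher back. A crude validity check might look like this before running the query with EXPLAIN against Neo4j:

```python
import re

# Invented schema/question pair for illustration.
PROMPT = """Schema:
(:Person {name})-[:ACTED_IN]->(:Movie {title, released})

Question: Which movies released after 2000 did Keanu Reeves act in?
Return only a Cypher query."""

def looks_like_cypher(answer: str) -> bool:
    """Crude sanity check: a read query should MATCH and RETURN."""
    return bool(re.search(r"\bMATCH\b.*\bRETURN\b", answer, re.S | re.I))

# One query a model might produce for this schema/question pair:
candidate = """MATCH (p:Person {name: 'Keanu Reeves'})-[:ACTED_IN]->(m:Movie)
WHERE m.released > 2000
RETURN m.title"""

print(looks_like_cypher(candidate))  # True; EXPLAIN in Neo4j would confirm it parses
```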