GraphNews

#research #llm
Towards Mechanistic Interpretability of Graph Transformers via Attention Graphs
Our first attempts at mechanistic interpretability of Transformers from the perspective of network science and graph theory! Check out our preprint: arxiv.org/abs/2502.12352

A wonderful collaboration with superstar MPhil students Batu El and Deepro Choudhury, as well as Pietro Lio', as part of last year's Geometric Deep Learning class at the University of Cambridge Department of Computer Science and Technology.

We were motivated by Demis Hassabis calling AlphaFold and other AI systems for scientific discovery 'engineering artifacts'. We need new tools to interpret the underlying mechanisms and advance our scientific understanding. Graph Transformers are a good place to start. The key ideas are:
- Attention across multiple heads and layers can be seen as a heterogeneous, dynamically evolving graph.
- Attention graphs are complex systems that represent information flow in Transformers.
- We can use network science to extract mechanistic insights from them (a toy example follows this entry).

More to come on the network science perspective on understanding LLMs!
·linkedin.com·
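To make the "attention as a graph" idea concrete, here is a minimal sketch, not from the paper: the random attention tensors, the 0.2 edge threshold, and summing weights across heads are all my assumptions. It builds a layered directed graph over (layer, token) nodes and probes it with a network-science measure.

```python
# A toy illustration of the "attention graph" view, assuming we already
# have per-layer, per-head attention matrices (random tensors here; in
# practice they would come from a Transformer run with attention outputs
# enabled). Nodes are (layer, token) pairs; edge weights sum attention
# across heads.
import numpy as np
import networkx as nx

n_layers, n_heads, n_tokens = 2, 4, 6
rng = np.random.default_rng(0)
# attn[l, h] is a row-stochastic (n_tokens x n_tokens) attention matrix.
attn = rng.random((n_layers, n_heads, n_tokens, n_tokens))
attn /= attn.sum(axis=-1, keepdims=True)

G = nx.DiGraph()
for l in range(n_layers):
    for h in range(n_heads):
        for i in range(n_tokens):          # query token, feeds layer l+1
            for j in range(n_tokens):      # key token at layer l
                w = float(attn[l, h, i, j])
                if w > 0.2:                # keep only strong attention edges
                    u, v = (l, j), (l + 1, i)
                    prev = G.get_edge_data(u, v, {"weight": 0.0})["weight"]
                    G.add_edge(u, v, weight=prev + w)

# Which (layer, token) positions act as information-flow hubs?
pagerank = nx.pagerank(G, weight="weight")
print(sorted(pagerank.items(), key=lambda kv: -kv[1])[:3])
```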
Unifying Text Semantics and Graph Structures for Temporal Text-attributed Graphs with Large Language Models
LLMs are taking Graph Neural Networks to the next level: While we've been discussing LLMs for natural language, they're quietly changing how we represent…
·linkedin.com·
Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Query Storage, Outperforming MemGPT with 94.8% Accuracy
🎁⏳ Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Query Storage, Outperforming MemGPT with 94.8% Accuracy. Build Personalized AI…
·linkedin.com·
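For a feel of what "temporal knowledge graph + Cypher" storage means in practice, here is a minimal sketch using the official neo4j Python driver. The Entity/FACT schema, property names, and credentials are illustrative assumptions, not Zep's actual data model; it shows the core trick of validity intervals on relationships.

```python
# A minimal sketch of temporally-scoped fact storage in a knowledge graph.
# Requires a running Neo4j instance; schema names are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

UPSERT_FACT = """
MERGE (s:Entity {name: $subject})
MERGE (o:Entity {name: $object})
CREATE (s)-[r:FACT {predicate: $predicate,
                    valid_from: datetime($valid_from)}]->(o)
"""

# Retrieval: only facts valid at the query time, the "temporal" part.
FACTS_AT_TIME = """
MATCH (s:Entity {name: $subject})-[r:FACT]->(o)
WHERE r.valid_from <= datetime($at)
  AND (r.valid_to IS NULL OR r.valid_to > datetime($at))
RETURN r.predicate AS predicate, o.name AS object
"""

with driver.session() as session:
    session.run(UPSERT_FACT, subject="Alice", predicate="works_at",
                object="Acme", valid_from="2024-01-01T00:00:00")
    rows = session.run(FACTS_AT_TIME, subject="Alice", at="2024-06-01T00:00:00")
    for row in rows:
        print(row["predicate"], row["object"])
```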
MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage
🏆🚣MiniRAG Introduces Near-LLM Accurate RAG for Small Language Models with Just 25% of the Storage. Achieving that by Semantic-Aware Heterogeneous Graph…
·linkedin.com·
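As a rough illustration of what a semantic-aware heterogeneous index can look like, here is a toy sketch: my simplification of the idea named in the title (two node types, chunks and entities, with cheap graph-hop retrieval), not MiniRAG's actual pipeline. The hard-coded entities and the `retrieve` helper are assumptions.

```python
# A toy heterogeneous text-chunk/entity index graph. Entities per chunk
# would normally come from a small LM extractor; here they are hard-coded.
import networkx as nx

G = nx.Graph()
chunks = {
    "c1": "Aspirin reduces fever and inflammation.",
    "c2": "Ibuprofen is an NSAID used for pain relief.",
}
entities = {"c1": ["aspirin", "fever"], "c2": ["ibuprofen", "pain", "NSAID"]}

for cid, text in chunks.items():
    G.add_node(cid, kind="chunk", text=text)
for cid, ents in entities.items():
    for e in ents:
        G.add_node(e, kind="entity")
        G.add_edge(cid, e)

def retrieve(query_entities, hops=2):
    """Collect chunks within `hops` of any query entity."""
    hits = set()
    for e in query_entities:
        if e not in G:
            continue
        for node in nx.single_source_shortest_path_length(G, e, cutoff=hops):
            if G.nodes[node].get("kind") == "chunk":
                hits.add(node)
    return [G.nodes[c]["text"] for c in hits]

print(retrieve(["fever"]))
```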
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval. This multi-granular graph framework uses PageRank and a keyword-chunk graph to get the best cost-quality tradeoff.

》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system (see the code sketch after this entry):
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets, no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.

》Results: Beating Microsoft’s Graph-RAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.

》Why AI Agents Need This
AI agents aren’t just chatbots; they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Paper in comments.
·linkedin.com·
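The two-layer design above translates naturally into code. Below is a minimal sketch of the idea as the post describes it: PageRank picks the skeleton chunks that merit expensive LLM extraction, while a keyword-chunk bipartite map covers everything else. The token-overlap similarity, toy corpus, and top-1 budget are stand-in assumptions, not the paper's choices.

```python
# KET-RAG-style two-layer indexing, in miniature.
import itertools
import networkx as nx

chunks = {
    "c1": "covid vaccines and myocarditis risks in young adults",
    "c2": "myocarditis symptoms and diagnosis",
    "c3": "vaccine distribution logistics",
}

def tokens(text):
    return set(text.lower().split())

# Layer 1: chunk-similarity graph -> PageRank -> skeleton selection.
sim = nx.Graph()
sim.add_nodes_from(chunks)
for a, b in itertools.combinations(chunks, 2):
    overlap = len(tokens(chunks[a]) & tokens(chunks[b]))
    if overlap:
        sim.add_edge(a, b, weight=overlap)

ranked = sorted(nx.pagerank(sim, weight="weight").items(), key=lambda kv: -kv[1])
skeleton = [cid for cid, _ in ranked[:1]]   # budget: LLM-extract only top chunk
print("LLM entity extraction reserved for:", skeleton)

# Layer 2: keyword -> chunk bipartite map, no LLM calls needed.
keyword_index = {}
for cid, text in chunks.items():
    for kw in tokens(text):
        keyword_index.setdefault(kw, set()).add(cid)

print("fast-lane lookup for 'myocarditis':", keyword_index["myocarditis"])
```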
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝

Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning. What if they could dynamically restructure their thought process like humans?

A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs). Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways. This is crucial for domains like scientific research or legal analysis, where problems demand non-linear, nested reasoning.

The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly. This mirrors how experts allocate mental effort, drilling into uncertainties while streamlining obvious steps. The framework achieved a 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.

By unifying the chain, tree, and graph paradigms, AGoT retains CoT’s clarity, ToT’s exploration, and GoT’s flexibility without manual tuning. The result? LLMs that self-adapt their reasoning depth based on problem complexity, with no architectural changes needed. For AI practitioners, AGoT’s DAG structure offers a principled interface for scaling reasoning modularly (see the sketch after this entry).
·linkedin.com·
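The complexity-check recursion is easy to see in miniature. Here is a sketch of AGoT-style control flow under stated assumptions: `llm()`, `is_complex()`, and `decompose()` are stubs standing in for real model calls, and the word-count heuristic is purely illustrative.

```python
# Adaptive decomposition: complex nodes spawn a sub-graph of subtasks,
# simple ones are answered directly; the DAG records the reasoning trace.
import networkx as nx

def llm(prompt: str) -> str:
    return f"<answer to: {prompt}>"          # stub for an actual LLM call

def is_complex(task: str) -> bool:
    return len(task.split()) > 6             # toy complexity check

def decompose(task: str) -> list[str]:
    # Stub: a real system would ask the LLM to propose subtasks.
    words = task.split()
    mid = len(words) // 2
    return [" ".join(words[:mid]), " ".join(words[mid:])]

def solve(task: str, graph: nx.DiGraph, depth=0, max_depth=3) -> str:
    graph.add_node(task)
    if depth >= max_depth or not is_complex(task):
        return llm(task)                     # resolve simple node directly
    partials = []
    for sub in decompose(task):              # expand only this node
        graph.add_edge(task, sub)
        partials.append(solve(sub, graph, depth + 1, max_depth))
    return llm(f"combine: {' | '.join(partials)}")

dag = nx.DiGraph()
print(solve("explain how vaccine induced spike proteins relate to rare myocarditis cases", dag))
print("reasoning DAG edges:", list(dag.edges))
```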
SimGRAG is a novel method for knowledge-graph-driven RAG that transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
·linkedin.com·
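Since the snippet does not spell out the metric, here is a toy sketch of what aligning a query pattern to candidate subgraphs by summed embedding distances can look like. The hash-based "embeddings" and greedy one-to-one matching are illustrative assumptions, not SimGRAG's actual definition.

```python
# Score candidate subgraphs by total semantic distance to the query
# pattern's nodes; lower is better.
import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # toy embedding
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

def node_distance(a: str, b: str) -> float:
    return 1.0 - float(embed(a) @ embed(b))   # cosine distance

def graph_semantic_distance(pattern_nodes, candidate_nodes):
    """Greedy one-to-one alignment of pattern nodes to candidate nodes."""
    remaining, total = list(candidate_nodes), 0.0
    for p in pattern_nodes:
        best = min(remaining, key=lambda c: node_distance(p, c))
        total += node_distance(p, best)
        remaining.remove(best)
    return total

pattern = ["drug", "side effect"]
candidates = {
    "sub1": ["aspirin", "stomach irritation", "dosage"],
    "sub2": ["invoice", "shipping date", "customer"],
}
scores = {k: graph_semantic_distance(pattern, v) for k, v in candidates.items()}
print(min(scores, key=scores.get), scores)
```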
Knowledge Graph In-Context Learning
Unlocking universal reasoning across knowledge graphs. Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
·linkedin.com·
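In the generic sense of the title, KG in-context learning means putting graph facts into the prompt so the model reasons over them in context. A minimal sketch, with the prompt format and the `ask_llm()` stub as assumptions rather than the paper's method:

```python
# Serialize KG triples into the prompt as in-context facts.
def ask_llm(prompt: str) -> str:
    return "<model answer>"                  # stand-in for a real LLM call

triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

context = "\n".join(f"({s}, {r}, {o})" for s, r, o in triples)
prompt = (
    "Use only the facts below to answer.\n"
    f"Facts:\n{context}\n"
    "Question: In which country was Marie Curie born?\nAnswer:"
)
print(ask_llm(prompt))
```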
Graph-constrained Reasoning
🚀 Exciting New Research: "Graph-constrained Reasoning (GCR)" - Enabling Faithful KG-grounded LLM Reasoning with Zero Hallucination! 🧠 🎉 Proud to share our…
·linkedin.com·
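The zero-hallucination claim rests on constraining generation so every step follows an edge that exists in the KG. Here is a toy sketch of that intuition; the mini-KG and the `score()` stub (standing in for LLM path preferences) are assumptions, not the paper's decoding method.

```python
# Enumerate only reasoning paths that are valid walks in the KG, then let
# a (stubbed) scorer pick one: no relation can be invented.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("aspirin", "COX-1", relation="inhibits")
kg.add_edge("COX-1", "prostaglandins", relation="produces")
kg.add_edge("prostaglandins", "inflammation", relation="mediates")
kg.add_edge("aspirin", "willow bark", relation="derived_from")

def score(path):                      # stub for LLM preference over paths
    return len(path)                  # toy: prefer longer grounded chains

def constrained_paths(start, max_hops=3):
    """Grow paths only along existing KG edges."""
    frontier = [[(start, None)]]
    complete = []
    for _ in range(max_hops):
        nxt = []
        for path in frontier:
            node = path[-1][0]
            edges = list(kg.out_edges(node, data="relation"))
            if not edges:
                complete.append(path)
                continue
            for _, v, rel in edges:
                nxt.append(path + [(v, rel)])
        frontier = nxt
    complete.extend(frontier)
    return complete

best = max(constrained_paths("aspirin"), key=score)
print(" -> ".join(f"[{rel}] {node}" if rel else node for node, rel in best))
```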