Choosing the Right Format: How Knowledge Graph Layouts Impact AI Reasoning
👉 Why This Matters
Most AI systems blend knowledge graphs (structured data) with large language models (flexible reasoning). But there's a hidden variable: how you translate the graph into text for the AI. Researchers discovered that the formatting choice alone can swing performance by up to 17.5% on reasoning tasks. Imagine solving nearly 1 in 5 more problems correctly just by adjusting how you present the data.

👉 What They Built
KG-LLM-Bench is a new benchmark to test how language models reason with knowledge graphs. It includes five tasks:
- Triple verification ("Does this fact exist?")
- Shortest path finding ("How are two concepts connected?")
- Aggregation ("How many entities meet condition X?")
- Multi-hop reasoning ("Which entities linked to A also have property B?")
- Global analysis ("Which node is most central?")
The team tested seven models (Claude, GPT-4o, Gemini, Llama, Nova) with five ways to "textualize" graphs, from simple edge lists to structured JSON and semantic web formats like RDF Turtle.

👉 Key Insights
1. Format matters more than assumed:
- Structured JSON and edge lists performed best overall, but results varied by task.
- For example, JSON excels at aggregation tasks (data is grouped by entity), while edge lists help identify central nodes (repeated mentions highlight connections).
2. Models don't cheat: replacing real entity names with fake ones (e.g., "France" → "Verdania") caused only a 0.2% performance drop, showing that models rely on the provided context, not memorized knowledge.
3. Token efficiency:
- Edge lists used ~2,600 tokens vs. JSON-LD's ~13,500. Shorter formats free up context space for complex reasoning.
- But concise ≠ always better: structured formats improved accuracy on tasks requiring grouped data.
4. Models struggle with directionality: counting outgoing edges (e.g., "Which countries does France border?") is easier than counting incoming ones ("Which countries border France?"), likely due to formatting biases.

👉 Practical Takeaways
- Optimize for your task: use JSON for aggregation, edge lists for centrality.
- Test your model: the best format depends on the LLM. Claude thrived with RDF Turtle, while Gemini preferred edge lists.
- Don't fear pseudonyms: masking real names minimally impacts performance, which is useful for sensitive data.

The benchmark is openly available, inviting researchers to add new tasks, graphs, and models. As AI handles larger knowledge bases, choosing the right "data language" becomes as critical as the reasoning logic itself.

Paper: KG-LLM-Bench: A Scalable Benchmark for Evaluating LLM Reasoning on Textualized Knowledge Graphs
Authors: Elan Markowitz, Krupa Galiya, Greg Ver Steeg, Aram Galstyan
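The formatting lever is easy to reproduce: serialize the same triples several ways and hold the question constant. Below is a minimal sketch with toy facts and hypothetical serializer names, not the benchmark's own code:

```python
import json

# Toy triples for illustration; KG-LLM-Bench ships its own datasets.
triples = [
    ("France", "borders", "Spain"),
    ("France", "borders", "Belgium"),
    ("Spain", "capital", "Madrid"),
]

def to_edge_list(triples):
    # Most token-frugal option: one "subject relation object" line per fact.
    return "\n".join(f"{s} {r} {o}" for s, r, o in triples)

def to_structured_json(triples):
    # Groups facts by entity, which the post credits for aggregation tasks.
    graph = {}
    for s, r, o in triples:
        graph.setdefault(s, {}).setdefault(r, []).append(o)
    return json.dumps(graph, indent=2)

def to_turtle_like(triples):
    # Simplified RDF-Turtle-style rendering (real Turtle adds @prefix headers).
    return "\n".join(f":{s} :{r} :{o} ." for s, r, o in triples)

# Swap the serializer, keep the question fixed, and compare model accuracy.
for serialize in (to_edge_list, to_structured_json, to_turtle_like):
    print(f"--- {serialize.__name__} ---\n{serialize(triples)}\n")
```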
·linkedin.com·
Towards Mechanistic Interpretability of Graph Transformers via Attention Graphs
Our first attempts at mechanistic interpretability of Transformers from the perspective of network science and graph theory! Check out our preprint: arxiv.org/abs/2502.12352

A wonderful collaboration with superstar MPhil students Batu El, Deepro Choudhury, as well as Pietro Lio', as part of the Geometric Deep Learning class last year at the University of Cambridge Department of Computer Science and Technology.

We were motivated by Demis Hassabis calling AlphaFold and other AI systems for scientific discovery 'engineering artifacts'. We need new tools to interpret the underlying mechanisms and advance our scientific understanding. Graph Transformers are a good place to start.

The key ideas are:
- Attention across multiple heads and layers can be seen as a heterogeneous, dynamically evolving graph.
- Attention graphs are complex systems that represent information flow in Transformers.
- We can use network science to extract mechanistic insights from them!

More to come on the network science perspective on understanding LLMs next!
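A minimal version of this pipeline is straightforward with networkx: treat attention weights as a weighted digraph over tokens and run standard graph diagnostics. Averaging heads and layers, as done below, is a simplification of the paper's heterogeneous multi-head, multi-layer view:

```python
import numpy as np
import networkx as nx

# Stand-in attention weights with shape (layers, heads, tokens, tokens);
# random here, but in practice captured via forward hooks on a trained model.
rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(6), size=(4, 8, 6))  # each row sums to 1

# Collapse heads and layers into one weighted digraph over the 6 tokens.
flow = attn.mean(axis=(0, 1))
G = nx.from_numpy_array(flow, create_using=nx.DiGraph)

# Standard network-science diagnostics applied to the attention graph.
print("PageRank:", nx.pagerank(G, weight="weight"))
print("Betweenness:", nx.betweenness_centrality(G, weight="weight"))
```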
·linkedin.com·
A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research
🚀 Thrilled to share our latest work published in Nature Machine Intelligence!

📄 "A comprehensive large-scale biomedical knowledge graph for AI-powered data-driven biomedical research"

In this study, we constructed iKraph, one of the most comprehensive biomedical knowledge graphs to date, using a human-level information extraction pipeline that won both the LitCoin NLP Challenge and the BioCreative Challenge. iKraph integrates insights from over 34 million PubMed abstracts and 40 public databases, enabling unprecedented scale and precision in automated knowledge discovery (AKD).

💡 What sets our work apart?
We developed a causal knowledge graph and a probabilistic semantic reasoning (PSR) algorithm to infer indirect entity relationships, such as drug-disease relationships. This time-aware framework allowed us to retrospectively and prospectively validate drug repurposing and drug target predictions, something rarely done in prior work.

✅ For COVID-19, we predicted hundreds of drug candidates in real time, one-third of which were later supported by clinical trials or publications.
✅ For cystic fibrosis, we demonstrated our predictions were often validated up to a decade later, suggesting our method could significantly accelerate the drug discovery pipeline.
✅ Across diverse diseases and common drugs, we achieved benchmark-setting recall and positive predictive rates, pushing the boundaries of what's possible in drug repurposing.

We believe this study sets a new frontier in biomedical discovery and demonstrates the power of structured knowledge and interpretability in real-world applications.

📚 Read the full paper: https://lnkd.in/egYgbYT4
📌 Access the platform: https://lnkd.in/ecxwHBK7
📂 Access the data and code: https://lnkd.in/eBp2GEnH
LitCoin NLP Challenge: https://lnkd.in/e-cBc6eR

Kudos to our incredible team and collaborators who made this possible!

#DrugDiscovery #AI #KnowledgeGraph #Bioinformatics #MachineLearning #NatureMachineIntelligence #DrugRepurposing #LLM #BiomedicalAI #NLP #COVID19 #Insilicom #NIH #NCI #NSF #ARPA-H
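The post does not spell out PSR, but the general idea of scoring an indirect drug-disease link by combining per-edge confidences along connecting paths can be sketched as a noisy-OR. Everything below (entities, relations, numbers) is invented for illustration and is not iKraph's algorithm or data:

```python
# Hypothetical per-edge confidences for drug -> gene and gene -> disease links.
drug_gene = {("aspirin", "PTGS2"): 0.9, ("aspirin", "TP53"): 0.3}
gene_disease = {("PTGS2", "inflammation"): 0.8, ("TP53", "inflammation"): 0.2}

def indirect_score(drug, disease):
    # Noisy-OR over drug -> gene -> disease paths: the indirect link holds
    # if at least one independent path holds.
    p_all_paths_fail = 1.0
    for (d, gene), p1 in drug_gene.items():
        if d != drug:
            continue
        p2 = gene_disease.get((gene, disease), 0.0)
        p_all_paths_fail *= 1.0 - p1 * p2   # chance this particular path fails
    return 1.0 - p_all_paths_fail

print(indirect_score("aspirin", "inflammation"))  # 0.7368...
```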
·linkedin.com·
Self-Organizing Graph Reasoning Evolves into a Critical State for Continuous Discovery Through Structural-Semantic Dynamics
Deep stuff! We uncovered a startling link between #entropy, a bedrock concept in #physics, and how #AI can discover new ideas without stagnating. In an era…
·linkedin.com·
Unifying Text Semantics and Graph Structures for Temporal Text-attributed Graphs with Large Language Models
LLMs are taking Graph Neural Networks to the next level: while we've been discussing LLMs for natural language, they're quietly changing how we represent…
·linkedin.com·
Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storage—Outperforming MemGPT with 94.8% Accuracy
šŸŽā³ Zep packs Temporal Knowledge Graphs + Semantic Entity Extraction + Cypher Queries storageā€”Outperforming MemGPT with 94.8% Accuracy. Build Personalized AIā€¦ | 46 comments on LinkedIn
·linkedin.com·
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This multi-granular graph framework uses PageRank and a keyword-chunk graph to hit the best cost-quality tradeoff.

怋The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions, like connecting "COVID vaccines" to "myocarditis risks" across research papers. But today's solutions face two nightmares:
✸ Cost: building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.

怋The Fix: KET-RAG's Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like "vaccine side effects" in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs, saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., "myocarditis") to all related text snippets, no LLM needed.
☆ Acts as a "fast lane" for retrieving context without expensive entity extraction.

怋Results: Beating Microsoft's Graph-RAG with Pennies
On the HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft's 74.6%, at 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.

怋Why AI Agents Need This
AI agents aren't just chatbots: they're problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: connecting "drug A → gene B → side effect C" in milliseconds.
✸ Cost-effective scalability: deploying agents across millions of documents without going broke.
✸ Adaptability: mixing precise knowledge graphs (for critical data) with keyword maps (for speed).

Paper in comments

怋Build Your Own Supercharged AI Agent?
🔮 Join my Hands-On AI Agents Training and learn to build AI agents with LangGraph/LangChain, CrewAI and OpenAI Swarm, plus RAG pipelines.
Enroll now [34% discount]: 👉 https://lnkd.in/eGuWr4CH
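The two-layer split is easy to see in miniature: PageRank decides which chunks earn expensive LLM extraction, while the keyword-chunk index is built for free. A toy sketch with made-up chunks and budgets, not the paper's implementation:

```python
import networkx as nx
from collections import defaultdict

# Toy corpus; chunk ids and texts are invented for illustration.
chunks = {
    "c1": "COVID vaccines and myocarditis risks in young adults",
    "c2": "myocarditis case reports after mRNA vaccination",
    "c3": "history of mRNA vaccine development programs",
}
knn_edges = [("c1", "c2"), ("c2", "c3")]  # e.g. from embedding similarity

# Layer 1: PageRank picks the "skeleton" chunks that deserve costly LLM
# entity/triple extraction (the top-k budget is the cost knob).
G = nx.Graph(knn_edges)
ranked = sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1])
skeleton = [cid for cid, _ in ranked[:2]]
print("run LLM triple extraction only on:", skeleton)

# Layer 2: keyword -> chunk bipartite index, built with zero LLM calls.
keyword_to_chunks = defaultdict(set)
for cid, text in chunks.items():
    for token in text.lower().split():
        keyword_to_chunks[token].add(cid)
print("fast-lane hits for 'myocarditis':", keyword_to_chunks["myocarditis"])
```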
·linkedin.com·
SymAgent: A Neural-Symbolic Self-Learning Agent Framework for Complex Reasoning over Knowledge Graphs
LLMs that automatically fill knowledge gaps - too good to be true?

Large Language Models (LLMs) often stumble in logical tasks due to hallucinations, especially when relying on incomplete Knowledge Graphs (KGs). Current methods naively trust KGs as exhaustive truth sources - a flawed assumption in real-world domains like healthcare or finance where gaps persist.

SymAgent is a new framework that approaches this problem by making KGs active collaborators, not passive databases. Its dual-module design combines symbolic logic with neural flexibility:
1. Agent-Planner extracts implicit rules from KGs (e.g., "If drug X interacts with Y, avoid co-prescription") to decompose complex questions into structured steps.
2. Agent-Executor dynamically pulls external data when KG triples are missing, bypassing the "static repository" limitation.

Perhaps most impressively, SymAgent's self-learning observes failed reasoning paths to iteratively refine its strategy and flag missing KG connections - achieving 20-30% accuracy gains over raw LLMs. Equipped with SymAgent, even 7B models rival their much larger counterparts by leveraging this closed-loop system.

It would be great if LLMs were able to autonomously curate knowledge and adapt to domain shifts without costly retraining. But are we there yet? Are hybrid architectures like SymAgent the future?

↓ Liked this post? Join my newsletter with 50k+ readers that breaks down all you need to know about the latest LLM research: llmwatch.com 💡
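A minimal sketch of that closed loop, with stub functions standing in for the LLM-driven planner and executor (all names, rules, and data here are hypothetical):

```python
# Toy KG, deliberately incomplete so the executor has a gap to fill.
kg = {("drugX", "interacts_with"): ["drugY"]}

def plan(question):
    # Planner: decompose the question into (entity, relation) steps.
    # Hard-coded here; SymAgent derives such steps from rules mined from the KG.
    return [("drugX", "interacts_with"), ("drugY", "treats")]

def fetch_external(entity, relation):
    # Executor fallback when the KG lacks a triple (stub for external search).
    return ["conditionZ"] if (entity, relation) == ("drugY", "treats") else []

for step in plan("What does drugX's interaction partner treat?"):
    answers = kg.get(step)
    if answers is None:              # KG gap detected at execution time
        answers = fetch_external(*step)
        kg[step] = answers           # flag and fill the missing connection
    print(step, "->", answers)
```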
·linkedin.com·
What is really Graph RAG?
Inspired by the "From Local to Global: A Graph RAG Approach to Query-Focused Summarization" paper from Microsoft! How do you combine…
·linkedin.com·
Knowledge Graphs as a source of trust for LLM-powered enterprise question answering
That has been our position from the beginning when we started our research…
·linkedin.com·
Graph contrastive learning
Graph contrastive learning (GCL) is a self-supervised learning technique for graphs that focuses on learning representations by contrasting different views of…
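The recipe fits in a few lines: create two stochastic views of the graph, encode both, and pull each node's two embeddings together with an InfoNCE loss while pushing other nodes apart. A toy numpy version, with a single mean-aggregation pass standing in for a trained GNN encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(adj, drop_prob=0.2):
    # One "view": randomly drop edges from the adjacency matrix.
    return adj * (rng.random(adj.shape) > drop_prob)

def encode(adj, feats):
    # Stand-in encoder: one mean-neighbour aggregation pass, L2-normalized.
    z = (adj @ feats) / (adj.sum(1, keepdims=True) + 1e-9)
    return z / (np.linalg.norm(z, axis=1, keepdims=True) + 1e-9)

adj = rng.integers(0, 2, (5, 5)).astype(float)
feats = rng.normal(size=(5, 8))

# The same node under two views is the positive pair; other nodes are negatives.
z1, z2 = encode(augment(adj), feats), encode(augment(adj), feats)
sim = (z1 @ z2.T) / 0.5                                          # temperature 0.5
loss = -np.mean(np.diag(sim) - np.log(np.exp(sim).sum(axis=1)))  # InfoNCE
print("contrastive loss:", loss)
```

Edge dropping is only one augmentation; feature masking and subgraph sampling are common alternatives for generating the two views.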
·linkedin.com·
LightRAG
🚀 Breaking Boundaries in Graph + Retrieval-Augmented Generation (RAG)! 🌐🤖 The rapid pace of innovation in combining graphs with RAG is absolutely…
·linkedin.com·
SimGRAG, a novel method for knowledge-graph-driven RAG, transforms queries into graph patterns and aligns them with candidate subgraphs using a graph semantic distance metric
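A rough sketch of the scoring idea in the title: embed the aligned nodes and relations of a query pattern and a candidate subgraph, then sum their embedding distances. The hash-seeded embeddings and the fixed alignment are simplifications for illustration; the actual method searches over alignments and uses real semantic embeddings:

```python
import numpy as np
import zlib

def embed(text, dim=64):
    # Toy deterministic embedding (hash-seeded); a real system would use a
    # sentence-embedding model here.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def graph_semantic_distance(pattern, subgraph):
    # Sum of node and relation embedding distances between aligned triples.
    # Assumes a one-to-one alignment is already given.
    total = 0.0
    for (s1, r1, o1), (s2, r2, o2) in zip(pattern, subgraph):
        for a, b in ((s1, s2), (r1, r2), (o1, o2)):
            total += 1.0 - float(embed(a) @ embed(b))  # cosine distance
    return total

pattern = [("movie", "directed_by", "Nolan")]
candidate = [("Inception", "directed_by", "Christopher Nolan")]
print(graph_semantic_distance(pattern, candidate))  # lower = better match
```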
·linkedin.com·
Graphs + Transformers = the best of both worlds
🤝 The same models powering breakthroughs in natural language processing are now being adapted for graphs…
·linkedin.com·
Knowledge Graph In-Context Learning
Unlocking universal reasoning across knowledge graphs. Knowledge graphs (KGs) are powerful tools for organizing and reasoning over vast amounts of…
·linkedin.com·