Unifying Text Semantics and Graph Structures for Temporal Text-attributed Graphs with Large Language Models
LLMs are taking Graph Neural Networks to the next level:
While we've been discussing LLMs for natural language, they're quietly changing how we represent…
GiGL: Large-Scale Graph Neural Networks at Snapchat
Recent advances in graph machine learning (ML) with the introduction of Graph Neural Networks (GNNs) have led to a widespread interest in applying these approaches to business applications at...
KET-RAG: Turbocharging AI Agents with 10x Cheaper, Smarter Knowledge Retrieval
This multi-granular graph framework uses PageRank and a keyword-chunk bipartite graph to achieve the best cost-quality tradeoff
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Problem: Knowledge Graphs Are Expensive (and Clunky)
AI agents need context to answer complex questions—like connecting “COVID vaccines” to “myocarditis risks” across research papers. But today’s solutions face two nightmares:
✸ Cost: Building detailed knowledge graphs with LLMs can cost $33,000 for a 5GB legal case.
✸ Quality: Cheap methods (like KNN graphs) miss key relationships, leading to 32% worse answers.
☆ Imagine training an AI doctor that either bankrupts you or misdiagnoses patients. Ouch.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》The Fix: KET-RAG’s Two-Layer Brain
KET-RAG merges precision (knowledge graphs) and efficiency (keyword-text maps) into one system:
✸ Layer 1: Knowledge Graph Skeleton
☆ Uses PageRank to find core text chunks (like “vaccine side effects” in medical docs).
☆ Builds a sparse graph only on these chunks with LLMs—saving 80% of indexing costs.
✸ Layer 2: Keyword-Chunk Bipartite Graph
☆ Links keywords (e.g., “myocarditis”) to all related text snippets—no LLM needed.
☆ Acts as a “fast lane” for retrieving context without expensive entity extraction.
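The two-layer index above can be sketched roughly as follows. This is an illustrative sketch, not the paper's code: `extract_triples` stands in for the expensive LLM extraction call, `keywords_of` for a cheap keyword extractor, and the PageRank here is a plain power iteration over the chunk-similarity graph.

```python
def pagerank(nodes, edges, d=0.85, iters=50):
    """Simple power-iteration PageRank on an undirected graph."""
    nbrs = {n: set() for n in nodes}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        pr = {n: (1 - d) / len(nodes)
                 + d * sum(pr[m] / len(nbrs[m]) for m in nbrs[n] if nbrs[m])
              for n in nodes}
    return pr

def build_ket_rag_index(chunks, sim_edges, keywords_of, extract_triples, budget=0.2):
    """chunks: {chunk_id: text}; sim_edges: chunk-similarity pairs;
    budget: fraction of chunks sent to the LLM extractor."""
    # Layer 1: PageRank picks the "skeleton" chunks; only those
    # go through the costly LLM triple extraction.
    scores = pagerank(list(chunks), sim_edges)
    k = max(1, int(budget * len(chunks)))
    skeleton = sorted(chunks, key=lambda c: scores[c], reverse=True)[:k]
    kg_triples = [t for c in skeleton for t in extract_triples(chunks[c])]
    # Layer 2: keyword -> chunk bipartite map over ALL chunks, no LLM calls.
    bipartite = {}
    for c in chunks:
        for kw in keywords_of(chunks[c]):
            bipartite.setdefault(kw, set()).add(c)
    return kg_triples, bipartite
```

At query time, the keyword map serves as the cheap "fast lane" for recall, while the skeleton triples supply precise multi-hop structure.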
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Results: Beating Microsoft’s Graph-RAG with Pennies
On HotpotQA and MuSiQue benchmarks, KET-RAG:
✸ Retrieves 81.6% of critical info vs. Microsoft’s 74.6%—with 10x lower cost.
✸ Boosts answer accuracy (F1 score) by 32.4% while cutting indexing bills by 20%.
✸ Scales to terabytes of data without melting budgets.
☆ Think of it as a Tesla Model 3 outperforming a Lamborghini at 1/10th the price.
﹌﹌﹌﹌﹌﹌﹌﹌﹌
》Why AI Agents Need This
AI agents aren’t just chatbots—they’re problem solvers for medicine, law, and customer service. KET-RAG gives them:
✸ Real-time, multi-hop reasoning: Connecting “drug A → gene B → side effect C” in milliseconds.
✸ Cost-effective scalability: Deploying agents across millions of documents without going broke.
✸ Adaptability: Mixing precise knowledge graphs (for critical data) with keyword maps (for speed).
Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs
Dynamic Reasoning Graphs + LLMs = 🤝
Large Language Models (LLMs) often stumble on complex tasks when confined to linear reasoning.
What if they could dynamically restructure their thought process like humans?
A new paper introduces Adaptive Graph of Thoughts (AGoT), a test-time framework that replaces rigid prompting strategies (like Chain/Tree of Thought) with dynamic directed acyclic graphs (DAGs).
Instead of forcing fixed reasoning steps, AGoT recursively decomposes problems into sub-tasks, selectively expanding only the most critical pathways.
This is crucial for industries like scientific research or legal analysis, where problems demand non-linear, nested reasoning.
The key innovation lies in complexity checks: AGoT assesses each reasoning node, spawning sub-graphs for intricate subtasks while resolving simpler ones directly.
This mirrors how experts allocate mental effort—drilling into uncertainties while streamlining obvious steps.
The framework achieved 46.2% improvement on GPQA (a notoriously hard science QA benchmark), rivaling gains from compute-heavy fine-tuning.
By unifying chain, tree, and graph paradigms, AGoT retains CoT’s clarity, ToT’s exploration, and GoT’s flexibility without manual tuning.
The result? LLMs that self-adapt their reasoning depth based on problem complexity—no architectural changes needed.
For AI practitioners, AGoT’s DAG structure offers a principled interface to scale reasoning modularly.
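The adaptive-expansion idea can be sketched as a toy recursion: a complexity check decides, per node, whether to answer directly or spawn finer-grained sub-steps. All names here are illustrative, not the paper's API, and the recursion is shown as a tree for brevity, whereas AGoT proper builds DAGs whose sub-results can be shared.

```python
def agot_solve(task, decompose, is_simple, answer, combine, depth=0, max_depth=3):
    """decompose: task -> subtasks (an LLM call in practice);
    is_simple: the complexity check; answer: resolve a leaf directly;
    combine: merge sub-answers back into the parent node."""
    if depth >= max_depth or is_simple(task):
        return answer(task)          # simple node: resolve directly
    subtasks = decompose(task)       # complex node: expand into a sub-graph
    sub_answers = [agot_solve(t, decompose, is_simple, answer, combine,
                              depth + 1, max_depth) for t in subtasks]
    return combine(task, sub_answers)
```

The point of the sketch is the selective expansion: effort is only spent where `is_simple` says the node still needs it, mirroring how experts drill into uncertainties.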
GFM-RAG: The First Graph Foundation Model for Retrieval-Augmented Generation
🚀 Introducing GFM-RAG: The First Graph Foundation Model for Retrieval-Augmented Generation!
We’re excited to share our latest research: GFM-RAG: Graph…
And so we set out to understand _feedforward_ graphs (i.e. graphs w/o back edges) ⏩
Turns out these graphs are rather understudied for how often they are…
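A directed graph is feedforward (no back edges) exactly when it is acyclic, so feedforwardness can be checked with Kahn's algorithm: a complete topological order exists only for DAGs. A minimal sketch, with illustrative names:

```python
from collections import deque

def is_feedforward(num_nodes, edges):
    """True iff the directed graph has no back edges, i.e. is a DAG."""
    indeg = [0] * num_nodes
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    # Kahn's algorithm: repeatedly peel off zero-in-degree nodes.
    queue = deque(n for n in range(num_nodes) if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == num_nodes  # a cycle leaves some nodes unpeeled
```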
The Evolution of Intelligent Recommendations with Agentic Graph Systems
The Evolution of Intelligent Recommendations with Agentic Graph Systems ➿ Agentic graph systems for recommendation represent a sophisticated fusion of…
GraphAgent — An innovative AI agent that efficiently integrates structured and unstructured data
🚀 Excited to Share Our Recent Work! 🌟 GraphAgent — An innovative AI agent that efficiently integrates structured and unstructured data! 📚 👉 Paper link:…
Want to catch up on Graph Neural Networks? Now's the time! Graph Neural Networks (GNNs) have become a popular solution for problems that include network data,…
❓How Can Graph Neural Networks Enhance Recommendation Systems by Incorporating Contextual Information? Traditional recommendation systems often leverage a…
Can Graph Learning Improve Planning in LLM-based Agents?
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs). It aims to break down complex user requests in natural...
ICYMI, here are the slides from our standing-room-only talk at NeurIPS yesterday! Concepts we discuss include: ➡️ Quantifying how much Transformer you need to…
Graphs + Transformers = the best of both worlds 🤝 The same models powering breakthroughs in natural language processing are now being adapted for graphs…
GNN: Graph Neural Network and Large Language Model Based for Data Discovery
Our algorithm GNN: Graph Neural Network and Large Language Model Based for Data Discovery inherits the benefits of [Hoang(2024b)] (PLOD: Predictive Learning Opt
A Graph Neural Network (GNN) won the highly competitive Causal Discovery competition arranged by ADIA Lab
A Graph Neural Network (GNN) won the highly competitive Causal Discovery competition arranged by ADIA Lab. Of course I mean to say that Hicham Hallak won the…
A collection of Graph Embedding methods in Python. 🧠💎 This repository provides hands-on implementations of essential graph embedding algorithms like: ▪️…
Can Ontologies be seen as General Ledger for AI? Could that be a good way to audit AI systems delivering critical business outcomes? In my quest to develop a…
🕸️Building a LangGraph agent with graph memory The following community examples demonstrates building an agent using LangGraph. Graphiti is used to…
TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs
🌟 TGB 2.0 @NeurIPS 2024 🌟 We are very happy to share that our paper TGB 2.0: A Benchmark for Learning on Temporal Knowledge Graphs and Heterogeneous Graphs…
More Graph, More Agents: Scaling Graph Reasoning with LLMs
More Graph, More Agents: Scaling Graph Reasoning with LLMs Graph reasoning tasks have proven to be a tough nut to crack for Large Language Models (LLMs).…
Graph Neural Networks (GNNs) and LLMs are colliding in exciting ways
Graph Neural Networks (GNNs) and LLMs are colliding in exciting ways. 💥 This survey introduces a novel taxonomy for categorizing existing methods that…
graphgeeks-lab/awesome-graph-universe: A curated list of resources for graph-related topics, including graph databases, analytics and science
Awesome Graph Universe 🌐
Welcome to Awesome Graph Universe, a curated list of resources, tools, libraries, and applications for working with graphs and networks. This repository covers everything from Graph Databases and Knowledge Graphs to Graph Analytics, Graph Computing, and beyond.
Graphs and networks are essential in fields like data science, knowledge representation, machine learning, and computational biology. Our goal is to provide a comprehensive resource that helps researchers, developers, and enthusiasts explore and utilize graph-based technologies.
Feel free to contribute by submitting pull requests! 🚀
UltraQuery: going beyond simple one-hop link prediction to answering more complex queries on any graph in a zero-shot fashion, better than trainable SOTA
📣 Foundation models for graph reasoning become even stronger - in our new NeurIPS 2024 work we introduce UltraQuery: going beyond simple one-hop link…
In this work, we achieve perfect neural execution of several algorithms by forcing the node and edge representations to be from a fixed finite set. Also, the proposed architectural choice allows us to prove the correctness of the learned algorithms for any test data.
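The core trick, forcing representations into a fixed finite set, can be illustrated by snapping each node state to its nearest codebook entry after every message-passing step, so the network's behavior ranges over finitely many states that can be checked exhaustively. A toy sketch of that single step (not the paper's architecture; the codebook and distance are illustrative):

```python
def quantize(state, codebook):
    """Snap a continuous node state to its nearest codebook vector
    (squared Euclidean distance), yielding a finite state space."""
    return min(codebook, key=lambda c: sum((s - x) ** 2 for s, x in zip(state, c)))
```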
🚀 PyG 2.6 is here! 🎉 We’re excited to announce the release of PyG 2.6.0, packed with incredible updates for graph learning! Here’s a quick rundown of what’s…
Can Graph Reordering Speed Up Graph Neural Network Training? An...
Graph neural networks (GNNs) are a type of neural network capable of learning on graph-structured data. However, training GNNs on large-scale graphs is challenging due to iterative aggregations of...
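One common reordering heuristic is BFS relabeling, which assigns nearby IDs to neighboring nodes so that the aggregation step touches memory more locally. A minimal sketch of the general idea (not necessarily the specific method studied in the paper):

```python
from collections import deque

def bfs_reorder(num_nodes, edges):
    """Relabel nodes of an undirected graph in BFS order; returns
    new_id where new_id[old_label] = new_label."""
    adj = [[] for _ in range(num_nodes)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    order, visited = [], [False] * num_nodes
    for start in range(num_nodes):       # cover disconnected components
        if visited[start]:
            continue
        visited[start] = True
        q = deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    q.append(v)
    new_id = [0] * num_nodes
    for new, old in enumerate(order):
        new_id[old] = new
    return new_id
```

Applying the permutation to the adjacency structure (and feature matrix) before training keeps each node's neighborhood close together in memory during aggregation.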