GraphNews

4424 bookmarks
Custom sorting
MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
When AI Diagnoses Patients, Should Reasoning Be a Team Sport?

👉 Why Existing Approaches Fall Short
Medical question answering demands precision, but current AI methods struggle with two key issues:
1. Error Accumulation: Linear reasoning chains (like Chain-of-Thought) risk compounding mistakes—if the first step is wrong, the entire answer falters.
2. Flat Knowledge Retrieval: Traditional retrieval-augmented methods treat medical facts as unrelated text snippets, ignoring complex relationships between symptoms, diseases, and treatments.
This leads to unreliable diagnoses and opaque decision-making—a critical problem when patient outcomes are at stake.

👉 What MIRAGE Does Differently
MIRAGE transforms reasoning from a solo sprint into a coordinated team effort:
- Parallel Detective Work: Instead of one linear chain, multiple specialized "detectives" (reasoning chains) investigate different symptoms or entities in parallel.
- Structured Evidence Hunting: Retrieval operates on medical knowledge graphs, tracing connections between symptoms (e.g., "face pain → lead poisoning") rather than scanning documents.
- Cross-Check Consensus: Answers from parallel chains are verified against each other to resolve contradictions, like clinicians discussing differential diagnoses.

👉 How It Works (Without the Jargon)
1. Break It Down
   - Splits complex queries ("Why am I fatigued with knee pain?") into focused sub-questions grounded in specific symptoms/entities.
   - Example: "Conditions linked to fatigue" and "Causes of knee lumps" become separate investigation threads.
2. Graph-Guided Retrieval
   - Each thread explores a medical knowledge graph like a map:
     - Anchor Mode: Examines direct connections (e.g., diseases causing a symptom).
     - Bridge Mode: Hunts multi-step relationships (e.g., toxin exposure → neurological symptoms → joint pain).
3. Vote & Verify
   - Combines evidence from all threads, prioritizing answers supported by multiple independent chains.
   - Discards conflicting hypotheses (e.g., ruling out lupus if only one chain suggests it without corroboration).

👉 Why This Matters
Tested on three medical benchmarks (including real clinician queries), MIRAGE:
- Outperformed GPT-4 and Tree-of-Thought variants in accuracy (84.8% vs. 80.2%)
- Reduced error propagation by 37% compared to linear retrieval-augmented methods
- Produced answers with traceable evidence paths, critical for auditability in healthcare

The Big Picture
MIRAGE shifts AI reasoning from brittle, opaque processes to collaborative, structured exploration. By mirroring how clinicians synthesize information from multiple angles, it highlights a path toward AI systems that are both smarter and more trustworthy in high-stakes domains.

Paper: Wei et al. MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
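The parallel-chains-plus-voting idea can be sketched in a few lines. Everything below is illustrative: the toy knowledge graph, the entity and condition names, and the `min_support` threshold are invented for the example, not taken from the paper.

```python
from collections import Counter

# Toy medical knowledge graph: entity -> directly connected conditions.
# These edges are illustrative placeholders, not real medical knowledge.
KG = {
    "fatigue": ["anemia", "hypothyroidism", "lead_poisoning"],
    "knee_pain": ["arthritis", "lead_poisoning"],
    "face_pain": ["sinusitis", "lead_poisoning"],
}

def anchor_retrieve(entity):
    """Anchor-mode retrieval: follow direct edges from one entity."""
    return KG.get(entity, [])

def reasoning_chain(entity):
    """One 'detective': gather candidate answers for its sub-question."""
    return set(anchor_retrieve(entity))

def vote_and_verify(entities, min_support=2):
    """Keep only hypotheses corroborated by multiple independent chains."""
    votes = Counter()
    for entity in entities:
        for condition in reasoning_chain(entity):
            votes[condition] += 1
    return [c for c, n in votes.items() if n >= min_support]

# Three parallel chains agree only on one diagnosis.
print(vote_and_verify(["fatigue", "knee_pain", "face_pain"]))  # ['lead_poisoning']
```

A single chain never reaches the support threshold on its own, which mirrors the "discard uncorroborated hypotheses" behavior described above.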
MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
·linkedin.com·
MIRAGE: Scaling Test-Time Inference with Parallel Graph-Retrieval-Augmented Reasoning Chains
Hot take on "faster than Dijkstra"
Hot take on "faster than Dijkstra"
𝗛𝗼𝘁 𝘁𝗮𝗸𝗲 𝗼𝗻 𝘁𝗵𝗲 “𝗳𝗮𝘀𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗗𝗶𝗷𝗸𝘀𝘁𝗿𝗮” 𝗵𝗲𝗮𝗱𝗹𝗶𝗻𝗲𝘀:
The recent result in the paper https://lnkd.in/dQSbqrhD is a breakthrough for theory. It beats Dijkstra’s classic worst-case bound for single-source shortest paths on directed graphs with non-negative weights. That’s big for the research community.

𝗕𝘂𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀𝗻’𝘁 “𝗿𝗲𝘄𝗿𝗶𝘁𝗲” 𝗽𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗿𝗼𝘂𝘁𝗶𝗻𝗴.
In practice, large-scale systems (maps, logistics, ride-hailing) moved past plain Dijkstra years ago. They rely on heavy preprocessing: Contraction Hierarchies, Hub Labels, and other methods answer point-to-point queries in milliseconds, even on large, continental networks.

𝗪𝗵𝘆 𝘁𝗵𝗲 𝗱𝗶𝘀𝗰𝗼𝗻𝗻𝗲𝗰𝘁?
• Different goals: The paper targets single-source shortest paths; production prioritizes point-to-point queries at interactive latencies.
• Asymptotics vs. constants: Beating O(m + n log n) matters in principle, but real systems live and die by constants, cache behavior, and integration with traffic/turn costs.
• Preprocessing wins: Once you allow preprocessing, the speedups from hierarchical/labeling methods dwarf Dijkstra and likely any drop-in replacement without preprocessing.

We should celebrate the theoretical advance and keep an eye on practical implementations. Just don’t confuse a sorting-barrier result with an immediate upgrade for Google Maps.

𝗕𝗼𝘁𝘁𝗼𝗺 𝗹𝗶𝗻𝗲: Great theory milestone. Production routing already “changed the rules” years ago with preprocessing and smart graph engineering.
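For context, the baseline all of this is measured against is plain heap-based Dijkstra. A minimal sketch follows; the toy graph is invented for illustration. Note the bound nuance: the O(m + n log n) figure assumes a Fibonacci heap, while the binary heap used here gives O((m + n) log n).

```python
import heapq

def dijkstra(adj, src):
    """Textbook single-source shortest paths with a binary heap.
    `adj` maps node -> list of (neighbor, weight), weights >= 0."""
    dist = {src: 0}
    pq = [(0, src)]  # (distance-so-far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 2, 'c': 3}
```

The new result attacks the sorting-like log factor hidden in the priority queue; the production techniques mentioned above sidestep it entirely with preprocessing.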
𝗛𝗼𝘁 𝘁𝗮𝗸𝗲 𝗼𝗻 𝘁𝗵𝗲 “𝗳𝗮𝘀𝘁𝗲𝗿 𝘁𝗵𝗮𝗻 𝗗𝗶𝗷𝗸𝘀𝘁𝗿𝗮” 𝗵𝗲𝗮𝗱𝗹𝗶𝗻𝗲𝘀
·linkedin.com·
Hot take on "faster than Dijkstra"
4.7 times better write query price-performance with AWS Graviton4 R8g instances using Amazon Neptune v1.4.5 | Amazon Web Services
4.7 times better write query price-performance with AWS Graviton4 R8g instances using Amazon Neptune v1.4.5 | Amazon Web Services
Amazon Neptune version 1.4.5 introduces engine improvements and support for AWS Graviton-based r8g instances. In this post, we show you how these updates can improve your graph database performance and reduce costs. We walk you through the benchmark results for Gremlin and openCypher comparing Neptune v1.4.5 on r8g instances against previous versions. You'll see performance improvements of up to 4.7x for write throughput and 3.7x for read throughput, along with the cost implications.
·aws.amazon.com·
4.7 times better write query price-performance with AWS Graviton4 R8g instances using Amazon Neptune v1.4.5 | Amazon Web Services
Faster than Dijkstra? Tsinghua University’s new shortest path algorithm just rewrote the rules of graph traversal.
Faster than Dijkstra? Tsinghua University’s new shortest path algorithm just rewrote the rules of graph traversal.
🚀 Faster than Dijkstra? Tsinghua University’s new shortest path algorithm just rewrote the rules of graph traversal.

For 65+ years, Dijkstra’s algorithm was the gold standard for finding shortest paths in weighted graphs. But now, a team from Tsinghua University has introduced a recursive partial ordering method that outperforms Dijkstra—especially on directed graphs.

🔍 What’s different?
Instead of sorting all vertices by distance (which adds log-time overhead), this new approach uses a clever recursive structure that breaks the O(m + n log n) barrier ✨. It’s faster, leaner, and already winning awards at STOC 2025 🏆.

📍 Why it matters:
Think Google Maps, Uber routing, disaster evacuation planning, circuit design—any system that relies on real-time pathfinding across massive graphs.

Paper ➡ https://lnkd.in/dGTdRj2X

#Algorithms #ComputerScience #Engineering #Dijkstra #routing #planning #logistic
Faster than Dijkstra? Tsinghua University’s new shortest path algorithm just rewrote the rules of graph traversal.
·linkedin.com·
Faster than Dijkstra? Tsinghua University’s new shortest path algorithm just rewrote the rules of graph traversal.
Quality metrics: mathematical functions designed to measure the “goodness” of a network visualization
Quality metrics: mathematical functions designed to measure the “goodness” of a network visualization
I’m proud to share an exciting piece of work by my PhD student, Simon van Wageningen, whom I have the pleasure of supervising. Simon asked a bold question that challenges the state of the art in our field!

A bit of background first: together with Simon, we study network visualizations — those diagrams made of dots and lines. They’re more than just pretty pictures: they help us gain intuition about the structure of networks around us, such as social networks, protein networks, or even money-laundering networks 😉.

But how do we know if a visualization really shows the structure well? That’s where quality metrics come in — mathematical functions designed to measure the “goodness” of a network visualization. Many of these metrics correlate nicely with human intuition. Yet, in our community, there has long been a belief — more of a tacit knowledge — that these metrics fail in certain cases.

This is exactly where Simon’s work comes in: he set out to make this tacit knowledge explicit. Take a look at the dancing man and the network in the slider — they represent the same network with very similar quality metric values. And yet, the dancing man clearly does not show the network's structure. This tells us something important: we can’t blindly rely on quality metrics.

Simon’s work will be presented at the International Symposium on Graph Drawing and Network Visualization in Norrköping, Sweden this year. 🎉 If you’d like to dive deeper, here’s the link to the GitHub repository: https://lnkd.in/eqw3nYmZ

#graphdrawing #networkvisualization #qualitymetrics #research with Simon van Wageningen and Alex Telea
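One classic quality metric of this kind is stress: how faithfully Euclidean distances in the drawing reflect graph-theoretic distances (lower is better). Here is a minimal pure-Python sketch, assuming an unweighted, connected graph; this is a generic textbook metric, not necessarily one of the metrics studied in the paper.

```python
import math
from collections import deque

def graph_distances(adj, src):
    """BFS hop distances from src (unweighted, connected graph assumed)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def stress(adj, pos):
    """Normalized stress: sum over node pairs of the squared mismatch
    between drawn distance and graph distance, weighted by 1/gd^2."""
    total = 0.0
    nodes = list(adj)
    for i, u in enumerate(nodes):
        d = graph_distances(adj, u)
        for v in nodes[i + 1:]:
            gd = d[v]                     # graph-theoretic distance
            ed = math.dist(pos[u], pos[v])  # Euclidean distance in drawing
            total += (ed - gd) ** 2 / gd ** 2
    return total

# A path graph a-b-c drawn on a straight line has (near-)zero stress.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
pos = {"a": (0, 0), "b": (1, 0), "c": (2, 0)}
print(stress(adj, pos))  # 0.0
```

The point of the paper, as the post explains, is precisely that layouts can score well on such functions while still misrepresenting the structure.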
Quality metrics: mathematical functions designed to measure the “goodness” of a network visualization
·linkedin.com·
Quality metrics: mathematical functions designed to measure the “goodness” of a network visualization
True Cost of Enterprise Knowledge Graph Adoption from PoC to Production | LinkedIn
True Cost of Enterprise Knowledge Graph Adoption from PoC to Production | LinkedIn
Enterprise Knowledge Graph costs scale in phases—from a modest $50K–$100K PoC, to a $1M–$3M pilot with infrastructure and dedicated teams, to a $10M–$20M enterprise-wide platform. Reusability reduces costs to ~30% of the original for new domains, with faster delivery and self-sufficiency typically b
·linkedin.com·
True Cost of Enterprise Knowledge Graph Adoption from PoC to Production | LinkedIn
Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
I'm extremely excited to announce that Siemens and AIT Austrian Institute of Technology—two leaders in industrial innovation—chose TDengine as the time-series backbone for a groundbreaking project at TCG Unitech GmbH!

Here’s the magic: Imagine stitching together over a thousand time-series signals per machine with domain knowledge, and connecting it all through an intelligent semantic layer. With TDengine capturing high-frequency sensor data, PostgreSQL holding production context, and Ontopic virtualizing everything into a cohesive knowledge graph—this isn’t just data collection. It’s an orchestration that reveals hidden patterns, powers real-time anomaly and defect detection, supports traceability, and enables explainable root-cause analysis.

And none of this works without good semantics. The system understands the relationships—between sensors, machines, processes, and defects—which means both AI and humans can ask the right questions and get meaningful, actionable answers.

For me, this is the future of smart manufacturing: when data, infrastructure, and domain expertise come together, you get proactive, explainable, and scalable insights that keep factories running at peak performance.

It's a true pleasure working with Stefan B. from Siemens AG Österreich, Stephan Strommer and David Gruber from AIT, Peter Hopfgartner from Ontopic and our friends Klaus Neubauer, Herbert Kerbl, Bernhard Schmiedinger from TCG on this technical blog! We hope this will bring some good insights into how time-series data and semantics can transform the operations of modern manufacturing!

Read the full case study: https://lnkd.in/gtuf8KzU
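For a flavor of what "real-time anomaly detection" on such sensor streams looks like at its very simplest, here is a rolling z-score sketch. The readings, window size, and threshold are invented for illustration; the actual TDengine/Ontopic pipeline described in the post is of course far more sophisticated.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag each point whose deviation from the trailing-window mean
    exceeds `threshold` standard deviations of that window. A deliberately
    simple stand-in for streaming anomaly detection on sensor data."""
    flags = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        flags.append(sigma > 0 and abs(series[i] - mu) > threshold * sigma)
    return flags

# Hypothetical pressure readings with one obvious spike at the end.
readings = [10.0, 10.1, 9.9, 10.0, 10.2, 10.1, 25.0]
print(zscore_anomalies(readings))  # [False, True]
```

In a real deployment the detection runs continuously against the time-series store, and the semantic layer links each flagged reading back to its sensor, machine, and process step for root-cause analysis.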
·linkedin.com·
Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding
GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding
Unlocking LLMs' Graph Reasoning Potential Through Cognitive-Inspired Collaboration

👉 Why This Matters
Large language models often falter when analyzing transportation networks, social connections, or citation graphs—not due to lacking intelligence, but because of working memory constraints. Imagine solving a 1,000-node shortest path problem while simultaneously memorizing every connection. Like humans juggling too many thoughts, LLMs lose accuracy as graph complexity increases.

👉 What GraphCogent Solves
This new framework addresses three core limitations:
1. Representation confusion: Mixed graph formats (adjacency lists, symbols, natural language)
2. Memory overload: Context window limitations for large-scale graphs
3. Execution fragility: Error-prone code generation for graph algorithms
Drawing inspiration from human cognition's working memory model, GraphCogent decomposes graph reasoning into specialized processes mirroring how our brains handle complex tasks.

👉 How It Works
Sensory Module
- Acts as an LLM's "eyes," standardizing diverse graph inputs through subgraph sampling
- Converts web links, social connections, or traffic routes into uniform adjacency lists
Buffer Module
- Functions as a "mental workspace," integrating graph data across formats (NetworkX/PyG/NumPy)
- Maintains persistent memory beyond standard LLM context limits
Execution Module
- Combines two reasoning modes:
  - Tool calling for common tasks (pathfinding, cycle detection)
  - Model generation for novel problems using preprocessed data

👉 Proven Impact
- Achieves 98.5% accuracy on real-world graphs (social networks, transportation systems) using Llama3.1-8B
- Outperforms 671B parameter models by 50% while using 80% fewer tokens
- Handles graphs 10x larger than previous benchmarks through efficient memory management

The framework's secret sauce? Treating graph reasoning as a team effort rather than a single AI's task—much like how human experts collaborate on complex problems.

Key Question for Discussion
As multi-agent systems become more sophisticated, how might we redesign LLM architectures to better emulate human cognitive processes for specific problem domains?

Paper: "GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding" (Wang et al., 2025)
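The Sensory Module's job, standardizing heterogeneous graph inputs into one uniform adjacency representation, can be illustrated with a tiny normalizer. The accepted input formats and all names below are my own invention for the sketch, not the paper's actual interfaces.

```python
def to_adjacency(graph):
    """Normalize a few common graph encodings (adjacency dict, edge list,
    'u-v' text) into one canonical undirected adjacency dict."""
    adj = {}
    def add(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    if isinstance(graph, dict):      # adjacency dict: node -> neighbors
        for u, nbrs in graph.items():
            adj.setdefault(u, set())
            for v in nbrs:
                add(u, v)
    elif isinstance(graph, list):    # edge list of (u, v) pairs
        for u, v in graph:
            add(u, v)
    elif isinstance(graph, str):     # comma-separated "u-v" edge text
        for token in graph.split(","):
            u, v = token.strip().split("-")
            add(u, v)
    return {u: sorted(nbrs) for u, nbrs in adj.items()}

# Three different encodings of the same triangle normalize identically.
encodings = [
    {"a": ["b", "c"], "b": ["c"]},
    [("a", "b"), ("a", "c"), ("b", "c")],
    "a-b, a-c, b-c",
]
print(all(to_adjacency(g) == to_adjacency(encodings[0]) for g in encodings))  # True
```

Once everything is in one canonical form, downstream "execution" agents can safely hand the graph to standard tools (e.g., NetworkX routines) without re-parsing each input style.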
- Achieves 98.5% accuracy on real-world graphs (social networks, transportation systems) using Llama3.1-8B
- Outperforms 671B parameter models by 50% while using 80% fewer tokens
- Handles graphs 10x larger than previous benchmarks through efficient memory management
·linkedin.com·
GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help understand relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights.

In Knowledge Graphs and LLMs in Action you will learn how to:
- Model knowledge graphs with an iterative top-down approach based in business needs
- Create a knowledge graph starting from ontologies, taxonomies, and structured data
- Use machine learning algorithms to hone and complete your graphs
- Build knowledge graphs from unstructured text data sources
- Reason on the knowledge graph and apply machine learning algorithms

Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graphs (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you’ll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You’ll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.
·manning.com·
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Find the best link prediction for your specific graph
Find the best link prediction for your specific graph
🔗 How's your Link Prediction going? Did you know that the best algorithm for link prediction can vary by network? Slight differences in your graph data, and you may be better off with a new approach.

Join us for an exclusive talk on August 28th to learn how to find the right link prediction model and, ultimately, get to more complete graph data. Researchers Bisman Singh and Aaron Clauset will share a new (just published!) meta-learning approach that uses a network's own structural features to automatically select the optimal link prediction algorithm!

This is a must-attend event for any data scientist or researcher who wants to eliminate exhaustive benchmarking while getting more accurate predictions. The code will be made public, so you can put these insights into practice immediately.

🤓 Ready to really geek out? Register now: https://lnkd.in/g38EfQ2s
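To see why per-network selection matters, here is a sketch of the exhaustive-benchmarking baseline with two classic link-prediction heuristics: score each candidate on a held-out split and keep the winner. The toy graph, pair lists, and function names are invented for illustration; the talk's meta-learning approach predicts the best algorithm from structural features instead of running this loop.

```python
def common_neighbors(adj, u, v):
    """Classic heuristic: score a candidate edge by shared neighbors."""
    return len(adj[u] & adj[v])

def jaccard(adj, u, v):
    """Shared neighbors normalized by the neighborhood union."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def auc_like(adj, pos_pairs, neg_pairs, predictor):
    """Probability that a held-out true edge outscores a non-edge."""
    wins = ties = 0
    for p in pos_pairs:
        for n in neg_pairs:
            sp, sn = predictor(adj, *p), predictor(adj, *n)
            wins += sp > sn
            ties += sp == sn
    return (wins + 0.5 * ties) / (len(pos_pairs) * len(neg_pairs))

def select_predictor(adj, pos_pairs, neg_pairs, predictors):
    """Exhaustive benchmarking: try every candidate, keep the best.
    (This is exactly the step meta-learning aims to skip.)"""
    return max(predictors,
               key=lambda name: auc_like(adj, pos_pairs, neg_pairs, predictors[name]))

# Toy graph as neighbor sets; edge b-d was held out before building adj.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"},
       "d": {"a"}, "e": set()}
predictors = {"common_neighbors": common_neighbors, "jaccard": jaccard}
best = select_predictor(adj, [("b", "d")], [("e", "d")], predictors)
print(best)
```

On a real network this benchmarking loop is expensive, which is the cost the structural-feature-based selection is designed to avoid.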
·linkedin.com·
Find the best link prediction for your specific graph
TuringDB release
TuringDB release

TuringDB is built to make real-time graph analytics effortless:
⚡ 1–50 ms queries on graphs with 10M+ nodes
🛠️ Python SDK • REST API • Web UI - no index tuning
🔄 Zero-lock concurrency - reads never block writes
📜 Git-like versioning & time-travel queries for full auditability & control

Perfect for knowledge graphs, AI agent memory, and large-scale analytics pipelines. Early access is now open.

·linkedin.com·
TuringDB release