Quality metrics: mathematical functions designed to measure the “goodness” of a network visualization
I’m proud to share an exciting piece of work by my PhD student, Simon van Wageningen, whom I have the pleasure of supervising. Simon asked a bold question that challenges the state of the art in our field!
A bit of background first: together with Simon, we study network visualizations — those diagrams made of dots and lines. They’re more than just pretty pictures: they help us gain intuition about the structure of networks around us, such as social networks, protein networks, or even money-laundering networks 😉. But how do we know if a visualization really shows the structure well? That’s where quality metrics come in — mathematical functions designed to measure the “goodness” of a network visualization. Many of these metrics correlate nicely with human intuition. Yet, in our community, there has long been a belief — more of a tacit knowledge — that these metrics fail in certain cases.
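To make "quality metric" concrete, here is a tiny illustration of my own (not Simon's code): a stress-style measure that asks how well distances in the drawing preserve graph-theoretic distances.

```python
# My own sketch of a stress-style quality metric, not code from the paper.
import itertools
import networkx as nx
import numpy as np

def stress(G, pos):
    """Average squared mismatch between layout and graph distances (lower is better)."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    errors = []
    for u, v in itertools.combinations(G.nodes, 2):
        layout_dist = np.linalg.norm(np.asarray(pos[u]) - np.asarray(pos[v]))
        graph_dist = d[u][v]
        errors.append((layout_dist - graph_dist) ** 2 / graph_dist ** 2)
    return float(np.mean(errors))

G = nx.karate_club_graph()
pos = nx.spring_layout(G, seed=42)   # a force-directed drawing of the network
print(stress(G, pos))                # in practice layouts are rescaled before comparing
```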
This is exactly where Simon’s work comes in: he set out to make this tacit knowledge explicit. Take a look at the dancing man and the network in the slider — they represent the same network with very similar quality metric values. And yet, the dancing man clearly does not show the network's structure. This tells us something important: we can’t blindly rely on quality metrics.
Simon’s work will be presented at the International Symposium on Graph Drawing and Network Visualization in Norrköping, Sweden this year. 🎉
If you’d like to dive deeper, here’s the link to the GitHub repository: https://lnkd.in/eqw3nYmZ #graphdrawing #networkvisualization #qualitymetrics #research with Simon van Wageningen and Alex Telea
The New Dijkstra’s Algorithm: Shortest Route from Data to Insights (and Action)?
Reforms on the "Shortest Path" Algorithm, Parallels with Modular Data Architectures, and Diving Into Key Components: Product Buckets, Semantic Spine, & Insight Routers
True Cost of Enterprise Knowledge Graph Adoption from PoC to Production | LinkedIn
Enterprise Knowledge Graph costs scale in phases—from a modest $50K–$100K PoC, to a $1M–$3M pilot with infrastructure and dedicated teams, to a $10M–$20M enterprise-wide platform. Reusability reduces costs to ~30% of the original for new domains, with faster delivery and self-sufficiency typically b
Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
I'm extremely excited to announce that Siemens and AIT Austrian Institute of Technology—two leaders in industrial innovation—chose TDengine as the time-series backbone for a groundbreaking project at TCG Unitech GmbH!
Here’s the magic: Imagine stitching together over a thousand time-series signals per machine with domain knowledge, and connecting it all through an intelligent semantic layer. With TDengine capturing high-frequency sensor data, PostgreSQL holding production context, and Ontopic virtualizing everything into a cohesive knowledge graph—this isn’t just data collection. It’s an orchestration that reveals hidden patterns, powers real-time anomaly and defect detection, supports traceability, and enables explainable root-cause analysis.
And none of this works without good semantics. The system understands the relationships—between sensors, machines, processes, and defects—which means both AI and humans can ask the right questions and get meaningful, actionable answers.
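To give a feel for the kind of cross-source question this setup makes easy, here is a purely illustrative toy example of my own; the table and column names are made up and nothing here comes from the actual project, which answers such questions declaratively through the virtual knowledge graph.

```python
# Toy illustration only: joining high-frequency readings with production context.
import pandas as pd

# Time-series readings (in the real setup these live in TDengine)
readings = pd.DataFrame({
    "machine_id": ["M1", "M1", "M2"],
    "ts": pd.to_datetime(["2025-05-01 10:00", "2025-05-01 10:01", "2025-05-01 10:00"]),
    "cavity_pressure": [182.0, 231.5, 178.3],
})

# Production context (in the real setup this lives in PostgreSQL)
orders = pd.DataFrame({
    "machine_id": ["M1", "M2"],
    "work_order": ["WO-17", "WO-18"],
    "alloy": ["AlSi9Cu3", "AlSi10Mg"],
})

# "Which work orders saw pressure spikes?" -- one join instead of two silos
joined = readings.merge(orders, on="machine_id")
print(joined[joined["cavity_pressure"] > 220][["work_order", "alloy", "ts"]])
```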
For me, this is the future of smart manufacturing: when data, infrastructure, and domain expertise come together, you get proactive, explainable, and scalable insights that keep factories running at peak performance.
It's a true pleasure working with Stefan B. from Siemens AG Österreich, Stephan Strommer and David Gruber from AIT, Peter Hopfgartner from Ontopic and our friends Klaus Neubauer, Herbert Kerbl, Bernhard Schmiedinger from TCG on this technical blog! We hope this will bring some good insights into how time-series data and semantics can transform the operations of modern manufacturing!
Read the full case study: https://lnkd.in/gtuf8KzU
GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding
Unlocking LLMs' Graph Reasoning Potential Through Cognitive-Inspired Collaboration
👉 Why This Matters
Large language models often falter when analyzing transportation networks, social connections, or citation graphs—not for lack of intelligence, but because of working memory constraints. Imagine solving a 1,000-node shortest path problem while simultaneously memorizing every connection. Like humans juggling too many thoughts, LLMs lose accuracy as graph complexity increases.
👉 What GraphCogent Solves
This new framework addresses three core limitations:
1. Representation confusion: Mixed graph formats (adjacency lists, symbols, natural language)
2. Memory overload: Context window limitations for large-scale graphs
3. Execution fragility: Error-prone code generation for graph algorithms
Drawing inspiration from human cognition's working memory model, GraphCogent decomposes graph reasoning into specialized processes mirroring how our brains handle complex tasks.
👉 How It Works
Sensory Module
- Acts as an LLM's "eyes," standardizing diverse graph inputs through subgraph sampling
- Converts web links, social connections, or traffic routes into uniform adjacency lists
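Roughly the idea, in a small hedged sketch of my own (not the authors' implementation): heterogeneous edge descriptions get normalized into one adjacency-list format before any reasoning happens.

```python
# Sketch of the Sensory Module idea: normalize mixed inputs into adjacency lists.
from collections import defaultdict

def to_adjacency_list(edges):
    """edges: iterable of (u, v) pairs, however they were originally phrased."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)          # assume undirected for the sketch
    return {node: sorted(nbrs) for node, nbrs in sorted(adj.items())}

# The same kind of graph arriving in different "dialects"
symbolic  = [("A", "B"), ("B", "C")]
from_text = [("Alice", "Bob"), ("Bob", "Carol")]   # e.g. parsed from "Alice follows Bob"
print(to_adjacency_list(symbolic))
print(to_adjacency_list(from_text))
```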
Buffer Module
- Functions as a "mental workspace," integrating graph data across formats (NetworkX/PyG/NumPy)
- Maintains persistent memory beyond standard LLM context limits
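One way to picture the buffer (again my own sketch, not the paper's code): keep one canonical graph in memory, outside the LLM's context window, and expose it in whichever format a downstream step needs.

```python
# Sketch of the Buffer Module idea: one graph, several on-demand views.
import networkx as nx
import numpy as np

class GraphBuffer:
    def __init__(self, adjacency_list):
        self.G = nx.Graph()
        for u, nbrs in adjacency_list.items():
            self.G.add_node(u)
            self.G.add_edges_from((u, v) for v in nbrs)

    def as_networkx(self):
        return self.G                                    # NetworkX object

    def as_numpy(self):
        return nx.to_numpy_array(self.G)                 # dense adjacency matrix

    def as_edge_index(self):
        idx = {n: i for i, n in enumerate(self.G.nodes)}
        # 2 x num_edges array, the layout PyTorch Geometric expects
        return np.array([[idx[u], idx[v]] for u, v in self.G.edges]).T

buf = GraphBuffer({"A": ["B"], "B": ["C"], "C": []})
print(buf.as_numpy().shape, buf.as_edge_index())
```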
Execution Module
- Combines two reasoning modes:
- Tool calling for common tasks (pathfinding, cycle detection)
- Model generation for novel problems using preprocessed data
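The tool-calling half is easy to picture with a minimal sketch (mine, not the authors'): well-known tasks get routed to exact graph algorithms, and only tasks with no matching tool fall back to model-generated code.

```python
# Sketch of the Execution Module idea: deterministic tools first, model code as fallback.
import networkx as nx

TOOLS = {
    "shortest_path": nx.shortest_path,
    "has_cycle": lambda G: len(nx.cycle_basis(G)) > 0,
}

def execute(task, G, **kwargs):
    if task in TOOLS:
        return TOOLS[task](G, **kwargs)       # deterministic tool call
    raise NotImplementedError("fall back to model-generated code here")

G = nx.path_graph(5)
print(execute("shortest_path", G, source=0, target=4))
print(execute("has_cycle", G))
```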
👉 Proven Impact
- Achieves 98.5% accuracy on real-world graphs (social networks, transportation systems) using Llama3.1-8B
- Outperforms 671B parameter models by 50% while using 80% fewer tokens
- Handles graphs 10x larger than previous benchmarks through efficient memory management
The framework's secret sauce? Treating graph reasoning as a team effort rather than a single AI's task—much like how human experts collaborate on complex problems.
Key Question for Discussion
As multi-agent systems become more sophisticated, how might we redesign LLM architectures to better emulate human cognitive processes for specific problem domains?
Paper: "GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding" (Wang et al., 2025)
Why Top-Level Ontologies (TLOs) Matter for ROI | LinkedIn
In enterprise data strategy, most of the attention goes to pipelines, dashboards, and AI models. But all of that sits on top of a foundation that is too often overlooked: the ontology layer.
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help understand relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights.
In Knowledge Graphs and LLMs in Action you will learn how to:
Model knowledge graphs with an iterative top-down approach based on business needs
Create a knowledge graph starting from ontologies, taxonomies, and structured data
Use machine learning algorithms to hone and complete your graphs
Build knowledge graphs from unstructured text data sources
Reason on the knowledge graph and apply machine learning algorithms
Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graphs (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you’ll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You’ll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.
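As a flavor of the "from structured data to knowledge graph" step, here is a minimal sketch of my own, not taken from the book; the namespace, properties, and record are made up.

```python
# My own minimal sketch: turning one structured record into knowledge-graph triples.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical namespace for the example
g = Graph()

record = {"patient_id": "p42", "diagnosis": "hypertension", "physician": "dr_lee"}

patient = EX[record["patient_id"]]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hasDiagnosis, Literal(record["diagnosis"])))
g.add((patient, EX.treatedBy, EX[record["physician"]]))

print(g.serialize(format="turtle"))
```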
Find the best link prediction for your specific graph
🔗 How's your Link Prediction going?
Did you know that the best algorithm for link prediction can vary by network? Even slight differences in your graph data can mean a different approach works better.
Join us for an exclusive talk on August 28th to learn how to find the right link prediction model and, ultimately, get to more complete graph data. Researchers Bisman Singh and Aaron Clauset will share a new (just published!) meta-learning approach that uses a network's own structural features to automatically select the optimal link prediction algorithm!
This is a must-attend event for any data scientist or researcher who wants to eliminate exhaustive benchmarking while getting more accurate predictions.
The code will be made public, so you can put these insights into practice immediately.
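In the meantime, here's a tiny sketch of the underlying point (my own illustration, not the speakers' method): different predictors can rank the same node pairs very differently, so which one "wins" depends on the network.

```python
# My own illustration: compare how different link predictors rank held-out edges.
import networkx as nx

G = nx.karate_club_graph()
held_out = [(0, 2), (32, 33)]            # real edges we pretend are unobserved
G.remove_edges_from(held_out)

predictors = {
    "jaccard": nx.jaccard_coefficient,
    "adamic_adar": nx.adamic_adar_index,
    "resource_allocation": nx.resource_allocation_index,
}

for name, predictor in predictors.items():
    # Rank all non-edges by score and see where the held-out edges land
    ranked = sorted(predictor(G), key=lambda t: t[2], reverse=True)
    ranks = [i for i, (u, v, _) in enumerate(ranked)
             if (u, v) in held_out or (v, u) in held_out]
    print(f"{name:>20}: held-out edges ranked at positions {ranks}")
```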
🤓 Ready to really geek out? Register now: https://lnkd.in/g38EfQ2s
Ever wish you could make an ontology right from your spreadsheet? A lot of my ontology drafting work begins with a spreadsheet: a lexicon, a catalog of important concepts or subject-matter expert t…
Making the Case for Small Knowledge Graphs (and what to do if you have a large graph already)
There has been a shift happening from large monolithic graphs to more tailored graphs, specifically for analytics and AI use cases. Join me in walking throug...
TuringDB is built to make real-time graph analytics effortless:
⚡ 1–50 ms queries on graphs with 10M+ nodes
🛠️ Python SDK • REST API • Web UI - no index tuning
🔄 Zero-lock concurrency - reads never block writes
📜 Git-like versioning & time-travel queries for full auditability & control
Perfect for knowledge graphs, AI agent memory, and large-scale analytics pipelines.
Early access is now open.
Most people talk about AI agents like they’re already reliable. They aren’t.
They follow instructions. They spit out results. But they forget what they did, why it mattered, or how circumstances have changed. There’s no continuity. No memory. No grasp of unfolding context. Today’s agents can respond - but they can’t reflect, reason, or adapt over time.
OpenAI’s new cookbook Temporal Agents with Knowledge Graphs lays out just how limiting that is and offers a credible path forward. It introduces a new class of temporal agents: systems built not around isolated prompts, but around structured, persistent memory.
At the core is a knowledge graph that acts as an evolving world model - not a passive record, but a map of what happened, why it mattered, and what it connects to. This lets agents handle questions like:
“What changed since last week?”
“Why was this decision made?”
“What’s still pending and what’s blocking it?”
It’s an architectural shift that turns time, intent, and interdependence into first-class elements.
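A minimal sketch of that shift, assuming nothing about the cookbook's actual code (the entity names and predicates below are made up): edges carry timestamps, so "what changed since last week?" becomes a filter over the graph rather than a re-read of chat history.

```python
# My own sketch of a temporal knowledge graph: timestamped edges, queried by time.
from datetime import datetime
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("ticket-101", "vendor-contract", predicate="blocked_by",
           recorded=datetime(2025, 8, 20))
G.add_edge("ticket-101", "done", predicate="status",
           recorded=datetime(2025, 8, 27))

def changed_since(G, since):
    """Return (subject, predicate, object) facts recorded on or after `since`."""
    return [(u, d["predicate"], v) for u, v, d in G.edges(data=True)
            if d["recorded"] >= since]

print(changed_since(G, datetime(2025, 8, 25)))   # "what changed since last week?"
```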
This mirrors Tony Seale’s argument about enterprise data: most data products don’t fail because of missing pipelines - they fail because they don’t align with how the business actually thinks. Data lives in tables and schemas. Business lives in concepts like churn, margin erosion, customer health, or risk exposure.
Tony’s answer is a business ontology: a formal, machine-readable layer that defines the language of the business and anchors data products to it. It’s a shift from structure to semantics - from warehouse to shared understanding.
That’s the same shift OpenAI is proposing for agents.
In both cases, what’s missing isn’t infrastructure. It’s interpretation.
The challenge isn’t access. It’s alignment.
If we want agents that behave reliably in real-world settings, it’s not enough to fine-tune them on PDFs or dump Slack threads into context windows. They need to be wired into shared ontologies - concept-level scaffolding like:
Who are our customers?
What defines success?
What risks are emerging, and how are they evolving?
The temporal knowledge graph becomes more than just memory. It becomes an interface - a structured bridge between reasoning and meaning.
This goes far beyond another agent orchestration blueprint. It points to something deeper: Without time and meaning, there is no true delegation.
We don’t need agents that mimic tasks.
We need agents that internalise context and navigate change.
That means building systems that don’t just handle data, but understand how it fits into the changing world we care about.
OpenAI’s temporal memory graphs and Tony’s business ontologies aren’t separate ideas. They’re converging on the same missing layer:
AI that reasons in the language of time and meaning.
H/T Vin Vashishta for the pointer to the OpenAI cookbook, and image nicked from Tony (as usual).
Looking to improve the performance of Cypher queries or learn how to model graphs to support business use cases? A graph database like Neo4j can help. In fact, many enterprises are... - Selection from Neo4j: The Definitive Guide [Book]