GraphNews

4409 bookmarks
True Cost of Enterprise Knowledge Graph Adoption from PoC to Production | LinkedIn
Enterprise Knowledge Graph costs scale in phases—from a modest $50K–$100K PoC, to a $1M–$3M pilot with infrastructure and dedicated teams, to a $10M–$20M enterprise-wide platform. Reusability reduces costs to ~30% of the original for new domains, with faster delivery and self-sufficiency.
·linkedin.com·
Enabling Industrial AI: How Siemens and AIT Leverage TDengine and Ontop to Help TCG UNITECH Boost Productivity and Efficiency
I'm extremely excited to announce that Siemens and AIT Austrian Institute of Technology—two leaders in industrial innovation—chose TDengine as the time-series backbone for a groundbreaking project at TCG Unitech GmbH!

Here's the magic: imagine stitching together over a thousand time-series signals per machine with domain knowledge, and connecting it all through an intelligent semantic layer. With TDengine capturing high-frequency sensor data, PostgreSQL holding production context, and Ontopic virtualizing everything into a cohesive knowledge graph—this isn't just data collection. It's an orchestration that reveals hidden patterns, powers real-time anomaly and defect detection, supports traceability, and enables explainable root-cause analysis.

And none of this works without good semantics. The system understands the relationships—between sensors, machines, processes, and defects—which means both AI and humans can ask the right questions and get meaningful, actionable answers.

For me, this is the future of smart manufacturing: when data, infrastructure, and domain expertise come together, you get proactive, explainable, and scalable insights that keep factories running at peak performance.

It's a true pleasure working with Stefan B. from Siemens AG Österreich, Stephan Strommer and David Gruber from AIT, Peter Hopfgartner from Ontopic, and our friends Klaus Neubauer, Herbert Kerbl, and Bernhard Schmiedinger from TCG on this technical blog! We hope this will bring some good insights into how time-series data and semantics can transform the operations of modern manufacturing.

Read the full case study: https://lnkd.in/gtuf8KzU
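The virtualized join the post describes can be caricatured in a few lines of Python: high-frequency readings in one store, production context in another, linked through shared machine identifiers so an anomaly can be traced to a batch. All table names, columns, and values below are invented stand-ins; the real system uses TDengine, PostgreSQL, and Ontopic's semantic mapping layer.

```python
# Toy stand-in for the semantic join: sensor readings (TDengine's role)
# linked to production context (PostgreSQL's role) via machine IDs.
sensor_readings = [  # (machine_id, timestamp, pressure_bar) - invented data
    ("M1", "2025-08-01T10:00", 101.0),
    ("M1", "2025-08-01T10:01", 180.5),  # out-of-range spike
    ("M2", "2025-08-01T10:00", 99.8),
]
production_context = {  # machine_id -> current batch/process - invented data
    "M1": {"batch": "B-778", "process": "die-casting"},
    "M2": {"batch": "B-779", "process": "die-casting"},
}

def trace_anomalies(readings, context, threshold=150.0):
    """Root-cause-style lookup: link out-of-range readings to their batch."""
    return [{"machine": m, "time": t, "value": v, **context[m]}
            for m, t, v in readings if v > threshold]

print(trace_anomalies(sensor_readings, production_context))
```

In the real deployment this join is expressed declaratively in the ontology rather than hand-coded, which is exactly what makes the root-cause answers explainable.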
·linkedin.com·
GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding
Unlocking LLMs' Graph Reasoning Potential Through Cognitive-Inspired Collaboration

👉 Why This Matters
Large language models often falter when analyzing transportation networks, social connections, or citation graphs—not due to lacking intelligence, but because of working memory constraints. Imagine solving a 1,000-node shortest path problem while simultaneously memorizing every connection. Like humans juggling too many thoughts, LLMs lose accuracy as graph complexity increases.

👉 What GraphCogent Solves
This new framework addresses three core limitations:
1. Representation confusion: Mixed graph formats (adjacency lists, symbols, natural language)
2. Memory overload: Context window limitations for large-scale graphs
3. Execution fragility: Error-prone code generation for graph algorithms
Drawing inspiration from human cognition's working memory model, GraphCogent decomposes graph reasoning into specialized processes mirroring how our brains handle complex tasks.

👉 How It Works
Sensory Module
- Acts as an LLM's "eyes," standardizing diverse graph inputs through subgraph sampling
- Converts web links, social connections, or traffic routes into uniform adjacency lists
Buffer Module
- Functions as a "mental workspace," integrating graph data across formats (NetworkX/PyG/NumPy)
- Maintains persistent memory beyond standard LLM context limits
Execution Module
- Combines two reasoning modes:
  - Tool calling for common tasks (pathfinding, cycle detection)
  - Model generation for novel problems using preprocessed data

👉 Proven Impact
- Achieves 98.5% accuracy on real-world graphs (social networks, transportation systems) using Llama3.1-8B
- Outperforms 671B-parameter models by 50% while using 80% fewer tokens
- Handles graphs 10x larger than previous benchmarks through efficient memory management

The framework's secret sauce? Treating graph reasoning as a team effort rather than a single AI's task—much like how human experts collaborate on complex problems.
Key Question for Discussion As multi-agent systems become more sophisticated, how might we redesign LLM architectures to better emulate human cognitive processes for specific problem domains? Paper: "GraphCogent: Overcoming LLMs’ Working Memory Constraints via Multi-Agent Collaboration in Complex Graph Understanding" (Wang et al., 2025)
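The Sensory Module's input-standardization step can be sketched in plain Python. The function below is an illustrative assumption of the idea only, not the paper's code, and it skips the subgraph-sampling part:

```python
def to_adjacency_list(graph_input):
    """Normalize common graph encodings into one adjacency-list dict.

    Accepts an edge list [(u, v), ...], an adjacency dict {u: [v, ...]},
    or arrow-style edge strings like "A -> B". (Hypothetical sketch of
    the Sensory Module's standardization idea.)
    """
    adj = {}

    def add_edge(u, v):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, [])

    if isinstance(graph_input, dict):
        for u, vs in graph_input.items():
            adj.setdefault(u, [])
            for v in vs:
                add_edge(u, v)
    elif isinstance(graph_input, list):
        for item in graph_input:
            if isinstance(item, str):   # e.g. "A -> B"
                u, v = [s.strip() for s in item.split("->")]
            else:                       # e.g. ("A", "B")
                u, v = item
            add_edge(u, v)
    else:
        raise TypeError("unsupported graph encoding")
    return adj

# Different encodings normalize to the same structure:
print(to_adjacency_list([("A", "B"), ("B", "C")]))
print(to_adjacency_list(["A -> B", "B -> C"]))
```

The point of this normalization is that every downstream module then reasons over one canonical representation instead of three.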
·linkedin.com·
Knowledge Graphs and LLMs in Action - Alessandro Negro with Vlastimil Kus, Giuseppe Futia and Fabio Montagna
Knowledge graphs help understand relationships between the objects, events, situations, and concepts in your data so you can readily identify important patterns and make better decisions. This book provides tools and techniques for efficiently labeling data, modeling a knowledge graph, and using it to derive useful insights.

In Knowledge Graphs and LLMs in Action you will learn how to:
- Model knowledge graphs with an iterative top-down approach based on business needs
- Create a knowledge graph starting from ontologies, taxonomies, and structured data
- Use machine learning algorithms to hone and complete your graphs
- Build knowledge graphs from unstructured text data sources
- Reason on the knowledge graph and apply machine learning algorithms

Move beyond analyzing data and start making decisions based on useful, contextual knowledge. The cutting-edge knowledge graph (KG) approach puts that power in your hands. In Knowledge Graphs and LLMs in Action, you'll discover the theory of knowledge graphs and learn how to build services that can demonstrate intelligent behavior. You'll learn to create KGs from first principles and go hands-on to develop advisor applications for real-world domains like healthcare and finance.
·manning.com·
Find the best link prediction for your specific graph
🔗 How's your Link Prediction going? Did you know that the best algorithm for link prediction can vary by network? Slight differences in your graph data can mean a different approach works best. Join us for an exclusive talk on August 28th to learn how to find the right link prediction model and, ultimately, get to more complete graph data. Researchers Bisman Singh and Aaron Clauset will share a new (just published!) meta-learning approach that uses a network's own structural features to automatically select the optimal link prediction algorithm. This is a must-attend event for any data scientist or researcher who wants to eliminate exhaustive benchmarking while getting more accurate predictions. The code will be made public, so you can put these insights into practice immediately. 🤓 Ready to really geek out? Register now: https://lnkd.in/g38EfQ2s
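The core idea, picking a predictor from the network's own structural features, can be sketched minimally. The feature (edge density), the threshold, and the two heuristics below are invented stand-ins for illustration, not Singh and Clauset's actual meta-learner:

```python
from itertools import combinations

def common_neighbors_score(adj, u, v):
    """Score a candidate link by shared neighbors."""
    return len(adj[u] & adj[v])

def jaccard_score(adj, u, v):
    """Degree-normalized variant of the same idea."""
    union = adj[u] | adj[v]
    return len(adj[u] & adj[v]) / len(union) if union else 0.0

def pick_predictor(adj):
    """Toy 'meta-learner': choose a heuristic from one structural
    feature of the network (here, just edge density)."""
    n = len(adj)
    m = sum(len(vs) for vs in adj.values()) / 2
    density = 2 * m / (n * (n - 1)) if n > 1 else 0.0
    # Illustrative rule: dense graphs -> raw common neighbors,
    # sparse graphs -> Jaccard. The real method learns this choice.
    return common_neighbors_score if density > 0.3 else jaccard_score

# Undirected toy graph stored as neighbor sets
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
predictor = pick_predictor(adj)
scores = {(u, v): predictor(adj, u, v)
          for u, v in combinations(adj, 2) if v not in adj[u]}
print(scores)
```

The actual paper learns the feature-to-algorithm mapping from many networks rather than hardcoding a single threshold, which is what eliminates the per-network benchmarking.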
·linkedin.com·
TuringDB release

TuringDB is built to make real-time graph analytics effortless:
⚡ 1–50 ms queries on graphs with 10M+ nodes
🛠️ Python SDK • REST API • Web UI - no index tuning
🔄 Zero-lock concurrency - reads never block writes
📜 Git-like versioning & time-travel queries for full auditability & control

Perfect for knowledge graphs, AI agent memory, and large-scale analytics pipelines. Early access is now open.

·linkedin.com·
Most people talk about AI agents like they’re already reliable. They aren’t.
Most people talk about AI agents like they're already reliable. They aren't. They follow instructions. They spit out results. But they forget what they did, why it mattered, or how circumstances have changed. There's no continuity. No memory. No grasp of unfolding context. Today's agents can respond - but they can't reflect, reason, or adapt over time.

OpenAI's new cookbook Temporal Agents with Knowledge Graphs lays out just how limiting that is and offers a credible path forward. It introduces a new class of temporal agents: systems built not around isolated prompts, but around structured, persistent memory. At the core is a knowledge graph that acts as an evolving world model - not a passive record, but a map of what happened, why it mattered, and what it connects to. This lets agents handle questions like: "What changed since last week?" "Why was this decision made?" "What's still pending, and what's blocking it?" It's an architectural shift that turns time, intent, and interdependence into first-class elements.

This mirrors Tony Seale's argument about enterprise data: most data products don't fail because of missing pipelines - they fail because they don't align with how the business actually thinks. Data lives in tables and schemas. Business lives in concepts like churn, margin erosion, customer health, or risk exposure. Tony's answer is a business ontology: a formal, machine-readable layer that defines the language of the business and anchors data products to it. It's a shift from structure to semantics - from warehouse to shared understanding.

That's the same shift OpenAI is proposing for agents. In both cases, what's missing isn't infrastructure. It's interpretation. The challenge isn't access. It's alignment. If we want agents that behave reliably in real-world settings, it's not enough to fine-tune them on PDFs or dump Slack threads into context windows. They need to be wired into shared ontologies - concept-level scaffolding like: Who are our customers? What defines success? What risks are emerging, and how are they evolving?

The temporal knowledge graph becomes more than just memory. It becomes an interface - a structured bridge between reasoning and meaning. This goes far beyond another agent orchestration blueprint. It points to something deeper: without time and meaning, there is no true delegation. We don't need agents that mimic tasks. We need agents that internalise context and navigate change. That means building systems that don't just handle data, but understand how it fits into the changing world we care about.

OpenAI's temporal memory graphs and Tony's business ontologies aren't separate ideas. They're converging on the same missing layer: AI that reasons in the language of time and meaning.

H/T Vin Vashishta for the pointer to the OpenAI cookbook, and image nicked from Tony (as usual).
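What "temporal" buys you can be sketched with a minimal interval-stamped fact model. This is a hypothetical mini-schema to illustrate the "what changed since last week?" query, not the OpenAI cookbook's actual design:

```python
from datetime import date

# Each fact carries a valid-from/valid-to interval (None = still valid),
# so the graph can be queried as of any day. Invented example data.
facts = [
    ("ticket-42", "status", "open",         date(2025, 8, 1),  date(2025, 8, 10)),
    ("ticket-42", "status", "blocked",      date(2025, 8, 10), None),
    ("ticket-42", "blocked_by", "ticket-7", date(2025, 8, 10), None),
]

def as_of(facts, day):
    """Snapshot of the graph on a given day."""
    return {(s, p): o for s, p, o, start, end in facts
            if start <= day and (end is None or day < end)}

def changed_between(facts, t0, t1):
    """Diff two snapshots: the 'what changed since last week?' query."""
    before, after = as_of(facts, t0), as_of(facts, t1)
    return {k: (before.get(k), after.get(k))
            for k in before.keys() | after.keys()
            if before.get(k) != after.get(k)}

print(changed_between(facts, date(2025, 8, 5), date(2025, 8, 12)))
```

Even this toy version shows the shift: the agent no longer re-derives state from a transcript; it queries a world model that keeps time as a first-class dimension.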
·linkedin.com·
Neo4j: The Definitive Guide
Looking to improve the performance of Cypher queries or learn how to model graphs to support business use cases? A graph database like Neo4j can help. In fact, many enterprises are...
·oreilly.com·
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement.
Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement. That post struck a chord. With GPT-5 now here, it's the right moment to revisit the idea.

Back then, GPT-3.5 and GPT-4 could draft ontology structures, but there were limits in context, reasoning, and abstraction. With GPT-5 (and other frontier models), that's changing:
🔹 Larger context windows let entire ontologies sit in working memory at once.
🔹 Test-time compute enables better abstraction of concepts.
🔹 Multimodal input can turn diagrams, tables, and videos into structured ontology scaffolds.
🔹 Tool use allows ontologies to be validated, aligned, and extended in one flow.

But some fundamentals remain. GPT-5 is still curve-fitting to a training set - and that brings limits:
🔹 The flipside of flexibility is hallucination. OpenAI has reduced it, but GPT-5 still scores 0.55 on SimpleQA, with a 5% hallucination rate on its own public-question dataset.
🔹 The model is bound by the landscape of its training data. That landscape is vast, but it excludes your private, proprietary data - and increasingly, an organisation's edge will track directly to the data it owns outside that distribution.

Fortunately, the benefits flow both ways. LLMs can help build ontologies, but ontologies and knowledge graphs can also help improve LLMs. The two systems can work in tandem. Ontologies bring structure, consistency, and domain-specific context. LLMs bring adaptability, speed, and pattern recognition that ontologies can't achieve in isolation. Each offsets the other's weaknesses - and together they make both stronger.

The feedback loop is no longer theory - we've been proving it: Better LLM → Better Ontology → Better LLM - in your domain. There is a lot of hype around AI. GPT-5 is good, but not ground-breaking. Still, the progress over two years is remarkable.

For the foreseeable future, we are living in a world where models keep improving - but where we must pair classic formal symbolic systems with these new probabilistic models. For organisations, the challenge is to match growing model power with equally strong growth in the power of their proprietary symbolic formalisation. Not all formalisations are equal. We want fewer brittle IF statements buried in application code, and more rich, flexible abstractions embedded in the data itself. That's what ontologies and knowledge graphs promise to deliver.

Two years ago, this was a hopeful idea. Today, it's looking less like a nice-to-have… and more like the only sensible way forward for organisations.

⭕ Neural-Symbolic Loop: https://lnkd.in/eJ7S22hF
🔗 Turn your data into a competitive edge: https://lnkd.in/eDd-5hpV
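The "brittle IF statements vs. abstractions embedded in the data" contrast can be made concrete with a toy rule engine; the rule content and names below are invented for illustration:

```python
# Brittle: business logic buried in application code. Changing the rule
# means changing (and redeploying) the program.
def is_high_risk_hardcoded(customer):
    return customer["country"] == "XX" and customer["exposure"] > 1_000_000

# Flexible: the same rule expressed as data, the kind of declarative
# definition an ontology or knowledge graph could hold, query, and audit.
RULES = [
    {"class": "HighRiskCustomer",
     "conditions": [("country", "eq", "XX"), ("exposure", "gt", 1_000_000)]},
]

OPS = {"eq": lambda a, b: a == b, "gt": lambda a, b: a > b}

def classify(customer, rules):
    """Apply every declarative rule whose conditions all hold."""
    return [r["class"] for r in rules
            if all(OPS[op](customer[attr], val)
                   for attr, op, val in r["conditions"])]

customer = {"country": "XX", "exposure": 2_500_000}
print(classify(customer, RULES))
```

Both paths give the same answer today, but only the second lets the definition of "high risk" live alongside the data, where an LLM (or a human) can inspect and extend it.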
·linkedin.com·
Palantir hit $175/share because they understand what 99% of AI companies don't: ontologies
palantir hit $175/share because they understand what 99% of AI companies don't: ontologies.

in 2021, the word "ontology" appeared 0 times in their earnings calls. by Q3 2024? 9 times. their US commercial revenue is growing 153% YoY. why? because LLMs are becoming the commodity, while ontologies are becoming the moat. let me explain why most enterprise AI initiatives are failing without one.

every enterprise has the same problem:
❗️ 47 different systems
❗️ 19 definitions of "customer"
❗️ 34 versions of "product"
❗️ business logic scattered across 100+ applications

you throw AI at something like this? it hallucinates. but if you build an ontology first? it gains the context and data depth to be able to reason. palantir figured this out years ago.

but here's what palantir doesn't do: verticalize at scale. they're brilliant at defense, government, contracting. but specialized industries need specialized ontologies. take telecommunications. a telco's "customer" isn't just a record - it's:
➕ a subscriber with multiple services
➕ a hierarchy of accounts and sub-accounts
➕ real-time network states
➕ billing cycles across geographies
➕ regulatory compliance per jurisdiction

orgs have tried to standardize this before. but standards aren't ontologies. they're just vocabularies. this is why Totogi has spent so much time and effort building their telco-specific ontology layer. while palantir was perfecting horizontal enterprise ontologies, we went deep on telecom's unique semantic complexity.

now telcos can deploy AI that takes one action - 'activate new customer' - and correctly translates it across systems that call it 'create subscriber' (BSS), 'provision user' (network), 'establish account' (billing), and 'initialize profile' (CRM). no more manual steps, no more dropped handoffs between systems.

palantir proved the model. but they can't be everywhere. the future belongs to industry-specific semantic platforms like Totogi's BSS Magic 🚀
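The 'activate new customer' translation described in the post can be sketched as a small ontology-style mapping table. The four system verbs come from the post itself; everything else here is an invented illustration, not Totogi's actual API:

```python
# Hypothetical ontology layer: one canonical action, mapped onto the
# vocabulary each downstream system expects.
ACTION_ONTOLOGY = {
    "activate_customer": {
        "bss":     "create subscriber",
        "network": "provision user",
        "billing": "establish account",
        "crm":     "initialize profile",
    }
}

def translate(action, customer_id):
    """Fan one canonical action out into per-system operations."""
    ops = ACTION_ONTOLOGY[action]
    return [f"{system}: {verb} ({customer_id})" for system, verb in ops.items()]

for step in translate("activate_customer", "cust-001"):
    print(step)
```

The real value of the ontology is that this mapping is semantic rather than a lookup table: it can also carry the account hierarchies, network states, and jurisdictional rules the post lists, but the dispatch shape is the same.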
·linkedin.com·
Hydra is a unique functional programming language based on the LambdaGraph data model.
In case you were wondering what I have been up to lately, Hydra is a large part of it. This is the open source graph programming language I alluded to last year at the Knowledge Graph Conference. Hydra is almost ready for its 1.0 release, and I am planning on making it into a community project, possibly through the Apache Incubator. In this initial demo video, we take an arbitrary tabular dataset and use Hydra + Claude to map it into a property graph. More specifically, we use the LLM once to construct a pair of schemas and a mapping. From there, we apply the mapping deterministically and efficiently to each row of data, without additional calls to the LLM. The recording was a little too long for LinkedIn, so I broke it into two parts. I will post part 2 momentarily (edit: part 2 is here: https://lnkd.in/gZmHicXu). More videos will follow as we get closer to the release. GitHub: https://lnkd.in/g8v2hvd5 Discord: https://bit.ly/lg-discord
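The build-once, apply-deterministically pattern in the demo can be sketched roughly as follows. The mapping spec here is a hypothetical stand-in for what the LLM would emit, not Hydra's actual mapping language:

```python
# Stand-in for an LLM-generated mapping spec: which columns become a
# node's key and properties, and which column becomes an edge target.
# (Invented format; Hydra's real schema/mapping artifacts differ.)
mapping = {
    "node": {"label": "Employee", "key": "emp_id", "props": ["name"]},
    "edge": {"type": "WORKS_IN", "target_label": "Dept", "target_key": "dept"},
}

def apply_mapping(mapping, row):
    """Deterministically map one tabular row to property-graph elements,
    with no LLM call per row."""
    n, e = mapping["node"], mapping["edge"]
    node = {"label": n["label"], "id": row[n["key"]],
            **{p: row[p] for p in n["props"]}}
    target = {"label": e["target_label"], "id": row[e["target_key"]]}
    edge = {"type": e["type"], "from": node["id"], "to": target["id"]}
    return node, target, edge

rows = [{"emp_id": "e1", "name": "Ada", "dept": "R&D"}]
for row in rows:
    print(apply_mapping(mapping, row))
```

The key design choice the demo highlights survives even in this caricature: the LLM is consulted once to produce the schemas and mapping, and the per-row transformation is pure, cheap, and repeatable.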
·linkedin.com·
Semantic Data in Medallion Architecture: Enterprise Knowledge Graphs at Scale | LinkedIn
Building Enterprise Knowledge Graphs Within Modern Data Platforms - Version 26
Louie Franco III, Enterprise Architect - Knowledge Graph Architect - Semantics Architect
August 3, 2025
In my previous article on Data Vault Medallion Architecture, I outlined how structured data flows through Landing, Bronze…
·linkedin.com·
A gentle introduction to DSPy for graph data enrichment | Kuzu

📢 Check out our latest blog post by Prashanth Rao, where we introduce the DSPy framework to help you build composable pipelines with LLMs and graphs. In the post, we dive into a fascinating dataset of Nobel laureates and their mentorship networks for a data enrichment task. 👇🏽

✅ The source data that contains the tree structures is enriched with data from the official Nobel Prize API.

✅ We showcase a 2-step methodology that combines the benefits of Kuzu's vector search capabilities with DSPy's powerful primitives to build an LLM-as-a-judge pipeline that helps disambiguate entities in the data.

✅ The DSPy approach is scalable, low-cost and efficient, and is flexible enough to apply to a wide variety of domains and use cases.
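The two-step shape of that pipeline (retrieve candidates, then judge matches) can be sketched with both steps stubbed out by a string-similarity heuristic. This is a rough illustration of the flow only, not the blog post's DSPy/Kuzu code:

```python
import difflib

def vector_search(name, candidates, k=2):
    """Step 1 stand-in: retrieve the top-k candidate entities. The post
    uses Kuzu's vector index; here, plain string similarity."""
    ranked = sorted(candidates,
                    key=lambda c: difflib.SequenceMatcher(None, name, c).ratio(),
                    reverse=True)
    return ranked[:k]

def llm_judge(name, candidate):
    """Step 2 stand-in for the DSPy LLM-as-a-judge module: decide whether
    two mentions refer to the same person."""
    return difflib.SequenceMatcher(None, name, candidate).ratio() > 0.8

def disambiguate(name, candidates):
    """Accept the first retrieved candidate the judge approves."""
    for c in vector_search(name, candidates):
        if llm_judge(name, c):
            return c
    return None

laureates = ["Marie Curie", "Pierre Curie", "Maria Goeppert Mayer"]
print(disambiguate("Marie Curiee", laureates))
```

The split matters for cost: retrieval narrows thousands of entities to a handful, so the (expensive) judge, an actual LLM call in the real pipeline, only runs on a few pairs.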

·blog.kuzudb.com·