GraphNews

4794 bookmarks
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
KG-R1: Why Knowledge Graph RAG Systems Are Too Expensive to Deploy (And How One Team Fixed It)

What if I told you that most knowledge graph systems require multiple AI models just to answer a single question? That's exactly the problem plaguing current KG-RAG deployments.

👉 The Cost Problem
Traditional knowledge graph retrieval systems use a pipeline approach: one model for planning, another for reasoning, a third for reviewing, and a fourth for responding. Each step burns through tokens and compute, making deployment prohibitively expensive for most organizations. Even worse: these systems are built for specific knowledge graphs. Change your data source, and you need to retrain everything.

👉 A Single-Agent Solution
Researchers from MIT and IBM just published KG-R1, which replaces this entire multi-model pipeline with one lightweight agent that learns through reinforcement learning. Here's the clever part: instead of hardcoding domain-specific logic, the system uses four simple, universal operations:
- get the relations attached to an entity
- get the entities attached to a relation
- navigate forward or backward through connections
These operations work on any knowledge graph without modification (see the sketch below).

👉 The Results Are Striking
Using just a 3B-parameter model, KG-R1:
- matches the accuracy of much larger foundation models
- uses 60% fewer tokens per query than existing methods
- transfers across different knowledge graphs without retraining
- processes queries in under 7 seconds on a single GPU

The system learned to retrieve information strategically through multi-turn interactions, optimized end-to-end rather than stage-by-stage. This matters because knowledge graphs hold some of our most valuable structured data, from scientific databases to legal documents. Making them accessible and affordable could unlock entirely new applications.
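A minimal sketch of what such schema-agnostic retrieval primitives can look like, with the knowledge graph held as plain (head, relation, tail) triples. All names here are illustrative, not the paper's actual interface:

```python
# Schema-agnostic KG retrieval primitives in the spirit of KG-R1.
# The KG is just a set of (head, relation, tail) triples.
from collections import defaultdict

class TripleStore:
    def __init__(self, triples):
        self.by_head = defaultdict(set)   # entity -> relations leaving it
        self.by_tail = defaultdict(set)   # entity -> relations entering it
        self.forward = defaultdict(set)   # (head, relation) -> tails
        self.backward = defaultdict(set)  # (tail, relation) -> heads
        for h, r, t in triples:
            self.by_head[h].add(r)
            self.by_tail[t].add(r)
            self.forward[(h, r)].add(t)
            self.backward[(t, r)].add(h)

    def relations_of(self, entity):
        # all relations an entity participates in, either direction
        return self.by_head[entity] | self.by_tail[entity]

    def entities_forward(self, entity, rel):   # navigate head -> tail
        return self.forward[(entity, rel)]

    def entities_backward(self, entity, rel):  # navigate tail -> head
        return self.backward[(entity, rel)]

kg = TripleStore([("London", "isCapitalOf", "United Kingdom"),
                  ("United Kingdom", "memberOf", "G7")])
print(kg.relations_of("United Kingdom"))                      # {'isCapitalOf', 'memberOf'}
print(kg.entities_backward("United Kingdom", "isCapitalOf"))  # {'London'}
```

Because the agent only ever sees these four calls, nothing in its policy is tied to one graph's schema, which is what makes transfer across knowledge graphs plausible.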
https://arxiv.org/abs/2509.26383v1 Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
·linkedin.com·
GraphLand: Evaluating Graph Machine Learning Models on Diverse...

Recently, there has been a lot of criticism of existing popular graph ML benchmark datasets: lack of practical relevance, low structural diversity that leaves most of the possible graph-structure space unrepresented, low application-domain diversity, graph structure that is not actually beneficial for the considered tasks, and potential bugs in the data-collection processes. Some of these criticisms previously appeared on this channel.

To provide the community with better benchmarks, we present GraphLand: a collection of 14 graph datasets for node property prediction coming from diverse real-world industrial applications of graph ML. What makes this benchmark stand out?

Diverse application domains: social networks, web graphs, road networks, and more. Importantly, half of the datasets feature node-level regression tasks that are currently underrepresented in graph ML benchmarks, but are often encountered in real-world applications.

Range of sizes: from thousands to millions of nodes, providing opportunities for researchers with different computational resources.

Rich node attributes that contain numerical and categorical features — these are more typical for industrial applications than textual descriptions that are standard for current benchmarks.

Different learning scenarios. For all datasets, we provide two random data splits, with low and high label rates. Further, many of our networks evolve over time; for these we additionally provide more challenging temporal data splits, plus the opportunity to evaluate models in the inductive setting, where only an early snapshot of the evolving network is available at train time.

We evaluated a range of models on our datasets and found that, while GNNs achieve strong performance on industrial datasets, they can sometimes be rivaled by gradient-boosted decision trees (popular in industry) when those are provided with additional graph-based input features (a minimal sketch of this kind of baseline follows below).
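To picture that baseline: a hedged sketch of "GBDT plus graph-derived features" on a toy graph. The feature set (degree, mean of neighbor features) and the model choice are illustrative, not GraphLand's exact recipe:

```python
# GBDT baseline with cheap structural features added to raw node features.
import numpy as np
import networkx as nx
from sklearn.ensemble import GradientBoostingRegressor

g = nx.karate_club_graph()
rng = np.random.RandomState(0)
X = rng.rand(g.number_of_nodes(), 8)          # toy raw node features
y = X[:, 0] + 0.5 * rng.rand(len(X))          # toy node-level regression target

deg = np.array([g.degree(v) for v in g.nodes()], dtype=float)
nbr_mean = np.stack([X[list(g.neighbors(v))].mean(axis=0) for v in g.nodes()])
X_aug = np.hstack([X, deg[:, None], nbr_mean])  # node features + graph features

model = GradientBoostingRegressor().fit(X_aug[:25], y[:25])
print("holdout R^2:", model.score(X_aug[25:], y[25:]))
```

The point is architectural: the "graph" part is pushed into feature engineering, after which any tabular learner applies.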

Further, we evaluated several graph foundation models (GFMs). Despite much attention being paid to GFMs recently, we found that there are currently only a few GFMs that can handle arbitrary node features (which is required for true generalization between different graphs) and that these GFMs produce very weak results on our benchmark. So it seemed like the problem of developing general-purpose graph foundation models was far from being solved, which motivated our research in this direction (see the next post).

·arxiv.org·
Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing
Knowledge graphs offer many advantages in the fields of materials science and manufacturing technology. But how can we explore knowledge graphs in a meaningful way? The current article, “Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing”, shows what such a solution could look like: https://lnkd.in/ehiK5php. Special thanks to Matthias BĂŒschelberger as main author, and many thanks to all co-authors Konstantinos Tsitseklis, Lukas Morand, Anastasios Zafeiropoulos, Yoav Nahshon, Symeon Papavassiliou, and Dirk Helm for the great collaboration as part of our wonderful DiMAT project. Would you like to see such a chatbot acting on a knowledge graph in action? Take a look at the video. #datamanagement #FAIRData #dataspace #ontology #knowledgegraph #AI #materials #sustainability #digitalisation #InsideMaterial Fraunhofer IWM, National Technical University of Athens
·linkedin.com·
Kuzu is no more
Kuzu is no more. The project was archived last night with one last major release. The communication has not been very clear, but I can bet Semih Salihoğlu is under a lot of pressure, and I look forward to hearing the full story someday.

We liked the product, and we will fork it and continue supporting it as a way for our users to run local memory workloads on their machines. We won't support it in production anymore, though, since we are not database developers and don't plan to be. You can only get so far without growing a mighty Unix beard. Instead, we'll be going with Neo4j for larger loads and our partner Qdrant for embeddings, and we'll extend our FalkorDB and Postgres support.

It does feel a bit strange when your default DB disappears overnight. That is why cognee is database agnostic: all features that were Kuzu-specific will be migrated in about 2 weeks. This time we were just too fast for our own good.
·linkedin.com·
Unlock GPU Power with GFQL
Rough news on #kuzu being archived. Startups are hard, and Semih Salihoğlu + Prashanth Rao did so much in ways I value, along the same architectural principles we've been quietly tackling in GFQL.

For those left in the lurch for an embeddable compute-tier solution to graphs, #GFQL should be pretty fascinating yet also familiar (ex: Apache Arrow-native graph queries for modern OSS ecosystems), and hopefully less stressful thanks to a sustainable governance model. Likewise, as an OSS deep-tech community, we add interesting new bits like the optional record-breaking GPU mode with NVIDIA #RAPIDSAI.

If you're new to it and seeing this: #GFQL, the graph dataframe-native query language, is increasingly how Graphistry, Inc. and our community work with graphs at the compute tier. Whether the data comes from a tabular ETL pipeline, a file, SQL, NoSQL, or a graph storage DB, GFQL makes it easy to do on-the-fly graph transforms and queries at the compute tier at sub-second speeds, for graphs anywhere from 100 edges to 1,000,000,000. Currently, we support Arrow/pandas and Arrow/#nvidia #RAPIDS as the main engine modes (toy example below).

While we're not marketing it much yet, GFQL is already used daily by every single Graphistry user behind the scenes, and directly by analysts & developers at banks, startups, etc. around the world. We built it because we needed an OSS compute-tier graph solution for working with modern data systems that separate storage from compute. Likewise, data is a team sport, so it is used by folks on teams who have to rapidly wrangle graphs, whether for analysis, data science, ETL, visualization, or AI. Imagine an ETL pipeline, notebook flow, or web app where data comes from files, Elasticsearch, Databricks, and Neo4j, and you need to do more on-the-fly graph stuff with it.

We started building what became GFQL *before* Kuzu because it solves real architectural & graph-productivity problems that have been challenging our team, our users, and the broader graph community for years now. Likewise, by going dataframe-native & GPU-mode from day 1, it's now a large part of how we approach GPU graph deep-tech investments throughout our stack, and it's a sustainably funded system. We are looking at bigger R&D and commercial support contracts with organizations needing to do subsecond billion+-scale with us so we can build even more, faster (hit me up if that's you!), but overall, most of our users are just like ourselves, and the day-to-day is wanting an easy OSS way to wrangle graphs in our apps & notebooks. As we continue to smooth it out (ex: we'll be adding a familiar Cypher syntax), we'll be writing about it a lot more. 4 links below: ReadTheDocs, pip install, SOTA GPU benchmarks, and the original aha moment. Russell Jurney Ben Lorica çœ—ç‘žćĄ Taurean Dyer Bradley Rees
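A quick taste of what a GFQL query looks like, assuming the current pygraphistry API (the toy schema is made up): load edges into a pandas dataframe, then match a path pattern at the compute tier:

```python
# GFQL pattern matching over plain pandas dataframes (pip install graphistry).
import pandas as pd
import graphistry
from graphistry import n, e_forward

edges = pd.DataFrame({
    "src":  ["acct1", "acct2", "acct3"],
    "dst":  ["acct2", "acct3", "acct4"],
    "type": ["owns",  "pays",  "pays"],
})
g = graphistry.edges(edges, "src", "dst")

# 2-hop pattern: any node -> 'pays' edge -> any node
hits = g.chain([n(), e_forward({"type": "pays"}), n()])
print(hits._edges)  # just the edges matched by the pattern
```

No server or storage engine is involved; the same chain can reportedly run on the GPU engine when RAPIDS is available, which is the "separate storage from compute" point above.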
·linkedin.com·
Discontinued graph database systems
Last week, the KĂčzu Inc team announced that they will no longer actively support the open-source KuzuDB project. I've been a fan of KuzuDB and think its discontinuation leaves a big gap in the graph ecosystem. This is especially the case for open-source solutions: over the last few years, many open-source graph database systems were forked, relicensed, or discontinued. Currently, users looking for an OSS graph database are left to pick from:
- community editions of systems with enterprise/cloud offerings (Neo4j, Dgraph)
- variants of a heavily-forked system (ArcadeDB / YouTrackDB, HugeGraph)
- projects under non-OSI-approved licenses
- experimental systems (e.g., DuckPGQ)
I'm wondering whether this trend continues or someone steps up to maintain KuzuDB or create a new OSS system.
·linkedin.com·
For years, I considered graph databases “interesting but niche.”
For years, I considered graph databases “interesting but niche”: relevant commercially for social networks and supply chains, and academically for biotech, maybe some knowledge management. Basically, not something most companies would ever need. I stand corrected. With AI, they're having a very big moment!

Working with graphs for the first time feels unusual but also just right. The best analogy I have is the feeling we get when we try to visualize a higher dimension when all we have ever known are three (+ time, for the purists). (Or is it just me?)

Two use cases that I have been riffing on:
* Knowledge management: For me it started as a personal project for personal knowledge management. For enterprises, this is where RAG shines. But I also wonder if there are other applications within enterprise knowledge management that we aren't thinking of yet.
* Master Data Management (MDM): Potentially a subset of the above, but explicitly about attributes and relationships that columnar databases might handle too rigidly.

I remain a lifetime subscriber to relational databases and SQL for as long as they exist; I'm not saying they will go away. Graphs still feel intuitive and unusual at the same time. They are still complex to build (although companies like Neo4j simplify them really well) and difficult to traverse and interpret. I believe a stronger convergence of these two systems is coming: graphs will augment relational before replacing it in some of these use cases. But they have to be way more simplified first for greater adoption.

Would love to hear more from graph experts and/or from those who share this feeling of “just right” for graphs. Are you seeing use cases where graph databases are picking up? #AI #DataStrategy #Graphs #KnowledgeManagement #MDM
·linkedin.com·
Your agents NEED a semantic layer
Your agents NEED a semantic layer đŸ«”

Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems. Why? Because semantic similarity doesn't capture relationships. Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both. These relationships are invisible to embeddings.

What semantic layers provide:
- Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context.
- Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins.
- Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and for debugging agent behavior.
- Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with a structured vocabulary, not statistical word associations.

The technical implementation pattern:
- Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries.
- Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources.
- Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals.
- Layer 4: Separation of first-party data from derived artifacts, with clear tagging for safe regeneration.

The result: agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins (a sketch of such a query follows below).

Model Context Protocol is emerging as the open standard for semantic tool modeling: not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically. The competitive moat isn't your model choice. The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
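As an illustration of that single-query traversal, a hedged sketch using Cypher via the official neo4j Python driver. The labels and relationship types mirror the post's example; the URI, credentials, and schema are placeholders, not a real deployment:

```python
# One graph query replacing several application-level joins.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # placeholder creds

CYPHER = """
MATCH (p:Product {sku: $sku})
      -[:DESCRIBED_IN]->(:AssemblyGuide)
      -[:IMPROVED_BY]->(tip:CommunityTip)
      -[:AUTHORED_BY]->(e:Expert)
RETURN tip.title AS tip, e.name AS expert
"""

with driver.session() as session:
    for record in session.run(CYPHER, sku="ABC-123"):  # hypothetical SKU
        print(record["tip"], "by", record["expert"])
driver.close()
```

The agent issues one round trip and gets the joined context back; the equivalent vector-only pipeline would need separate retrievals per hop and glue code to stitch them together.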
·linkedin.com·
Can LLMs Really Build Knowledge Graphs We Can Trust?
đŸ•žïž Can LLMs Really Build Knowledge Graphs We Can Trust? There’s a growing trend: “Let’s use LLMs to build knowledge graphs.” It sounds like the perfect shortcut - take unstructured data, prompt an LLM, and get a ready-to-use graph. But
 are we sure those graphs are trustworthy? Before that, let’s pause for a second: 💡 Why build knowledge graphs at all? Because they solve one of AI’s biggest weaknesses - lack of structure and reasoning. Graphs let us connect facts, entities, and relationships in a way that’s transparent, queryable, and explainable. They give context, memory, and logic - everything that raw text or embeddings alone can’t provide. Yet, here’s the catch when using LLMs to build them: đŸ”č Short context window - LLMs can only “see” a limited amount of data at once, losing consistency across larger corpora. đŸ”č Hallucinations - when context runs out or ambiguity appears, models confidently invent facts or relations that never existed. đŸ”č Lack of provenance - LLM outputs don’t preserve why or how a link was made. Without traceability, you can’t audit or explain your graph. đŸ”č Temporal instability - the same prompt can yield different graphs tomorrow, because stochastic generation ≠ deterministic structure. đŸ”č Scalability & cost - large-scale graph construction requires persistent context and reasoning, which LLMs weren’t designed for. Building knowledge graphs isn’t just data extraction - it’s engineering meaning. It demands consistency, provenance, and explainability, not just text generation. LLMs can assist in this process, but they shouldn’t be the architect. The next step is finding a way to make graphs both trustworthy and instant - without compromising one for the other. | 11 comments on LinkedIn
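One concrete response to the provenance point: never store a bare (subject, predicate, object); always attach lineage. A minimal sketch with illustrative field names:

```python
# A triple that carries its own audit trail.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenancedTriple:
    subj: str
    pred: str
    obj: str
    source_doc: str    # document the claim came from
    source_span: str   # exact supporting text
    extractor: str     # model + version that produced the triple
    confidence: float  # extractor's own score, if available
    extracted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

t = ProvenancedTriple(
    "London", "isCapitalOf", "United Kingdom",
    source_doc="docs/uk_overview.txt",          # hypothetical path
    source_span="London, the capital of the United Kingdom, ...",
    extractor="gpt-4o-2024-08-06", confidence=0.97)
print(t)
```

With lineage attached, a hallucinated edge can be traced back to a span that does not actually support it, which is what makes auditing possible at all.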
·linkedin.com·
Where is GraphRAG actually working in production?
"GraphRAG chatter is louder than its footprint in production." That line from Ben Lorica's piece on Gradient Flow stopped me in my tracks: https://lnkd.in/dmC-ykAu

I was reading it because of my deep interest in graph-based reasoning, and while the content is excellent, I was genuinely surprised by the assessment of GraphRAG adoption. The article suggests that a year after the initial buzz, GraphRAG remains mostly confined to graph vendors and specialists, with little traction in mainstream AI engineering teams.

Here's the thing: at GraphAware, we have GraphRAG running in production. Our AskTheDocs conversational interface in Hume uses this approach to help customers query documentation, and the feedback has been consistently positive. It's not an experiment; it's a production feature our users rely on daily.

So I have a question for my network (yes, I know you're a bit biased; many of you are graph experts, after all 😊): where is GraphRAG actually working in production? I'm not looking for POCs, experiments, or "we're exploring it." I want to hear about real, deployed systems serving actual users. Success stories. Production use cases. The implementations that are quietly delivering value while the tech commentary wonders if anyone is using this stuff.

If you have direct or indirect experience with GraphRAG in production, I'd love to hear from you:
- Drop a comment below
- Send me a DM
- Email me directly

I want to give these cases a voice and learn from what's actually working out there. Who's building with GraphRAG beyond the buzz? #GraphRAG #KnowledgeGraphs #AI #ProductionAI #RAG
·linkedin.com·
Let's talk ontologies. They are all the rage.
Let's talk ontologies. They are all the rage.

I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business. A triple is: subject, predicate, object. I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and for AI.

I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output, but I'm not sure that is the best path here.

Who can help me understand how to move from whiteboarding to something more formal?
- Where to actually store all these triples?
- At what point does it become a 'knowledge graph'?
- Are there tools or products that help with this?
- Or is there a new language to learn to store it properly? (I think yes)

#ontology #help
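One concrete answer to "where do the triples go": a minimal sketch with rdflib, a widely used Python RDF library (pip install rdflib). The namespace and terms are illustrative; Turtle is the storage syntax and SPARQL the query language you would learn:

```python
# From whiteboard triples to a real, standards-based store.
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/")  # illustrative namespace
g = Graph()
g.add((EX.Alice, EX.worksFor, EX.Acme))                 # subject, predicate, object
g.add((EX.Acme, EX.headquarteredIn, Literal("Berlin")))

print(g.serialize(format="turtle"))  # a persistable, standard text format

# SPARQL: the query language that comes with the RDF model
q = """SELECT ?who WHERE {
         ?who <http://example.org/worksFor> <http://example.org/Acme>
       }"""
for row in g.query(q):
    print(row.who)
```

Once the triples live in a store like this (or a dedicated triplestore behind the same standards), the "knowledge graph" label mostly means the collection is large, connected, and queried as a whole.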
·linkedin.com·
YouTube channel on graphs has just exceeded 3,000,000 views
My YouTube channel on graphs has just passed 3,000,000 views! 🎉 đŸŸ With 73 videos available đŸ–„ïž, it helps more than 25,000 subscribers (and others who drop in by chance) get familiar with a subject that should be part of the general knowledge 📚 of every engineer. Built mainly around commented examples, my videos explore many topics in this field at the intersection of discrete mathematics and computer science. Graphs are "present" everywhere, in every system composed of elements and relations between those elements; they can help model these systems, master them, and exploit them. I haven't published anything on the channel for several months now, but every day newcomers (mostly students, but not only...) come to discover these objects that are simple to describe yet so difficult to manipulate efficiently! My channel is not monetized, so I earn no money from it. The ads are added by YouTube, for its sole profit... https://lnkd.in/exfWrPxA
·linkedin.com·
Let's chat a bit about the use of graph databases in retrieval-augmented generation (RAG)
Let's chat a bit about the use of graph databases in retrieval-augmented generation (RAG). One problem in GenAI is that while the LLMs are fed a lot of text during training, perhaps a model isn't fed the specific information the user is asking about, which could be in a private corporate document. Since the dawn of GenAI, pipelines have existed to store private documents in a vector database and search for text relevant to the user's question in the database. This text is then fed to the LLM for use in generating the answer to the user query.

One problem in such pipelines is that the document search may retrieve a lot of text containing terms similar to those in the user query which still isn't relevant to answering the query. At this point, many folks say, "knowledge graphs to the rescue!" Knowledge graphs after all can store information about entities mentioned in private documents, so can't they help disambiguate user questions?

Graph DBs have been used in RAG for some time now; I started with them in 2021, before ChatGPT existed. There are various problems with using graph data in RAG. First off, the knowledge graphs we are trying to leverage are themselves generated by machine learning. But what are the guarantees that ML engineers are training their models or agents to produce useful KGs? Are we even using the right kind of statistical learning, never mind agent architectures?

After all, if you are going to build a KG based on information in natural language, then you are parsing out conceptual relations from natural language, which are dependent on syntax. So perhaps we should be utilizing machine learning in the syntactic parsing problem, so that we ensure a relation isn't added to the graph if the syntax expresses the negation of the relation, for instance (sketched below).

To graph data modelers, again I maintain that methods for extracting information from syntax have more bearing on the use of graph data in RAG than existing modeling techniques that fail to factor in natural language syntax, just as most ML inference fails here. And perhaps graph databases aren't even the right target for storing extracted conceptual relations; I switched to logic databases after a month of working with graphs.

The use of KGs and logic bases in RAG needs to be tackled through innovations in syntax parsing like semantic grammars, and through better techniques for performant inference engines than graph query, such as GPU-native parallel inference engines. This isn't a problem I expect to be solved through Kaggle competitions or corporate R&D leveraging recently minted ML engineers.
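To make the negation point concrete, here is a hedged sketch of a syntax-aware guard: extract subject-verb-object candidates from a dependency parse and refuse to emit a triple when the verb is negated. It uses spaCy and is a toy, not a production extractor:

```python
# Negation-aware SVO triple extraction via dependency parsing.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for tok in sent:
            if tok.pos_ != "VERB":
                continue
            if any(child.dep_ == "neg" for child in tok.children):
                continue  # "did not acquire" -> emit no triple
            subj = [c for c in tok.children if c.dep_ == "nsubj"]
            obj = [c for c in tok.children if c.dep_ in ("dobj", "attr")]
            if subj and obj:
                triples.append((subj[0].text, tok.lemma_, obj[0].text))
    return triples

print(extract_triples("Acme acquired Globex. Initech did not acquire Globex."))
# expected: [('Acme', 'acquire', 'Globex')]
```

A purely embedding-based extractor sees "acquire" in both sentences; the dependency label on "not" is what lets syntax veto the false edge.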
·linkedin.com·
Unified Foundational Ontology tutorial
As requested, this is the FIRST set of slides for my Ontobras tutorial on the Unified Foundational Ontology, i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), as announced here: https://lnkd.in/eeKmVW-5. The Brazilian community is one of the most active and lively ontology communities these days, and the event brought together many people from academia, government, and industry. The slides for the SECOND part can be found here: https://lnkd.in/eD2xhPKj. Thanks again for the invitation, Jose M Parente de Oliveira. #ontology #ontologies #conceptualmodeling #semantics Semantics, Cybersecurity, and Services (SCS)/University of Twente
·linkedin.com·
SuperMemory is just one example of a growing ecosystem of knowledge graph systems
SuperMemory is just one example of a growing ecosystem of knowledge graph systems (Graphiti by Zep, Fast GraphRAG, TrustGraph, ...). Some are in Python, some in TypeScript, with the added benefit of graph visualization. Even in Rust and Go there is a growing list of open-source graph-RAG projects. Ontology (LLM-generated in particular) seems to be having its own moment in the sun, with growing interest in RDF, OWL, SHACL, and whatnot. Whether the big guys (OpenAI, Microsoft, ...) will launch something ontological remains to be seen; they'll likely leave it to triple-store vendors to figure out. https://lnkd.in/e3HAiC8c #KnowledgeGraph #GraphRAG
·linkedin.com·
Why is it so hard to build an ontology-first architecture yet so necessary for the future of organizations?
Why is it so hard to build an ontology-first architecture, yet so necessary for the future of organizations? Because it forces you to slow down before you speed up. It means defining what exists in your organization before building systems to act on it. It requires clarity, discipline, and the ability to model multiple perspectives without losing coherence.

Ontology-first doesn't mean everyone must agree; it means connecting different views through layers: application ontologies for context, domain ontologies for shared objects, mid-level ontologies for reusable patterns, and a top-level ontology for common sense. Without a shared map of what things mean, every new system just adds noise.

Ontology-first architecture isn't about technology; it's about truth, structure, and long-term adaptability. It's the foundation that allows AI to enhance human power and impact without losing context or control. It's hard because it demands that we think, model, and connect before we automate. But that's also why it's the only path toward a world where human ingenuity can truly be enhanced with AI.
·linkedin.com·
Graphs → fabric of meaning
Small insights that unfolded while tracing and expanding on Irina Malkova's mindset map (Joe Reis' SOURCE: https://lnkd.in/dVGgxPdF).

12 learning points:
‱ Telemetry → the nervous system of AI: event data from software is no longer an error log; it forms the sensory layer, the neural pattern of reality for AI.
‱ Dashboard → dialogue: data no longer lives in tables but in question and response. Attention shifts from the screen to the conversation.
‱ Graphs → fabric of meaning: connections are not technical structures but semantic threads, turning the graph into a landscape of thought.
‱ ROI → measurable, because we choose to measure it: once the agent lives inside the workflow, its impact can be traced all the way to business outcomes. Measurability is not a technical but a strategic decision.
‱ Agent → decision within the process, not beside it: an agent is not another tool but the embedding of decision-making into daily work; the age of AI-in-flow replacing traditional BI.
‱ Data Cloud → activated data: data gains meaning in motion, not in storage. The Salesforce model makes that movement visible and automatable.
‱ Dashboard blindness → cognitive overload: visualization tools don't remove complexity; they shift it to the eyes. Agents interpret instead of merely displaying.
‱ Graph-RAG → thinking against density: enterprise corpora are too homogeneous for vector search; graph semantics dissolves the overcrowding of entities.
‱ Data taxonomy → corporate self-knowledge: the data team's real task is not measurement but mapping the conceptual landscape of the organization; they are its quiet ontologists.
‱ Agentic ROI → the feedback loop of decisions: conversational agents make it possible to trace how a recommendation turns into real action, something BI could never show.
‱ Data team → AI change agent: data professionals carry the rare skill of building under uncertainty. That skill now becomes a strategic advantage.
‱ AI → thinking partner: AI does not write for us; it thinks with us. Between text and meaning, a dialogue emerges instead of automation.

7 quiet directions that might have unfolded, had the conversation lasted longer:
‱ Agentic enterprise → meta-agentic reasoning: perhaps this will be the first, quiet form of organizational consciousness.
‱ Data cloud → ontology-ready stack: where data becomes not only accessible but interpretable.
‱ Telemetry → self-reflective feedback: when data streams not only observe but look back at themselves, a learning system emerges; not supervised, but self-aware.
‱ Graph-RAG → dynamic knowledge fabric: where connections are no longer static edges but moving contexts.
‱ ROI → cognitive value creation: cognitive efficiency, too, can be a business metric.
‱ Data team → thinking ecosystem: the data department slowly evolves into a cognitive community.
‱ Prompt → model of thought: a question is not an input but a form; a way of showing AI one's own pattern of reasoning.
·linkedin.com·
Introducing Graph in Microsoft Fabric – Connected Data for the Era of AI | Microsoft Fabric Blog | Microsoft Fabric
Microsoft has launched a native graph data management, analytics, and visualization service. Its horizontally scalable, native graph engine empowers enterprises of all sizes with a relationship‑first way to model and explore interwoven data.
·blog.fabric.microsoft.com·
Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence
Why large graphs fail small. We love to talk about scaling graphs: billions of nodes, trillions of relationships, distributed clusters. But in practice, larger graphs often become harder to understand. As Labelled Property Graphs (LPGs) grow, their structure remains sound, but their meaning starts to drift. Queries still run, but the answers become useless.

In my latest post, I explore why semantic coherence collapses faster than infrastructure can scale up, what 'cognitive coherence' really means in graph systems, and how the flexibility of LPGs can both empower and endanger knowledge integrity.

Full article: 'Why Large Graphs Fail Small: When LPG Scalability Breaks Cognitive Coherence' https://lnkd.in/epmwGM9u

#GraphRAG #KnowledgeGraph #LabeledPropertyGraph #LPG #SemanticAI #AIExplainability #GraphThinking #RDF #AKG #KGL
·linkedin.com·
Unlock Cross-Domain Insight: Uncover Hidden Opportunities in Your Data with Knowledge Graphs and Ontologies
From Siloed Data to Missed Opportunities. Organizations today sit on massive troves of data – customer transactions, logs, metrics, documents – often scattered across departments and trapped in spreadsheets or relational tables. The data is diverse, dispersed, and growing at unfathomable rates, to th

·linkedin.com·
OpenAI Emerging Semantic Layer | LinkedIn
Following yesterday's announcements from OpenAI, brands start to have real ways to operate inside ChatGPT. At a very high level, this is the map for anyone considering entering (or expanding) into the ChatGPT ecosystem: Conversational Prompts / UX: optimize how ChatGPT “asks” for or surfaces brand se

·linkedin.com·
Is OpenAI quietly moving toward knowledge graphs?
Is OpenAI quietly moving toward knowledge graphs? Yesterday's OpenAI DevDay was all about new no-code tools to create agents. Impressive. But what caught my attention wasn't what they announced; it's what they didn't talk about.

During the summer, OpenAI released a Cookbook update introducing the concept of Temporal Agents (see below), connecting it to Subject–Predicate–Object triples: the very foundation of a knowledge graph. If you've ever worked with graphs, you know this means something big: they're not just building agents anymore; they're building memory, relationships, and meaning. When you see “London – isCapitalOf – United Kingdom” in their official docs, you realize they're experimenting with how to represent knowledge itself. And with any good knowledge graph comes an ontology.

So here's my prediction: ChatGPT-6 will come with a built-in graph that connects everything about you. The question is: do you want their AI to know everything about you? Or do you want to build your own sovereign AI, one that you own, built from open-source intelligence and collective knowledge?

Would love to know what you think. Is that me hallucinating, or is that a weak signal? 👇
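For the mechanics: the temporal-agent idea boils down to triples that carry validity intervals and get queried "as of" a point in time. A toy sketch of that pattern, not OpenAI's actual design:

```python
# Temporal SPO facts: each triple carries a validity interval.
from datetime import date

facts = [
    # (subject, predicate, object, valid_from, valid_to)
    ("UK", "hasPrimeMinister", "Liz Truss",   date(2022, 9, 6),   date(2022, 10, 25)),
    ("UK", "hasPrimeMinister", "Rishi Sunak", date(2022, 10, 25), date(2024, 7, 5)),
]

def as_of(subject, predicate, when):
    # return the objects whose validity interval covers `when`
    return [o for s, p, o, t0, t1 in facts
            if s == subject and p == predicate and t0 <= when < t1]

print(as_of("UK", "hasPrimeMinister", date(2023, 1, 1)))  # ['Rishi Sunak']
```

Plain triples can only say what is true; interval-stamped triples can say what was true when, which is exactly the kind of memory an agent needs across sessions.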
·linkedin.com·