GraphNews

Knowledge graphs as the foundation for Systems of Intelligence
In this Breaking Analysis we examine how Snowflake moves beyond walled gardens into a world of new competitive dynamics with SaaS vendors like Salesforce, ServiceNow, and Palantir, and of course Databricks.
Beyond Walled Gardens: How Snowflake Navigates New Competitive Dynamics
·thecuberesearch.com·
AI Engineer World's Fair 2025: GraphRAG Track Spotlight
📣 AI Engineer World's Fair 2025: GraphRAG Track Spotlight! 🚀

So grateful to have hosted the GraphRAG Track at the Fair. The sessions were great, highlighting the depth and breadth of graph thinking for AI. Shoutouts to:

- Mitesh Patel, "HybridRAG": a fusion of graph and vector retrieval designed to master complex data interpretation and specialized terminology for question answering
- Chin Keong Lam, "Wisdom Discovery at Scale": Knowledge Augmented Generation (KAG) in a multi-agent system with n8n
- Sam Julien, "When Vectors Break Down": a careful explanation of how a graph-based RAG architecture achieved a whopping 86.31% accuracy on dense enterprise knowledge
- Daniel Chalef, "Stop Using RAG as Memory": temporally aware knowledge graphs, built with the open-source Graphiti framework, providing precise, context-rich memory for agents
- Ola Mabadeje, "Witness the Power of Multi-Agent AI & Network Knowledge Graphs": dramatic improvements in ticket-resolution efficiency and overall execution quality in network operations
- Thomas Smoker, "Beyond Documents": casually mentioning scraping the entire internet to distill a knowledge graph for legal agents
- Mark Bain: hosting an excellent Agentic Memory with Knowledge Graphs lunch & learn, with expansive thoughts and demos from Vasilije Markovic, Daniel Chalef, and Alexander Gilmore

Also, of course, huge congrats to Shawn swyx W and Benjamin Dunphy on an excellent conference. 🎩

#graphrag Neo4j AI Engineer
·linkedin.com·
Trip Report: ESWC 2025
Last week, I was happy to be able to attend the 22nd European Semantic Web Conference. I’m a regular at this conference and it’s great to see many friends and colleagues as well as meet…
·thinklinks.wordpress.com·
Building more Expressive Knowledge Graph Nodes | LinkedIn
In a knowledge graph, more expressive nodes are clearly more useful and dramatically more valuable, provided we focus on the right nodes. This was a key lesson I learned building knowledge graphs at LinkedIn with the terrific team that I assembled.
·linkedin.com·
Unified graph architecture for Agentic AI based on Postgres and Apache AGE
Picture an AI agent that seamlessly traverses knowledge graphs while performing semantic vector searches, applies probabilistic predictions alongside deterministic rules, reasons about temporal evolution and spatial relationships, and resolves contradictions between multiple data sources, all within a single atomic transaction.

It is a PostgreSQL-based architecture that consolidates traditionally distributed data systems into a single, coherent platform. This architecture doesn't just store different data types; it enables every form of reasoning (deductive, inductive, abductive, analogical, causal, and spatial), transforming isolated data modalities into a coherent intelligence substrate where graph algorithms, embeddings, tabular predictions, and ontological inference work in harmony. It changes how agentic systems operate by eliminating the complexity and inconsistencies inherent in multi-database architectures while enabling sophisticated multi-modal reasoning.

Conventional approaches typically distribute agent knowledge across multiple specialized systems: vector databases for semantic search, graph databases for relationship reasoning, relational databases for structured data, and separate ML platforms for predictions. This fragmentation creates synchronization nightmares, latency penalties, and operational complexity that can cripple agent performance and reliability.

Apache AGE brings native graph database capabilities to PostgreSQL, enabling complex relationship traversal and graph algorithms without requiring a separate graph database. Similarly, pgvector enables semantic search through vector embeddings, while extensions like TabICL provide zero-shot machine learning predictions directly within the database. This extensibility allows PostgreSQL to serve as a unified substrate for all data modalities that agents require.

While AGE may not match the raw performance of dedicated graph databases like Neo4j for certain specialized operations, it excels in the hybrid queries that agents typically require. An agent rarely needs just graph traversal or just vector search; it needs to combine these operations with structured queries and ML predictions in coherent reasoning chains. The ability to perform these operations within single ACID transactions eliminates entire classes of consistency bugs that plague distributed systems.

Foundational models eliminate traditional ML complexity. TabICL and TabSTAR enable instant predictions on new data patterns without training, deployment, or complex MLOps pipelines. This capability is particularly crucial for agentic systems that must adapt quickly to new situations and data types without human intervention or retraining cycles.

The unified architecture simplifies every aspect of system management: one backup strategy instead of multiple, unified security through PostgreSQL's mature RBAC system, consistent monitoring, and simplified debugging.
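To make the hybrid-query idea concrete, here is a minimal sketch of the kind of single SQL statement such an agent might issue, combining an AGE Cypher traversal (`cypher()`) with a pgvector nearest-neighbour ranking (the `<->` distance operator). The graph name `kg`, the `docs` table, and its columns are invented for illustration; a real implementation should use parameterized queries rather than string interpolation.

```python
# Sketch: compose one SQL statement that mixes an Apache AGE graph
# traversal with a pgvector similarity ranking. Schema names ("kg",
# "docs", "embedding") are hypothetical; cypher() and <-> are the real
# AGE function and pgvector operator. Illustrative only: interpolating
# user input into SQL like this is unsafe in production.

def hybrid_query(entity: str, top_k: int = 5) -> str:
    """Return SQL that finds graph neighbours of `entity`, then ranks
    their documents by embedding distance to the entity's document."""
    return f"""
    WITH neighbours AS (
        SELECT * FROM cypher('kg', $$
            MATCH (a {{name: '{entity}'}})-[:RELATED_TO]->(b)
            RETURN b.name
        $$) AS (name agtype)
    )
    SELECT d.id, d.text
    FROM docs d, neighbours n
    WHERE d.title = n.name::text
    ORDER BY d.embedding <-> (SELECT embedding FROM docs
                              WHERE title = '{entity}')
    LIMIT {top_k};
    """

sql = hybrid_query("Ada Lovelace")
```

Because the traversal, the vector ranking, and the relational join execute as one statement, they also commit (or roll back) as one transaction, which is the consistency property the post emphasizes.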
·linkedin.com·
Want to Fix LLM Hallucination? Neurosymbolic Alone Won’t Cut It
The Conversation's new piece makes a clear case for neurosymbolic AI, integrating symbolic logic with statistical learning, as the long-term fix for LLM hallucinations. It's a timely and necessary argument: "No matter how large a language model gets, it can't escape its fundamental lack of grounding in rules, logic, or real-world structure. Hallucination isn't a bug, it's the default."

But what's crucial, and often glossed over, is that symbolic logic alone isn't enough. The real leap comes from adding formal ontologies and semantic constraints that make meaning machine-computable. OWL, the Shapes Constraint Language (SHACL), and frameworks like BFO, the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE), the Suggested Upper Merged Ontology (SUMO), and the Common Core Ontologies (CCO) don't just "represent rules": they define what exists, what can relate, and under what conditions inference is valid. That's the difference between decorating a knowledge graph and engineering one that can detect, explain, and prevent hallucinations in practice.

I'd go further:
- Most enterprise LLM hallucinations are simply semantic errors: mislabeling, misattribution, or class confusion that only formal ontologies can prevent.
- Neurosymbolic systems only deliver if their symbolic half is grounded in ontological reality, not just handcrafted rules or taxonomies.

The upshot: we need to move beyond mere integration of symbols and neurons. We need semantic scaffolding, ontologies as infrastructure, to ensure AI isn't just fluent but actually right.

Curious if others are layering formal ontologies (BFO, DOLCE, SUMO) into their AI stacks yet? Or are we still hoping that more compute and prompt engineering will do the trick?

#NeuroSymbolicAI #SemanticAI #Ontology #LLMs #AIHallucination #KnowledgeGraphs #AITrust #AIReasoning
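The "class confusion" point can be illustrated with a toy domain/range check, the kind of constraint OWL or SHACL would express declaratively. Everything here (the class hierarchy, the `treats` relation) is invented for the example; the point is only that an ill-typed triple is mechanically detectable.

```python
# Toy illustration (not real OWL/SHACL): ontological domain/range
# constraints catch a class-confusion error that fluent LLM output
# might contain. All class and relation names are hypothetical.

SUBCLASS = {"Drug": "Substance", "Disease": "Condition",
            "Substance": "Entity", "Condition": "Entity"}

def is_a(cls, ancestor):
    """Walk the subclass chain to test class membership."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS.get(cls)
    return False

# Relation signatures: name -> (domain class, range class)
RELATIONS = {"treats": ("Drug", "Disease")}

def validate(subj_cls, rel, obj_cls):
    """A triple is admissible only if subject and object fit the
    relation's declared domain and range."""
    dom, rng = RELATIONS[rel]
    return is_a(subj_cls, dom) and is_a(obj_cls, rng)

print(validate("Drug", "treats", "Disease"))   # well-typed: True
print(validate("Disease", "treats", "Drug"))   # misattribution: False
```

A reasoner over a real ontology does far more than this (inference, cardinality, consistency checking), but even this minimal check rejects the kind of mislabeling the post describes.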
·linkedin.com·
AutoSchemaKG: Building Billion-Node Knowledge Graphs Without Human Schemas
👉 Why This Matters
Traditional knowledge graphs face a paradox: they require expert-crafted schemas to organize information, creating bottlenecks for scalability and adaptability. This limits their ability to handle dynamic real-world knowledge or cross-domain applications effectively.

👉 What Changed
AutoSchemaKG eliminates manual schema design through three innovations:
1. Dynamic schema induction: LLMs automatically create conceptual hierarchies while extracting entities and events
2. Event-aware modeling: captures temporal relationships and procedural knowledge missed by entity-only approaches
3. Multi-level conceptualization: organizes instances into semantic categories through abstraction layers

The system processed 50M+ documents to build ATLAS, a family of KGs with:
- 900M+ nodes (entities/events/concepts)
- 5.9B+ relationships
- 95% alignment with human-created schemas (zero manual intervention)

👉 How It Works
1. Triple extraction pipeline: LLMs identify entity-entity, entity-event, and event-event relationships, processing text at the document level rather than the sentence level to preserve context
2. Schema induction: automatically groups instances into conceptual categories and creates hierarchical relationships between specific facts and abstract concepts
3. Scale optimization: handles web-scale corpora through GPU-accelerated batch processing while maintaining semantic consistency across three distinct domains (Wikipedia, academic papers, Common Crawl)

👉 Proven Impact
- Boosts multi-hop QA accuracy by 12-18% over state-of-the-art baselines
- Improves LLM factuality by up to 9% on specialized domains like medicine and law
- Enables complex reasoning through conceptual bridges between disparate facts

👉 Key Insight
The research demonstrates that billion-scale KGs with dynamic schemas can effectively complement parametric knowledge in LLMs once they reach critical mass (1B+ facts). This challenges the assumption that retrieval augmentation needs domain-specific tuning to be effective.

👉 Question for Discussion
As autonomous KG construction becomes viable, how should we rethink the role of human expertise in knowledge representation? Should curation shift from schema design to validation and ethical oversight?
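The schema-induction step described above can be sketched as lifting instance-level triples to concept-level edges. In AutoSchemaKG an LLM proposes the concept labels dynamically; in this toy sketch a hard-coded lookup stands in for that LLM call, and all names are invented.

```python
# Toy sketch of schema induction: instance-level triples are abstracted
# into concept-level schema edges. The conceptualize() lookup is a
# stand-in for the LLM that proposes concept labels in the real system;
# every instance and label here is hypothetical.

def conceptualize(instance: str) -> str:
    # Stub for the LLM's conceptualization step.
    lookup = {"aspirin": "Medication", "ibuprofen": "Medication",
              "Berlin": "City", "Tokyo": "City"}
    return lookup.get(instance, "Thing")

def induce_schema(triples):
    """Lift each (subject, relation, object) instance triple to a
    (concept, relation, concept) schema edge; duplicates collapse."""
    schema = set()
    for s, rel, o in triples:
        schema.add((conceptualize(s), rel, conceptualize(o)))
    return schema

triples = [("aspirin", "available_in", "Berlin"),
           ("ibuprofen", "available_in", "Tokyo")]
print(induce_schema(triples))  # {('Medication', 'available_in', 'City')}
```

Note how two distinct facts collapse into one schema edge: that abstraction layer is what lets billions of instance triples share a compact, automatically induced schema.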
·linkedin.com·
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through Evidence-based distillation and Graph-based structuring
Small Models, Big Knowledge: How DRAG Bridges the AI Efficiency-Accuracy Gap

👉 Why This Matters
Modern AI systems face a critical tension: large language models (LLMs) deliver impressive knowledge recall but demand massive computational resources, while small language models (SLMs) struggle with factual accuracy and "hallucinations." Traditional retrieval-augmented generation (RAG) systems amplify this problem by requiring constant updates to vast knowledge bases.

👉 The Innovation
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through two key mechanisms:
1. Evidence-based distillation: filters and ranks factual snippets from teacher LLMs
2. Graph-based structuring: converts retrieved knowledge into relational graphs to preserve critical connections

This dual approach reduces model-size requirements by 10-100x while improving factual accuracy by up to 27.7% compared to prior methods like MiniRAG.

👉 How It Works
1. Evidence generation: a large teacher LLM produces multiple context-relevant facts
2. Semantic filtering: combines cosine similarity and LLM scoring to retain the top evidence
3. Knowledge graph creation: extracts entity relationships to form structured context
4. Distilled inference: SLMs generate answers using both filtered text and graph data

The process mimics how humans combine raw information with conceptual understanding, enabling smaller models to "think" like their larger counterparts without the computational overhead.

👉 Privacy Bonus
DRAG adds a privacy layer by:
- Sanitizing queries locally before cloud processing
- Returning only de-identified knowledge graphs

Tests show a 95.7% reduction in potential personal-data leakage while maintaining answer quality.

👉 Why It's Significant
This work addresses three critical challenges simultaneously:
- Makes advanced RAG capabilities accessible on edge devices
- Reduces hallucination rates through structured knowledge grounding
- Preserves user privacy in cloud-based AI interactions

The GitHub repository provides full implementation details, enabling immediate application in domains like healthcare diagnostics, legal analysis, and educational tools where accuracy and efficiency are non-negotiable.
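The semantic-filtering step (step 2 above) can be sketched as ranking teacher-generated snippets by cosine similarity to the query embedding and keeping the top k. The toy two-dimensional embeddings below are invented; DRAG additionally combines this with LLM scoring, which is omitted here.

```python
# Sketch of evidence filtering: rank candidate snippets by cosine
# similarity to a query embedding, keep the top-k. Embeddings here are
# toy 2-d vectors; real systems use learned embeddings and pair this
# ranking with an LLM scoring pass.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_evidence(query_vec, snippets, k=2):
    """snippets: list of (text, embedding). Returns the k texts most
    similar to the query."""
    ranked = sorted(snippets, key=lambda s: cosine(query_vec, s[1]),
                    reverse=True)
    return [text for text, _ in ranked[:k]]

query = [1.0, 0.0]
snips = [("relevant fact", [0.9, 0.1]),
         ("off-topic", [0.0, 1.0]),
         ("partly related", [0.5, 0.5])]
print(filter_evidence(query, snips))  # ['relevant fact', 'partly related']
```

The surviving snippets then feed the knowledge-graph construction step, so the student model sees only high-similarity, structured evidence.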
·linkedin.com·
Synalinks release 0.3 focuses on the Knowledge Graph layer
Your agents, multi-agent systems, and LM apps are still failing at basic logic? We've got you covered. Today we're excited to announce Synalinks 0.3, our Keras-based neuro-symbolic framework that bridges the gap between neural networks and symbolic reasoning.

Our latest release focuses entirely on the Knowledge Graph layer, delivering production-ready solutions for real-world applications:
- Fully constrained KG extraction powered by Pydantic, ensuring that relations connect to the correct entity types.
- Seamless integration with our Agents/Chain-of-Thought and Self-Critique modules.
- Automatic entity alignment with HSWN.
- KG extraction and retrieval optimizable with OPRO and RandomFewShot algorithms.
- 100% reliable Cypher query generation through logic-enhanced hybrid triplet retrieval (works with local models too!).
- Extra care taken to avoid Cypher injection vulnerabilities (yes, we're looking at you, LangGraph 👀).
- The retriever doesn't need the graph schema, as it is encoded in how we constrain the generation, avoiding context pollution and hence improving accuracy.
- We also fixed the Synalinks CLI for Windows users, along with some minor bug fixes.

Our technology combines constrained structured output with in-context reinforcement learning, making enterprise-grade reasoning both highly efficient and cost-effective. Currently supporting Neo4j, with plans to expand to other graph databases.

Built this initially for a client project, but the results were too good not to share with the community. Want to add support for your preferred graph database? It's just one file to implement! Drop a comment and let's make it happen!

#AI #MachineLearning #KnowledgeGraphs #NeuralNetworks #Keras #Neo4j #AIAgents #TechInnovation #OpenSource
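The idea of constrained extraction, where relations can only connect the correct entity types, can be sketched framework-agnostically. Synalinks uses Pydantic models for this; the plain-Python version below only illustrates the principle, and the `WORKS_AT` relation and entity types are invented.

```python
# Minimal, framework-agnostic sketch of type-constrained KG extraction
# (Synalinks does this with Pydantic models). A relation schema declares
# which entity types it may connect, so an ill-typed triple fails at
# construction time instead of polluting the graph. All names are
# hypothetical.

class Entity:
    def __init__(self, name, etype):
        self.name, self.etype = name, etype

# relation -> (allowed subject type, allowed object type)
RELATION_SCHEMA = {"WORKS_AT": ("Person", "Company")}

def make_triple(subj, rel, obj):
    """Build a triple only if it satisfies the relation's signature."""
    dom, rng = RELATION_SCHEMA[rel]
    if subj.etype != dom or obj.etype != rng:
        raise ValueError(f"{rel} requires ({dom})->({rng}), "
                         f"got ({subj.etype})->({obj.etype})")
    return (subj.name, rel, obj.name)

alice = Entity("Alice", "Person")
acme = Entity("Acme", "Company")
print(make_triple(alice, "WORKS_AT", acme))  # ('Alice', 'WORKS_AT', 'Acme')
```

In the constrained-generation setting, the same schema also shapes the model's output space directly, which is why the retriever can skip sending the graph schema in the prompt.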
·linkedin.com·
From Dictionaries to Ontologies: Bridging Human Understanding and Machine Reasoning | LinkedIn
In the long tradition of dictionaries, the essence of meaning has always relied on two elements: a symbol (usually a word or a phrase) and a definition—an intelligible explanation composed using other known terms. This recursive practice builds a web of meanings, where each term is explained using o
·linkedin.com·
Ontology for knowledge sharing in design
In complex engineering systems, how can we ensure that design knowledge doesn't get lost in spreadsheets, silos, or forgotten documents? One of the greatest challenges in the design domain and product development isn't a lack of data, but a lack of meaningful, connected knowledge. This is where ontologies come in.

An ontology is more than just a taxonomy or glossary. It's a formal representation of concepts and relationships that enables shared understanding across teams, tools, and disciplines. In the design domain, ontologies serve as a semantic backbone, helping engineers and systems interpret, reuse, and reason over knowledge that would otherwise remain trapped in silos.

Why does this matter? Because design decisions are rarely made in isolation. Whether it's integrating functional models, analysing field failures, or updating risk-assessment documents, we need a way to bridge data across multiple sources and domains. Ontologies enable that integration by creating a common language and structured relationships, allowing information to flow intelligently from design to deployment.

Ontology-driven systems also support human decision-making by enhancing traceability, contextualising feedback, and enabling AI-powered insights. It's not about replacing designers; it's about augmenting their intuition with structured, reusable knowledge.

As we move towards more data-driven and model-based approaches in engineering, ontologies are key to unlocking collaboration, innovation, and resilience in product development.

#Ontology #KnowledgeEngineering #SystemsThinking #DesignThinking #SystemEngineering #AI #DigitalEngineering #MBSE #KnowledgeSharing #DecisionSupport #AugmentedIntelligence
·linkedin.com·
Semantically Composable Architectures
I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper. It is the culmination of approximately four years' work, which began with Coreless Architectures and has now evolved into something much bigger.

LLMs are impressive, but a real breakthrough will occur once we surpass the cognitive capabilities of a single human brain. Enabling autonomous large-scale system reverse engineering and large-scale autonomous transformation with minimal to no human involvement, while still keeping it understandable to humans if they choose to engage, is a central pillar of making truly groundbreaking changes. We hope the ideas we share will benefit humanity and advance our civilization further.

The draft is not final and will require some clarification and improvements, but the key concepts are present. Happy to hear your thoughts and feedback. Some of these concepts underpin the design of the Product X system.

Core team plus external contributors: Andrew Barsukov, Andrey Kolodnitsky, Sapta Girisa N, Keith E. Glendon, Gurpreet Sachdeva, Saurav Chandra, Mike Diachenko, Oleh Sinkevych
·linkedin.com·
Unlocking graph insights with Raphtory, an advanced in-memory graph tool designed to facilitate efficient exploration of evolving networks
Unlocking graph insights with Raphtory, an advanced in-memory graph tool designed to facilitate efficient exploration of evolving networks.

🔹 Scalability & performance: handles large-scale graph data seamlessly, enabling fast computations.
🔹 Temporal analysis: investigate how networks change over time, identifying trends and key shifts.
🔹 Multi-layer modeling: incorporate diverse data sources into a unified, structured framework for deeper insights.
🔹 Integration: works easily with existing pipelines via Python APIs, ensuring a smooth workflow for professionals.

#Graphs #GraphDB #NetworkAnalysis #TemporalData
https://www.raphtory.com/
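The temporal-analysis idea, querying a graph as it existed over a chosen time window, can be illustrated with a small plain-Python sketch. This is not the Raphtory API, just the underlying concept: edges carry timestamps, and analytics are evaluated over a window.

```python
# Toy illustration of temporal graph analysis in the style Raphtory
# enables (plain-Python sketch, not the Raphtory API): edges carry
# timestamps, and queries are restricted to a time window.

class TemporalGraph:
    def __init__(self):
        self.edges = []  # list of (time, src, dst)

    def add_edge(self, t, src, dst):
        self.edges.append((t, src, dst))

    def window(self, start, end):
        """Degree counts over edges with start <= t < end."""
        deg = {}
        for t, src, dst in self.edges:
            if start <= t < end:
                deg[src] = deg.get(src, 0) + 1
                deg[dst] = deg.get(dst, 0) + 1
        return deg

g = TemporalGraph()
g.add_edge(1, "A", "B")
g.add_edge(2, "A", "C")
g.add_edge(9, "B", "C")
print(g.window(0, 5))  # {'A': 2, 'B': 1, 'C': 1}
```

Sliding the window over time (e.g. comparing `window(0, 5)` with `window(5, 10)`) is how one identifies the trends and key shifts the post mentions.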
·linkedin.com·
Graph Modeling Mastery — GraphGeeks
In our GraphGeeks Talk with Max De Marzi, we unpack what makes a graph model solid, what tends to break things, and how to design with both your data and your queries in mind.
·graphgeeks.org·