GraphNews

4335 bookmarks
Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine
In the position paper "Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine", my colleagues at the L3S Research Center and TIB – Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek, led by Maria-Esther Vidal, have nicely laid out research challenges on the way to interpretable hybrid AI systems in medicine. However, I think the conceptual framework is broadly applicable well beyond medicine. For example, my former colleagues and PhD students at eccenca are working on operationalizing neuro-symbolic AI for enterprise knowledge management with eccenca's Corporate Memory. The paper outlines a compelling architecture for combining sub-symbolic models (e.g., deep learning) with symbolic reasoning systems to enable AI that is interpretable, robust, and aligned with human values. eccenca implements these principles at scale in real-world industrial settings through its neuro-symbolic Enterprise Knowledge Graph platform, Corporate Memory:
1. Symbolic Foundation via Semantic Web Standards - Corporate Memory is grounded in W3C standards (RDF, RDFS, OWL, SHACL, SPARQL), enabling formal knowledge representation, inferencing, and constraint validation. This makes it possible to encode domain ontologies, business rules, and data governance policies in a machine-interpretable and human-verifiable manner.
2. Integration of Sub-symbolic Components - Corporate Memory integrates LLMs and ML models for tasks such as schema matching, natural language interpretation, entity resolution, and ontology population. These are linked to the symbolic layer via mappings and annotations, ensuring traceability and explainability.
3. Neuro-Symbolic Interfaces for Hybrid Reasoning - Hybrid workflows in which symbolic constraints (e.g., SHACL shapes) guide LLM-based data enrichment. LLMs suggest schema alignments, which are verified against ontological axioms. Graph embeddings and path-based querying power semantic search and similarity.
4. Human-in-the-loop Interactions - Domain experts interact through low-code interfaces and semantic UIs that allow inspection, validation, and refinement of both the symbolic and neural outputs, promoting human oversight and continuous improvement.
Such an approach can power industrial applications, e.g. digital thread integration in manufacturing, compliance automation in pharma and finance, and, in general, cross-domain interoperability in data mesh architectures. Corporate Memory is a practical instantiation of neuro-symbolic AI that meets industrial-grade requirements for governance, scalability, and explainability – key tenets of Human-Centric AI. Check it out here: https://lnkd.in/evyarUsR #NeuroSymbolicAI #HumanCentricAI #KnowledgeGraphs #EnterpriseArchitecture #ExplainableAI #SemanticWeb #LinkedData #LLM #eccenca #CorporateMemory #OntologyDrivenAI #AI4Industry
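The "symbolic constraints guide LLM enrichment" pattern in point 3 can be sketched in miniature. The following is a toy, hypothetical illustration only: plain Python dictionaries stand in for SHACL shapes and the ontology, and a stub function stands in for the LLM. It is not eccenca's implementation, just the gating pattern: sub-symbolic suggestions pass through a symbolic validator before entering the graph.

```python
# "Shapes": allowed predicates and the classes their subjects/objects must have.
# (A stand-in for SHACL node/property shapes; all names are hypothetical.)
SHAPES = {
    "treats": {"subject_class": "Drug", "object_class": "Disease"},
    "hasDosage": {"subject_class": "Drug", "object_class": "Quantity"},
}

# Class assertions for known instances (a stand-in for the ontology/ABox).
INSTANCE_CLASSES = {
    "Aspirin": "Drug",
    "Headache": "Disease",
    "500mg": "Quantity",
}

def mock_llm_suggestions():
    """Stub for an LLM proposing candidate triples during data enrichment."""
    return [
        ("Aspirin", "treats", "Headache"),   # conforms
        ("Headache", "treats", "Aspirin"),   # violates shape: subject is not a Drug
        ("Aspirin", "cures", "Headache"),    # unknown predicate
    ]

def validate(triple):
    """Symbolic gate: accept a triple only if it conforms to the shapes."""
    s, p, o = triple
    shape = SHAPES.get(p)
    if shape is None:
        return False, f"unknown predicate {p!r}"
    if INSTANCE_CLASSES.get(s) != shape["subject_class"]:
        return False, f"subject {s!r} is not a {shape['subject_class']}"
    if INSTANCE_CLASSES.get(o) != shape["object_class"]:
        return False, f"object {o!r} is not a {shape['object_class']}"
    return True, "conforms"

# Only shape-conforming suggestions survive into the knowledge graph.
accepted = [t for t in mock_llm_suggestions() if validate(t)[0]]
print(accepted)
```

In a real stack the validator would be a SHACL engine over RDF, but the division of labor is the same: the neural side proposes, the symbolic side disposes, and every rejection comes with a human-readable reason.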
·linkedin.com·
The Great Divide: Why Ontology and Data Architecture Teams Are Solving the Same Problems with Different Languages | LinkedIn
In enterprise organisations today, two important disciplines are working in parallel universes, tackling nearly identical challenges whilst speaking completely different languages. Ontology architects and data architects are both wrestling with ETL processes, data modelling, transformations, referen
·linkedin.com·
Everyone is talking about Semantic Layers, but what is a semantic layer?
Everyone is talking about Semantic Layers, but what is a semantic layer? Some of the latest hot topics for getting more out of your agents include knowledge graphs, vector search, semantics, and agent frameworks. A new and important area that encompasses all of these is the notion that we need a stronger semantic layer on top of our data to provide structure, definitions, discoverability, and more for our agents (human or other). While many of these concepts are not new, they have had to evolve to stay relevant in today's world, which means there is a fair bit of confusion surrounding the whole area. Depending on your background (AI, ML, Library Sciences) and focus (LLM-first or Knowledge Graph), you will likely emphasize different aspects as being key to a semantic layer. I come primarily from an AI/ML/LLM-first world, but have built and utilized knowledge graphs for most of my career. Given my background, I of course have my perspective on this, and I tend to break things down to first principles and to simplify. Given this preamble, here is what I think makes a semantic layer.
WHAT MAKES A SEMANTIC LAYER:
🟤 Scope
🟢 You should not create a semantic layer that covers everything in the world, nor even everything in your company. You can tie semantic layers together, but focus on the job to be done.
🟤 You will need to have semantics, obviously. There are two particular types of semantics that are important to include.
🟢 Vectors: These encapsulate semantics in a high-dimensional space so you can easily find similar concepts in your data.
🟢 Ontology (including Taxonomy): Explicitly define the meaning of your data in a structured and fact-based way, including appropriate vocabulary. This complements vectors superbly.
🟤 You need to respect the data and meet it where it is.
🟢 Structured data: For most companies, data resides in data lakes of some sort, and most of it is structured. There is power in this structure, but also noise. The semantic layer needs to understand this and map it into the semantics above.
🟢 Unstructured data: Most data is unstructured and resides all over the place, often in object stores or in databases as part of structured tables. There is a lot of information in unstructured data that the semantic layer needs to map -- and for that you need extraction, resolution, and a number of other techniques depending on the modality of the data.
🟤 You need to index the data
🟢 You will need to index all of this to make your data discoverable and retrievable. And this needs to scale.
🟢 You need tight integration between vectors, the ontology/knowledge graph, and keywords to make this seamless.
These are 4 key components that are all needed for you to have a true semantic layer. Thoughts? #knowledgegraph, #semanticlayer, #agent, #rag
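The "tight integration between vectors, ontology/knowledge graph and keywords" can be sketched with a deliberately tiny example. Everything below (the two documents, the 2-D vectors, the mini ontology) is invented for illustration; a real semantic layer would use an embedding model, a graph store, and a full-text index, but the composition of the three signals looks the same:

```python
import math

# Toy corpus: each document carries text (for keywords), an embedding vector
# (for similarity), and concept tags linked to a mini ontology.
DOCS = {
    "doc1": {"text": "invoice payment terms", "vec": [1.0, 0.0], "concepts": {"Invoice"}},
    "doc2": {"text": "supplier contract clause", "vec": [0.0, 1.0], "concepts": {"Contract"}},
}

# Mini ontology: each concept may point to a broader concept.
ONTOLOGY = {
    "Invoice": {"broader": "FinancialDocument"},
    "Contract": {"broader": "LegalDocument"},
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def matches_concept(doc, concept):
    """Match a concept tag directly or via the ontology's 'broader' relation."""
    return any(
        c == concept or ONTOLOGY.get(c, {}).get("broader") == concept
        for c in doc["concepts"]
    )

def search(query_vec, keyword=None, concept=None):
    """Rank by vector similarity, filtered by keyword and/or ontology concept."""
    results = []
    for doc_id, doc in DOCS.items():
        if keyword is not None and keyword not in doc["text"]:
            continue
        if concept is not None and not matches_concept(doc, concept):
            continue
        results.append((doc_id, cosine(query_vec, doc["vec"])))
    return sorted(results, key=lambda r: -r[1])

# A query vector close to doc1, restricted to the broader class "FinancialDocument":
print(search([1.0, 0.0], concept="FinancialDocument"))
```

Note how the ontology does work the vectors cannot: the query asks for `FinancialDocument`, a concept no document is tagged with directly, and the `broader` edge resolves it to `Invoice`.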
·linkedin.com·
Introducing RAG-Anything: All-in-One RAG System
🚀 Introducing RAG-Anything: All-in-One RAG System! ⚡ LightRAG + Multi-Modal = RAG-Anything
🔗 Get started today: https://lnkd.in/gF3D8rnc
📦 Install: pip install raganything
No more switching between multiple tools or losing critical visual information! With RAG-Anything, you get ONE unified solution that understands your documents as completely as you do ✨
🌟 What makes RAG-Anything innovative:
- 🔄 End-to-End Multimodal Pipeline: Complete workflow from document ingestion and parsing to intelligent multimodal query answering.
- 📄 Universal Document Support: Seamless processing of PDFs, Office documents (DOC/DOCX/PPT/PPTX/XLS/XLSX), images, and diverse file formats.
- 🧠 Specialized Content Analysis: Dedicated processors for images, tables, mathematical equations, and heterogeneous content types.
- 🔗 Multimodal Knowledge Graph: Automatic entity extraction and cross-modal relationship discovery for enhanced understanding.
- ⚡ Adaptive Processing Modes: Flexible MinerU-based parsing or direct multimodal content injection workflows.
- 🎯 Hybrid Intelligent Retrieval: Advanced search capabilities spanning textual and multimodal content with contextual understanding.
💡 Well-suited for:
- 🎓 Academic research with complex documents
- 📋 Technical documentation processing
- 💼 Financial report analysis
- 🏢 Enterprise knowledge management
·linkedin.com·
Why AI Hallucinates: The Shallow Semantics Problem | LinkedIn
By J Bittner Part 1 in our 5-part series: From Hallucination to Reasoning—The Case for Ontology-Driven AI Welcome to “Semantically Speaking”—a new series on what makes AI systems genuinely trustworthy, explainable, and future-proof. This is Part 1 in a 5-part journey, exploring why so many AI system
·linkedin.com·
Towards Multi-modal Graph Large Language Model
Multi-modal graphs are everywhere in the digital world. Yet the tools used to understand them haven't evolved as much as one would expect. What if the same model could handle your social network analysis, molecular discovery, AND urban planning tasks? A new paper from Tsinghua University proposes Multi-modal Graph Large Language Models (MG-LLM) - a paradigm shift in how we process complex interconnected data that combines text, images, audio, and structured relationships. Think of it as ChatGPT for graphs, but, metaphorically speaking, with eyes, ears, and structural understanding. Their key insight? Treating all graph tasks as generative problems. Instead of training separate models for node classification, link prediction, or graph reasoning, MG-LLM frames everything as transforming one multi-modal graph into another. This unified approach means the same model that predicts protein interactions could also analyze social media networks or urban traffic patterns. What makes this particularly exciting is the vision for natural language interaction with graph data. Imagine querying complex molecular structures or editing knowledge graphs using plain English, without learning specialized query languages. The challenges remain substantial - from handling the multi-granularity of data (pixels to full images) to managing multi-scale tasks (entire graph input, single node output). But if successful, this could fundamentally change the level of graph-based insights across industries that have barely scratched the surface of AI adoption. ↓ 𝐖𝐚𝐧𝐭 𝐭𝐨 𝐤𝐞𝐞𝐩 𝐮𝐩? Join my newsletter with 50k+ readers and be the first to learn about the latest AI research: llmwatch.com 💡
·linkedin.com·
Gartner 2025 AI Hype Cycle: The focus is shifting from hype to foundational innovations
Gartner 2025 AI Hype Cycle: The focus is shifting from hype to foundational innovations. Knowledge Graphs are a key part of the shift, positioned on the Slope of Enlightenment.
By Haritha Khandabattu and Birgi Tamersoy: AI investment remains strong, but the focus is shifting from GenAI hype to foundational innovations like AI-ready data, AI agents, AI engineering and ModelOps. This research helps leaders prioritize high-impact, emerging AI techniques while navigating regulatory complexity and operational scaling. As Gartner notes, Generative AI capabilities are advancing at a rapid pace, and the tools that will become available over the next 2-5 years will be transformative. The rapid evolution of these technologies and techniques continues unabated, as does the corresponding hype, making this tumultuous landscape difficult to navigate. These conditions mean GenAI continues to be a top priority for the C-suite.
Weaving in another foundational concept, Systems of Intelligence, as coined by Geoffrey Moore and referenced by David Vellante and George Gilbert: Systems of Intelligence are the linchpin of modern enterprise architecture because [AI] agents are only as smart as the state of the business represented in the knowledge graph. If a platform controls that graph, it becomes the default policymaker for "why is this happening, what comes next, and what should we do?" For enterprises, there is only one feasible answer to the "who controls the graph" question: you should. To do that, start working on your enterprise knowledge graph today, if you haven't already. And if you are looking for the place to learn, network, and share experience and knowledge, look no further 👇
Connected Data London 2025 has been announced! 20-21 November, Leonardo Royal Hotel London Tower Bridge. Join us for all things #KnowledgeGraph #Graph #analytics #datascience #AI #graphDB #SemTech
🎟️ Ticket sales are open. Benefit from early bird prices with discounts up to 30%. 2025.connected-data.london
📋 Call for submissions is open. Check topics of interest, submission process and evaluation criteria: https://lnkd.in/dhbAeYtq
📺 Sponsorship opportunities are available. Maximize your exposure with early onboarding. Contact us at info@connected-data.london for more.
·linkedin.com·
Knowledge graphs as the foundation for Systems of Intelligence
In this Breaking Analysis we examine how Snowflake moves beyond walled gardens and enters a world where it faces new competitive dynamics from SaaS vendors like Salesforce, ServiceNow, Palantir and, of course, Databricks.
Beyond Walled Gardens: How Snowflake Navigates New Competitive Dynamics
·thecuberesearch.com·
AI Engineer World's Fair 2025: GraphRAG Track Spotlight
📣 AI Engineer World's Fair 2025: GraphRAG Track Spotlight! 🚀 So grateful to have hosted the GraphRAG Track at the Fair. The sessions were great, highlighting the depth and breadth of graph thinking for AI. Shoutouts to...
- Mitesh Patel: "HybridRAG", a fusion of graph and vector retrieval designed to master complex data interpretation and specialized terminology for question answering
- Chin Keong Lam: "Wisdom Discovery at Scale", using Knowledge Augmented Generation (KAG) in a multi-agent system with n8n
- Sam Julien: "When Vectors Break Down", carefully explaining how a graph-based RAG architecture achieved a whopping 86.31% accuracy for dense enterprise knowledge
- Daniel Chalef: "Stop Using RAG as Memory", exploring temporally-aware knowledge graphs, built with the open-source Graphiti framework, to provide precise, context-rich memory for agents
- Ola Mabadeje: "Witness the power of Multi-Agent AI & Network Knowledge Graphs", showing dramatic improvements in ticket resolution efficiency and overall execution quality in network operations
- Thomas Smoker: "Beyond Documents", casually mentioning scraping the entire internet to distill a knowledge graph focused on legal agents
- Mark Bain, hosting an excellent Agentic Memory with Knowledge Graphs lunch & learn, with expansive thoughts and demos from Vasilije Markovic, Daniel Chalef and Alexander Gilmore
Also, of course, huge congrats to Shawn swyx W and Benjamin Dunphy on an excellent conference. 🎩 #graphrag Neo4j AI Engineer
·linkedin.com·