Transforming RDF Graphs to Property Graphs using Standardized Schemas
Provenance-Enabled Explainable AI
Credible Intervals for Knowledge Graph Accuracy Estimation
Knowledge Graphs (KGs) are widely used in data-driven applications and downstream tasks, such as virtual assistants, recommendation systems, and semantic search. The accuracy of KGs directly...
Leveraging Knowledge Graphs and Large Language Models to Track and...
This study addresses the challenges of tracking and analyzing students' learning trajectories, particularly the issue of inadequate knowledge coverage in course assessments. Traditional assessment...
AutoSchemaKG: Autonomous Knowledge Graph Construction through...
We present AutoSchemaKG, a framework for fully autonomous knowledge graph construction that eliminates the need for predefined schemas. Our system leverages large language models to simultaneously...
Structural Alignment IV: Implications and Future Directions
Structural Alignment has implications not just for graph-based AI / machine learning, but also for how we design graphs so that they can be…
Structural Alignment III: Learning on Graphs as a Function of Structure
Learning on knowledge graphs can be expressed directly in terms of graph structure. Here’s how, and how you can do it yourself.
Structural Alignment II: Understanding Graph Structure
An overview of how the structure of knowledge graphs can be measured, based on my thesis “Structural Alignment in Link Prediction”.
Structural Alignment I: Introduction to Knowledge Graphs and Link Prediction
An overview of knowledge graphs and the link prediction task, based on my thesis “Structural Alignment in Link Prediction”.
RePlanIT Ontology for Digital Product Passports of ICT: Laptops and Data Servers
New workbook for the Ontology Engineering textbook
The first part of my textbook revisions is complete and there’s now a first version of the accompanying workbook (also available here)! It’s designed partially to not substantially increase the pri…
Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine
In this position paper, "Integrating Knowledge Graphs with Symbolic AI: The Path to Interpretable Hybrid AI Systems in Medicine", my colleagues at the L3S Research Center and TIB – Leibniz-Informationszentrum Technik und Naturwissenschaften und Universitätsbibliothek, led by Maria-Esther Vidal, have nicely laid out research challenges on the way to interpretable hybrid AI systems in medicine. However, I think the conceptual framework is applicable well beyond medicine.
For example, my former colleagues and PhD students at eccenca are working on operationalizing Neuro-Symbolic AI for Enterprise Knowledge Management with eccenca's Corporate Memory. The paper outlines a compelling architecture for combining sub-symbolic models (e.g., deep learning) with symbolic reasoning systems to enable AI that is interpretable, robust, and aligned with human values. eccenca implements these principles at scale in real-world industrial settings through its neuro-symbolic Enterprise Knowledge Graph platform, Corporate Memory:
1. Symbolic Foundation via Semantic Web Standards - Corporate Memory is grounded in W3C standards (RDF, RDFS, OWL, SHACL, SPARQL), enabling formal knowledge representation, inferencing, and constraint validation. This makes it possible to encode domain ontologies, business rules, and data governance policies in a machine-interpretable and human-verifiable manner.
2. Integration of Sub-symbolic Components - Corporate Memory integrates LLMs and ML models for tasks such as schema matching, natural language interpretation, entity resolution, and ontology population. These are linked to the symbolic layer via mappings and annotations, ensuring traceability and explainability.
3. Neuro-Symbolic Interfaces for Hybrid Reasoning - Hybrid workflows in which symbolic constraints (e.g., SHACL shapes) guide LLM-based data enrichment: LLMs suggest schema alignments, which are verified against ontological axioms, while graph embeddings and path-based querying power semantic search and similarity (see the sketch after this list).
4. Human-in-the-loop Interactions - Domain experts interact through low-code interfaces and semantic UIs that allow inspection, validation, and refinement of both the symbolic and neural outputs, promoting human oversight and continuous improvement.
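To make point 3 concrete, here is a minimal sketch of SHACL-gated LLM enrichment, assuming rdflib and pySHACL are installed; the namespace, shape, and stubbed LLM call are hypothetical illustrations, not eccenca's actual implementation:

```python
# Minimal sketch: accept LLM-proposed triples only if they pass SHACL
# validation. Assumes rdflib and pySHACL; the shape, namespace, and the
# llm_suggest_triples() stub are hypothetical illustrations.
from rdflib import Graph, Namespace, RDF
from pyshacl import validate

EX = Namespace("http://example.org/")  # hypothetical namespace

# Governance shape: every ex:Employee needs exactly one IRI-valued ex:worksFor.
shapes = Graph().parse(data="""
    @prefix sh: <http://www.w3.org/ns/shacl#> .
    @prefix ex: <http://example.org/> .
    ex:EmployeeShape a sh:NodeShape ;
        sh:targetClass ex:Employee ;
        sh:property [ sh:path ex:worksFor ;
                      sh:minCount 1 ; sh:maxCount 1 ;
                      sh:nodeKind sh:IRI ] .
""", format="turtle")

def llm_suggest_triples(text: str):
    """Stub for an LLM extraction call; returns candidate triples."""
    return [(EX.alice, RDF.type, EX.Employee),
            (EX.alice, EX.worksFor, EX.eccenca)]

data = Graph()
for triple in llm_suggest_triples("Alice works for eccenca."):
    data.add(triple)

# Keep the enrichment only if it conforms to the governance shapes.
conforms, _, report = validate(data, shacl_graph=shapes)
print("accepted" if conforms else f"rejected:\n{report}")
```

The same gate generalizes: any sub-symbolic suggestion (a schema alignment, an entity-resolution merge) can be checked against the symbolic layer before it enters the graph.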
Such an approach can power industrial applications, e.g. digital thread integration in manufacturing, compliance automation in pharma and finance, and, more generally, cross-domain interoperability in data mesh architectures. Corporate Memory is a practical instantiation of neuro-symbolic AI that meets industrial-grade requirements for governance, scalability, and explainability – key tenets of Human-Centric AI. Check it out here: https://lnkd.in/evyarUsR
#NeuroSymbolicAI #HumanCentricAI #KnowledgeGraphs #EnterpriseArchitecture #ExplainableAI #SemanticWeb #LinkedData #LLM #eccenca #CorporateMemory #OntologyDrivenAI #AI4Industry
CTINexus: Automatic Cyber Threat Intelligence Knowledge Graph...
Textual descriptions in cyber threat intelligence (CTI) reports, such as security articles and news, are rich sources of knowledge about cyber threats, crucial for organizations to stay informed...
A Knowledge Graph Approach for the Standardization, Integration and Exploitation of Carbon Emissions Data
The European Union's (EU) Carbon Border Adjustment Mechanism (CBAM) aims to prevent carbon leakage and accelerate global decarbonization by aligning carbon
HippoRAG takes cues from the brain to improve LLM retrieval
HippoRAG is a technique inspired from the interactions between the cortex and hippocampus to improve knowledge retrieval for large language models (LLM).
OWLstrict: A Constrained OWL Fragment to Avoid Ambiguities for Knowledge Graph Practitioners
OntoAligner: A Comprehensive Modular and Robust Python Toolkit for...
Ontology Alignment (OA) is fundamental for achieving semantic interoperability across diverse knowledge systems. We present OntoAligner, a comprehensive, modular, and robust Python toolkit for...
SPARQLLM
Procedural Knowledge Ontology (PKO)
GitHub - perks-project/pk-ontology: Repository for the Procedural Knowledge Ontology (PKO)
Trip Report: ESWC 2025
Last week, I was happy to be able to attend the 22nd European Semantic Web Conference. I’m a regular at this conference and it’s great to see many friends and colleagues as well as meet…
Multimodal for Knowledge Graphs (MM4KG)
A Practical Implementation
Optimizing the Interface Between Knowledge Graphs and LLMs for...
Integrating Large Language Models (LLMs) with Knowledge Graphs (KGs) results in complex systems with numerous hyperparameters that directly affect performance. While such systems are increasingly...
AutoSchemaKG: Building Billion-Node Knowledge Graphs Without Human Schemas
👉 Why This Matters
Traditional knowledge graphs face a paradox: they require expert-crafted schemas to organize information, creating bottlenecks for scalability and adaptability. This limits their ability to handle dynamic real-world knowledge or cross-domain applications effectively.
👉 What Changed
AutoSchemaKG eliminates manual schema design through three innovations:
1. Dynamic schema induction: LLMs automatically create conceptual hierarchies while extracting entities/events
2. Event-aware modeling: Captures temporal relationships and procedural knowledge missed by entity-only approaches
3. Multi-level conceptualization: Organizes instances into semantic categories through abstraction layers
The system processed 50M+ documents to build ATLAS - a family of KGs with:
- 900M+ nodes (entities/events/concepts)
- 5.9B+ relationships
- 95% alignment with human-created schemas (zero manual intervention)
👉 How It Works (a rough code sketch follows the steps below)
1. Triple extraction pipeline:
- LLMs identify entity-entity, entity-event, and event-event relationships
- Text is processed at the document level rather than the sentence level to preserve context
2. Schema induction:
- Automatically groups instances into conceptual categories
- Creates hierarchical relationships between specific facts and abstract concepts
3. Scale optimization:
- Handles web-scale corpora through GPU-accelerated batch processing
- Maintains semantic consistency across 3 distinct domains (Wikipedia, academic papers, Common Crawl)
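For illustration, here is a toy sketch of the extract-then-conceptualize flow, with both LLM calls stubbed out; the documents, triples, and concept labels are invented stand-ins, not ATLAS data:

```python
# Toy sketch of AutoSchemaKG's two LLM-driven steps: document-level triple
# extraction, then schema induction that lifts instances to concepts.
# Both functions are stubs; a real pipeline would call an LLM in each.
from collections import defaultdict

def extract_triples(document: str):
    """Stub: an LLM emits entity-entity, entity-event, and event-event
    triples over the whole document, not sentence by sentence."""
    return [("Marie Curie", "received", "Nobel Prize in Physics"),
            ("Nobel Prize in Physics", "awarded_in", "1903")]

def conceptualize(term: str) -> str:
    """Stub: an LLM maps each instance to an abstract concept, yielding
    the multi-level conceptual hierarchy."""
    return {"Marie Curie": "Scientist",
            "Nobel Prize in Physics": "Award",
            "1903": "Year"}.get(term, "Entity")

graph = defaultdict(set)
for s, p, o in extract_triples("..."):
    graph[s].add((p, o))
    # Schema induction: attach instance-to-concept edges alongside facts.
    graph[s].add(("is_a", conceptualize(s)))
    graph[o].add(("is_a", conceptualize(o)))

for node, edges in sorted(graph.items()):
    print(node, sorted(edges))
```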
👉 Proven Impact
- Boosts multi-hop QA accuracy by 12-18% over state-of-the-art baselines
- Improves LLM factuality by up to 9% on specialized domains like medicine and law
- Enables complex reasoning through conceptual bridges between disparate facts
👉 Key Insight
The research demonstrates that billion-scale KGs with dynamic schemas can effectively complement parametric knowledge in LLMs when they reach critical mass (1B+ facts). This challenges the assumption that retrieval augmentation needs domain-specific tuning to be effective.
Question for Discussion
As autonomous KG construction becomes viable, how should we rethink the role of human expertise in knowledge representation? Should curation shift from schema design to validation and ethical oversight?
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through Evidence-based distillation and Graph-based structuring
Small Models, Big Knowledge: How DRAG Bridges the AI Efficiency-Accuracy Gap
👉 Why This Matters
Modern AI systems face a critical tension: large language models (LLMs) deliver impressive knowledge recall but demand massive computational resources, while smaller models (SLMs) struggle with factual accuracy and "hallucinations." Traditional retrieval-augmented generation (RAG) systems amplify this problem by requiring constant updates to vast knowledge bases.
👉 The Innovation
DRAG introduces a novel distillation framework that transfers RAG capabilities from LLMs to SLMs through two key mechanisms:
1. Evidence-based distillation: Filters and ranks factual snippets from teacher LLMs
2. Graph-based structuring: Converts retrieved knowledge into relational graphs to preserve critical connections
This dual approach reduces model size requirements by 10-100x while improving factual accuracy by up to 27.7% compared to prior methods like MiniRAG.
👉 How It Works
1. Evidence generation: A large teacher LLM produces multiple context-relevant facts
2. Semantic filtering: Combines cosine similarity and LLM scoring to retain top evidence
3. Knowledge graph creation: Extracts entity relationships to form structured context
4. Distilled inference: SLMs generate answers using both filtered text and graph data
The process mimics how humans combine raw information with conceptual understanding, enabling smaller models to "think" like their larger counterparts without the computational overhead; a simplified sketch of the pipeline appears below.
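A simplified sketch of steps 1-3, under stated assumptions: TF-IDF cosine similarity stands in for the paper's embedding-plus-LLM scoring, the evidence list plays the role of teacher-LLM output, and the triples and prompt format are illustrative only:

```python
# Simplified DRAG-style flow: rank teacher evidence by similarity to the
# query, extract a small relation graph, and hand both to a small model.
# TF-IDF is a stand-in for the paper's embedding + LLM scoring.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "What causes tides?"
evidence = [  # stand-in for facts produced by the teacher LLM
    "Tides are caused by the gravitational pull of the Moon and Sun.",
    "The Moon orbits the Earth roughly every 27 days.",
    "Ocean salinity varies by region.",
]

# Semantic filtering: keep the evidence most similar to the query.
vec = TfidfVectorizer().fit(evidence + [query])
sims = cosine_similarity(vec.transform([query]), vec.transform(evidence))[0]
top_evidence = [e for _, e in sorted(zip(sims, evidence), reverse=True)[:2]]

# Graph-based structuring: hypothetical triples extracted from top evidence.
triples = [("tides", "caused_by", "gravitational pull"),
           ("gravitational pull", "exerted_by", "Moon and Sun")]

# Distilled inference: the SLM prompt combines filtered text and the graph.
prompt = f"Context: {top_evidence}\nGraph: {triples}\nQ: {query}\nA:"
print(prompt)
```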
👉 Privacy Bonus
DRAG adds a privacy layer by:
- Sanitizing queries locally before cloud processing
- Returning only de-identified knowledge graphs
Tests show a 95.7% reduction in potential personal-data leakage while maintaining answer quality; a toy illustration of the sanitization step follows.
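The paper's exact sanitization method is not reproduced here; as a toy illustration, a local regex-based scrub might look like the following, where the patterns and placeholder tokens are assumptions:

```python
# Hypothetical local query sanitization: replace obvious personal
# identifiers before the query is sent to a cloud model. The patterns
# and placeholder tokens are illustrative assumptions, not DRAG's method.
import re

PII_PATTERNS = {
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "<EMAIL>",        # email addresses
    r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b": "<PHONE>",  # US-style phone numbers
}

def sanitize(query: str) -> str:
    """Scrub personal identifiers locally, before any cloud round-trip."""
    for pattern, token in PII_PATTERNS.items():
        query = re.sub(pattern, token, query)
    return query

print(sanitize("Email jane.doe@example.com or call 555-123-4567"))
# -> "Email <EMAIL> or call <PHONE>"
```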
👉 Why It’s Significant
This work addresses three critical challenges simultaneously:
- Makes advanced RAG capabilities accessible on edge devices
- Reduces hallucination rates through structured knowledge grounding
- Preserves user privacy in cloud-based AI interactions
The GitHub repository provides full implementation details, enabling immediate application in domains like healthcare diagnostics, legal analysis, and educational tools where accuracy and efficiency are non-negotiable.
Semantically Composable Architectures
I'm happy to share the draft of the "Semantically Composable Architectures" mini-paper.
It is the culmination of approximately four years' work, which began with Coreless Architectures and has now evolved into something much bigger.
LLMs are impressive, but a real breakthrough will occur once we surpass the cognitive capabilities of a single human brain.
Enabling autonomous large-scale system reverse engineering and autonomous large-scale transformation with minimal to no human involvement, while keeping the results understandable to humans who choose to inspect them, is a central pillar of making truly groundbreaking changes.
We hope the ideas we shared will be beneficial to humanity and advance our civilization further.
It is not final and will require some clarification and improvements, but the key concepts are present. Happy to hear your thoughts and feedback.
Some of these concepts underpin the design of the Product X system.
Part of the core team + external contribution:
Andrew Barsukov Andrey Kolodnitsky Sapta Girisa N Keith E. Glendon Gurpreet Sachdeva Saurav Chandra Mike Diachenko Oleh Sinkevych
Leveraging Large Language Models for Realizing Truly Intelligent...
The number of published scholarly articles is growing at a significant rate, making scholarly knowledge organization increasingly important. Various approaches have been proposed to organize...
Building and navigating attribution graphs for Large Language Models
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Open-sourcing circuit tracing tools
Semantic Spacetime 1: The Shape of Knowledge
Spacetime and information both have the basics of a geometry
A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution
🏯🏇 A-MEM Transforms AI Agent Memory with Zettelkasten Method, Atomic Notes, Dynamic Linking & Continuous Evolution!
This Novel Memory fixes rigid structures with adaptable, evolving, and interconnected knowledge networks, delivering 2x performance in complex reasoning tasks.
This is what I learned:
》 Why Traditional Memory Falls Short
Most AI agents today rely on simplistic storage and retrieval but break down when faced with complex, multi-step reasoning tasks.
✸ Common Limitations:
☆ Fixed schemas: Conventional memory systems require predefined structures that limit flexibility.
☆ Limited adaptability: When new information arises, old memories remain static and disconnected, reducing an agent’s ability to build on past experiences.
☆ Ineffective long-term retention: AI agents often struggle to recall relevant past interactions, leading to redundant processing and inefficiencies.
》 A-MEM: Atomic Notes and Dynamic Linking
A-MEM organizes knowledge in a way that mirrors how humans create and refine ideas over time.
✸ How it Works:
☆ Atomic notes: Information is broken down into small, self-contained knowledge units, ensuring clarity and easy integration with future knowledge.
☆ Dynamic linking: Instead of relying on static categories, A-MEM automatically creates connections between related knowledge, forming a network of interrelated ideas.
》 Proven Performance Advantage
A-MEM delivers measurable improvements.
✸ Empirical results demonstrate:
☆ Over 2x performance improvement in complex reasoning tasks, where AI must synthesize multiple pieces of information across different timeframes.
☆ Superior efficiency across top foundation models, including GPT, Llama, and Qwen—proving its versatility and broad applicability.
》 Inside A-MEM (a toy code sketch follows this section)
✸ Note Construction:
☆ AI-generated structured notes that capture essential details and contextual insights.
☆ Each memory is assigned metadata, including keywords and summaries, for faster retrieval.
✸ Link Generation:
☆ The system autonomously connects new memories to relevant past knowledge.
☆ Relationships between concepts emerge naturally, allowing AI to recognize patterns over time.
✸ Memory Evolution:
☆ Older memories are continuously updated as new insights emerge.
☆ The system dynamically refines knowledge structures, mimicking the way human memory strengthens connections over time.
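Here is a toy sketch of atomic notes with dynamic linking and memory evolution, using simple keyword overlap in place of A-MEM's LLM-generated metadata and embedding-based linking; the Note/Memory classes and the overlap threshold are hypothetical:

```python
# Toy A-MEM-style store: atomic notes, dynamic linking on shared keywords,
# and "evolution" in the sense that old notes gain links as new ones arrive.
# Keyword overlap stands in for A-MEM's LLM metadata and embeddings.
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    keywords: set = field(default_factory=set)
    links: list = field(default_factory=list)  # indices of related notes

class Memory:
    def __init__(self, min_overlap: int = 1):
        self.notes: list[Note] = []
        self.min_overlap = min_overlap

    def add(self, text: str) -> Note:
        note = Note(text=text, keywords=set(text.lower().split()))
        # Dynamic linking: connect the new note to related older notes.
        for i, other in enumerate(self.notes):
            if len(note.keywords & other.keywords) >= self.min_overlap:
                note.links.append(i)
                # Memory evolution: the older note is updated in place.
                other.links.append(len(self.notes))
        self.notes.append(note)
        return note

mem = Memory()
mem.add("tides follow the moon")
note = mem.add("the moon orbits earth")
print(note.links)  # -> [0]: linked via the shared keywords "the" and "moon"
```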
A Pilot Empirical Study on When and How to Use Knowledge Graphs as...
The integration of Knowledge Graphs (KGs) into the Retrieval Augmented Generation (RAG) framework has attracted significant interest, with early studies showing promise in mitigating...