UB to offer a fully online graduate degree in ontology
The applied ontology degree will prepare students from around the world for work in this rapidly growing interdisciplinary branch of information science.
Reusing Ontologies Makes Your Life Easier
Data contains tremendous value. Unfortunately, it is often used only in a single application, even though it would be useful in other contexts as well. Sharing data, however, is not a trivial task.
To share data effectively within an organization, we need to align our data with a common model. The first idea that comes to mind when people hear about shared data models (also known as ontologies) is often to quickly develop a new one from scratch. That allows for a fast start, and it usually ends in slow but inevitable chaos.
Ontologies aim to provide well-described and carefully disambiguated meaning. They are about finding consensus, which is a process rather than a quick win. In that regard, reusing standardized ontologies is tremendously helpful, for three reasons:
(1.) Because they are the product of a collaborative process among experts, many potential pitfalls have already been considered and eliminated. They are established and widely used.
(2.) They are often abstract enough to be adaptable to more specific domains. Reused ontologies are not a dead end; they are a starting point for making the data your own.
(3.) [And this is my favorite:] They are backed by one or more established organizations. It is often much easier to convince people to adopt a standard pushed by Google or by the people who standardize the web itself than to push your own definitions.
That does not mean there is never a need to create your own ontologies. However, your use case is likely not as unique as you think, and it is often enough to extend an existing ontology to your needs or to use one as a blueprint.
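To make the reuse-first approach concrete, here is a minimal sketch in Python with rdflib; the ex: namespace and the Ontologist subclass are illustrative assumptions, not part of any standard.

```python
# Minimal sketch (illustrative names): reuse schema.org terms with rdflib
# instead of inventing a vocabulary from scratch.
from rdflib import RDF, RDFS, Graph, Literal, Namespace, URIRef

SCHEMA = Namespace("https://schema.org/")
EX = Namespace("https://example.org/")  # your own extension namespace

g = Graph()
g.bind("schema", SCHEMA)
g.bind("ex", EX)

# Reuse the standard class and property...
alice = URIRef("https://example.org/people/alice")
g.add((alice, RDF.type, SCHEMA.Person))
g.add((alice, SCHEMA.name, Literal("Alice Example")))

# ...and extend only where your domain really is special. Declaring
# ex:Ontologist as a subclass of schema:Person means any tool that
# understands schema.org still understands your data.
g.add((EX.Ontologist, RDFS.subClassOf, SCHEMA.Person))
g.add((alice, RDF.type, EX.Ontologist))

print(g.serialize(format="turtle"))
```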
Want to hear more about how graphs can solve your data problems? Join our next webinar: https://lnkd.in/e6JgQzhP
Knowledge Graphs in the Era of Large Language Models (KGELL)
Knowledge Graphs (KGs) have gained attention due to their ability to represent structured and interlinked information. KGs represent knowledge in the form of relations between entities, referred to as...
Property Graph Standards: State of the Art and Open Challenges
In the paper 'Property Graph Standards: State of the Art and Open Challenges' (VLDB 2025), Haridimos Kondylakis and his colleagues take an in-depth look at the current state of property graph standards, which form the basis of many modern graph databases.
While property graphs have become a popular way to represent complex, connected data (think nodes and edges with flexible key–value properties), the ecosystem is still fragmented. Each vendor or tool implements its own version of 'the standard', which makes interoperability, schema definition and query translation difficult.
The authors review the major initiatives to standardise property graphs and demonstrate the current situation: efforts from LDBC, GQL and ISO are advancing the field, but challenges remain. The biggest gaps lie in schema constraints, data validation, and cross-system compatibility — all of which are crucial if graph systems are to become integral components of enterprise data architectures.
The paper calls for a unified model in which graph structure, constraints, and semantics are shared across tools and databases. This isn't just academic: it's about ensuring that graph data can be trusted, stays portable, and can be used at scale.
In simple terms, property graphs are maturing. The next step is not just to connect data, but to agree on how we define, validate and exchange those connections.
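For readers new to the model, here is a tiny sketch of what "nodes and edges with flexible key–value properties" means in practice (plain Python, purely illustrative); the absence of any enforced schema in it is exactly the standardisation gap the paper describes.

```python
# Illustrative only: a property graph reduced to plain data structures.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    labels: set
    properties: dict = field(default_factory=dict)

@dataclass
class Edge:
    source: str
    target: str
    type: str
    properties: dict = field(default_factory=dict)

alice = Node("n1", {"Person"}, {"name": "Alice"})
acme = Node("n2", {"Company"}, {"name": "Acme"})
works = Edge("n1", "n2", "WORKS_AT", {"since": 2021, "role": "engineer"})

# Nothing constrains these property maps: one system may require "since"
# to be an integer, another a date string -- hence the call for shared
# schema constraints and validation.
```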
Article: https://lnkd.in/eva_xSsT
Two Meanings of “Semantic Layer” and Why Both Matter in the Age of AI
"Semantic layer” means different things depending on who you ask.
In my latest newsletter, published on Medium first this time, I look at the two definitions and how they can work together.
Are you using a semantic layer? If so, which type?
#SemanticLayer #DataGovernance #AnalyticsEngineering #DataandAI
Open-source Graph Explorer v2.4.0 is now released, and it includes a new SPARQL editor
Calling all Graph Explorers! 📣
I'm excited to share that open-source Graph Explorer v2.4.0 is now released, and it includes a new SPARQL editor!
Release notes: https://lnkd.in/ePhwPQ5W
This means that, in addition to using Graph Explorer as a powerful no-code exploration tool, you can now start your visualization and exploration by writing queries directly in SPARQL (Gremlin & openCypher too for Property Graph workloads).
This makes Graph Explorer an ideal companion for Amazon Neptune, as it supports connections via all three query languages, but you can connect to other graph databases that support these languages too.
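As a flavour of what the new editor accepts, here is a minimal sketch that runs the same kind of SPARQL query programmatically with SPARQLWrapper; the endpoint URL is a placeholder, not a real instance.

```python
# Minimal sketch: the kind of SPARQL you could paste into the new editor,
# run here via SPARQLWrapper. The endpoint URL is a placeholder.
from SPARQLWrapper import JSON, SPARQLWrapper

endpoint = SPARQLWrapper("https://your-neptune-endpoint:8182/sparql")
endpoint.setQuery("""
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 25
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["p"]["value"], row["o"]["value"])
```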
🔹 Run it anywhere (it's open source): https://lnkd.in/ehbErxMV
🔹 Access through the AWS console in a Neptune graph notebook: https://lnkd.in/gZ7CJT8D
Special thanks go to Kris McGinnes for his efforts.
#AWS #AmazonNeptune #GraphExplorer #SPARQL #Gremlin #openCypher #KnowledgeGraph #OpenSource #RDF #LPG
AIOTI WG Standardisation Focus Group on Semantic Interoperability has prepared a report on Data to Ontology Mapping. A key challenge people face when using ontologies is […]
Tree-KG: An Expandable Knowledge Graph Construction Framework for Knowledge-intensive Domains
Songjie Niu, Kaisen Yang, Rui Zhao, Yichao Liu, Zonglin Li, Hongning Wang, Wenguang Chen. Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2025.
A Survey on Temporal Knowledge Graph: Representation Learning and...
Knowledge graphs have garnered significant research attention and are widely used to enhance downstream applications. However, most current studies mainly focus on static knowledge graphs, whose...
Time and space in the Unified Knowledge Graph environment
On Oct 2, 2025, Lyubo Blagoev published 'Time and space in the Unified Knowledge Graph environment' (PDF available on ResearchGate).
Cognee - AI Agents with LangGraph + cognee: Persistent Semantic Memory
Build AI agents with LangGraph and cognee: persistent semantic memory across sessions for cleaner context and higher accuracy. See the demo—get started now.
Transforming SHACL Shape Graphs into HTML Applications for Populating Knowledge Graphs
Creating applications to manually populate and modify knowledge graphs is a complex task. In this paper, we propose a novel approach for designing user interfaces for this purpose, based on existing SHACL constraint files. Our method consists of taking SHACL constraints and creating multi-form web applications. The novelty of the approach is to treat the editing of knowledge graphs via multi-form application interaction as a business process. This enables user interface modeling, such as modeling of application control flows by integrating ontology-based business process management components. Additionally, because our application models are themselves knowledge graphs, we demonstrate how they can leverage OWL reasoning to verify logical consistency and improve the user experience.
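Not the paper's method, but for readers unfamiliar with SHACL, here is a minimal sketch of the kind of constraint file it starts from, validated with pyshacl; all names are made-up illustrations.

```python
# Illustrative sketch of a SHACL shape like those the paper turns into
# web forms, validated with pyshacl. All names are made up.
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path ex:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;   # a form generator would render this as a
    ] .                   # required text field
"""

data_ttl = """
@prefix ex: <http://example.org/> .
ex:alice a ex:Person .    # missing ex:name -> one violation
"""

conforms, _, report = validate(
    data_ttl,
    shacl_graph=shapes_ttl,
    data_graph_format="turtle",
    shacl_graph_format="turtle",
)
print(conforms)  # False
print(report)
```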
How to achieve logical inference performantly on huge data volumes
Lots of people are talking about semantic layers. Okay, welcome to the party! The big question in our space is how to achieve logical inference performantly on huge data volumes, given the inherent problems of combinatorial explosion that search algorithms (on which inference algorithms are based) have always confronted. After all, semantic layers are about offering inference services, the very services Edgar Codd envisioned DBMSes eventually supporting in the first paper on the relational model.
So what are the leading approaches in terms of performance?
1. GPU Datalog
2. High-speed OWL reasoners like RDFox
3. Rete networks like Sparkling Logic's Rete-NT
4. High-speed FOL provers like Vampire
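To make the inference problem concrete, here is the classic toy example every Datalog engine starts from: transitive closure by naive fixpoint iteration (plain Python, purely illustrative). The join inside the loop is what GPU Datalog parallelizes and what semi-naive evaluation prunes.

```python
# Toy Datalog: reachable(X,Y) :- edge(X,Y).
#              reachable(X,Y) :- reachable(X,Z), edge(Z,Y).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

reachable = set(edges)
while True:
    derived = {(x, t) for (x, z) in reachable
                      for (h, t) in edges if z == h}
    if derived <= reachable:  # fixpoint: no new facts
        break
    reachable |= derived

print(sorted(reachable))
# The join in the loop body is where the combinatorial explosion lives;
# semi-naive evaluation and GPU parallelism attack exactly this step.
```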
Let's get down to brass tacks. RDFox posts some impressive benchmarks, but they aren't exactly obsoleting GPU Datalog, and I haven't seen any good data on RDFox vs Relational AI. If you have benchmarks on that, I'd love to see them. Rete-NT and RDFox are heavily proprietary, so understanding how the performance has been achieved is not really possible for the broader community beyond these vendors' consultants. And RDFox is now owned by Samsung, further complicating the picture.
That leaves us with the open-source GPU Datalogs and high-speed FOL provers. That's what's worth studying right now in semantic layers, not engaging in dogmatic debates between relational model, property graph model, RDF, and "name your emerging data model." Performance has ALWAYS been the name of the game in automated theorem proving. We still struggle to handle inference on large datasets. We need to quit focusing on non-issues and work to streamline existing high-speed inference methods for business usage. GPU Datalog on CUDA seems promising. I imagine the future will bring further optimizations.
Efficient and Transferable Agentic Knowledge Graph RAG via Reinforcement Learning
KG-R1: Why Knowledge Graph RAG Systems Are Too Expensive to Deploy (And How One Team Fixed It) ...
What if I told you that most knowledge graph systems require multiple AI models just to answer a single question? That's exactly the problem plaguing current KG-RAG deployments.
👉 The Cost Problem
Traditional knowledge graph retrieval systems use a pipeline approach: one model for planning, another for reasoning, a third for reviewing, and a fourth for responding. Each step burns through tokens and compute resources, making deployment prohibitively expensive for most organizations.
Even worse? These systems are built for specific knowledge graphs. Change your data source, and you need to retrain everything.
👉 A Single-Agent Solution
Researchers from MIT and IBM just published KG-R1, which replaces this entire multi-model pipeline with one lightweight agent that learns through reinforcement learning.
Here's the clever part: instead of hardcoding domain-specific logic, the system uses four simple, universal operations:
- Get relations from an entity
- Get entities from a relation
- Navigate forward through connections
- Navigate backward through connections
These operations work on any knowledge graph without modification.
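To make that concrete, here is a rough sketch of those operations over a knowledge graph stored as a set of triples; the function names are my paraphrase of the paper, not its exact API.

```python
# Rough sketch (my paraphrase, not the paper's API): four schema-agnostic
# operations over a KG stored as (head, relation, tail) triples.
triples = {
    ("marie_curie", "award", "nobel_prize"),
    ("pierre_curie", "award", "nobel_prize"),
    ("marie_curie", "field", "physics"),
}

def get_relations(entity):               # relations leaving an entity
    return {r for (h, r, t) in triples if h == entity}

def get_entities(relation):              # entity pairs joined by a relation
    return {(h, t) for (h, r, t) in triples if r == relation}

def navigate_forward(entity, relation):  # head --relation--> tails
    return {t for (h, r, t) in triples if h == entity and r == relation}

def navigate_backward(entity, relation): # heads --relation--> tail
    return {h for (h, r, t) in triples if t == entity and r == relation}

print(navigate_backward("nobel_prize", "award"))
# {'marie_curie', 'pierre_curie'} -- no schema knowledge required
```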
👉 The Results Are Striking
Using just a 3B parameter model, KG-R1:
- Matches accuracy of much larger foundation models
- Uses 60% fewer tokens per query than existing methods
- Transfers across different knowledge graphs without retraining
- Processes queries in under 7 seconds on a single GPU
The system learned to retrieve information strategically through multi-turn interactions, optimized end-to-end rather than stage-by-stage.
This matters because knowledge graphs contain some of our most valuable structured data - from scientific databases to legal documents. Making them accessible and affordable could unlock entirely new applications.
https://arxiv.org/abs/2509.26383v1
Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing
Knowledge graphs offer many advantages in the fields of materials science and manufacturing technology. But how can we explore knowledge graphs in a meaningful way?
The current article “Digital Products Based on Large Language Models for the Exploration of Graph-Databases in Materials Science and Manufacturing” shows what such a solution could look like: https://lnkd.in/ehiK5php.
Special thanks to Matthias Büschelberger as main author, and many thanks to all co-authors Konstantinos Tsitseklis, Lukas Morand, Anastasios Zafeiropoulos, Yoav Nahshon, Symeon Papavassiliou, and Dirk Helm for the great collaboration as part of our wonderful DiMAT project.
Would you like to see such a chatbot acting on a knowledge graph in action? Take a look at the video.
#datamanagement #FAIRData #dataspace #ontology #knowledgegraph #AI #materials #sustainability #digitalisation #InsideMaterial
Fraunhofer IWM, National Technical University of Athens
Your agents NEED a semantic layer 🫵
Traditional RAG systems embed documents, retrieve similar chunks, and feed them to LLMs. This works for simple Q&A. It fails catastrophically for agents that need to reason across systems.
Why? Because semantic similarity doesn't capture relationships.
Your vector database can tell you that two documents are "about bonds." It can't tell you that Document A contains the official pricing methodology, Document B is a customer complaint referencing that methodology, and Document C is an assembly guide that superseded both.
These relationships are invisible to embeddings.
What semantic layers provide:
Entity resolution across data silos. When "John Smith" in your CRM, "J. Smith" in email, and "john.smith@company.com" in logs all map to the same person node, agents can traverse the complete context.
Cross-domain entity linking through knowledge graphs. Products in your database connect to assembly guides, which link to customer reviews, which reference support tickets. Single-query traversal instead of application-level joins.
Provenance-tracked derivations. Every extracted entity, inferred relationship, and generated embedding maintains lineage to source data. Critical for regulatory compliance and debugging agent behavior.
Ontology-grounded reasoning. Financial instruments mapped to FIBO standards. Products mapped to domain taxonomies. Agents reason with structured vocabulary, not statistical word associations.
The technical implementation pattern:
Layer 1: Unified graph database supporting vector, structured, and semi-structured data types in single queries.
Layer 2: Entity extraction pipeline with coreference resolution and deduplication across sources.
Layer 3: Relationship inference and cross-domain linking using both explicit identifiers and contextual signals.
Layer 4: Separation of first-party data from derived artifacts with clear tagging for safe regeneration.
The result: Agents can traverse "Product → described_in → AssemblyGuide → improved_by → CommunityTip → authored_by → Expert" in a single graph query instead of five API calls with application-level joins.
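Purely as illustration (in-memory Python, not a specific graph database), here is that example path expressed as a single traversal:

```python
# Illustrative only: the example path above as one traversal over an
# in-memory edge list, instead of five API calls with app-level joins.
edges = [
    ("product:42", "described_in", "guide:7"),
    ("guide:7", "improved_by", "tip:19"),
    ("tip:19", "authored_by", "expert:jane"),
]

def hop(node, relation):
    return [t for (s, r, t) in edges if s == node and r == relation]

experts = [e
           for guide in hop("product:42", "described_in")
           for tip in hop(guide, "improved_by")
           for e in hop(tip, "authored_by")]
print(experts)  # ['expert:jane']
```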
Model Context Protocol is emerging as the open standard for semantic tool modeling. Not just describing APIs, but encoding what tools do, when to use them, and how outputs compose. This enables agents to discover and reason about capabilities dynamically.
The competitive moat isn't your model choice.
The moat is your knowledge graph architecture and the accumulated entity relationships that took years to build.
Can LLMs Really Build Knowledge Graphs We Can Trust?
🕸️ Can LLMs Really Build Knowledge Graphs We Can Trust?
There’s a growing trend: “Let’s use LLMs to build knowledge graphs.”
It sounds like the perfect shortcut - take unstructured data, prompt an LLM, and get a ready-to-use graph.
But… are we sure those graphs are trustworthy?
Before that, let’s pause for a second:
💡 Why build knowledge graphs at all?
Because they solve one of AI’s biggest weaknesses - lack of structure and reasoning.
Graphs let us connect facts, entities, and relationships in a way that’s transparent, queryable, and explainable. They give context, memory, and logic - everything that raw text or embeddings alone can’t provide.
Yet, here’s the catch when using LLMs to build them:
🔹 Short context window - LLMs can only “see” a limited amount of data at once, losing consistency across larger corpora.
🔹 Hallucinations - when context runs out or ambiguity appears, models confidently invent facts or relations that never existed.
🔹 Lack of provenance - LLM outputs don’t preserve why or how a link was made. Without traceability, you can’t audit or explain your graph.
🔹 Temporal instability - the same prompt can yield different graphs tomorrow, because stochastic generation ≠ deterministic structure.
🔹 Scalability & cost - large-scale graph construction requires persistent context and reasoning, which LLMs weren’t designed for.
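To make the provenance point above concrete, here is a minimal sketch of what "preserving why a link was made" could look like; the field names are illustrative assumptions, not a standard.

```python
# Minimal sketch (illustrative field names): every LLM-extracted triple
# carries enough metadata to audit or regenerate it later.
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenancedTriple:
    subject: str
    predicate: str
    obj: str
    source_doc: str     # where the claim came from
    extractor: str      # which model / prompt version produced it
    extracted_at: str   # when, so temporal drift stays visible

t = ProvenancedTriple(
    "acme_corp", "acquired", "widget_inc",
    source_doc="press_release_2024_03.pdf",
    extractor="gpt-4o/prompt-v3",
    extracted_at="2025-10-02T09:15:00Z",
)
# Without fields like these, a graph built by an LLM cannot answer
# "why do we believe this edge?"
```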
Building knowledge graphs isn’t just data extraction - it’s engineering meaning. It demands consistency, provenance, and explainability, not just text generation.
LLMs can assist in this process, but they shouldn’t be the architect.
The next step is finding a way to make graphs both trustworthy and instant - without compromising one for the other.
"GraphRAG chatter is louder than its footprint in production."
That line from Ben Lorica's piece on Gradient Flow stopped me in my tracks: https://lnkd.in/dmC-ykAu
I was reading it because of my deep interest in graph-based reasoning, and while the content is excellent, I was genuinely surprised by the assessment of GraphRAG adoption. The article suggests that a year after the initial buzz, GraphRAG remains mostly confined to graph vendors and specialists, with little traction in mainstream AI engineering teams.
Here's the thing: at GraphAware, we have GraphRAG running in production: our AskTheDocs conversational interface in Hume uses this approach to help customers query documentation, and the feedback has been consistently positive. It's not an experiment—it's a production feature our users rely on daily.
So I have a question for my network (yes, I know you're a bit biased—many of you are graph experts, after all 😊):
Where is GraphRAG actually working in production?
I'm not looking for POCs, experiments, or "we're exploring it." I want to hear about real, deployed systems serving actual users. Success stories. Production use cases. The implementations that are quietly delivering value while the tech commentary wonders if anyone is using this stuff.
If you have direct or indirect experience with GraphRAG in production, I'd love to hear from you:
- Drop a comment below
- Send me a DM
- Email me directly
I want to give these cases a voice and learn from what's actually working out there.
Who's building with GraphRAG beyond the buzz?
#GraphRAG #KnowledgeGraphs #AI #ProductionAI #RAG
Let's talk ontologies. They are all the rage.
I've been drawing what I now know is a 'triple' on whiteboards for years. It's one of the standard ways I know to start to understand a business.
A triple is:
subject, predicate, object
I cannot overstate how useful this practice has been. Understanding how everything links together is useful, for people and AI.
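For anyone wondering what a whiteboard triple looks like once it is machine-readable, here is one possible next step (an illustration with rdflib, not the only answer):

```python
# One possible next step after the whiteboard (illustrative, not the only
# answer): a subject-predicate-object triple in rdflib, serialized to
# Turtle -- the format a triplestore ingests directly.
from rdflib import RDF, Graph, Namespace

EX = Namespace("https://example.org/")
g = Graph()
g.bind("ex", EX)

# Whiteboard: "order123  placed_by  customer42"
g.add((EX.order123, RDF.type, EX.Order))
g.add((EX.order123, EX.placed_by, EX.customer42))

print(g.serialize(format="turtle"))
```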
I'm now stuck on what that gets stored in. I'm reading about triplestores and am unclear on the action needed. Years ago some colleagues and I used Neo4j to do this. I liked the visual interaction of the output but I'm not sure that is the best path here.
Who can help me understand how to move from whiteboarding to something more formal?
Where to actually store all these triples?
At what point does it become a 'knowledge graph'?
Are there tools or products that help with this?
Or is there a new language to learn to store it properly? (I think yes)
#ontology #help
As requested, this is the FIRST set of slides for my Ontobras Tutorial on the Unified Foundational Ontology, i.e., the upcoming ISO/IEC CD 21838-5 (https://lnkd.in/egrMiCvG), as announced here: https://lnkd.in/eeKmVW-5.
The Brazilian community is one of the most active and lively ontology communities these days, and the event brought together many people from academia, government, and industry.
The slides for the SECOND part can be found here:
https://lnkd.in/eD2xhPKj
Thanks again for the invitation Jose M Parente de Oliveira.
#ontology #ontologies #conceptualmodeling #semantics
Semantics, Cybersecurity, and Services (SCS)/University of Twente
SuperMemory is just one example of a growing ecosystem of knowledge graph systems
SuperMemory is just one example of a growing ecosystem of knowledge graph systems (Graphiti by Zep, Fast GraphRAG, TrustGraph...). Some are in Python, some in TypeScript with the added benefit of built-in graph visualization. Even in Rust and Go there is a growing list of open-source graph RAG.
Ontology (LLM-generated in particular) seems to be having its own moment in the sun, with growing interest in RDF, OWL, SHACL and whatnot. Whether the big guys (OpenAI, Microsoft...) will launch something ontological remains to be seen. They will likely leave it to triple-store vendors to figure out.
https://lnkd.in/e3HAiC8c #KnowledgeGraph #GraphRAG