Beyond Palantir’s Ontology: The Paradigm, the Platform, and the Path to Open Semantics
From proprietary semantics to open metadata — understanding Ontology as a paradigm, not a product. When I …
Stop Right There: Conducting Fraud Detection with Aerospike Graph
Discover how to detect fraudulent financial transactions with speed and accuracy using G.V() and Aerospike Graph Database in this developer walkthrough.
HyperbolicRAG: Curved Spaces, Better Answers — AI Innovations and Insights 94
Traditional RAG systems are pretty familiar by now: retrieve a few relevant passages using dense retrieval, then feed them to a language model for answering the question.
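The retrieve-then-read loop described above can be sketched in a few lines. This toy version stands in a bag-of-words similarity for a real dense retriever, and the passages and query are made up for illustration:

```python
# Toy retrieve-then-read RAG loop: score passages against the query,
# keep the top-k, and build a prompt for the language model.
# Bag-of-words cosine similarity stands in for a dense retriever.
from collections import Counter
from math import sqrt

PASSAGES = [
    "Hyperbolic space embeds hierarchies with low distortion.",
    "Dense retrieval encodes queries and passages as vectors.",
    "RAG feeds retrieved passages to a language model.",
]

def vec(text):
    """Very crude text vector: token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=2):
    """Return the k passages most similar to the query."""
    q = vec(query)
    return sorted(PASSAGES, key=lambda p: cosine(q, vec(p)), reverse=True)[:k]

context = retrieve("how does dense retrieval work")
prompt = "Answer using:\n" + "\n".join(context) + "\nQ: how does dense retrieval work"
print(context[0])
```

In a real system the scoring function would be a learned embedding model; the pipeline shape (retrieve, then prompt) is the same.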
Demystifying ontologies: What ontologies really do in knowledge graphs | LinkedIn
If you talk to people working with data, AI, or enterprise architecture and ask, “What is an ontology?”, you’re likely to get very different answers. For some, ontology is a kind of clever data schema.
Yesterday, I came across LinkedIn posts by 𓋹 Athanassios Hatzis, Ph.D. ("What Do You Use to Visualize RDF/OWL Ontologies") and Connected Data ("How do you visualize, edit and create Ontologies?") about creating and sharing ontology models, making this post feel timely.
In the past, I've consistently struggled to find effective ways to share developed ontology models with business and tech teams. Business teams want to ensure semantic models accurately represent domain concepts, while tech teams want to understand how data attributes, stored in specific database columns, are represented in the ontology model. Our only choices were either 1) to share the ontology file with the business team, who generally couldn't read the Turtle (.ttl) file format or understand what the various axioms meant, or 2) to manually create visuals in PowerPoint or a node/edge diagramming tool. Since ontologies continually evolve, these manually created visuals quickly became outdated.
For a while, we used documentation generated by Widoco. The documentation is handy for browsing lists of classes, properties, and instances (if there are any), and reading their definitions. However, it doesn't illustrate how classes are connected within a semantic model, nor does it indicate which relationships in an ontology model link Class A to Class B. #WebVOWL does offer a visualization; however, we didn't find it very intuitive.
We recently built OntoView, a semantic model viewer, by leveraging SHACL shapes, our internally developed Metadata Ontology Model, and #Ontodia capabilities in #Metaphactory (metaphacts GmbH). We can now share data models at various levels: for a single class, for a specific data source, for an ontology, or even for the entire Knowledge Graph. The models can be visualized either in Graph or Table View, which is particularly useful when there are many overlapping nodes and relationships. Users can select classes from the detailed models and create custom navigation paths. These visuals can be exported and easily shared with team members. Each ontology class node includes an info icon, allowing users to view the class IRI, definition, and other details from the ontology. Our metadata model demonstrates how relational table columns are represented in the ontology and informs tech teams about which triple patterns to implement in their ETL pipelines.
Questions that can be answered from the OntoView:
- How is data modeled from a specific data source?
- Which classes are defined under a given data source or dataset?
- What is the data modeling approach for a particular dataset product?
- How is a specific RDB column value reflected in the semantic model?
- Can I filter, export, and share specific linked data model paths?
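The column-to-triple-pattern metadata mentioned above can be pictured with a small sketch. All names (table, column, IRIs, and the helper function) are hypothetical, not OntoView's actual model:

```python
# Hypothetical sketch: metadata mapping a relational column to the
# SPARQL-style triple pattern an ETL pipeline would need to emit.
def triple_pattern(table, column, subject_class, predicate):
    """Render a triple pattern for one column mapping."""
    return (
        f"# source column: {table}.{column}\n"
        f"?s a <{subject_class}> .\n"
        f"?s <{predicate}> ?{column} ."
    )

mapping = {
    "table": "customer",
    "column": "email",
    "subject_class": "http://example.org/onto#Customer",
    "predicate": "http://example.org/onto#hasEmail",
}
pattern = triple_pattern(mapping["table"], mapping["column"],
                         mapping["subject_class"], mapping["predicate"])
print(pattern)
```

A viewer like the one described can surface exactly this kind of mapping so tech teams know which triples their pipelines must produce.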
Thanks to the Metaphacts team for their support: Ademar Crotti Junior and Kai Preuss
#RDF, #OWL, #Ontology, #KnowledgeGraph, #Semantics
Hannah Bast and Ruben Verborgh discuss Benchmarking of Triple Stores and SPARQL engines
Join Hannah Bast and myself on Friday 12 December at 10am Eastern / 4pm CET to discuss “Benchmarking of Triple Stores and SPARQL engines” at https://lnkd.in/eEJr69zu
I have just come across a post by Michael Hoogkamer about his Termboard RDF/OWL graph visualization tool. It was perfect timing, because over the past few days I’ve been exploring various RDF/OWL knowledge-graph applications and databases.
We establish connections between the Transformer architecture, originally introduced for natural language processing, and Graph Neural Networks (GNNs) for representation learning on graphs. We...
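One way to see the Transformer/GNN connection the abstract points to: self-attention is message passing on a fully connected graph. This toy sketch uses scalar node features for readability (a real model uses vectors and learned projections):

```python
# Self-attention as message passing on a complete graph (toy version):
# each node attends to every node, then aggregates a weighted "message".
from math import exp

def attention_step(features):
    """One round of attention-weighted aggregation over scalar features."""
    updated = []
    for fi in features:
        scores = [fi * fj for fj in features]        # dot-product attention
        weights = [exp(s) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]           # softmax over neighbours
        updated.append(sum(w * fj for w, fj in zip(weights, features)))
    return updated

out = attention_step([1.0, 2.0, 3.0])
print(out)
```

Each updated feature is a convex combination of all node features, which is exactly the aggregation step of a GNN layer on a complete graph.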
Milkyweb: building a startup on the hypothesis that semantic graphs are the best way to organize high-level world knowledge for AI
We’re building our startup on the hypothesis that semantic graphs are the best way to organize high-level world knowledge for AI. The central idea is to provide a system with a data store that can accommodate rich information about the world without losing its meaning, similar to how humans store and relate concepts in their minds.
But if it is so convenient and promising, why is it not popular yet? We do not see a trend toward the widespread adoption of semantic systems in production.
There are roughly two classes of graph data stores: graph databases (Neo4j, Amazon Neptune, etc.) and semantic stores (Virtuoso, GraphDB, etc.).
When we talk about creating complex software systems, we usually think of the first class. They are simpler than the second class and therefore more reliable. The problem is that these stores are not semantic. They are very simplified graphs that allow only slightly more convenient data retrieval.
The second class is better suited to semantic graphs. However, they are very complex, unintuitive, used in narrow specialized cases, and difficult to implement effectively in agentic systems.
It turns out that:
Graph databases are not truly semantic.
Semantic data stores are not truly databases.
Here is the point:
The prospects for using semantic graphs to develop AI systems are unclear, as graph databases are generally considered the first working prototypes of this idea, yet in reality they offer little advantage.
In turn, true semantic stores are complex and entail numerous costs, which is why they have not gained popularity.
Implementing semantic graphs should be as easy as setting up a NoSQL database. Then this technology will have a significant impact on the development of agentic systems.
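The "as easy as a NoSQL database" claim can be made concrete with a toy in-memory triple store: add triples, query with wildcards, no schema ceremony. This is an illustrative sketch, not Milkyweb's design:

```python
# Toy in-memory triple store with a NoSQL-flavoured API:
# add (subject, predicate, object) triples, match with wildcards.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """None acts as a wildcard, like a partial-match NoSQL query."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("milkyweb", "type", "startup")
store.add("milkyweb", "focus", "semantic-graphs")
print(store.match(s="milkyweb"))
```

The hard part, of course, is everything this sketch omits: persistence, inference, and meaningful vocabularies; that gap is the complexity the post is describing.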
That is why we consider the development of such technologies important, and this is what we are actively researching and applying at Milkyweb.
Earlier this year, I discovered Larry Swanson's podcast on Knowledge Graphs.
One of the first episodes I listened to was with George Anadiotis
Six months later, I finally met George in person. And not just George!
Amy Hodler from GraphGeeks was there too, hosting an excellent event on graph technologies.
I’m still quite new to the graph tech world.
My initial interest came from exploring how graphs can support legal reasoning and inference in LLMs, mainly because graphs help introduce logic, determinism, and reduce hallucinations.
But this event (and the people there) helped me understand how much broader the applications really are.
One of the first things I learned was that graph technologies largely fall into two families:
Label-Property Graphs (LPG) and RDF.
With the help of the experts onsite, I explored the most common use cases for each.
For LPG (structured knowledge):
• Pattern recognition
– Finance: fraud detection, anti-fraud behavior chains (multi-hop patterns that are impractical to trace in SQL)
– Compliance & risk: the London Stock Exchange uses LPG to trace paths leading to risk concentration for DORA compliance
• Route / path finding
– Cybersecurity: mapping exploit paths
– Supply chain: modelling supply routes and comparing alternatives
– Incident analysis: understanding causal chains inside complex systems
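The path-finding use cases in the LPG list above all reduce to graph traversal. A minimal sketch, with a made-up transaction graph standing in for a real fraud dataset:

```python
# Multi-hop chain tracing over a toy transaction graph via BFS,
# the kind of traversal LPG engines optimise for.
from collections import deque

edges = {  # account -> accounts it sent money to
    "A": ["B"],
    "B": ["C"],
    "C": ["D"],
    "D": [],
}

def find_path(graph, start, goal):
    """Breadth-first search returning one shortest path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(find_path(edges, "A", "D"))  # ['A', 'B', 'C', 'D']
```

In SQL, each extra hop means another self-join; in a graph store the traversal depth is unbounded by the query shape, which is why these use cases cluster around LPG.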
For RDF (built for meaning and semantics):
• Domain modeling and knowledge engineering
• AI memory architectures
There were also discussions about hybrid approaches where both frameworks work together:
Natural language query → grounded semantically with RDF → executed through an LPG engine.
In practice, this looks like:
LLMs providing the interface, RDF providing the semantics, and LPG providing the performance.
A powerful combination for building the next generation of intelligent systems.
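The hybrid pipeline above (LLM interface, RDF semantics, LPG execution) can be sketched end to end. Every name here (the vocabulary, the schema mapping, the helper functions) is hypothetical, just to show the shape of the hand-off:

```python
# Hypothetical hybrid pipeline: a natural-language term is grounded
# against RDF vocabulary, then compiled to a Cypher-style LPG query.
ONTOLOGY = {  # RDF side: label -> IRI
    "person": "http://example.org/onto#Person",
    "knows": "http://example.org/onto#knows",
}

LPG_SCHEMA = {  # IRI -> label/relationship type in the property graph
    "http://example.org/onto#Person": "Person",
    "http://example.org/onto#knows": "KNOWS",
}

def ground(term):
    """Semantic grounding: map an LLM-extracted term to an ontology IRI."""
    return ONTOLOGY[term.lower()]

def to_cypher(class_iri, rel_iri):
    """Compile grounded IRIs into a Cypher-style query for the LPG engine."""
    label = LPG_SCHEMA[class_iri]
    rel = LPG_SCHEMA[rel_iri]
    return f"MATCH (a:{label})-[:{rel}]->(b:{label}) RETURN a, b"

query = to_cypher(ground("Person"), ground("knows"))
print(query)
```

The RDF layer guarantees the terms mean something consistent; the LPG layer runs the traversal fast; the LLM only has to produce the terms, not the query.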
Thank you George, Amy, Maja and everyone else for the insights and conversations.
And thanks to GraphGeeks and Connected Data for bringing such a strong community together.
2026 Is the Year Ontology Breaks Out — And Why Getting It Wrong Is Dangerous
Neil Gentleman-Hobbs is spot on. 2026 is the tipping point where enterprises finally realize that AI can’t act intelligently without a semantic foundation. LLMs can produce language, but without ontology they cannot understand anything they output.
A while back I wrote the Palantir piece about Alex Karp’s “chips and ontology” quote — and the reaction was huge. Not because Palantir isn’t doing important work, but because there’s a major misunderstanding in the market about what ontology actually is.
And here’s the core truth:
**Palantir is not doing ontology.
They are doing a configurable data model with workflow logic.**
Their “ontology” has:
• no OWL or CLIF
• no TBox or ABox structure
• no identity conditions
• no logical constraints
• no reasoner
• no formal semantics at all
It’s powerful, yes. It’s useful, yes.
But it is not ontology.
Why this is dangerous
When companies believe they have ontology but instead have a dressed-up data model, three risks emerge:
1. False confidence
You think your AI can reason. It can’t.
It can only follow whatever imperative code a developer wrote.
2. No logical guarantees
Without identity, constraints, and semantics, the system cannot detect contradictions, errors, or impossible states.
It hallucinates in a different way — structurally.
3. Brittleness at scale
Every new policy or relationship must be coded by hand.
That’s not semantic automation, it’s enterprise-scale brute force.
This is exactly where systems crack under combinatorial growth.
In other words:
you don’t get an enterprise brain, you get a beautiful spreadsheet with an API.
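What a logical constraint actually buys you can be shown in miniature: a single disjointness axiom lets the system flag an impossible state instead of silently accepting it. A toy sketch with hypothetical classes and individuals, far simpler than a real OWL reasoner:

```python
# Toy consistency check: an ABox of class assertions validated
# against a one-axiom TBox (Person and Company are disjoint classes).
DISJOINT = {("Person", "Company")}

def check_consistency(assertions):
    """Return human-readable violations of the disjointness axioms."""
    by_individual = {}
    for individual, cls in assertions:
        by_individual.setdefault(individual, set()).add(cls)
    errors = []
    for individual, classes in by_individual.items():
        for a, b in DISJOINT:
            if a in classes and b in classes:
                errors.append(f"{individual} cannot be both {a} and {b}")
    return errors

facts = [("acme", "Company"), ("acme", "Person")]  # contradictory ABox
issues = check_consistency(facts)
print(issues)
```

A configurable data model without such axioms will happily store "acme is a Person and a Company" forever; that is the structural hallucination the post describes.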
At Interstellar Semantics, this is the gap we focus on closing:
building real ontologies with identity, constraints, time, roles, and reasoning — the kind of foundations AI systems actually depend on.
If your organization wants to understand what real ontology looks like:
👉 https://lnkd.in/ek3sssCY
#Ontology #SemanticAI #EnterpriseAI #AI2026 #Palantir #SemanticReasoning #KnowledgeGraphs #DataStrategy #AITrustworthiness
Nuix to acquire graph intelligence platform Linkurious in €20M deal
Nuix Ltd (ASX: NXL) has agreed to acquire French graph-intelligence company Linkurious SAS, strengthening the company’s data analytics and visualisation...