Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement. That post struck a chord.
With GPT-5 now here, it’s the right moment to revisit the idea.
Back then, GPT-3.5 and GPT-4 could draft ontology structures, but they hit limits in context length, reasoning depth, and abstraction.
With GPT-5 (and other frontier models), that’s changing:
🔹 Larger context windows let entire ontologies sit in working memory at once.
🔹 Test-time compute lets the model reason longer before answering, enabling better abstraction of concepts.
🔹 Multimodal input can turn diagrams, tables, and videos into structured ontology scaffolds.
🔹 Tool use allows ontologies to be validated, aligned, and extended in one flow.
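To make that last point concrete, here is a minimal sketch of what an ontology-validation tool could look like, using rdflib and pySHACL. The function name and the agent wiring around it are my own assumptions, not a fixed API:

```python
# A minimal sketch of a validation "tool" an LLM agent could call.
# Assumes rdflib and pyshacl are installed; how you register the tool
# with your agent framework is up to you.
from rdflib import Graph
from pyshacl import validate

def validate_ontology(turtle_text: str, shapes_turtle: str) -> dict:
    """Parse a candidate ontology and check it against SHACL shapes."""
    data = Graph().parse(data=turtle_text, format="turtle")
    shapes = Graph().parse(data=shapes_turtle, format="turtle")
    conforms, _, report = validate(data, shacl_graph=shapes)
    return {"conforms": conforms, "report": report}

# An agent loop would expose validate_ontology as a tool, feed the report
# back to the model, and ask it to repair any violations it introduced.
```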
But some fundamentals remain. GPT-5 is still curve-fitting to a training set - and that brings limits:
🔹 The flip side of flexibility is hallucination. OpenAI has reduced it, but GPT-5 still scores 0.55 on SimpleQA, with a 5% hallucination rate on its own public-question dataset.
🔹 The model is bound by the landscape of its training data. That landscape is vast, but it excludes your private, proprietary data - and increasingly, an organisation’s edge will track directly to the data it owns outside that distribution.
Fortunately, the benefits flow both ways. LLMs can help build ontologies, but ontologies and knowledge graphs can also help improve LLMs. The two systems can work in tandem.
Ontologies bring structure, consistency, and domain-specific context.
LLMs bring adaptability, speed, and pattern recognition that ontologies can’t achieve in isolation.
Each offsets the other’s weaknesses - and together they make both stronger.
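One concrete pattern for the ontology-to-LLM direction, as a sketch: pull facts from the knowledge graph and use them to ground the model's prompt. The graph contents, schema, and prompt template below are illustrative assumptions, not a prescribed pipeline:

```python
# Grounding an LLM prompt with facts queried from a knowledge graph.
# The ex: namespace and sample triples are hypothetical.
from rdflib import Graph

g = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:WidgetA ex:lifecycleStatus "end-of-life" .
ex:WidgetB ex:lifecycleStatus "active" .
""", format="turtle")

query = """
PREFIX ex: <http://example.org/>
SELECT ?product ?status WHERE { ?product ex:lifecycleStatus ?status . }
"""
facts = "\n".join(f"- {p} has status {s}" for p, s in g.query(query))

prompt = (
    "Answer using only the facts below; say 'unknown' if they don't cover it.\n"
    f"Facts:\n{facts}\n\n"
    "Question: Which products are end-of-life?"
)
# `prompt` then goes to whichever LLM you use.
```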
The feedback loop is no longer theory - we’ve been proving it:
Better LLM → Better Ontology → Better LLM - in your domain.
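In schematic form - every helper below is a hypothetical stub standing in for your own LLM calls and symbolic checks, not a real library:

```python
# Schematic sketch of the feedback loop, with toy stubs so it runs.
def llm_propose_extensions(ontology):
    # Stub: in practice, the LLM drafts new axioms from your domain data.
    return {"ex:Battery rdfs:subClassOf ex:Product"}

def passes_symbolic_checks(ontology, draft):
    # Stub: in practice, a reasoner or SHACL shapes gate the merge.
    # Here we simply reject axioms the ontology already contains.
    return not (draft & ontology)

def improvement_loop(ontology, rounds=3):
    for _ in range(rounds):
        draft = llm_propose_extensions(ontology)   # better LLM -> draft ontology
        if passes_symbolic_checks(ontology, draft):
            ontology |= draft                      # -> better ontology
        # the enriched ontology then grounds the model's next answers
    return ontology

print(improvement_loop(set()))
```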
There is a lot of hype around AI. GPT-5 is good, but not ground-breaking. Still, the progress over two years is remarkable. For the foreseeable future, we are living in a world where models keep improving - but where we must pair classic formal symbolic systems with these new probabilistic models.
For organisations, the challenge is to match growing model power with equally strong growth in the power of their proprietary symbolic formalisation. Not all formalisations are equal. We want fewer brittle IF statements buried in application code, and more rich, flexible abstractions embedded in the data itself. That’s what ontologies and knowledge graphs promise to deliver.
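A toy contrast, on a hypothetical domain with example.org namespaces, of what that shift looks like:

```python
from rdflib import Graph

# Brittle: the rule hides in application code; changing it means a redeploy.
def is_hazardous(product: dict) -> bool:
    return product["category"] == "battery" and product["chemistry"] == "lithium"

# Flexible: the same knowledge declared in the data, visible to every consumer.
g = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:LithiumBattery a ex:HazardousMaterial .
""", format="turtle")

hazardous = list(g.query(
    "PREFIX ex: <http://example.org/> "
    "SELECT ?x WHERE { ?x a ex:HazardousMaterial }"
))
```

The second form lets any system - LLM, reasoner, or application - read and extend the rule without touching code.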
Two years ago, this was a hopeful idea.
Today, it’s looking less like a nice-to-have…
…and more like the only sensible way forward for organisations.
⭕ Neural-Symbolic Loop: https://lnkd.in/eJ7S22hF
🔗 Turn your data into a competitive edge: https://lnkd.in/eDd-5hpV