I used the o word last week and it hit a few nerves. Ontologies bring context.
But context engineering itself is very poorly understood. Agent engineers talk about it and assume everyone is doing it, yet almost everyone is winging it.
Here's what context engineering is definitely not: longer prompts.
What it actually is: the right information, with the right meaning, at the right time. Not more information, but the right information with the right meaning. Sounds super abstract, I know.
That's why I made a brief video that actually breaks down how to load context.
Okay, not brief. But context needs context.
The field is evolving from Prompt Engineering, treating context as a single, static string, to Contextual Engineering, which views context as a dynamic system of structured components (instructions, tools, memory, knowledge) orchestrated to solve complex tasks. 🔎
Nearly all innovation here is a response to the primary limitation of Transformer models: the quadratic (O(n²)) computational cost of the self-attention mechanism as the context length (n) increases.
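To make the quadratic cost concrete, here's a toy back-of-the-envelope sketch (not a real model): self-attention compares every token with every other token, so doubling the context quadruples the attention work.

```python
def attention_score_count(n: int) -> int:
    """Number of pairwise token comparisons for a context of n tokens."""
    return n * n

# Doubling the context quadruples the attention work.
print(attention_score_count(1_000))  # 1,000,000 comparisons
print(attention_score_count(2_000))  # 4,000,000 comparisons
```

This is why "just use a longer prompt" stops scaling: every extra token makes every other token more expensive to attend to.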
All techniques for managing this challenge can be organized into three areas:
1. Context Generation & Retrieval (Sourcing Ingredients)
Advanced Reasoning: Chain-of-Thought (CoT), Tree-of-Thoughts (ToT).
External Knowledge: Advanced Retrieval-Augmented Generation (RAG) like GraphRAG, which uses knowledge graphs for more structured retrieval.
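The retrieval half of this pillar can be sketched in a few lines. This is a toy bag-of-words retriever, not GraphRAG or any real vector database; the `embed` function is a stand-in for a real embedding model.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a vector model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "MemGPT manages memory tiers for agents",
    "GraphRAG retrieves over a knowledge graph",
    "Mamba is a state-space model architecture",
]
print(retrieve("how does graph based retrieval work", docs))
```

GraphRAG replaces the flat similarity search above with traversal of a knowledge graph, so retrieval can follow typed relationships instead of just word overlap.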
2. Context Processing (Cooking the Ingredients)
Refinement: Using the LLM to iterate and improve its own output (Self-Refine).
Architectural Changes: Exploring models beyond Transformers (e.g., Mamba) to escape the quadratic bottleneck.
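The Self-Refine idea above is just a loop: generate, critique, revise, stop when the critique is empty. A minimal sketch, with hypothetical `critique` and `revise` stubs standing in for what would really be prompts to the same LLM:

```python
def critique(text: str) -> str:
    """Stub critic; a real Self-Refine setup would prompt the LLM for feedback."""
    return "too vague" if "TODO" in text else ""

def revise(text: str, feedback: str) -> str:
    """Stub reviser; a real setup would prompt the LLM with the feedback."""
    return text.replace("TODO", "the right information at the right time")

def self_refine(text: str, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        feedback = critique(text)
        if not feedback:  # the model is satisfied with its own output
            break
        text = revise(text, feedback)
    return text

print(self_refine("Context engineering means TODO."))
```

The cap on rounds matters in practice: without it, a model that keeps finding fault with itself never terminates.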
3. Context Management (The Pantry System)
Memory: Creating stateful interactions using hierarchical memory systems (e.g., MemGPT) that manage information between the active context window and external storage.
Key Distinction: RAG is stateless I/O to the world; Memory is the agent's stateful internal history.
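The hierarchical-memory idea can be sketched as a small active window plus an external archive, with eviction and recall between them. Names and sizes below are illustrative, not MemGPT's actual API:

```python
from collections import deque

class TieredMemory:
    """Sketch of MemGPT-style tiered memory: active window + external archive."""

    def __init__(self, window: int = 3):
        self.active = deque()         # what fits in the context window
        self.archive: list[str] = []  # external storage
        self.window = window

    def add(self, message: str) -> None:
        self.active.append(message)
        while len(self.active) > self.window:
            self.archive.append(self.active.popleft())  # evict oldest

    def recall(self, keyword: str) -> list[str]:
        """Page relevant archived messages back toward the context."""
        return [m for m in self.archive if keyword in m]

mem = TieredMemory(window=2)
for msg in ["user likes graphs", "project uses RAG", "deadline is Friday"]:
    mem.add(msg)
print(mem.recall("graphs"))  # ["user likes graphs"]
```

The key distinction shows up directly in the code: `recall` searches the agent's own accumulated history (stateful memory), whereas RAG would search an external corpus the agent never wrote to.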
The most advanced applications integrate these pillars to create sophisticated agents, with an added layer of dynamic adaptation:
Tool-Integrated Reasoning: Empowering LLMs to use external tools (APIs, databases, code interpreters) to interact with the real world.
Multi-Agent Systems: Designing "organizations" of specialized LLM agents that communicate and collaborate to solve multi-faceted problems, mirroring the structure of human teams.
Adaptive Context Optimization: Leveraging Reinforcement Learning (RL) to dynamically optimize context selection and construction for specific environments and tasks, ensuring efficient and effective performance.
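The tool-integrated reasoning pillar reduces to a dispatch loop: the model emits a structured tool call, the runtime executes it, and the result goes back into context. A minimal sketch; the tool names and the JSON call format are invented for illustration, not any specific framework's protocol:

```python
import json

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda key: {"capital_of_france": "Paris"}.get(key, "unknown"),
}

def run_tool_call(model_output: str) -> str:
    """Parse a JSON tool call emitted by the model and execute it."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]
    return tool(call["input"])

print(run_tool_call('{"tool": "calculator", "input": "2 + 3 * 4"}'))  # 14
```

In a real agent loop, the returned string would be appended to the conversation so the model can reason over the tool's result in its next turn.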
Contextual Engineering is the emerging science of building robust, scalable, and stateful applications by systematically managing the flow of information to and from an LLM.
What’s the difference between context engineering and ontology engineering?
We hear a lot about “context engineering” these days in AI wonderland. A lot of good things are being said, but it’s worth noting what’s missing.
Yes, context matters. But context without structure is narrative, not knowledge. And if AI is going to scale beyond demos and copilots into systems that reason, track memory, and interoperate across domains… then context alone isn’t enough.
We need ontology engineering.
Here’s the difference:
- Context engineering is about curating inputs: prompts, memory, user instructions, embeddings. It’s the art of framing.
- Ontology engineering is about modeling the world: defining entities, relations, axioms, and constraints that make reasoning possible.
In other words:
Context guides attention. Ontology shapes understanding.
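The contrast can be made concrete: context is just a string, while an ontology adds structure you can check. A toy sketch; the schema and type assignments below are invented for illustration:

```python
# Relation schema: relation name -> (domain type, range type)
SCHEMA = {
    "worksFor": ("Person", "Company"),
}
TYPES = {"alice": "Person", "acme": "Company", "widget": "Product"}

def check(subject: str, relation: str, obj: str) -> bool:
    """Reject a triple whose types violate the relation's domain/range."""
    domain, rng = SCHEMA[relation]
    return TYPES[subject] == domain and TYPES[obj] == rng

print(check("alice", "worksFor", "acme"))    # True
print(check("alice", "worksFor", "widget"))  # False: a Product isn't a Company
```

A prompt can assert "alice works for widget" and an LLM will happily run with it; the ontology's domain and range constraints are what let a system refuse it.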
What’s dangerous is that many teams stop at context, assuming that if you feed the right words to an LLM, you’ll get truth, traceability, or decisions you can trust. This is what I call “hallucination of control”.
Ontologies provide what LLMs lack: grounding, consistency, and interoperability. But they are hard to build without the right methods, adapted from the discipline that started 20+ years ago with the semantic web. Now it’s time to work them out for the LLM era.
If you’re serious about scaling AI across business processes or mission-critical systems, the real challenge is more than context: it’s shared meaning. And tech alone cannot solve this.
That’s why we need to put the ontology discussion in the boardroom: integrating AI into organizations is much more complicated than just providing the right context in a prompt or a context window.
That’s it for today. More tomorrow!
I’m trying to get back to journaling here every day. 🤙 Hope you find something useful in what I write.