Most people talk about AI agents like they’re already reliable. They aren’t.
They follow instructions. They spit out results. But they forget what they did, why it mattered, or how circumstances have changed. There’s no continuity. No memory. No grasp of unfolding context. Today’s agents can respond - but they can’t reflect, reason, or adapt over time.
OpenAI’s new cookbook Temporal Agents with Knowledge Graphs lays out just how limiting that is and offers a credible path forward. It introduces a new class of temporal agents: systems built not around isolated prompts, but around structured, persistent memory.
At the core is a knowledge graph that acts as an evolving world model - not a passive record, but a map of what happened, why it mattered, and what it connects to. This lets agents handle questions like:
“What changed since last week?”
“Why was this decision made?”
“What’s still pending and what’s blocking it?”
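The idea behind questions like these is simple: if every fact in the graph carries a timestamp, "what changed?" becomes a query instead of a re-read of raw logs. A minimal sketch, with hypothetical names (this is not OpenAI's implementation, just the shape of the idea):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Fact:
    """One timestamped edge in a temporal knowledge graph."""
    subject: str
    predicate: str
    obj: str
    recorded_at: datetime

@dataclass
class TemporalGraph:
    facts: list = field(default_factory=list)

    def record(self, subject: str, predicate: str, obj: str, when: datetime) -> None:
        # Facts are never overwritten; new states are appended,
        # so history stays queryable.
        self.facts.append(Fact(subject, predicate, obj, when))

    def changed_since(self, cutoff: datetime) -> list:
        # "What changed since last week?" is a time-filtered view.
        return [f for f in self.facts if f.recorded_at >= cutoff]

g = TemporalGraph()
g.record("deploy-42", "status", "blocked", datetime(2024, 5, 1))
g.record("deploy-42", "status", "shipped", datetime(2024, 5, 9))

recent = g.changed_since(datetime(2024, 5, 5))
# One fact changed in the window: deploy-42 moved to "shipped".
```

Because old facts are kept rather than overwritten, "why was this decision made?" reduces to walking the same structure backwards in time.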
It’s an architectural shift that turns time, intent, and interdependence into first-class elements.
This mirrors Tony Seale’s argument about enterprise data: most data products don’t fail because of missing pipelines - they fail because they don’t align with how the business actually thinks. Data lives in tables and schemas. Business lives in concepts like churn, margin erosion, customer health, or risk exposure.
Tony’s answer is a business ontology: a formal, machine-readable layer that defines the language of the business and anchors data products to it. It’s a shift from structure to semantics - from warehouse to shared understanding.
That’s the same shift OpenAI is proposing for agents.
In both cases, what’s missing isn’t infrastructure. It’s interpretation.
The challenge isn’t access. It’s alignment.
If we want agents that behave reliably in real-world settings, it’s not enough to fine-tune them on PDFs or dump Slack threads into context windows. They need to be wired into shared ontologies - concept-level scaffolding like:
Who are our customers?
What defines success?
What risks are emerging, and how are they evolving?
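What that scaffolding looks like in practice: a machine-readable concept layer that defines each business term and anchors it to the raw fields it's grounded in. A minimal sketch with hypothetical concept names and field paths (real ontologies would use a formal vocabulary like OWL/SKOS, not a dict):

```python
# Hypothetical business ontology: concepts the business reasons in,
# each anchored to the raw data that grounds it.
ONTOLOGY = {
    "churn": {
        "definition": "customer cancels within 30 days of renewal",
        "grounded_in": ["subscriptions.cancelled_at", "subscriptions.renewal_date"],
    },
    "customer_health": {
        "definition": "composite of usage trend, open tickets, and NPS",
        "grounded_in": ["events.usage_daily", "tickets.open_count", "surveys.nps"],
    },
}

def ground(concept: str) -> list[str]:
    """Map a business concept to the data fields that define it."""
    return ONTOLOGY[concept]["grounded_in"]

fields = ground("churn")
# An agent asked about churn now knows exactly which data to consult.
```

The point isn't the data structure; it's that the agent and the business share one vocabulary, and that vocabulary resolves to real data.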
The temporal knowledge graph becomes more than just memory. It becomes an interface - a structured bridge between reasoning and meaning.
This goes far beyond another agent orchestration blueprint. It points to something deeper: Without time and meaning, there is no true delegation.
We don’t need agents that mimic tasks.
We need agents that internalise context and navigate change.
That means building systems that don’t just handle data, but understand how it fits into the changing world we care about.
OpenAI’s temporal memory graphs and Tony’s business ontologies aren’t separate ideas. They’re converging on the same missing layer:
AI that reasons in the language of time and meaning.
H/T Vin Vashishta for the pointer to the OpenAI cookbook, and image nicked from Tony (as usual).