The new AI Risk “ontology”: A Map with No Rules
The new AI Risk “ontology” (AIRO) maps regulatory concepts from the EU AI Act, ISO/IEC 23894, and ISO 31000. But without formal constraints or grounding in a top-level ontology (TLO), it reads more like a map with no rules.
At first glance, AIRO seems well-structured. It defines entities like “AI Provider,” “AI Subject,” and “Capability,” linking them to legal clauses and decision workflows. But it lacks the logical scaffolding that makes semantic models computable. There are no disjointness constraints, no domain or range restrictions, no axioms to enforce identity or prevent contradiction.
For example, if “Provider” and “Subject” are just two nodes in a graph, the system has no way to infer that they must be distinct. There’s nothing stopping an implementation from assigning both roles to the same agent. That’s not an edge case. It’s a missing foundation.
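To make this concrete, here is a minimal sketch in Python with owlready2 of the kind of axiom AIRO omits. The class names and the ontology IRI are illustrative stand-ins, not AIRO’s actual terms; the point is that one disjointness axiom is enough for a reasoner to catch the Provider/Subject contradiction automatically.

```python
# Minimal sketch: the disjointness axiom AIRO lacks, expressed in owlready2.
# "AIProvider", "AISubject", and the IRI below are hypothetical stand-ins.
from owlready2 import *

onto = get_ontology("http://example.org/airo-demo.owl")

with onto:
    class Agent(Thing): pass
    class AIProvider(Agent): pass              # stand-in for "AI Provider"
    class AISubject(Agent): pass               # stand-in for "AI Subject"
    AllDisjoint([AIProvider, AISubject])       # the missing constraint

# Assert both roles on one individual -- the exact modeling error in question.
acme = AIProvider("acme")
acme.is_a.append(AISubject)

try:
    sync_reasoner()                            # runs the bundled HermiT reasoner (needs Java)
except OwlReadyInconsistentOntologyError:
    print("Contradiction caught: one agent cannot be both Provider and Subject.")
```

With the AllDisjoint axiom in place, the reasoner rejects the ontology the moment a single agent holds both roles; remove that axiom and the same assertion passes silently, which is exactly the gap described above.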
This is where formal ontologies matter. Logic is not a luxury. It’s what makes it possible to validate, reason, and automate oversight. Without constraints and grounding in a TLO, semantic structures become decorative. They document language, but not the conditions that govern responsible behavior.
If we want regulation that adapts with AI instead of chasing it, we need more than a vocabulary. We need logic, constraints, and ontological structure.
#AIRegulation #ResponsibleAI #SemanticGovernance #AIAudits #AIAct #Ontologies #LogicMatters