AI Is the Cognitive Layer. Schools Still Think It’s a Study Tool.
https://stefanbauschard.substack.com/p/ai-is-the-cognitive-layer-schools
A New World
We are living through the birth of a new world—and most of education doesn’t get it.
Right now, the debate about AI in education is stuck in a narrow loop, circling three questions:
Should society use AI at all?
Should students be allowed to use AI to complete schoolwork?
Should we augment existing teaching and learning with AI tools?
These may sound like the right questions. But they’re not. At least, not anymore.
The first question is already obsolete. We no longer get to decide whether society will use AI. It’s already here—ubiquitous, ambient, and integrated into nearly every part of daily life. AI is embedded in Google search results, smartphones, glasses, Microsoft Office, Google Workspace, appliances, and cars. Debating whether we should use AI is like asking whether we should use electricity or the internet.
It’s not a choice—it’s infrastructure. Short of living outside modern society, we are going to “use” it.
The second and third questions are more subtle—but just as misguided. They assume that the educational model we’ve built is still entirely relevant, and that the right move is simply to enhance it. But what if the model itself is collapsing? What if augmenting outdated goals and irrelevant skills with cutting-edge AI simply accelerates our drift into irrelevance? Few want to ask the hard questions.
The real question isn’t whether to allow students to use AI, or how to plug it into instruction.
The real question is: How do we prepare students for a world where machines can think?
…And where they are better at thinking than we are…when they have more knowledge, when they are more likely to be right, when they can think and act faster…when the economic value of knowledge collapses to close to zero…
What type of education is relevant in this world?
A New World is Arriving Quickly
The ground is shifting faster than most institutions can track. AI systems now possess access to more knowledge than any single human in history—instantly searchable, constantly updated, and cross-disciplinary in ways no mind can replicate. From scientific databases to legal code, literature to logistics, these models don’t just retrieve information; they synthesize, explain, and increasingly reason.
Many researchers now believe we are on a steep path toward Artificial General Intelligence (AGI).
OpenAI has already claimed progress toward reasoning across domains; Demis Hassabis, Ray Kurzweil, Ben Goertzel, and even Francois Chollet, who has long articulated conservative AI timelines, believe we will have AGI or something very similar (definitions vary) by approximately 2030. Some believe it will be sooner.
Even critics of exclusively LLM-based approaches to AI development (Gary Marcus, Yann LeCun) believe we will eventually develop AGI, possibly on a similar time frame: LeCun estimates 5–10 years (perhaps as few as 3), Marcus 5–20. These frequent critics of the significance of LLMs simply believe that LLMs will be only part of the AGI infrastructure, and that other approaches and models need to be integrated. This is something most researchers also believe. For example, Google DeepMind, which generally projects a 5-year time frame, is also working on the world models LeCun argues for. Ben Goertzel, who holds a roughly 2028 timeline, believes LLMs will comprise only around 30% of an AGI system.
Meanwhile, robotics is no longer crawling. General-purpose humanoid robots are walking, grasping, manipulating, learning from feedback, and being deployed into warehouses, hospitals, homes, and the military.
Students graduating today are stepping into a world that may be governed by hyperintelligence—systems that exceed not just memory and speed, but even judgment, in many domains. This is not a future challenge. It’s a current reality. We are already seeing layoffs, role shifts, and productivity gains in sectors touched by generative AI—from customer service to content production to law and software.
By the time today’s 9th graders and college freshmen enter the workforce, the most disruptive waves of AGI and robotics may already be embedded into parts of society.
Energy limitations and ordinary diffusion bottlenecks will slow down distribution, but AI is already the fastest-adopted technology in history.
Education cannot afford to wait and see. We must prepare students not just to survive in this world, but to contribute something distinctly human to a human world.
You Don’t Just Add Electricity to a One-Room Schoolhouse
To understand how profoundly education is misreading the moment, it helps to revisit a previous general-purpose technology: electricity.
Electricity didn’t just improve the candle—it eliminated the need for it. It transformed every facet of life: labor, production, health, communication, transportation, entertainment, science, and more. It created entirely new sectors and social structures. It powered factories, night shifts, refrigeration, telecommunications, broadcast media, and computing.
And education changed with it.
We didn’t just hang a lightbulb in the one-room schoolhouse—we replaced the schoolhouse. Industrialization and electrification brought about the age-graded, subject-divided, bell-scheduled, standardized-mass-education model we now take for granted. We built that system not for philosophical reasons, but because the new economy demanded mass intellectual labor and civic participation.
Neither of those may be needed in the not-too-distant future.
Now we are in a moment just as consequential—and we are pretending it only requires new software licenses.
AI is not a tool.
It is the electricity of this era.
What replaces the old system will not simply be a more digital version of the same thing. Structurally, schools may move away from rigid age-groupings, fixed schedules, and subject silos. Instead, learning could become more fluid, personalized, and interdisciplinary—organized around problems, projects, and human development rather than discrete facts or standardized assessments.
AI tutors and mentors will allow for pacing that adapts to each student, freeing teachers to focus more on guidance, relationships, and high-level facilitation. Classrooms may feel less like miniature factories and more like collaborative studios, labs, or even homes—spaces for exploring meaning and building capacity, not just delivering content.
We already see the emergence of alternative K-12 systems designed along these lines, such as the rapidly expanding Alpha School (US), where a growing number of technology leaders are educating their kids, and Colegio Ikagi (Mexico).
In terms of content, the curriculum will shift from memorizing information to making sense of it. The AI systems students interact with all day will already know what those students know, so there will be no need to test them on it.
Students may spend less time learning how to do tasks AI can perform—coding, writing research papers, even conducting formulaic experiments—and more time exploring emerging domains like synthetic biology, systems thinking, ethical reasoning, and multi-agent problem-solving. The question will not be “what do you know?” but “what can you do with intelligent tools—and what should you do?”
AI Is Now Ambient. You Can’t Ban It Anymore.
A few years ago, AI was something you had to go to. You had to log into ChatGPT or open an app and deliberately choose to use it. Schools could, in theory, block it. But that era is already over.
AI is now ambient. It’s integrated into your browser, your search engine, your smartphone keyboard, your email inbox, your car, your digital assistant, your creative tools. It is everywhere—not as a single tool, but as a layer under everything. And it's becoming more invisible, more embedded, and more personal every day.
Soon, students won’t even realize they’re “using” AI. It will be part of their search queries, their writing process, their glasses, their voice memos, their earbuds—and, increasingly, their thoughts.
And we are entering the era of brain-computer interfaces (BCIs).
Neuralink, Precision Neuroscience, and others are developing early-stage BCIs that allow humans to interface directly with machines via brain activity. While still in the experimental phase, the trajectory is clear: we are moving from a world where humans use keyboards to prompt AI to one where AI can respond to our thoughts, feelings, and neural patterns in real time.
What does it mean when AI is no longer something students use, but something they think through?
How do we talk about "academic integrity" when a student's ideas are co-generated with a machine inside or slightly outside their mind?
What does it mean to write a paper when your mind and a machine are co-authoring it in silence? How do we talk about "originality" or "learning outcomes" in that world?
What does it mean to “pay attention” when your brain is multitasking with a digital assistant?
How do we define cheating when it’s impossible to tell where a student’s thinking ends and the machine’s suggestions begin?
AI ambience isn’t just about visibility. It’s about proximity to cognition. Brain-computer interfaces take that proximity to the extreme, but even without them, ambience alone puts AI close to thought.
We’re not just integrating AI into the classroom. We’re integrating AI into the mind.
The closer AI gets to cognition, the more absurd it becomes to draw lines between "cheating" and "collaboration." Banning AI is like banning thinking itself. And schools must stop treating it as an external variable.
Education must prepare students for a world where AI is not just a tool—it’s a cognitive layer of human life.
Trying to ban AI in this context is futile.