AI is a mass-delusion event
https://www.theatlantic.com/technology/archive/2025/08/ai-mass-delusion-event/683909/
It is a Monday afternoon in August, and I am on the internet watching a former cable-news anchor interview a dead teenager on Substack. This dead teenager—Joaquin Oliver, killed in the mass shooting at Marjory Stoneman Douglas High School, in Parkland, Florida—has been reanimated by generative AI, his voice and dialogue modeled on snippets of his writing and home-video footage. The animations are stiff, the model’s speaking cadence is too fast, and in two instances, when it is trying to convey excitement, its pitch rises rapidly, producing a digital shriek. How many people, I wonder, had to agree that this was a good idea to get us to this moment? I feel like I’m losing my mind watching it.
Jim Acosta, the former CNN personality who’s conducting the interview, appears fully bought in to the premise, adding to the surreality: He’s playing it straight, even though the interactions are so bizarre. Acosta asks simple questions about Oliver’s interests and how the teenager died. The chatbot, which was built with the full cooperation of Oliver’s parents to advocate for gun control, responds like a press release: “We need to create safe spaces for conversations and connections, making sure everyone feels seen.” It offers bromides such as “More kindness and understanding can truly make a difference.”
On the live chat, I watch viewers struggle to process what they are witnessing, in much the same way I am. “Not sure how I feel about this,” one writes. “Oh gosh, this feels so strange,” another says. Still another thinks of the family, writing, “This must be so hard.” Someone says what I imagine we are all thinking: “He should be here.”
The Acosta interview was difficult to process in the precise way that many things in this AI moment are difficult to process. I was grossed out by Acosta for “turning a murdered child into content,” as the critic Parker Molloy put it, and angry with the tech companies that now offer a monkey’s paw in the form of products that can reanimate the dead. I was alarmed when Oliver’s father told Acosta during their follow-up conversation that Oliver “is going to start having followers,” suggesting an era of murdered children as influencers. At the same time, I understood the compulsion of Oliver’s parents, still processing their profound grief, to do anything in their power to preserve their son’s memory and to make meaning out of senseless violence. How could I possibly judge the loss that leads Oliver’s mother to talk to the chatbot for hours on end, as his father described to Acosta—what could I do with the knowledge that she loves hearing the chatbot say “I love you, Mommy” in her dead son’s voice?
The interview triggered a feeling that has become exceedingly familiar over the past three years. It is the sinking feeling of a societal race toward a future that feels bloodless, hastily conceived, and shruggingly accepted. Are we really doing this? Who thought this was a good idea? In this sense, the Acosta interview is just a product of what feels like a collective delusion. This strange brew of shock, confusion, and ambivalence, I’ve realized, is the defining emotion of the generative-AI era. Three years into the hype, it seems that one of AI’s enduring cultural impacts is to make people feel like they’re losing it.
During his interview with Acosta, Oliver’s father noted that the family has plans to continue developing the bot. “Any other Silicon Valley tech guy will say, ‘This is just the beginning of AI,’” he said. “‘This is just the beginning of what we’re doing.’”
Just the beginning. Perhaps you’ve heard that too. “Welcome to the ChatGPT generation.” “The Generative AI Revolution.” “A new era for humanity,” as Mark Zuckerberg recently put it. It’s the moment before the computational big bang—everything is about to change, we’re told; you’ll see. God may very well be in the machine. Silicon Valley has invented a new type of mind. This is a moment to rejoice—to double down. You’re a fool if you’re not using it at work. It is time to accelerate.
How lucky we are to be alive right now! Yes, things are weird. But what do you expect? You are swimming in the primordial soup of machine cognition. There are bound to be growing pains and collateral damage. To live in such interesting times means contending with MechaHitler Grok and drinking from a fire hose of fascist-propaganda slop. It means Grandpa leaving confused Facebook comments under rendered images of Shrimp Jesus or, worse, falling for a flirty AI chatbot. This future likely requires a new social contract. But also: AI revenge porn and “nudify” apps that use AI to undress women and children, and large language models that have devoured the total creative output of humankind. From this morass, we are told, an “artificial general intelligence” will eventually emerge, turbo-charging the human race or, well, maybe destroying it. But look: Every boob with a T-Mobile plan will soon have more raw intelligence in their pocket than has ever existed in the world. Keep the faith.
Breathlessness is the modus operandi of those who are building out this technology. The venture capitalist Marc Andreessen is quote-tweeting guys on X bleating out statements such as “Everyone I know believes we have a few years max until the value of labor totally collapses and capital accretes to owners on a runaway loop—basically marx’ worst nightmare/fantasy.” How couldn’t you go a bit mad if you took them seriously? Indeed, it seems that one of the many offerings of generative AI is a kind of psychosis-as-a-service. If you are genuinely AGI-pilled—a term for those who believe that machine-born superintelligence is coming, and soon—the rational response probably involves some combination of building a bunker, quitting your job, and joining the cause. As my colleague Matteo Wong wrote after spending time with people in this cohort earlier this year, politics, the economy, and current events are essentially irrelevant to the true believers. It’s hard to care about tariffs or authoritarian encroachment or getting a degree if you believe that the world as we know it is about to change forever.
There are maddening effects downstream of this rhetoric. People have been involuntarily committed or had delusional breakdowns after developing relationships with chatbots. These stories have become a cottage industry in themselves, each one suggesting that a mix of obsequious models, their presentation of false information as true, and the tools’ ability to mimic human conversation pushes vulnerable users to think they’ve developed a human relationship with a machine. Subreddits such as r/MyBoyfriendIsAI, in which people describe their relationships with chatbots, may not be representative of most users, but it’s hard to browse through the testimonials and not feel that, just a few years into the generative-AI era, these tools have a powerful hold on people who may not understand what it is they’re engaging with.
As all of this happens, young people are experiencing a phenomenon that the writer Kyla Scanlon calls the “End of Predictable Progress.” Broadly, the theory argues that the usual pathways to a stable economic existence are no longer reliable. “You’re thinking: These jobs that I rely on to get on the bottom rung of my career ladder are going to be taken away from me” by AI, she recently told the journalist Ezra Klein. “I think that creates an element of fear.” The feeling of instability she describes is a hallmark of the generative-AI era. It’s not at all clear yet how many entry-level jobs will be claimed by AI, but the messaging from enthusiastic CEOs and corporations certainly sounds dire. In May, Dario Amodei, the CEO of Anthropic, warned that AI could wipe out half of all entry-level white-collar jobs. In June, Salesforce CEO Marc Benioff suggested that up to 50 percent of the company’s work was being done by AI.
The anxiety around job loss illustrates the fuzziness of this moment. Right now, there are competing theories as to whether AI is having a meaningful effect on employment. But real and perceived impact are different things. A recent Quinnipiac poll found that, “when it comes to their day-to-day life,” 44 percent of surveyed Americans believe that AI will do more harm than good. The survey found that Americans believe the technology will cause job loss—but many workers appeared confident in the security of their own job. Many people simply don’t know what conclusions to draw about AI, but it is impossible not to be thinking about it.
OpenAI CEO Sam Altman has demonstrated his own uncertainty. In a blog post titled “The Gentle Singularity” published in June, Altman argued that “we are past the event horizon” and are close to building digital superintelligence, and that “in some big sense, ChatGPT is already more powerful than any human who has ever lived.” He delivered the classic rhetorical flourishes of AI boosters, arguing that “the 2030s are likely going to be wildly different from any time that has come before.” And yet, this post also retreats ever so slightly from the dramatic rhetoric of inevitable “revolution” that he has previously employed. “In the most important ways, the 2030s may not be wildly different,” he wrote. “People will still love their families, express their creativity, play games, and swim in lakes”—a cheeky nod to the endurance of our corporeal form, as a little treat. Altman is a skilled marketer, and the post might simply be a way to signal a friendlier, more palatable future for those who are a little freaked out.
But a different way to read the post is to see Altman hedging slightly in the face of potential progress limitations on the technology. Earlier this month, OpenAI released GPT-5, to mixed reviews. Altman had promised “a Ph.D.-level” intelligence on any topic.