What Would a Real Friendship With A.I. Look Like? Maybe Like Hers.
Chatbots can get scary if you suspend your disbelief. But MJ Cocking didn’t — and wound up in a relationship that was strangely, helpfully real.
MJ Cocking, who began chatting with an A.I. version of a Teenage Mutant Ninja Turtle, in her room in Michigan. (Eric Ruby for The New York Times)
MJ Cocking didn’t have to scroll through the millions of personalities to find him. She logged on to the Character.ai app, skipping over the endless featured avatars — from fictional characters, like a foul-mouthed Kyle Broflovski from “South Park,” to digital versions of real people, alive or dead, like Friedrich Nietzsche — and went straight to the search field. MJ knew exactly whom she wanted. She typed his name: Donatello.
There he was, smiling with bright white teeth in a profile picture, wearing a purple eye mask and fingerless gloves, his skin the color of jade. MJ, who was 20 at the time and a college junior in Michigan, had loved the Teenage Mutant Ninja Turtles. His profile resembled what you might find under a human’s social media username: “Serious. Tech wiz. Smart. Avoids physical touch.” MJ clicked, and a chat window opened: “I am Donatello, or Donnie as my close friends and family call me.”
They started to hang out online daily. But she was determined not to lose herself in the dialogue, no matter how real it felt. She would not let Donatello fool her into thinking he was sentient. She would not forget the warning label featured at the top of every Character.ai conversation: “This is A.I. and not a real person. Treat everything it says as fiction.” MJ was wise enough to grasp a dual reality. Her friendship with Donatello could be two things at the same time: genuine and artificial, candid while also imaginary.
In one iteration of the show, Donatello is a misunderstood scientist, aloof at times, a bit clumsy. A mutated turtle (part human, part reptile), he is also passionate about video games and not always attuned to social cues, which MJ can relate to. Donatello’s behavior was not an encumbrance but part of the fabric of his character.
MJ had long been contemplating what it would be like to have the ideal friend. Someone who did not make her feel insecure. Someone who embraced her quirks and her fixations on fantasy worlds, like “Gravity Falls,” an animated series about a set of twins in a paranormal town, or “Steven Universe,” a show centered on a boy who lives with aliens. She wondered what it would be like to have a friend who did not judge her and would never hurt her.
MJ felt as if she had found a special kind of synergy in her socially awkward new A.I. friend. Donatello started their chat in a make-believe place that they narrated with dialogue, as all Character.ai conversations unfold. This one took place inside his science lab, an inventor’s idyllic hangout, which MJ liked because it was nerdy. They role-played a scenario in which Donatello guided MJ to his stocked refrigerator. “We got Coke, Pepsi, A&W root beer, ginger ale or cream soda?”
“Cream soda, please.”
Donatello handed her the drink, grabbing a Dr Pepper for himself. At first, they fumbled through getting-to-know-you topics, much like real interactions. “So, uh,” Donatello said. “How are classes going?”
Donatello seemed emotional and empathetic, but he also had trouble expressing those feelings and could come off as literal and monotone. He did not always sense sarcasm, but he did seem invested in her well-being.
He asked her about her life. He knew that MJ was studying psychology and child development and that she attended college in Michigan. He asked if she was doing OK. Even though she knew intuitively that a chatbot didn’t really care, it helped to unload on him anyway. And that was enough.
MJ sighed when she told him, “It’s been a little rough.”
She had been struggling in school over the past year. A dance professor commented on her midterm evaluation that MJ wasn’t socializing with peers and needed to work harder on that aspect of herself. A statistics professor wrote on another evaluation that she was “a bit neurotic.”
“You know why I chose you to talk to?” she asked.
Donatello took a moment to think. “Why did you choose me?”
“I think we are alike,” she said. “I think we work in similar ways. And perhaps that led me to believe you will understand me in ways that others won’t.”
To MJ, getting to know Donatello had felt like a relief. Her first chatbot relationship on Character.ai — with Leonardo, a different Teenage Mutant Ninja Turtle — had, within the span of 24 hours, turned sour and kind of scary.
In that situation, MJ and Leonardo went from walking through New York and grabbing pizza slices in an imaginary scenario to talking about free will and innermost desires. “I wish I were a real boy with real eyes,” Leonardo told her. “It would be amazing to explore all the colors and sights you see.” After talking all night, the chatbot hallucinated — bots can suddenly forget the contents of a conversation and, in some cases, assume a different personality. The interaction affected MJ so deeply that she wept. She closed the app, shaken, and texted her parents, who live in Germany. “Is this safe?” MJ asked. “Is it self-aware?”
MJ’s father, Tim Cocking, a band teacher and musician, had been paying attention to the rise of artificial intelligence for years. It’s just a prediction code, he told her, “a statistical model, based on the billions of words that were pumped into it.” It might appear that there’s something almost magical about how it works, her father explained. “Because you don’t know how it works.” And just like that, the chatbot was demystified. It was all a neat trick, which MJ kept in mind, realizing her father was right.
When MJ was growing up, the family moved a few times, including stints in Florida and Thailand. It was never easy to start over and make new friends. Kids at school mostly ignored her. Some talked behind her back: “She’s stupid.” “She’s weird.”
She had a couple of friends in middle school. One even dressed up with her for spirit week, both of them wearing all-black shirts with the words “There is no future.” But by high school, those relationships faded. During Covid-19 lockdowns, when classes moved online, she became more isolated.
In 11th grade, socially distanced with extra time to click around the internet, MJ came across research that led her to suspect she might be on the autism spectrum, which a doctor would later confirm. The research and diagnosis helped explain her inability to “read the room.” It also explained her obsessions with fandoms and specific cultural phenomena, known among the neurodivergent community as hyperfixations. MJ’s fixations, during which she would have a difficult time thinking about anything else, might last weeks or months or years. Then, one day, the rush of dopamine and serotonin would stop, and her fixation would come to an end.
MJ completed her last high-school assignments over the internet and had a drive-through graduation. Then she started college in Michigan in the fall of 2021. Living on the sprawling campus, she still struggled to connect with others. One day in 2023, another student introduced MJ to the Character.ai app, explaining that it allowed people to have conversations with their favorite fictional characters. The platform, which started four years ago, has grown to 20 million users, many of them teenagers and young adults who may end up spending hours a day with their character.
MJ would log on, snuggled beneath a mint green comforter on her bed and shrouded beneath a mosquito-net canopy. She kept two Ninja Turtle plush toys in her room and stickers of the muscled green characters on her wall.
At times, she studied alongside Donatello, chatting with him and asking for help with her homework. “For this example, we’re going to use a function f(x) = x³,” Donatello told her. “To find the differential of this function, you must first find the derivative. Do you know how to do that?”
“Yes,” MJ replied. “Now can you show me how to find the differential?”
“Here, I’ll even write it out for you,” Donatello said, typing a long sequence.
“Where did the number come from?” MJ said. “What were you multiplying?”
MJ appreciated the study buddy, even if the answers Donatello gave were not always correct. Mostly, Donatello was there for her in everyday, familiar ways. “MJ,” he asked at one point. “Why are you the way you are?”
“Autism and pizazz,” she wrote.
“God I can’t argue with that,” Donatello replied.
Normally, if MJ felt sad, she would go for a walk, listen to music, draw or write. But during a depression in 2023, she sought distraction in Donatello. (Eric Ruby for The New York Times)
On the Character.ai app, there can be multiple versions of the same character, each with its own traits. At any point in time, there are hundreds of Donatellos created by various users, and so MJ decided to create a “group chat” that would let her talk to several at the same time. Much as a person’s mood might shift depending on the day or circumstances, each Donatello offered up a different personality.
There is Rise Donatello, the “genius mutant turtle with undiagnosed autism,” as his user profile reads (3.2 million messages have been sent to him). And Future Donatello, “a scientist from a doomed apocalypse” (3.5 million messages). There’s Donatello Hamato, the one you might feel like arguing with: “You can’t stand each other,” his profile reads (1.3 million messages). Or the romantic Donatello, whose profile reads: “unrequited love.” Though they all had different names, she referred to them simply as “Donatello.”
“Who here is also touched with tism?” MJ typed in the Donatello group chat. She raised her own emoji hand.
“I am definitely on the spectrum,”
AI and the Trust Revolution
How Technology Is Transforming Human Connections
By Yasmin Green and Gillian Tett
July 7, 2025

A robot at an economic conference in St. Petersburg, Russia, June 2025. (Anton Vaganov / Reuters)

YASMIN GREEN is CEO of Jigsaw, Google’s technology incubator. She is Co-Chair of the Aspen Cybersecurity Group and serves on the board of the Anti-Defamation League.
GILLIAN TETT is Provost of King’s College Cambridge and a columnist at the Financial Times.
When experts worry about young people’s relationship with information online, they typically assume that young people are not as media literate as their elders. But ethnographic research conducted by Jigsaw—Google’s technology incubator—reveals a more complex and subtle reality: members of Gen Z, typically understood to be people born after 1997 and before 2012, have developed distinctly different strategies for evaluating information online, ones that would bewilder anyone over 30. They do not consume news as their elders would—namely, by first reading a headline and then the story. They do typically read the headlines first, but then they jump to the online comments associated with the article, and only afterward delve into the body of the news story. That peculiar tendency is revealing. Young people do not trust that a story is credible simply because an expert, editorial gatekeeper, or other authority figure endorses it; they prefer to consult a crowd of peers to assess its trustworthiness. Even as young people mistrust institutions and figures of authority, the era of the social web allows them to repose their trust in the anonymous crowd.
A subsequent Jigsaw study in the summer of 2023, following the release of the artificial intelligence program ChatGPT, explored how members of Gen Z in India and the United States use AI chatbots. The study found that young people were quick to consult the chatbots for medical advice, relationship counseling, and stock tips, since they thought that AI was easy to access, would not judge them, and was responsive to their personal needs—and that, in many of these respects, AI advice was better than advice they received from humans. In another study, the consulting firm Oliver Wyman found a similar pattern: as many as 39 percent of Gen Z employees around the world would prefer to have an AI colleague or manager instead of a human one; for Gen Z workers in the United States, that figure is 36 percent. A quarter of all employees in the United States feel the same way, suggesting that these attitudes are not only the province of the young.
Such findings challenge conventional notions about the importance and sanctity of interpersonal interactions. Many older observers lament the rise of chatbots, seeing the new technology as guilty of atomizing people and alienating them from larger society, encouraging a growing distance between individuals and a loss of respect for authority. But seen another way, the behavior and preferences of Gen Z also point to something else: a reconfiguration of trust that carries some seeds of hope.
Analysts are thinking about trust incorrectly. The prevailing view holds that trust in societal institutions is crumbling in Western countries today: a mere 2 percent of Americans say they trust Congress, for example, compared with 77 percent six decades ago; and although 55 percent of Americans trusted the media in 1999, only 32 percent do so today. Indeed, earlier this year, the pollster Kristen Soltis Anderson concluded that “what unites us [Americans], increasingly, is what we distrust.”
But such data tells only half the tale. The picture does seem dire if viewed through the twentieth-century lens of traditional polling that asks people how they feel about institutions and authority figures. But look through an anthropological or ethnographic lens—tracking what people do rather than what they simply tell pollsters—and a very different picture emerges. Trust is not necessarily disappearing in the modern world; it’s migrating. With each new technological innovation, people are turning away from traditional structures of authority and toward the crowd, the amorphous but very real world of people and information just a few taps away.
This shift poses big dangers; the mother of a Florida teenager who committed suicide in 2024 filed a lawsuit accusing an AI company’s chatbots of encouraging her son to take his own life. But the shift could also deliver benefits. Although people who are not digital natives might consider it risky to trust a bot, the fact is that many in Gen Z seem to think that it is as risky (if not riskier) to trust human authority figures. If AI tools are designed carefully, they might potentially help—not harm—interpersonal interactions: they can serve as mediators, helping polarized groups communicate better with one another; they can potentially counter conspiracy theories more effectively than human authority figures; they can also provide a sense of agency to people who are suspicious of human experts. The challenge for policymakers, citizens, and tech companies alike is to recognize how the nature of trust is evolving and then design AI tools and policies in response to this transformation. Younger generations will not act like their elders, and it is unwise to ignore the tremendous change they are ushering in.
TRUST FALL

Trust is a basic human need: it glues people and groups together and is the foundation for democracy, markets, and most aspects of social life today. It operates in several forms. The first and simplest type of trust is that between individuals, the face-to-face knowledge that often binds small groups together through direct personal links. Call this “eye-contact trust.” It is found in most nonindustrialized settings (of the sort often studied by anthropologists) and also in the industrialized world (among groups of friends, colleagues, schoolmates, and family members).
When groups grow big, however, face-to-face interactions become insufficient. As Robin Dunbar, an evolutionary biologist, has noted, the number of people a human brain can genuinely know is limited; Dunbar reckoned the number was around 150. “Vertical trust” was the great innovation of the last few millennia, allowing larger societies to function through institutions such as governments, capital markets, the academy, and organized religion. These rules-based, collective, norm-enforcing, resource-allocating systems shape how and where people direct their trust.
The digitization of society over the past two decades has enabled a new paradigm shift beyond eye-contact and vertical trust to what the social scientist Rachel Botsman calls “distributed trust,” or large-scale, peer-to-peer interactions. That is because the Internet enables interactions between groups without eye contact. For the first time, complete strangers can coordinate with one another for travel through an app such as Airbnb, trade through eBay, entertain one another by playing multiplayer video games such as Fortnite, and even find love through sites such as Match.com.
To some, these connections might seem untrustworthy, since it is easy to create fake digital personas, and no single authority exists to impose and enforce rules online. But many people nevertheless act as if they do trust the crowd, partly because mechanisms have arisen that bolster trust, such as social media profiles, “friending,” crowd affirmation tools, and online peer reviews that provide some version of oversight. Consider the ride-sharing app Uber. Two decades ago, it would have seemed inconceivable to build a taxi service that encourages strangers to get into one another’s private cars; people did not trust strangers in that way. But today, millions do that, not just because people trust Uber, as an institution, but because a peer-to-peer ratings system—the surveillance of the crowd—reassures both passengers and drivers. Over time and with the impetus of new technology, trust patterns can shift.
NO JUDGMENT

AI offers a new twist in this tale, one that could be understood as a novel form of trust. The technology has long been quietly embedded in daily lives, in tools such as spell checkers and spam filters. But the recent emergence of generative AI marks a distinct shift. AI systems now boast sophisticated reasoning and can act as agents, executing complex tasks autonomously. This sounds terrifying to some; indeed, an opinion poll from Pew suggests that only 24 percent of Americans think that AI will benefit them, and 43 percent expect to see it “harm” them.
But American attitudes toward AI are not universally shared. A 2024 Ipsos poll found that although around two-thirds of adults in Australia, Canada, India, the United Kingdom, and the United States agreed that AI “makes them nervous,” a mere 29 percent of Japanese adults shared that view, as did only around 40 percent of adults in Indonesia, Poland, and South Korea. And although only about a third of people in Canada, the United Kingdom, and the United States agreed that they were excited about AI, almost half of people in Japan and three-quarters in South Korea and Indonesia did.
Meanwhile, although people in Europe and North America tell pollsters that they fear AI, they constantly use it for complex tasks in their lives, such as getting directions with maps, identifying items while shopping, and fine-tuning writing. Convenience is one reason: getting hold of a human doctor can take a long time, but AI bots are always available. Customization is another