AI therapy is a surveillance machine in a police state
Big Tech wants you to share your private thoughts with chatbots — while backing a government with contempt for privacy.
US residents are being urged to discuss their mental health conditions and personal beliefs with chatbots, and their simplest and best-known options are platforms whose owners are cozy with the Trump administration.
Chatbots, likewise, escalate the risks of typical online secret-sharing. Their conversational design draws out private information in a form that is more vivid and revealing (and, if exposed, more embarrassing) than even a Google search.
Like the NSA’s anti-terrorism programs, the data-sharing could be framed in wholesome, prosocial ways. A 14-year-old wonders if they might be transgender, or a woman seeks support for an abortion? Of course OpenAI would help flag that — they’re just protecting children.
If AI companies are genuinely dedicated to building trustworthy services for therapy, they could commit to raising the privacy and security bar for bots that people use to discuss sensitive topics. They could focus on meeting compliance standards under the Health Insurance Portability and Accountability Act (HIPAA), or on designing systems whose logs are encrypted so that the companies themselves can’t read them, leaving nothing to turn over. But whatever they do right now is undercut by their ongoing support for an administration that shows contempt for the civil liberties people rely on to share their thoughts freely, including with a chatbot.
The obvious takeaway from this is “don’t get therapy from a chatbot, especially not from a high-profile platform, especially if you’re in the US, especially not right now.” The more important takeaway is that if chatbot makers are going to ask users to divulge their greatest vulnerabilities, they should do so under the kinds of privacy protections that bind medical professionals, and in a world where the government seems likely to respect that privacy.