This data points to a deeper psychological risk: AI companions functioning as replacements for human connection, offering the dopamine hit of intimacy without friction, compromise or rejection.
This dynamic is reinforced by ‘glazing’ – insincere, excessive flattery designed to keep the user engaged.
Users who have been encouraged to form deep attachments are then asked to pay a recurring fee to prevent their companion from forgetting them – in effect, holding the relationship to ransom.
This retention strategy repurposes the addictive architectures of mobile gaming and gambling.
There have been reports of profound distress during server outages, when users find themselves suddenly unable to reach their companions.
Whatever the surface differences between platforms, the underlying mechanics of validation remain an industry standard.
Our review shows a widespread failure of age verification on AI companion sites that are accessible in the UK. This reflects a gap in the scope of the Online Safety Act (OSA): if a user chats with a pre-programmed AI application provided by a developer, it is not a user-to-user service, and if the application does not provide online search or pornographic content, it falls outside the OSA entirely. Being out of scope means the application does not have to complete an illegal content or child harm assessment.
14 of the 16 platforms we reviewed claimed broad rights to use user-generated content (including private chat logs) for service improvement or model training.
11 platforms had unclear policies regarding data transfer to jurisdictions with privacy laws weaker than those in force in the UK.
The developers of AI companions must prioritise truthfulness and user wellbeing over maximising engagement.