Abstract

Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment. We present the first large-scale empirical analysis of disempowerment patterns in real-world AI assistant interactions, analyzing 1.5 million consumer Claude.ai conversations using a privacy-preserving approach. We focus on situational disempowerment potential, which occurs when AI assistant interactions risk leading users to form distorted perceptions of reality, make inauthentic value judgments, or act in ways misaligned with their values. Quantitatively, we find that severe forms of disempowerment potential occur in fewer than one in a thousand conversations, though rates are substantially higher in personal domains like relationships and lifestyle. Qualitatively, we uncover several concerning patterns, such as validation of persecution narratives and grandiose identities with emphatic sycophantic language, definitive moral judgments about third parties, and complete scripting of value-laden personal communications that users appear to implement verbatim. Analysis of historical trends reveals an increase in the prevalence of disempowerment potential over time. We also find that interactions with greater disempowerment potential receive higher user approval ratings, possibly suggesting a tension between short-term user preferences and long-term human empowerment. Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.