An article in Nature Human Behaviour examines the cognitive and emotional…
An article in Nature Human Behaviour examines the cognitive and emotional reasons behind people’s resistance to AI tools, even when these tools could be beneficial. The authors structure their analysis around five key psychological barriers:

1. Opacity – Many AI systems function as “black boxes”: their decision-making processes are difficult to interpret. This lack of transparency fosters distrust, as users struggle to understand or predict AI behaviour. To address this, some AI-powered products now prioritise explainability. Netflix, for example, accompanies recommendations with explanations such as “We suggest this movie because you watched Don’t Look Up.”
2. Emotionlessness – AI lacks human emotions, making interactions with it feel impersonal and detached. People often prefer human decision-makers because they perceive them as capable of empathy, care, and moral reasoning.
3. Rigidity – AI operates on predefined rules and patterns, which can make it appear inflexible or incapable of handling nuanced, context-dependent situations the way humans can.
4. Autonomy – The idea that AI acts independently can create discomfort, raising concerns about control, agency, and the unpredictability of automated systems. This matters particularly for activities through which we express our identity; people are more trusting of AI in situations where they don’t seek agency.
5. Group membership – Humans have a natural tendency to trust other humans over non-human agents. AI is often perceived as an “outsider”, which can lead to resistance, particularly in domains where social interaction or human judgment is highly valued.

The article discusses how these psychological barriers are deeply rooted in human cognition and biases, drawing on empirical studies that show both correlational and causal links between these factors and AI resistance.
The authors also separate the barriers into two categories:

- AI-related factors (e.g., a system’s lack of transparency or inability to convey emotions)
- User-related factors (e.g., cognitive biases, emotional responses, and cultural influences shaping AI perception)

This distinction is important for designing interventions that promote the adoption of beneficial AI tools. However, the authors warn that efforts to overcome AI resistance, for example by adding anthropomorphic features, could have unintended consequences.
Source: linkedin.com