Digital Ethics

Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validates, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
·arxiv.org·
Opinion | The A.I. Prompt That Could End the World
A destructive A.I., like a nuclear bomb, is now a concrete possibility; the question is whether anyone will be reckless enough to build one.
·nytimes.com·
ChatControl
·csa-scientist-open-letter.org·
Large Language Muddle | The Editors
The AI upheaval is unique in its ability to metabolize any number of dread-inducing transformations. The university is becoming more corporate, more politically oppressive, and all but hostile to the humanities? Yes — and every student gets their own personal chatbot. The second coming of the Trump Administration has exposed the civic sclerosis of the US body politic? Time to turn the Social Security Administration over to Grok. Climate apocalypse now feels less like a distant terror than a fact of life? In five years, more than a fifth of global energy demand will come from data centers alone.
·nplusonemag.com·
Are We in an AI Bubble?
The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.
·theatlantic.com·
An essay on wank | deadSimpleTech
This captures well the uncomfortable, slightly disorienting feeling that wank creates when you're subjected to it, wherein you're expected to speak about and think about the statement as though it says what it facially does, but also not push too hard or at all, because challenging the factuality or other face-value elements of the statement is a personal attack on the person saying it and their identity. I'm sure we've all been in such situations, unfortunately, and we can all point to lots of situations where wank is prevalent in our current society.
·deadsimpletech.com·
AI data centers are undermining climate solutions
The scrutiny of data centers has intensified because of tech company secrecy, energy consumption and societal impacts on customers, policymakers and communities.
·trellis.net·
The Staggering Ecological Impacts of Computation and the Cloud
Anthropologist Steven Gonzalez Monserrate draws on five years of research and ethnographic fieldwork in server farms to illustrate some of the diverse environmental impacts of data storage.
·thereader.mitpress.mit.edu·
The Illusion of Conscious AI
Debunking AI consciousness claims: Why Geoffrey Hinton's argument is flawed and why AI, despite its intelligence, is not truly conscious
·thomasramsoy.com·
From dorm room to default: how voyeurism became a business model
By Christine Haskell. Origins in Rejection: Over twenty years ago, a college sophomore sat in a dorm room, stewing after rejection, and built a crude website called FaceMash, where students could rate women like trading cards. Prank as power grab. Voyeurism coded as innovation. We like to file that under “youthful mistake.” It wasn’t. The logic metastasized. The same impulse that turns women into scores now turns all of us into streams of data—watchable, rankable, profitable—making “If you’re not pay…
·thisisweave.com·
Which Humans?
Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained.
·hks.harvard.edu·