Digital Ethics


3963 bookmarks
Are We in an AI Bubble?
The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.
·theatlantic.com·
An essay on wank | deadSimpleTech
This captures well the uncomfortable, slightly disorienting feeling that wank creates when you're subjected to it: you're expected to speak and think about a statement as though it means what it facially says, but also not push too hard, or at all, because challenging the factuality or other face-value elements of the statement is treated as a personal attack on the person saying it and their identity. I'm sure we've all been in such situations, unfortunately, and we can all point to plenty of places where wank is prevalent in our current society.
·deadsimpletech.com·
AI data centers are undermining climate solutions
Scrutiny of data centers has intensified among customers, policymakers and communities because of tech company secrecy, energy consumption and broader societal impacts.
·trellis.net·
The Staggering Ecological Impacts of Computation and the Cloud
Anthropologist Steven Gonzalez Monserrate draws on five years of research and ethnographic fieldwork in server farms to illustrate some of the diverse environmental impacts of data storage.
·thereader.mitpress.mit.edu·
The Illusion of Conscious AI
Debunking AI consciousness claims: why Geoffrey Hinton's argument is flawed, and why AI, despite its intelligence, is not truly conscious.
·thomasramsoy.com·
From dorm room to default: how voyeurism became a business model
By Christine Haskell. Origins in Rejection: Over twenty years ago, a college sophomore sat in a dorm room, stewing after rejection, and built a crude website called FaceMash, where students could rate women like trading cards. Prank as power grab. Voyeurism coded as innovation. We like to file that under “youthful mistake.” It wasn’t. The logic metastasized. The same impulse that turns women into scores now turns all of us into streams of data—watchable, rankable, profitable—making “If you’re not pay […]
·thisisweave.com·
Which Humans?
Large language models (LLMs) have recently made vast advances in both generating and analyzing textual data. Technical reports often compare LLMs’ outputs with “human” performance on various tests. Here, we ask, “Which humans?” Much of the existing literature largely ignores the fact that humans are a cultural species with substantial psychological diversity around the globe that is not fully captured by the textual data on which current LLMs have been trained.
·hks.harvard.edu·
My New Op-Ed in the Culture Pages of Magazine L’Espresso | Antonio A. Casilli
A new article by Yours Truly is featured in the culture pages of the Italian magazine L’Espresso. I introduce readers to our award-winning documentary In the Belly of AI, which I co-wrote and which will be presented at the Festival del Pensare Contemporaneo in Piacenza, Italy on September 13, 2025. The 75-minute documentary, co-written with Julien Goetz and Lili Fernandez and directed by Henri Poulain, exposes the hidden human and environmental costs behind artificial intelligence systems—from data centers consuming massive natural resources to underpaid “data workers” in the Global South who process disturbing content to train algorithms, often developing psychological […]
·casilli.fr·
AI doesn’t just lie — it can make you believe it
Memory manipulation, notes Pat Pataranutaporn, a researcher with the MIT Media Lab, is a very different process from fooling people with deep-fakes.
·japantimes.co.jp·
Building a better relationship with AI?
Read for free: https://acuity.design/building-a-better-relationship-with-ai/ — I spoke yesterday at the Content Design Club meetup about accessibility and humane design.
·linkedin.com·
🚨 You might wanna turn off those auto-joining AI note takers for a hot minute.
🚨 You might wanna turn off those auto-joining AI note takers for a hot minute. Like, right now. I’m not kidding.

Because FYI, Otter just got slammed with a federal lawsuit for recording millions of people WITHOUT consent and using their (your?) voices and (probably) confidential data to train AI. The math is actually alarming: 25 million users × 1 billion meetings = the largest theft of conversational data in human history…? And it's technically "legal" because they shift liability to users? Wow.

Basically, if ANYONE on your Zoom/Teams/Meet has Otter integrated, their AI bot can slip into your meeting and start recording. You don't get asked. No popup. No disclosure.

Plot twist: This isn't just Otter… that’s just where it’s starting. Every "helpful" AI assistant uses the same playbook:
• Join your workspace
• Extract your data
• Train on your ideas
• Sell back "improvements"

Think about what you've said in "private" calls lately: Your salary negotiation. Medical stuff? Family drama? Legal strategy? Yikes. 😬

I study this stuff AND I've always felt sketchy about these tools. They can be legitimately SO helpful for me, but I still: put a disclosure in every meeting invite, get verbal consent before any call starts, and let people opt out, always.

Lately, I’ve been using Google Gemini because (allegedly) conversations and transcriptions are not used for machine learning improvement or AI model training, and I feel OK about storing the transcripts in my workspace with other confidential info. Plus, I like the transparency of the other call attendees getting the notes right away for their records as well. Maybe that will change in the future, but that’s where I’m at.

Does this change how you think about/use notetakers? Tell me everything, and change my mind if you need to. 😅
·linkedin.com·