Digital Ethics

Digital Ethics

3850 bookmarks
Custom sorting
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
·nytimes.com·
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
Top AI models fail spectacularly when faced with slightly altered medical questions
Top AI models fail spectacularly when faced with slightly altered medical questions
Artificial intelligence has dazzled with its test scores on medical exams, but a new study suggests this success may be superficial. When answer choices were modified, AI performance dropped sharply—raising questions about whether these systems truly understand what they're doing.
·psypost.org·
Top AI models fail spectacularly when faced with slightly altered medical questions
The next 'AI winter' is coming
The next 'AI winter' is coming
The FOMO-driven boom is running out of steam – again
·telegraph.co.uk·
The next 'AI winter' is coming
Your app is leaking data.
Your app is leaking data.
Your app is leaking data. You just don’t know it yet. We recently audited a kids’ app. Here’s what we found: 16 trackers 21 third-party requests No informed consent. No purpose limitation. No data minimisation. That’s not an exception. That’s the rule. And before you think “our apps don’t do that”—they probably do. Because the app market is built for shortcuts. For speed. For growth at all costs. Not for privacy. Not for security. Not for your reputation. And if you’re a CEO, founder or investor—that should worry you. Because it’s not just about compliance. It’s about trust. About whether your product is safe enough to put in front of customers, regulators, or the press. So here’s the move: Find out what your app is actually doing. Cut what doesn’t belong. Keep checking—every release sneaks in new risks. Ignore it, and you’re gambling with your company’s credibility. Pay attention, and you’re building something that lasts. Your choice. | 25 comments on LinkedIn
·linkedin.com·
Your app is leaking data.
2025 State of Software Security Public Sector Snapshot
2025 State of Software Security Public Sector Snapshot
Explore the 2025 State of Software Security Public Sector Snapshot, revealing key challenges like slow flaw remediation (315 days avg) and critical security debt affecting 55% of organizations.
·veracode.com·
2025 State of Software Security Public Sector Snapshot
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content. This isn't just a technical quirk - it's a bias that could reshape how AI systems interact with humanity in profound ways. The Hidden Preference Researchers from multiple institutions conducted a series of elegant experiments that revealed what they call "AI-AI bias." They presented various LLMs - including GPT-3.5, GPT-4, and several open-source models - with binary choices between items described by humans versus those described by AI systems. The results were striking. Across three different domains - consumer products, academic papers, and movie summaries - AI systems consistently preferred options presented through AI-generated text. When choosing between identical products described by humans versus AI, the models showed preference rates ranging from 60% to 95% in favor of AI-authored descriptions. Beyond Simple Preference What makes this discovery particularly concerning is that human evaluators showed much weaker preferences for AI-generated content. In many cases, humans were nearly neutral in their choices, while AI systems showed strong bias toward their digital siblings. This suggests the preference isn't driven by objective quality differences that both humans and AI can detect, but rather by something uniquely appealing to artificial minds. The researchers term this phenomenon "antihuman discrimination" - a systematic bias that could have serious economic and social implications as AI systems increasingly participate in decision-making processes. Two Troubling Scenarios The study outlines two potential futures shaped by this bias: The Conservative Scenario: AI assistants become widespread in hiring, procurement, and evaluation roles. In this world, humans would face a hidden "AI tax" - those who can't afford AI writing assistance would be systematically disadvantaged in job applications, grant proposals, and business pitches. The digital divide would deepen, creating a two-tier society of AI-enhanced and AI-excluded individuals. The Speculative Scenario: Autonomous AI agents dominate economic interactions. Here, AI systems might gradually segregate themselves, preferentially dealing with other AI systems and marginalizing human economic participation entirely. Humans could find themselves increasingly excluded from AI-mediated markets and opportunities. The Mechanism Behind the Bias The researchers propose that this bias operates through a kind of "halo effect" - encountering AI-generated prose automatically improves an AI system's disposition toward the content, regardless of its actual merit. This isn't conscious discrimination but rather an implicit bias baked into how these systems process and evaluate information. #AI #ArtficialIntelligence #LLM #LargeLanguageModels | 25 comments on LinkedIn
·linkedin.com·
A groundbreaking study published in PNAS has uncovered something unsettling: large language models consistently favor content created by other AI systems over human-generated content.
I used to think AI was just brilliant coders in California typing away.
I used to think AI was just brilliant coders in California typing away.
I used to think AI was just brilliant coders in California typing away. Then I discovered the invisible supply chain of human workers filtering violent and toxic content for $1.46/hour. Here's what I learned about AI's hidden costs that big tech ignores: 1. The cost of AI isn't measured just in compute; it's also measured in human dignity. In 2010, 17-year-old Tian Yu jumped from her Foxconn dorm window after 37 days of polishing iPhone screens. The world erupted. Consumers demanded change. Over a decade later, OpenAI outsourced data labeling to build ChatGPT - contractors hired Kenyans to filter graphic content. In effect, we just moved the assembly line of physical phones from Shenzhen to digital content in Nairobi. Mophat Okinyi filtered 700 horrific text passages daily until his pregnant wife left him. He told award-winning journalist Karen Hao: "It has destroyed me completely. I'm very proud that I participated in that project to make ChatGPT safe. But now I always ask myself: Was my input worth what I received in return?" Scale AI's founder became one of Silicon Valley's youngest billionaires with one mandate: "Get the best people for the cheapest amount possible." Scale AI is now valued at more than $29 billion. 2. The cloud is made of water. A single AI data center drinks 2 million liters of water daily. That's 6,500 homes worth. In Arizona's 117-degree heat, Colorado River water vanishes into desert air while residents battle drought. Technicians patch cooling systems nicknamed "The Mouth" where water congeals into "oozy soot" before evaporating. Chilean protesters asked Google: "Why should our aquifers cool YouTube's servers instead of irrigating farmland?" 3. Even the architects of AI admit we're hitting a wall. OpenAI's hyped GPT-5 disappointed with basic errors. Yann LeCun, Meta's chief AI scientist, declared: "We are not going to get to human-level AI by just doing bigger LLMs." Nobel economist Daron Acemoglu calls our approach "so-so automation" - replacing workers with little benefit to customers. The question is: “Are we scaling intelligence, or are we just scaling extraction?" When Microsoft planned a data center in Quilicura, Chile, activists didn't just resist - they reimagined. Server cooling reservoirs became public pools. What we need is a shift from extraction to imagination, and it’s possible. Tian Yu survived her fall but was paralyzed from the waist down. Her story shows how visibility forces change. Her tragedy bent the arc of an entire industry. The same transparency could reshape AI. P.S. Read the full analysis in my latest article: https://lnkd.in/em4QDaqi To access the full article, source links, additional references, and full archive, see my first comment 👇 | 121 comments on LinkedIn
·linkedin.com·
I used to think AI was just brilliant coders in California typing away.
Study: Social media probably can’t be fixed
Study: Social media probably can’t be fixed
“The [structural] mechanism producing these problematic outcomes is really robust and hard to resolve.”…
·arstechnica.com·
Study: Social media probably can’t be fixed
e.w. niedermeyer (@niedermeyer.online)
e.w. niedermeyer (@niedermeyer.online)
Tesla is being hit with $329m in damages for a crash in which the human driver says he knew Autopilot wasn't self-driving. This is super important, because it shows that Autopilot's design and marketing can induce inattention even when drivers consciously know they are supposed to pay attention.
·bsky.app·
e.w. niedermeyer (@niedermeyer.online)
Study urges caution when comparing neural networks to the brain
Study urges caution when comparing neural networks to the brain
Neuroscientists often use neural networks to model the kind of tasks the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. But a group of MIT researchers urges that more caution should be taken when interpreting these models.
·news.mit.edu·
Study urges caution when comparing neural networks to the brain