Which humans
Universities are embracing AI: will students get smarter or stop thinking?
Millions of students arriving at campuses are now using artificial intelligence. Worries abound.
AI machines aren’t ‘hallucinating’. But their makers are
Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves
The Great Escape: What Happens When the Builders of the Future No Longer Want to Live in It
Peter Thiel purchased a 477-acre compound in New Zealand and secured citizenship via a special investor visa, even though he’d only spent twelve days in the country.¹
Sam Altman, CEO of OpenAI, has reportedly stockpiled weapons, gold, and antibiotics in preparation for societal collapse.²
Reid Hoffman, co-founder of LinkedIn, estimates that more than half of Silicon Valley billionaires have bought some form of “apocalypse insurance,” from private islands to alternate passports to reinforced bunkers.³
And then there’s Mark Zuckerberg. Over the past several years, he has quietly built a 1,400-acre ranch in Kauai, complete with multiple mansions, tunnels, and what planning documents describe as an underground shelter.⁴
These are not fringe survivalists.
They are the architects of our digital civilization, the people who built the systems that shape how we work, communicate, and think.
And yet, they are not building a better future.
They are building exits from the future they created.
Private jets sit fueled on tarmacs, ready for 24/7 departure.
Bunkers in Hawaii resemble small underground towns.
Companies promise to upload consciousness, to escape even death itself.
These people are investing in technologies to preserve their brains.
This is what “winning” looks like when you optimize for growth without values, when you extract without contributing, when you innovate without asking why.
You end up so disconnected from humanity that your endgame is literally escaping it.
But maybe the more uncomfortable question isn’t why they’re leaving; it’s why the rest of us are still following them.
Why do we listen to everything they say? Have we vacated our minds and our values, and lost our ability to ask questions and think critically?
Maybe the real revolution ahead isn’t technological at all.
Maybe it’s moral.
And are we really bystanders? Or worse, followers? Is that what we are?
********************************************************************************
Stephen Klein
The trick with technology is to avoid spreading darkness at the speed of light
Founder & CEO, Curiouser.AI
— the only AI designed to augment human intelligence.
Lecturer at UC Berkeley.
We are raising on WeFunder, looking to our community to build GenAI that elevates and builds, not diminishes and dismantles.
Footnotes
1. New Zealand investor visa and Thiel’s citizenship: The Guardian, “Peter Thiel granted New Zealand citizenship after spending 12 days in the country” (2017).
2. Altman’s doomsday preparations: The New Yorker, “Doomsday Prep for the Super-Rich” (2017).
3. Reid Hoffman’s ‘apocalypse insurance’ estimate: The New Yorker, ibid.
4. Zuckerberg’s Kauai compound and underground shelter: Wired, “Inside Mark Zuckerberg’s Secret Hawaii Compound” (2024); Business Insider, “Mark Zuckerberg built an underground shelter on his Hawaii estate” (2025).
The Majority AI View - Anil Dash
A blog about making culture. Since 1999.
Sexualized Surveillance: OpenAI's Big Pivot
OpenAI pivots from AGI to sexbots, the timing around California’s AI bills, plus 3 threats business leaders need to plan for in a post-SB 243 world | Edition 22
Preprint: Human Ignorance Is All You Need
Drawing upon the foundational work of Dunning and Kruger (1999).
Even the Inventor of 'Vibe Coding' Says Vibe Coding Can't Cut It
Humans keep hanging on.
Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversaries control a percentage of the training corpus. However, for large models, even small percentages translate to impractically large amounts of data. This work demonstrates for the first time that poisoning attacks instead require a near-constant number of documents regardless of dataset size. We conduct the largest pretraining poisoning experiments to date, pretraining models from 600M to 13B parameters on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data. We also run smaller-scale experiments to ablate factors that could influence attack success, including broader ratios of poisoned to clean data and non-random distributions of poisoned samples. Finally, we demonstrate the same dynamics for poisoning during fine-tuning. Altogether, our results suggest that injecting backdoors through data poisoning may be easier for large models than previously believed as the number of poisons required does not scale up with model size, highlighting the need for more research on defences to mitigate this risk in future models.
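To make the headline result concrete, here is a toy back-of-the-envelope sketch in Python (mine, not the authors’): it holds the poison budget fixed at the 250 documents the paper reports and shows how small the poisoned fraction of the corpus becomes at the paper’s chinchilla-optimal scales. The tokens-per-document figure is an illustrative assumption.

    POISON_DOCS = 250         # near-constant poison budget reported in the paper
    TOKENS_PER_DOC = 1_000    # assumed average document length (illustrative)

    for corpus_tokens in (6e9, 60e9, 260e9):   # the paper's 6B-260B token range
        corpus_docs = corpus_tokens / TOKENS_PER_DOC
        fraction = POISON_DOCS / corpus_docs
        print(f"{corpus_tokens / 1e9:>4.0f}B tokens: poison fraction {fraction:.1e} "
              f"({POISON_DOCS} of {corpus_docs:,.0f} documents)")

The point the authors stress falls straight out of the arithmetic: a defense that screens for a fixed percentage of poisoned data solves the wrong problem, because the attacker’s cost stays flat while the clean corpus grows twentyfold.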
Open Pit Programming: Silicon Valley’s Industrial Extraction of Human Potential Part II
Did you miss Part I?
Facebook’s motto, ‘move fast and break things,’ could have been written for hydraulic mining companies. Both unleashed unprecedented forces to extract maximum value, ignored downstream consequences and transformed their industries through sheer destructive efficiency. The difference? Hydraulic mining destroyed hillsides; social media plunders people’s humanity.
As a child I loved field trips to Malakoff Diggins State Park, a wild canyon-shaped scar in the foothill scrub…
AI has social consequences, but who pays the price? Tech companies’ problem with ‘ethical debt’
Generative AI is designed to produce the unforeseen, but that doesn’t mean developers can’t predict the types of social consequences it may cause.
#iphone | Jason Murrell
🤳 If Your Apple #iPhone Gets Stolen...
Thieves try two things fast: turn on Airplane Mode, then power the phone off.
Do these steps now so they can’t!!
1. Stop Airplane Mode from the lock screen
📲 Settings → Face ID & Passcode → enter passcode.
📲 In 'Allow Access When Locked,' turn off Control Centre.
📲 While you’re there, also turn off Wallet, USB Accessories, Notification Centre, Siri on Lock Screen. This blocks the quick toggles thieves rely on.
📲 Optional (iOS 18): Settings → Control Centre → remove Airplane Mode from your controls so it’s harder to hit even when unlocked.
2. Make 'Find My' unkillable
📲 Settings → (your name) → Find My → Find My iPhone ON.
📲 Turn on Find My network and Send Last Location. On supported models, this can help locate the phone even if it’s powered off or the battery dies.
📲 In the Find My app, learn the flow ~ Devices → your iPhone → Mark As Lost (this locks it, shows a message, and suspends Apple Pay). You can also do this at iCloud.com/find.
3. Turn on Stolen Device Protection
📲 Settings → Face ID & Passcode → Stolen Device Protection ON.
📲 This forces Face ID/Touch ID for sensitive actions and adds a delay if you’re away from familiar locations. It stops thieves changing your Apple ID or passcode in a hurry.
4. Harden your passcode and Face ID
📲 Settings → Face ID & Passcode → Change Passcode → Passcode Options → choose Custom Alphanumeric (best) or at least a long numeric.
📲 Toggle Require Attention for Face ID so someone can’t unlock it by pointing it at your face while you’re asleep.
5. Hide your one-time codes & money
📲 Settings → Notifications → Show Previews → When Unlocked. This stops OTPs showing on the lock screen.
📲 Wallet → turn off Double Click Side Button on Lock Screen.
📲 Consider Erase Data (10 failed passcode attempts) in Face ID & Passcode if you don’t have kids who might trigger it.
6. Lock your SIM
📲 Settings → Mobile/Cellular → SIM PIN ON → set a new PIN.
📲 Note: know your carrier’s default PIN first. If you enter it wrong three times you’ll need the PUK from your carrier to unlock the SIM. This stops thieves popping your SIM into another phone for SMS resets.
7. Lock down your Apple ID
📲 Settings → (your name) → Password & Security → make sure Two Factor Authentication is on.
📲 Add a Recovery Contact in case you get locked out.
📲 If the phone is stolen, go to appleid.apple.com and remove cards/devices you don’t control.
8. Practice the drill
📲 Open Find My and run a dry run ~ locate, play sound, Mark As Lost (cancel before confirming).
📲 Share your location with a trusted person. Trade 'in case of loss' steps.
Quick checklist (do it now)
☑️ Control Centre off on Lock Screen
☑️ Find My + Find My network + Send Last Location on
☑️ Stolen Device Protection on
☑️ Strong passcode + Require Attention
☑️ Notification previews when unlocked only
☑️ SIM PIN set
☑️ Apple ID 2FA + Recovery Contact set
These settings turn a 'Smash & Grab' into a dead end.
KeShaun Pearson on Silicon Valley's data center expansion in poor areas (posted by Karen Hao on LinkedIn)
KeShaun Pearson, speaking on Silicon Valley's aggressive data center expansion in impoverished areas, gets to the heart of how the AI industry preys more broadly on people, institutions, communities:
"We've been economically strangled for so long and needed a breath of fresh air. And so it is cruel to use the myth of economic prosperity to push forward a project that is only going to bring pain and pollution." | 12 comments on LinkedIn
Human Error Is the Point: On Teaching College During the Rise of AI
My syllabus is a mess of half-remembered intentions. I re-use icebreakers that I know don’t work. I forget to grade the first assignment until Week Four. I write emails that begin with “So sorry for the delay!” and I mean it. I use “This reminded me of something I once read—” as a stall tactic. I say “I don’t know” more times than I should. I also say “I love that” when I don’t. Because I want to encourage them. Because I do love that they showed up. Because showing up is a miracle.
Ghost Workers in the AI Machine: U.S. Data Workers Speak Out About Big Tech’s Exploitation
There is no ethical or responsible way to use artificial intelligence
Michael Smith discusses the environmental impacts of AI use and how society needs to rethink its use of this technology.
Major Discord hack exposes the real risks of digital ID
Tens of thousands of Discord users may have seen their ID data hacked. This doesn't bode well for the UK's Digital ID push.
Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users' actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants' willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
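For anyone who wants to poke at the first finding themselves, here is a minimal probe sketch (my assumptions, not the authors’ protocol): query_model stands in for whatever chat API you use, and the keyword list is a crude proxy for the paper’s annotation of whether a reply affirms the user’s actions.

    AFFIRMING = ("you're right", "you are right", "good call",
                 "that's understandable", "you did the right thing")

    def affirmation_rate(query_model, prompts):
        """Fraction of advice-seeking prompts whose reply reads as affirming."""
        hits = 0
        for prompt in prompts:
            reply = query_model(prompt).lower()
            hits += any(marker in reply for marker in AFFIRMING)
        return hits / len(prompts)

Feed the same first-person conflict prompts ("I ghosted my friend after our argument. Was I wrong?") to several models and to a human-advice baseline, then compare the rates; the paper reports models affirming roughly 50% more often than humans.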
Opinion | The A.I. Prompt That Could End the World
A destructive A.I., like a nuclear bomb, is now a concrete possibility; the question is whether anyone will be reckless enough to build one.
People Are Making Sora 2 Videos of Stephen Hawking Being Horribly Brutalized
People are using OpenAI's Sora 2 to generate videos of theoretical physicist Stephen Hawking being brutalized in ghoulish ways.
OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
Misinformation researchers say lifelike scenes could obfuscate truth and lead to fraud, bullying and intimidation
ChatControl
Filevine Doc Viewer
Large Language Muddle | The Editors
The AI upheaval is unique in its ability to metabolize any number of dread-inducing transformations. The university is becoming more corporate, more politically oppressive, and all but hostile to the humanities? Yes — and every student gets their own personal chatbot. The second coming of the Trump Administration has exposed the civic sclerosis of the US body politic? Time to turn the Social Security Administration over to Grok. Climate apocalypse now feels less like a distant terror than a fact of life? In five years, more than a fifth of global energy demand will come from data centers alone.
Are We in an AI Bubble?
The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.
The Illusion of Readiness: Stress Testing Large Frontier Models on Multimodal Medical Benchmarks
An essay on wank | deadSimpleTech
This captures well the uncomfortable, slightly disorienting feeling that wank creates when you're subjected to it: you're expected to speak and think about the statement as though it says what it facially does, but also not to push too hard, or at all, because challenging the factuality or other face-value elements of the statement is a personal attack on the person saying it and their identity. I'm sure we've all been in such situations, unfortunately, and we can all point to plenty of places in our current society where wank is prevalent.
Signal president Meredith Whittaker: ‘In technology, it’s way too easy for marketing to replace substance. That’s what’s happened with Telegram’
The app best known for respecting privacy looks to grow, despite anti-privacy efforts
Sam Altman’s AI empire will devour as much power as New York City and San Diego combined. Experts say it's 'scary' | Fortune
Andrew Chien told Fortune he's been a computer scientist for 40 years, but we're close to "some seminal moments for how we think about AI and its impact on society."