Digital Ethics

4001 bookmarks
I met Dorsen through a Sky News investigation.
I met Dorsen through a Sky News investigation. He was eight years old. For twelve hours each day, he sorted rocks in Kasulo mine, searching for streaks of cobalt. His payment: 10 cents. His last meal: two days ago. His mother: dead. Working beside him: Monica, age four. The cobalt they extracted traveled to Chinese refineries, where it sold for 2,000 times what Dorsen received. From there, it became batteries powering the data centers running ChatGPT, the servers training every major AI system transforming our world. Dorsen was eventually rescued. But 40,000 other children remain in those mines.

This is not the story they tell you about AI. They don't mention Mophat Okinyi, who earned $1.50 per hour reading hundreds of child abuse descriptions daily to make ChatGPT "safe." The work destroyed his marriage. Over 140 of his colleagues have been diagnosed with severe PTSD.

They don't mention that Africa produces 70% of the world's cobalt but captures only 3% of the revenue. That content moderators in Kenya earn $1.50 hourly while their American counterparts doing identical work earn $18. That 600 million Africans lack electricity while a single AI data center consumes more power than entire African nations can generate.

They don't mention that 92% of African languages are invisible to AI systems. That ChatGPT Plus costs 6 to 39 months of median African income. That venture capital flows to Africa at $2.29 per capita versus $537 in the US, a 234-fold gap.

The AI revolution is being built on African bodies, African minerals, African trauma, and African exclusion.

But here's what they also don't tell you: M-Pesa serves 50 million users with $100 billion in transactions. InstaDeep achieved a $682 million exit. Masakhane's 2,000 volunteers built NLP datasets for 38+ African languages. When Africans build for Africa, innovation flourishes.

I've spent months documenting the supply chains, power structures, and human stories behind the AI access gap. Every cited source. Every data point. Every name. What I found will change how you think about every AI tool you use.

The question isn't whether Africa has the capacity. It's whether the world has the courage to build technology that serves humanity, not just the most privileged.

Read the full investigation on Aylgorith: https://lnkd.in/dDyffjmw

#AIEthics #DigitalColonialism #TechJustice #GlobalSouth
·linkedin.com·
Surveillance Secrets - Lighthouse Reports
Trove of surveillance data challenges what we thought we knew about location tracking tools, who they target and how far they have spread
·lighthousereports.com·
Formal request to everyone on the internet to please stop comparing AI to calculators.
Formal request to everyone on the internet to please stop comparing AI to calculators.

Does your TI-84 exploit your vulnerabilities and compile personal data about you and your children to sell you junk via targeted ads that allege they will somehow make your life better? Does the Casio FX-300 have the ability to generate deepfakes without consent to create a disorienting reality in which factual grounding is absent and women and children are further exploited and objectified? Do calculators push agendas to perpetuate political and ideological narratives to further divide us and keep us too preoccupied by rage bait to come together and make meaningful change as a collective? Do calculators demonstrate bias and propagate a lack of equity and fairness across demographics, resulting in marginalized groups facing higher rates of unemployment and homelessness?

Calculators are not being used as a superficial salve to mitigate the loneliness epidemic. They are not being used as sounding boards for suicidal and depressed children. They are not being used to bully and harass and... I could keep going.

So please, stop treating AI as if it is only a conversational/writing-enhancement tool. AI is actively being used to collect endless data to target each and every one of us without our consent, even those of us who avoid using it (Flock's AI surveillance, Amazon's Ring cameras, our smartphones, smart city initiatives, etc.). It is harming children, it is harming the planet, and it is turning the internet, a once sorta neat place, into ruins.
·linkedin.com·
AI Data Centers Create Fury From Mexico to Ireland
As tech companies build data centers worldwide to advance artificial intelligence, vulnerable communities have been hit by blackouts and water shortages.
·nytimes.com·
A Single Character can Make or Break Your LLM Evals
Common large language model (LLM) evaluations rely on demonstration examples to steer models' responses to the desired style. While the number of examples used has been studied and standardized, ...
·arxiv.org·
How low-paid workers in Madagascar power French tech’s AI ambitions
An investigation has revealed that French tech firms, seeking to create an AI “à la française”, have turned to one of the country’s former colonies, Madagascar, for low-cost labour.
·theconversation.com·
noyb win: Microsoft 365 Education tracks school children
Favorable decision by the Austrian DSB: Microsoft 365 Education may not track school kids, and Microsoft is ordered to provide full access to kids' data.
·noyb.eu·
Annotated History of Modern AI and Deep Learning
Machine learning is the science of credit assignment: finding patterns in observations that predict the consequences of actions and help to improve future performance. Credit assignment is also required for human understanding of how the world works, not only for individuals navigating daily life, but also for academic professionals like historians who interpret the present in light of past events. Here I focus on the history of modern artificial intelligence (AI) which is dominated by artificial neural networks (NNs) and deep learning, both conceptually closer to the old field of cybernetics than to what's been called AI since 1956 (e.g., expert systems and logic programming). A modern history of AI will emphasize breakthroughs outside of the focus of traditional AI text books, in particular, mathematical foundations of today's NNs such as the chain rule (1676), the first NNs (linear regression, circa 1800), and the first working deep learners (1965-). From the perspective of 2022, I provide a timeline of the -- in hindsight -- most important relevant events in the history of NNs, deep learning, AI, computer science, and mathematics in general, crediting those who laid foundations of the field. The text contains numerous hyperlinks to relevant overview sites from my AI Blog. It supplements my previous deep learning survey (2015) which provides hundreds of additional references. Finally, to round it off, I'll put things in a broader historic context spanning the time since the Big Bang until when the universe will be many times older than it is now.
·arxiv.org·
Which humans
·coevolution.fas.harvard.edu·
The Great Escape: What Happens When the Builders of the Future No Longer Want to Live in It
The Great Escape: What Happens When the Builders of the Future No Longer Want to Live in It

Peter Thiel purchased a 477-acre compound in New Zealand and secured citizenship via a special investor visa, even though he'd only spent twelve days in the country.¹ Sam Altman, CEO of OpenAI, has reportedly stockpiled weapons, gold, and antibiotics in preparation for societal collapse.² Reid Hoffman, co-founder of LinkedIn, estimates that more than half of Silicon Valley billionaires have bought some form of "apocalypse insurance", from private islands to alternate passports to reinforced bunkers.³ And then there's Mark Zuckerberg. Over the past several years, he has quietly built a 1,400-acre ranch in Kauai, complete with multiple mansions, tunnels, and what planning documents describe as an underground shelter.⁴

These are not fringe survivalists. They are the architects of our digital civilization, the people who built the systems that shape how we work, communicate, and think. And yet, they are not building a better future. They are building exits from the future they created. Private jets sit fueled on tarmacs, ready for 24/7 departure. Bunkers in Hawaii resemble small underground towns. Companies promise to upload consciousness, to escape even death itself. These people are investing in technologies to preserve their brains.

This is what "winning" looks like when you optimize for growth without values, when you extract without contributing, when you innovate without asking why. You end up so disconnected from humanity that your endgame is literally escaping it.

But maybe the more uncomfortable question isn't why they're leaving, it's why the rest of us are still following them. Why do we listen to everything they say? Have we vacated our minds and our values and lost our ability to ask questions and think critically?

Maybe the real revolution ahead isn't technological at all. Maybe it's moral. And are we really bystanders? Or worse, followers? Is that what we are?

********************************************************************************

Stephen Klein
The trick with technology is to avoid spreading darkness at the speed of light
Founder & CEO, Curiouser.AI — the only AI designed to augment human intelligence. Lecturer at UC Berkeley. We are raising on WeFunder and are looking to our community to build GenAI to elevate and build, not diminish and dismantle.

Footnotes
1. New Zealand investor visa and Thiel's citizenship: The Guardian, "Peter Thiel granted New Zealand citizenship after spending 12 days in the country" (2017).
2. Altman's doomsday preparations: The New Yorker, "Doomsday Prep for the Super-Rich" (2017).
3. Reid Hoffman's "apocalypse insurance" estimate: The New Yorker, ibid.
4. Zuckerberg's Kauai compound and underground shelter: Wired, "Inside Mark Zuckerberg's Secret Hawaii Compound" (2024); Business Insider, "Mark Zuckerberg built an underground shelter on his Hawaii estate" (2025).
·linkedin.com·
Sexualized Surveillance: OpenAI's Big Pivot
OpenAI pivots from AGI to sexbots, the timing around California's AI bills, plus 3 threats business leaders need to plan for in a post-SB 243 world | Edition 22
·disesdi.substack.com·
Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples
Poisoning attacks can compromise the safety of large language models (LLMs) by injecting malicious documents into their training data. Existing work has studied pretraining poisoning assuming adversaries control a percentage of the training corpus. However, for large models, even small percentages translate to impractically large amounts of data. This work demonstrates for the first time that poisoning attacks instead require a near-constant number of documents regardless of dataset size. We conduct the largest pretraining poisoning experiments to date, pretraining models from 600M to 13B parameters on chinchilla-optimal datasets (6B to 260B tokens). We find that 250 poisoned documents similarly compromise models across all model and dataset sizes, despite the largest models training on more than 20 times more clean data. We also run smaller-scale experiments to ablate factors that could influence attack success, including broader ratios of poisoned to clean data and non-random distributions of poisoned samples. Finally, we demonstrate the same dynamics for poisoning during fine-tuning. Altogether, our results suggest that injecting backdoors through data poisoning may be easier for large models than previously believed as the number of poisons required does not scale up with model size, highlighting the need for more research on defences to mitigate this risk in future models.
·arxiv.org·
Open Pit Programming: Silicon Valley’s Industrial Extraction of Human Potential Part II
Did you miss Part I? Facebook’s motto, ‘move fast and break things,’ could have been written for hydraulic mining companies. Both unleashed unprecedented forces to extract maximum value, ignored downstream consequences and transformed their industries through sheer destructive efficiency. The difference? Hydraulic mining destroyed hillsides; social media plunders people’s humanity. As a child I loved field trips to Malakoff Diggins State Park, a wild canyon-shaped scar in the foothill scrub ju
·caitlinsteele.com·
#iphone | Jason Murrell | 43 comments
🤳 If Your Apple #iPhone Gets Stolen... Thieves try two things fast... turn on Airplane Mode, then power it off. Do these steps now so they can't!!

1. Stop Airplane Mode from the lock screen
📲 Settings → Face ID & Passcode → enter passcode.
📲 In 'Allow Access When Locked,' turn off Control Centre.
📲 While you're there, also turn off Wallet, USB Accessories, Notification Centre, Siri on Lock Screen. This blocks the quick toggles thieves rely on.
* Optional (iOS 18): Settings → Control Centre → remove Airplane Mode from your controls so it's harder to hit even when unlocked.

2. Make 'Find My' unkillable
📲 Settings → (your name) → Find My → Find My iPhone ON.
📲 Turn on Find My network and Send Last Location. On supported models, this can help locate the phone even if it's powered off or the battery dies.
📲 In the Find My app, learn the flow ~ Devices → your iPhone → Mark As Lost (this locks it, shows a message, and suspends Apple Pay). You can also do this at iCloud.com/find.

3. Turn on Stolen Device Protection
📲 Settings → Face ID & Passcode → Stolen Device Protection ON.
📲 This forces Face ID/Touch ID for sensitive actions and adds a delay if you're away from familiar locations. It stops thieves changing your Apple ID or passcode in a hurry.

4. Harden your passcode and Face ID
📲 Settings → Face ID & Passcode → Change Passcode → Passcode Options → choose Custom Alphanumeric (best) or at least a long numeric.
📲 Toggle Require Attention for Face ID so someone can't unlock it by pointing it at your face while you're asleep.

5. Hide your one-time codes & money
📲 Settings → Notifications → Show Previews → When Unlocked. This stops OTPs showing on the lock screen.
📲 Wallet → turn off Double Click Side Button on Lock Screen.
📲 Consider Erase Data (10 failed passcode attempts) in Face ID & Passcode if you don't have kids who might trigger it.

6. Lock your SIM
📲 Settings → Mobile/Cellular → SIM PIN ON → set a new PIN.
📲 Note: know your carrier's default PIN first. If you enter it wrong three times you'll need the PUK from your carrier to unlock the SIM. This stops thieves popping your SIM into another phone for SMS resets.

7. Lock down your Apple ID
📲 Settings → (your name) → Password & Security → make sure Two Factor Authentication is on.
📲 Add a Recovery Contact in case you get locked out.
📲 If the phone is stolen, go to appleid.apple.com and remove cards/devices you don't control.

8. Practice the drill
📲 Open Find My and run a dry run ~ locate, play sound, Mark As Lost (cancel before confirming).
📲 Share your location with a trusted person. Trade 'in case of loss' steps.

Quick checklist (do it now)
☑️ Control Centre off on Lock Screen
☑️ Find My + Find My network + Send Last Location on
☑️ Stolen Device Protection on
☑️ Strong passcode + Require Attention
☑️ Notification previews when unlocked only
☑️ SIM PIN set
☑️ Apple ID 2FA + Recovery Contact set

These settings turn a 'Smash & Grab' into a dead end.
·linkedin.com·
KeShaun Pearson on Silicon Valley's data center expansion in poor areas | Karen Hao posted on the topic | LinkedIn
KeShaun Pearson, speaking on Silicon Valley's aggressive data center expansion in impoverished areas, gets to the heart of how the AI industry preys more broadly on people, institutions, communities: "We've been economically strangled for so long and needed a breath of fresh air. And so it is cruel to use the myth of economic prosperity to push forward a project that is only going to bring pain and pollution."
·linkedin.com·
Human Error Is the Point: On Teaching College During the Rise of AI
My syllabus is a mess of half-remembered intentions. I re-use icebreakers that I know don’t work. I forget to grade the first assignment until Week Four. I write emails that begin with “So sorry for the delay!” and I mean it. I use “This reminded me of something I once read—” as a stall tactic. I say “I don’t know” more times than I should. I also say “I love that” when I don’t. Because I want to encourage them. Because I do love that they showed up. Because showing up is a miracle.
·therumpus.net·