Seeing Like a Data Structure | Belfer Center for Science and International Affairs
Our data-centric way of seeing the world isn't serving us well. Barath Raghavan and Bruce Schneier argue that we need new socio-technical systems that leave room for the inherent messiness of reality.
🦜Stochastic Parrots Day Reading List🦜
On March 17, 2023, Stochastic Parrots Day, organized by T. Gebru, M. Mitchell, and E. Bender and hosted by the Distributed AI Research Institute (DAIR), was held online to commemorate the second anniversary of the paper's publication. Below are the readings which...
What Tech Calls Thinking, Adrian Daub
A New York Times Book Review Editors' Choice. "In Daub's hands the founding concepts of Silicon Valley don't make money; they fall apart." --The New York Times...
Uncanny Valley: A Memoir, Anna Wiener | 9781250785695 | Books | bol
Uncanny Valley: A Memoir (Paperback). A New York Times bestseller and one of The New York Times's 10 Best Books of 2020. Named one of the Best Books of 2020...
Waag | Dutch population sets priorities for AI research agenda
A survey of Dutch opinion on AI: 58% of the Dutch population considers the theme of "fake news, fake photos, and polarization" crucial when it comes to the development of artificial intelligence (AI) and research into it.
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
AI Nationalism(s): Global Industrial Policy Approaches to AI
Our latest report diagnoses concentration of power in the tech industry as a pressing challenge and charts a path forward to seize this moment of change.
Ecosystem - Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts, and strategies for responding to the transformations that AI systems are bringing to culture and society.
A Roadmap to Democratic AI - 2024 — The Collective Intelligence Project
We are launching a "Roadmap to Democratic AI" outlining paths toward greater collective stewardship and better distribution of AI's benefits. It lays out concrete steps that can be taken in 2024 to build a more democratic AI ecosystem that is adaptive, accountable, and processes dece...
Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
We look for playfulness as AI golems track mud everywhere, white people get upset for the wrong reasons, and companies attempt to further abdicate responsibility by blaming it on the AI chatbot.