Found 451 bookmarks
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
The TESCREAL Bundle | DAIR
The Distributed AI Research Institute is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.
Ecosystem - Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts and strategies for responding to the transformations of AI systems on culture and society.
Artifice and Intelligence
Emily Tucker, Executive Director of the Center on Privacy & Technology at Georgetown Law, on tech, vocabulary and power.
Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts and strategies for responding to the transformations of AI systems on culture and society.
A Roadmap to Democratic AI - 2024 — The Collective Intelligence Project
We are launching a "Roadmap to Democratic AI" outlining paths towards greater collective stewardship and better distribution of AI's benefits. Our roadmap outlines concrete steps that can be taken in 2024 to build a more democratic AI ecosystem that is adaptive, accountable, processes dece
Measuring Diversity
Search results that reflect historic inequities can amplify stereotypes and perpetuate under-representation. Carefully measuring diversity in data sets can help.
Pathfinders Newmoonsletter, March 2024
We look for playfulness as AI golems track mud everywhere, white people get upset for the wrong reasons, and companies attempt to further abdicate responsibility by blaming it on the AI chatbot.
Let's not do this again, please
OpenAI's text-to-video generator Sora is being hyped as a game-changer by an industry at a crossroads. This time we should know better.