AIxDesign Library

1558 bookmarks

Office Hours: AI Now Is Hiring
We’re excited to see interest in the positions AI Now recently posted: (see here for more info on our open Associate Director and Operations Director roles) If you have questions about these positions or AI Now as a workplace, we’re happy to answer them. For equity reasons, we will hold two office hours sessions so […]
·ainowinstitute.org·

The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
·firstmonday.org·

The TESCREAL Bundle | DAIR
The Distributed AI Research Institute is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.
·dair-institute.org·

Remembering Our Future: shamanism, oracles and AI
“Remembering Our Future: shamanism, oracles and AI”: A Roundtable Conversation with Li-Chun Marina Lin, Cavan McLaughlin, Karin Valis, Bogna Konior and Yin-Ju Che...
·youtube.com·

PartyRock
An Amazon Bedrock playground
·partyrock.aws·

Open Call for 2024 Open Future Fellows – Open Future
Open Future is looking for fellows who will contribute to our work on advancing Digital Public Space and cultivating Digital Commons. For the first time, we are opening the call to creatives. We are a small, dedicated team of advocates, researchers, and community builders working to make the internet open. We strive to question the […]
·openfuture.eu·

Tech Won't Save Us
Listen to Tech Won't Save Us on Spotify. Silicon Valley wants to shape our future, but why should we let it? Every Thursday, Paris Marx is joined by a new guest to critically examine the tech industry, its big promises, and the people behind them. Tech Won’t Save Us challenges the notion that tech alone can drive our world forward by showing that separating tech from politics has consequences for us all, especially the most vulnerable. It’s not your usual tech podcast.
·open.spotify.com·

So You Want to Be a Sorcerer in the Age of Mythic Powers... (The AI Episode)
Listen to this episode from The Emerald on Spotify. The rise of Artificial Intelligence has generated a rush of conversation about benefits and risks, about sentience and intelligence, and about the need for ethics and regulatory measures. Yet it may be that the only way to truly understand the implications of AI — the powers, the potential consequences, and the protocols for dealing with world-altering technologies — is to speak mythically. With the rise of AI, we are entering an era whose only corollary is the stuff of fairy tales and myths. Powers that used to be reserved for magicians and sorcerers — the power to access volumes of knowledge instantaneously, to create fully realized illusory otherworlds, to deceive, to conjure, to transport, to materialize on a massive scale — are no longer hypothetical. The age of metaphor is over. The mythic powers are real. Are human beings prepared to handle such powers? While the AI conversation centers around regulatory laws, it may be that we also need to look deeper, to understand the chthonic drives at play. And when we do so, we see that the drive to create AI goes beyond narratives of ingenuity, progress, profit, or the creation of a more controllable, convenient world. Buried deep in this urge to tinker with animacy and sentience are core mythic drives — the longing for mystery, the want to live again in a world of great powers beyond our control, the longing for death, and ultimately, the unconscious longing for guidance and initiation. Traditionally, there was an initiatory process through which potentially world-altering knowledge was embodied slowly over time. And so… what needs to be done about ‘The AI question’ might bear much more of a resemblance to the guiding principles of ancient magic and mystery schools than it does to questions of scientific ethics — because the drives at play are deeper, the consequences greater, and the magic more real than it’s ever been before. Buckle up for a wild ride through myths of magic and human overreach, and all the kung fu movie and sci-fi references you can handle. Featuring music by Charlotte Malin and Sidibe. Listen on a good sound system at a time when you can devote your full attention.
·open.spotify.com·

The real cost of smart speakers
Alexa's recording you. What’s she doing with it? Read Sara’s article about the privacy settings on your smart speaker: https://www.vox.com/recode/2020/12/9/22...
·youtube.com·

Watch an AI Julia Fox deliver a sermon about tech doomerism
From AI simps to manic tech overlords, Literally No Place is the short film exploring the ups and downs of artificial intelligence – and the future Big Tech doesn’t want you to see
·dazeddigital.com·

Ecosystem - Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts and strategies for responding to the transformations that AI systems are bringing to culture and society.
·reader.futureartecosystems.org·