The field of immersive and interactive media is rapidly evolving, with professionals from diverse disciplines, including artists, creative technologists, curators, researchers, and producers, shaping its future. IDFA's new media section, DocLab, is helping to lead the way by showcasing, researching, and developing the best interactive documentary art and XR storytelling. Now open for entries.
We’re excited to see interest in the positions AI Now recently posted (see here for more info on our open Associate Director and Operations Director roles). If you have questions about these positions or about AI Now as a workplace, we’re happy to answer them. For equity reasons, we will hold two office hours sessions so […]
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.
“Remembering Our Future: Shamanism, Oracles and AI”: a roundtable conversation with Li-Chun Marina Lin, Cavan McLaughlin, Karin Valis, Bogna Konior and Yin-Ju Che...
Open Call for 2024 Open Future Fellows – Open Future
Open Future is looking for fellows who will contribute to our work on advancing Digital Public Space and cultivating Digital Commons. For the first time, we are opening the call to creatives. We are a small, dedicated team of advocates, researchers, and community builders working to make the internet open. We strive to question the […]
Alexa’s recording you. What’s she doing with it? Read Sara’s article about the privacy settings on your smart speaker: https://www.vox.com/recode/2020/12/9/22...
Watch an AI Julia Fox deliver a sermon about tech doomerism
From AI simps to manic tech overlords, Literally No Place is the short film exploring the ups and downs of artificial intelligence – and the future Big Tech doesn’t want you to see
AI Nationalism(s): Global Industrial Policy Approaches to AI
Our latest report diagnoses concentration of power in the tech industry as a pressing challenge – and points the way forward to seize this moment of change.
Future Art Ecosystems 4: Art x Public AI
Future Art Ecosystems 4: Art x Public AI provides analyses, concepts and strategies for responding to the transformations that AI systems are bringing to culture and society.
Click here to pre-register as a mentor! CIRCE’s Fellowship Programme seeks mentors to guide and assist passionate researchers, creatives, practitioners, […]