Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23, 2023
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts.

References:
Noema Magazine: "Artificial General Intelligence Is Already Here"
"AI and the Everything in the Whole Wide World Benchmark"
"Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"
"Recoding Gender: Women's Changing Participation in Computing"
"The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"
"Is chess the drosophila of artificial intelligence? A social history of an algorithm"
"The logic of domains"
"Reckoning and Judgment"

Fresh AI Hell:
Using AI to meet "diversity goals" in modeling
AI ushering in a "post-plagiarism" era in writing
"Wildly effective and dirt cheap AI therapy"
Applying AI to "improve diagnosis for patients with rare diseases"
Using LLMs in scientific research
Health insurance company Cigna using AI to deny medical claims
AI for your wearable-based workout

You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Follow us!
Emily
Twitter: https://twitter.com/EmilyMBender
Mastodon: https://dair-community.social/@EmilyMBender
Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex
Twitter: https://twitter.com/@alexhanna
Mastodon: https://dair-community.social/@alex
Bluesky: https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Iain S. Thomas & Jasmine Wang: Can AI Answer Life’s Biggest Questions?
Listen to this episode from Sounds True: Insights at the Edge on Spotify. Mention to someone the words “artificial intelligence,” and chances are you’ll get a very emotional response. For some, the thought of AI triggers fear, anger, and suspicion; for others, great excitement and anticipation. In this podcast, Tami Simon speaks with technologist and philosopher Jasmine Wang along with poet Iain S. Thomas, coauthors of the new book What Makes Us Human? An Artificial Intelligence Answers Life’s Biggest Questions. Whatever your view on AI, we think you’ll find this conversation profoundly interesting and informative! Listen now as Tami, Jasmine, and Iain discuss the artificial intelligence known as GPT-3; holding an attitude of “critical techno optimism”; finding kinship with digital beings; the question of sentience; the sometimes “hallucinatory” nature of generative AI; the three main aspects of deep learning technology—classification, recommendation, and generation; AI as a creativity compounder; bringing a moral lens to the development and deployment of AI; the central human themes of presence, love, and interconnectedness; acting with intent and living with meaning; and more. Note: This episode originally aired on Sounds True One, where these special episodes of Insights at the Edge are available to watch live on video and with exclusive access to Q&As with our guests. Learn more at join.soundstrue.com.
Listen to Mystery AI Hype Theater 3000 on Spotify. Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, political economy, and art made by machines.
This week Alex Fischer asks about Resistance/Persistence/Dall-E/AI (https://artofalexfischer.com), plus a field recording from Linda Loh (https://lindaloh.com).
Listen to this episode from NerdOut@Spotify on Spotify. Get ready: we’re diving into machine learning. Hear how we’re improving personalization with reinforcement learning (RL), what makes ML engineering so different from other kinds of software engineering, and why machine learning at Spotify is really about humans on one side of an algorithm trying to better understand the humans on the other side of it. Spotify’s director of research, Mounia Lalmas-Roelleke, talks with host Dave Zolotusky about how we’re using RL to optimize recommendations for future rewards, how listening to more diverse content relates to long-term satisfaction, how to teach machines about the difference between p-funk and g-funk, and the upsides of taking the stairs. Then Dave goes deep into the everyday life of an ML engineer. He talks with senior staff engineer Joe Cauteruccio about what it takes to turn ML theory into code, the value of T-shapedness, the difference between inference errors and bugs, using proxy targets and developing your ML intuition, and why in machine learning something’s probably wrong if everything looks right. Plus, an ML glossary: our guests educate us on the definitions for cold starts, bandits, and more. This episode is the first in a series about machine learning and personalization at Spotify.

Learn more about ML and personalization:
Listen: Spotify: A Product Story, Ep. 04: “Human vs Machine”
Watch: TransformX 2021: “Creating Personalized Listening Experiences with Spotify”

Recent publications from Spotify Research:
“Variational User Modeling with Slow and Fast Features” (Feb. 2022)
“Algorithmic Balancing of Familiarity, Similarity, & Discovery in Music Recommendations” (Nov. 2021)
“Leveraging Semantic Information to Facilitate the Discovery of Underserved Podcasts” (Nov. 2021)
“Shifting Consumption towards Diverse Content on Music Streaming Platforms” (Mar. 2021)

Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com
You should follow us on Twitter @SpotifyEng and on LinkedIn!
Approachable AI for music, model markets, new DAWs and Holly+ with Never Before Heard Sounds
Listen to this episode from Interdependence on Spotify. Super excited to share this one: on the advent of our collaboration for Holly+, we are joined by Chris Deaner and Yotam Mann of Never Before Heard Sounds, a brand new company releasing AI music tools, to discuss approachable AI tools for music making, the inevitable model economy, new approaches to DAWs, and the Holly+ project more generally!

Never Before Heard Sounds: https://heardsounds.com/
Follow them on Twitter: https://twitter.com/HeardSounds
Play with Holly+ (and share your results!): https://holly.plus/
Listen to this episode from This Study Shows on Spotify. AI has come a long way (it even named this episode), but what does it have to do with science communication? We find the line between the present and the future as we explore how AI will affect science communication, and how it has already taken hold, with Mara Pometti, lead data strategist at IBM, and Professor Charlie Beckett, lead of JournalismAI at the London School of Economics. We want to know what you think about This Study Shows! Take a short survey and help us make this podcast the best it can be.
guapmag on Instagram, May 3, 2024: "GUAP are looking to commission a writer to write a think-piece on AI in Fashion and Photography, please submit via link in bio 🔗".
Waag | Dutch population sets priorities for AI research agenda
Survey of Dutch opinion on AI: 58% of the Dutch population considers the theme "Fake news, fake photos and polarisation" crucial when it comes to the development of Artificial Intelligence (AI) and research into it.
In the 2020s, natural language interfaces became the standard for human-AI interaction, promising accessibility for all. However, this seemingly utopian solution…
Is this the world's first AI-generated documentary?
Alan Warburton was commissioned by the ODI's Data as Culture programme to bring us 'The Wizard of AI,' a 20-minute video essay about the cultural impacts of generative AI. It was produced over three weeks at the end of October 2023, one year after the release of the infamous Midjourney v4, which the artist treats as a "gamechanger" for visual cultures and creative economies. According to the artist, the video itself is "99% AI" and was produced using generative AI tools like Midjourney, Stable Diffusion, Runway and Pika. Yet the artist is careful to temper the hype of these new tools, or, as he says, not to give in to the "wonder-panic" brought about by generative AI. Using creative workflows unthinkable before October 2023, he takes us on a colourful journey behind the curtain of AI - through Oz, pink slime, Kanye's 'Futch' and a deep sea dredge - to explain and critique the legal, aesthetic and ethical problems engendered by AI-automated platforms. Most importantly, he focusses on the real impacts this disruptive wave of technology continues to have on artists and designers around the world.
Commissioned by Data as Culture at the ODI: www.culture.theodi.org
----------------------------------------------
Disclaimer: this work is a non-commercial work of critical/educational/satirical commentary. Under UK law, this is referred to as ‘fair dealing’ and protects the work from claims of copyright.
Data use: According to the data taxonomies provided by www.translatingnature.org, this work derives from the following data types: living, biological and non-biological data; non-living, commercial, personal and licensed data; static data; generated, processed, retrieved data and 'anecdata' (including metadata); and anonymised, identifiable and unknown data.
----------------------------------------------
Full (hyperlinked) credits can be seen at www.thewizardof.ai
· Written, directed, voiced, animated and soundtracked by Alan Warburton.
· Back to The Futch animation by Ewan Jones Morris
· Special thanks to Joanne McNeil - https://vimeo.com/justbrowsing, Tom Pounder - https://vimeo.com/user3078533, Hannah Redler-Hawes and Omar.
· Wonderpanic Theme by Sonny Baker.
· Research Assistance from Fabian Mosele
· Steve Ballmer Genie by Christian Schlaeffer - https://vimeo.com/cschlaeffer
· Pretty fishes by UglyStupidHonest - https://vimeo.com/user3746465
· In Memoriam images by Alex Czetwertynski - https://vimeo.com/user2228130
· Concept development and AI collaboration from John Butler - https://vimeo.com/user3946359, Samine Joudat, Ben Dosage, @dzennifer, Ben Dawson - https://vimeo.com/user83259586, Alejandro González Romo - https://vimeo.com/user128947837, @symbios.wiki - https://vimeo.com/user45174927, Ugur Engin Deniz - https://vimeo.com/engindeniz
----------------------------------------------
AI Tools used:
· Runway Gen 2 to generate 16:9 ‘AI Collaborator’ video clips
· Midjourney, Stable Diffusion and DALLE 3 to generate still images
· Pika to generate 3 second fish loops
· TikTok for detective speech synthesis
· HeyGen to generate AI talking detective head
· Adobe Photoshop AI to expand images
· Topaz Gigapixel AI to upscale images
-----------------------------------------------
Clips attribution:
· What is the Internet? (1995) by The Today Show
· Microsoft Clippy (1997 onwards) web compilation
· CNN Internet Report (1993) by CNN News
· Napster Report (2000) by CNN Headline News
· Tech Events in 2023 Be Like (2023), Verge, featuring footage from META
· Zane Lowe meets Kanye West (2015), BBC Radio 1.
· Unit 9 AI Workflow (2023) Unit9Ltd
· Thanos Snap, Avengers: Endgame (2019) Marvel Studios, LLC
· For AI artist clips, please see onscreen attribution.
The field of immersive and interactive media is rapidly evolving, with professionals from diverse disciplines, including artists, creative technologists, curators, researchers, and producers, shaping its future. IDFA's section for new media, DocLab, is helping to lead the way by showcasing, researching and developing the best interactive documentary art and XR storytelling. Open for entry.
We’re excited to see interest in the positions AI Now recently posted: (see here for more info on our open Associate Director and Operations Director roles) If you have questions about these positions or AI Now as a workplace, we’re happy to answer them. For equity reasons, we will hold two office hours sessions so […]
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence | First Monday
The stated goal of many organizations in the field of artificial intelligence (AI) is to develop artificial general intelligence (AGI), an imagined system with more intelligence than anything we have ever seen. Without seriously questioning whether such a system can and should be built, researchers are working to create “safe AGI” that is “beneficial for all of humanity.” We argue that, unlike systems with specific applications which can be evaluated following standard engineering principles, undefined systems like “AGI” cannot be appropriately tested for safety. Why, then, is building AGI often framed as an unquestioned goal in the field of AI? In this paper, we argue that the normative framework that motivates much of this goal is rooted in the Anglo-American eugenics tradition of the twentieth century. As a result, many of the very same discriminatory attitudes that animated eugenicists in the past (e.g., racism, xenophobia, classism, ableism, and sexism) remain widespread within the movement to build AGI, resulting in systems that harm marginalized groups and centralize power, while using the language of “safety” and “benefiting humanity” to evade accountability. We conclude by urging researchers to work on defined tasks for which we can develop safety protocols, rather than attempting to build a presumably all-knowing system such as AGI.