listen

‘Artificial Intelligence?’ No, Collective Intelligence.
Listen to this episode from The Ezra Klein Show on Spotify. A.I.-generated art has flooded the internet, and a lot of it is derivative, even boring or offensive. But what could it look like for artists to collaborate with A.I. systems in making art that is actually generative, challenging, transcendent? Holly Herndon offered one answer with her 2019 album “PROTO.” Along with Mathew Dryhurst and the programmer Jules LaPlace, she built an A.I. called “Spawn” trained on human voices that adds an uncanny yet oddly personal layer to the music. Beyond her music and visual art, Herndon is trying to solve a problem that many creative people are encountering as A.I. becomes more prominent: How do you encourage experimentation without stealing others’ work to train A.I. models? Along with Dryhurst, Jordan Meyer and Patrick Hoepner, she co-founded Spawning, a company figuring out how to allow artists — and all of us creating content on the internet — to “consent” to our work being used as training data. In this conversation, we discuss how Herndon collaborated with a human chorus and her “A.I. baby,” Spawn, on “PROTO”; how A.I. voice imitators grew out of electronic music and other musical genres; why Herndon prefers the term “collective intelligence” to “artificial intelligence”; why an “opt-in” model could help us retain more control of our work as A.I. trawls the internet for data; and much more. Mentioned: “Fear, Uncertainty, Doubt” by Holly Herndon; “xhairymutantx” by Holly Herndon and Mat Dryhurst, for the Whitney Museum of Art; “Fade” by Holly Herndon; “Swim” by Holly Herndon; “Jolene” by Holly Herndon and Holly+; “Movement” by Holly Herndon; “Chorus” by Holly Herndon; “Godmother” by Holly Herndon; “The Precision of Infinity” by Jlin and Philip Glass; Holly+. Book Recommendations: Intelligence and Spirit by Reza Negarestani; Children of Time by Adrian Tchaikovsky; Plurality by E. Glen Weyl, Audrey Tang and ⿻ Community. Thoughts? Guest suggestions? Email us at ezrakleinshow@nytimes.com. You can find transcripts (posted midday) and more episodes of “The Ezra Klein Show” at nytimes.com/ezra-klein-podcast. Book recommendations from all our guests are listed at https://www.nytimes.com/article/ezra-klein-show-book-recs. This episode of “The Ezra Klein Show” was produced by Annie Galvin. Fact-checking by Michelle Harris. Our senior engineer is Jeff Geld, with additional mixing by Aman Sahota. Our senior editor is Claire Gordon. The show’s production team also includes Rollin Hu, Elias Isquith and Kristin Lin. Original music by Isaac Jones. Audience strategy by Kristina Samulewski and Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Jack Hamilton.
·open.spotify.com·
A.I.’s Data Wall + a Surprise Privacy Bill + What Happened to the TikTok Ban?
Listen to this episode from Hard Fork on Spotify. This week, the companies building artificial intelligence are facing a limit to what training data is publicly available on the internet. Will that stop them from building God? Then, a new bipartisan national privacy law proposal just dropped. We ask what’s in it. And finally, ByteDance is building new apps instead of fighting Congress’s TikTok ban. Today’s Guests: Trevor Hughes, president and C.E.O. of the International Association of Privacy Professionals. Additional Reading: How Tech Giants Cut Corners to Harvest Data for A.I.; For Data-Guzzling A.I. Companies, the Internet Is Too Small; Lawmakers unveil sprawling plan to expand online privacy protections; TikTok Turns to Nuns, Veterans and Ranchers in Marketing Blitz. We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.
·open.spotify.com·
Measurementality #4: What are we Optimizing for? with Laura Musikanski and Jonathan Stray
Listen to this episode from The Radical AI Podcast on Spotify. In this 4th episode of Measurementality we'll be "identifying what counts in the algorithmic age" by analyzing how existing metrics regarding human wellbeing along with environmental flourishing are being globally measured today. Laura Musikanski is the Executive Director of The Happiness Alliance and Chair of IEEE 7010-2020. Jonathan Stray is a Visiting Scholar at the Center for Human-Compatible AI and a former research partner at The Partnership on AI, as well as the author of "Aligning AI to Human Values Means Picking the Right Metrics."
·open.spotify.com·
Acts of Interfacing
Listen to this episode from DCODE Conversations on Spotify. Interfaces are not what they used to be. They have extended beyond surfaces, buttons, and levers to something that cuts across infrastructures and entangles the planet, often beyond access and control. Elisa Giaccardi interviews DCODE fellows Rob Collins, Yuxi Liu, and Grace Turtle and guest authors Christian Andersen (The Metainterface, MIT Press) and Ksenia Fedorova (Tactics of Interfacing, MIT Press) to explore how designers might navigate these new terrains.
·open.spotify.com·
The Mid: how culture became algorithmically optimised for mass appeal
Listen to this episode from Logged On – A Dazed Podcast on Spotify. Welcome to the inaugural episode of Logged On, the new podcast from Dazed with Günseli Yalcinkaya about all things internet culture, from memes to emerging trends, Deep Web conspiracy theories and beyond. Episode 1 – The Mid. We're living in a mid-ocracy. Today, culture is algorithmically optimised for mass appeal, serving up platters of pre-packaged cool – whether that’s a Deftones tee, a Fred Again mix or a wavy mirror via your Instagram explore page. On this episode, we're joined by Shumon Basar, the co-author of two books, ‘The Age Of Earthquakes’ and ‘The Extreme Self’, and the author of recent essays on lorecore and endcore, to discuss why everything suddenly feels so... mid. Hosted on Acast. See acast.com/privacy for more information.
·open.spotify.com·
Beyond Values in Algorithmic Design
Listen to this episode from DCODE Conversations on Spotify. What are the opportunities and challenges of using human and moral values for the design of principled algorithms? Based on current trends of value-based frameworks, this podcast engages three experts in discussions of design and everyday experiences of algorithmic values such as fairness, transparency, and inclusiveness for future urban mobility and beyond. Rachel C. Smith presents this conversation, where DCODE fellows Ignacio Garnham, Mireia Yurrita and Francesco Maria Turno interview our guests Minna Ruckenstein (Professor at Helsinki University), Kars Alfrink (PhD candidate at Delft University of Technology) and Aleksej Veselij (Senior data engineer at Accenture).
·open.spotify.com·
Myths and Machine Learning with Marianna Simnett
Listen to this episode from Interdependence on Spotify. In advance of the premiere of her new AI-fueled gesamtkunstwerk “Gorgon”, the artist Marianna Simnett joins us to dive into mythology, mimicry and the saga of convincing tools to do what you want them to! “Gorgon” (commissioned by LAS) will take place at HAU Berlin from September 13-17 (https://www.las-art.foundation/programme/gorgon). Follow Marianna Simnett: https://www.instagram.com/mariannasimnett/ Collect OGRESS on Foundation: https://foundation.app/collection/ogress
·open.spotify.com·
Interdependence | Podcast on Spotify
Listen to Interdependence on Spotify. Holly Herndon & Mat Dryhurst, optimistic about the 21st Century. Patrons and Channel holders get access to weekly episodes as they drop; the free feed is time-delayed.
·open.spotify.com·
Mental models and next gen AI tools with Cristóbal Valenzuela (Runway)
Listen to this episode from Interdependence on Spotify. Runway have been on an absolute tear of late offering new AI tools for their creative production suite, so we invited CEO Cristóbal Valenzuela to join us to discuss their approach, and how the mental models of artists and greater society may change in accordance with the shifts happening. Try out Runway: https://runwayml.com Follow Runway: https://twitter.com/runwayml Follow Cristóbal: https://twitter.com/c_valenzuelab
·open.spotify.com·
family ties (with Kendrick Lamar) • Baby Keem, Kendrick Lamar
·open.spotify.com·
Demystifying AI training law with TechnoLlama
Listen to this episode from Interdependence on Spotify. Everyone and their llama is talking about AI ethics and law, so we invited the high Llama Andres Guadamuz, an eminent researcher in AI and IP, to discuss the particularities and uncertainties of AI training law, in honor of his recent paper on the subject. It is an issue of our time. We make it fun. A Scanner Darkly: Copyright Infringement in Artificial Intelligence Inputs and Outputs: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4371204 Follow Andres on Twitter: https://twitter.com/technollama
·open.spotify.com·
Spotify
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency. This episode was recorded on November 20, 2023. Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she’s the author of the forthcoming book, "Tracing Code." Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and department of language and communication at Radboud University in the Netherlands. He’s a co-author on research from this summer critically examining the true “open source” nature of models like LLaMA and ChatGPT – concluding. References: Yann LeCun testifies on 'open source' work at Meta; Meta launches LLaMA 2; Stanford Human-Centered AI's new transparency index; Coverage in The Atlantic; Eleuther critique; Margaret Mitchell critique; Opening up ChatGPT (Andreas Liesenfeld's work); Webinar. Fresh AI Hell: Sam Altman out at OpenAI; The Verge: Meta disbands their Responsible AI team; Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes; Call-out of Stability and others' use of “fair use” in AI-generated art; A fawning profile of OpenAI's Ilya Sutskever. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
·open.spotify.com·
Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all. This episode was recorded on November 6, 2023. Watch the video version on PeerTube. References: "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955); re: methodological individualism, "The Role of General Theory in Comparative-historical Sociology," American Journal of Sociology, 1991. Fresh AI Hell: Silly made-up graph about “intelligence” of AI vs. “intelligence” of AI criticism; how AI is perpetuating racism and other bias against Palestinians: the UN hired an AI company with "realistic virtual simulations" of Israel and Palestine; WhatsApp's AI sticker generator is feeding users images of Palestinian children holding guns; The Guardian on the same issue; Instagram 'Sincerely Apologizes' For Inserting 'Terrorist' Into Palestinian Bio Translations. Palette cleanser: An AI-powered smoothie shop shut down almost immediately after opening. OpenAI chief scientist: Humans could become 'part AI' in the future; A Brief History of Intelligence: Why the evolution of the brain holds the key to the future of AI; AI-centered 'monastic academy': “MAPLE is a community of practitioners exploring the intersection of AI and wisdom.” You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
·open.spotify.com·
Tech Won't Save Us
Listen to Tech Won't Save Us on Spotify. Silicon Valley wants to shape our future, but why should we let it? Every Thursday, Paris Marx is joined by a new guest to critically examine the tech industry, its big promises, and the people behind them. Tech Won’t Save Us challenges the notion that tech alone can drive our world forward by showing that separating tech from politics has consequences for us all, especially the most vulnerable. It’s not your usual tech podcast.
·open.spotify.com·
So You Want to Be a Sorcerer in the Age of Mythic Powers... (The AI Episode)
Listen to this episode from The Emerald on Spotify. The rise of Artificial Intelligence has generated a rush of conversation about benefits and risks, about sentience and intelligence, and about the need for ethics and regulatory measures. Yet it may be that the only way to truly understand the implications of AI — the powers, the potential consequences, and the protocols for dealing with world-altering technologies — is to speak mythically. With the rise of AI, we are entering an era whose only corollary is the stuff of fairy tales and myths. Powers that used to be reserved for magicians and sorcerers — the power to access volumes of knowledge instantaneously, to create fully realized illusory otherworlds, to deceive, to conjure, to transport, to materialize on a massive scale — are no longer hypothetical. The age of metaphor is over. The mythic powers are real. Are human beings prepared to handle such powers?  While the AI conversation centers around regulatory laws, it may be that we also need to look deeper, to understand the chthonic drives at play. And when we do so, we see that the drive to create AI goes beyond narratives of ingenuity, progress, profit, or the creation of a more controllable, convenient world. Buried deep in this urge to tinker with animacy and sentience are core mythic drives —  the longing for mystery, the want to live again in a world of great powers beyond our control,  the longing for death, and ultimately, the unconscious longing for guidance and initiation. Traditionally, there was an initiatory process through which potentially world-altering knowledge was embodied slowly over time.  And so… what needs to be done about ‘The AI question’ might bear much more of a resemblance to the guiding principles of ancient magic and mystery schools than it does to questions of scientific ethics — because the drives at play are deeper and the consequences greater and the magic more real than it’s ever been before. Buckle up for a wild ride through myths of magic and human overreach, and all the kung fu movie and sci fi references you can handle. Featuring music by Charlotte Malin and Sidibe. Listen on a good sound system at a time when you can devote your full attention. Support the show
·open.spotify.com·
Episode 6: Stochastic Parrot Galactica, November 23, 2022
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Emily and Alex discuss MetaAI's bullshit science paper generator, Galactica, along with its defenders. Plus, where could AI actually help scientific research? And more Fresh AI Hell. Watch the video of this episode on PeerTube. References: Imre Lakatos on research programs; Shah, Chirag and Emily M. Bender. 2022. Situating Search. Proceedings of the 2022 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR ’22); UW RAISE (Responsibility in AI Systems and Experiences). Stochastic Parrots: Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In Proceedings of FAccT 2021, pp. 610-623. The Octopus Paper: Bender, Emily M. and Alexander Koller. 2020. Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. ACL 2020. Palestinian man arrested because of bad machine translation. Katherine McKittrick, Dear Science and Other Stories. The Sokal Hoax. Safiya Noble, Algorithms of Oppression. Latanya Sweeney, "Discrimination in Online Ad Delivery." Mehtab Khan and Alex Hanna, The Subjects and Stages of AI Dataset Development: A Framework for Dataset Accountability. (What is 'sealioning'?) http://wondermark.com/1k62/ Grover: Raji, Inioluwa Deborah, Emily M. Bender, Amandalynne Paullada, Emily Denton and Alex Hanna. 2021. AI and the Everything in the Whole Wide World Benchmark. Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks. Ben Dickson's coverage of Grover: Why we must rethink AI benchmarks. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
·open.spotify.com·
Episode 7: There Are Now 15 Competing Evaluation Metrics (ft. Dr. Jeremy Kahn). December 12, 2022
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Emily and Alex are joined by Dr. Jeremy G. Kahn to discuss the distressingly large number of evaluation metrics for artificial intelligence, and some new AI hell. Jeremy G. Kahn has a PhD in computational linguistics, with a focus on information-theoretic and empirical engineering approaches to dealing with natural language (in text and speech). He’s gregarious, polyglot, a semi-auto-didact, and occasionally prolix. He also likes comic books, coffee, progressive politics, information theory, lateral thinking, science fiction, science fact, linear thinking, bicycles, beer, meditation, love, play, and inquiry. He lives in Seattle with his wife Dorothy and son Elliott. This episode was recorded on December 12, 2022. Watch the video of this episode on PeerTube. References: XKCD: Standards; WikidataCon; Gish Gallop; The Bender Rule; DJ Khaled - You Played Yourself; Jeff Kao's interrogation of public comment periods; Emily's blog post response to NYT piece. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
·open.spotify.com·
Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023
Listen to this episode from Mystery AI Hype Theater 3000 on Spotify. Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here," and explain why this claim is a maze of hype and moving goalposts. References: Noema Magazine: "Artificial General Intelligence Is Already Here"; "AI and the Everything in the Whole Wide World Benchmark"; "Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"; "Recoding Gender: Women's Changing Participation in Computing"; "The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"; "Is chess the drosophila of artificial intelligence? A social history of an algorithm"; "The logic of domains"; "Reckoning and Judgment." Fresh AI Hell: Using AI to meet "diversity goals" in modeling; AI ushering in a "post-plagiarism" era in writing; "Wildly effective and dirt cheap AI therapy"; Applying AI to "improve diagnosis for patients with rare diseases"; Using LLMs in scientific research; Health insurance company Cigna using AI to deny medical claims; AI for your wearable-based workout. You can check out future livestreams at https://twitch.tv/DAIR_Institute. Follow us! Emily Twitter: https://twitter.com/EmilyMBender Mastodon: https://dair-community.social/@EmilyMBender Bluesky: https://bsky.app/profile/emilymbender.bsky.social Alex Twitter: https://twitter.com/@alexhanna Mastodon: https://dair-community.social/@alex Bluesky: https://bsky.app/profile/alexhanna.bsky.social Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
·open.spotify.com·
Iain S. Thomas & Jasmine Wang: Can AI Answer Life’s Biggest Questions?
Listen to this episode from Sounds True: Insights at the Edge on Spotify. Mention to someone the words “artificial intelligence,” and chances are you’ll get a very emotional response. For some, the thought of AI triggers fear, anger, and suspicion; for others, great excitement and anticipation.  In this podcast, Tami Simon speaks with technologist and philosopher Jasmine Wang along with poet Iain S. Thomas, coauthors of the new book What Makes Us Human? An Artificial Intelligence Answers Life’s Biggest Questions. Whatever your view on AI, we think you’ll find this conversation profoundly interesting and informative!  Listen now as Tami, Jasmine, and Iain discuss the artificial intelligence known as GPT-3; holding an attitude of “critical techno optimism”; finding kinship with digital beings; the question of sentience; the sometimes “hallucinatory” nature of generative AI; the three main aspects of deep learning technology—classification, recommendation, and generation; AI as a creativity compounder; bringing a moral lens to the development and deployment of AI; the central human themes of presence, love, and interconnectedness; acting with intent and living with meaning; and more. Note: This episode originally aired on Sounds True One, where these special episodes of Insights at the Edge are available to watch live on video and with exclusive access to Q&As with our guests. Learn more at join.soundstrue.com.
·open.spotify.com·
Mystery AI Hype Theater 3000
Listen to Mystery AI Hype Theater 3000 on Spotify. Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything, from machine consciousness to science fiction, to political economy to art made by machines.
·open.spotify.com·
168 Dall-E
This week Alex Fischer asks about Resistance/Persistence/Dall-E/AI (https://artofalexfischer.com), plus a field recording from Linda Loh (https://lindaloh.com).
·soundcloud.com·
08: Humans in the Loop
Listen to this episode from NerdOut@Spotify on Spotify. Get ready: we’re diving into machine learning. Hear how we’re improving personalization with reinforcement learning (RL), what makes ML engineering so different from other kinds of software engineering, and why machine learning at Spotify is really about humans on one side of an algorithm trying to better understand the humans on the other side of it. Spotify’s director of research, Mounia Lalmas-Roelleke, talks with host Dave Zolotusky about how we’re using RL to optimize recommendations for future rewards, how listening to more diverse content relates to long-term satisfaction, how to teach machines about the difference between p-funk and g-funk, and the upsides of taking the stairs. Then Dave goes deep into the everyday life of an ML engineer. He talks with senior staff engineer Joe Cauteruccio about what it takes to turn ML theory into code, the value of T-shapedness, the difference between inference errors and bugs, using proxy targets and developing your ML intuition, and why in machine learning something’s probably wrong if everything looks right. Plus, an ML glossary: our guests educate us on the definitions for cold starts, bandits, and more. This episode is the first in a series about machine learning and personalization at Spotify. Learn more about ML and personalization: Listen: Spotify: A Product Story, Ep.04: “Human vs Machine” Watch: TransformX 2021: “Creating Personalized Listening Experiences with Spotify” Recent publications from Spotify Research: “Variational User Modeling with Slow and Fast Features” (Feb. 2022) “Algorithmic Balancing of Familiarity, Similarity, & Discovery in Music Recommendations” (Nov. 2021) “Leveraging Semantic Information to Facilitate the Discovery of Underserved Podcasts” (Nov. 2021) “Shifting Consumption towards Diverse Content on Music Streaming Platforms” (Mar. 2021) Read what else we’re nerding out about on the Spotify Engineering Blog: engineering.atspotify.com You should follow us on Twitter @SpotifyEng and on LinkedIn!
·open.spotify.com·
Approachable AI for music, model markets, new DAWs and Holly+ with Never Before Heard Sounds
Listen to this episode from Interdependence on Spotify. Super excited to share this one, on the advent of our collaboration for Holly+, we are joined by Chris Deaner and Yotam Mann of Never Before Heard Sounds, a brand new company releasing AI music tools, to discuss approachable AI tools for music making, the inevitable model economy, new approaches to DAWs and the Holly+ project more generally! Never Before Heard Sounds: https://heardsounds.com/ Follow them on Twitter: https://twitter.com/HeardSounds Play with Holly+ (and share your results!): https://holly.plus/
·open.spotify.com·
6: State of the Game
Listen to this episode from This Study Shows on Spotify. AI has come a long way (it even named this episode), but what does it have to do with science communication? We find the line between the present and the future as we explore how AI will affect science communication, and how it has already taken hold, with Mara Pometti, lead data strategist at IBM, and Professor Charlie Beckett, lead of JournalismAI at the London School of Economics. We want to know what you think about This Study Shows! Take a short survey and help us make this podcast the best it can be.
·open.spotify.com·
Interdependence
Holly Herndon & Mat Dryhurst, optimistic about the 21st Century. Patrons and Channel holders get access to weekly episodes as they drop; the free feed is time-delayed.
·interdependence.fm·
Imagining AI Countergovernance
Blair Attard-Frost describes alternative mechanisms for community-led and worker-led AI governance.
·techpolicy.press·
AI Countergovernance
Blair Attard-Frost on resistance to artificial intelligence, backlashes against the governance of AI systems, and possibilities for bottom-up AI governance led by communities and workers.
·midnightsunmag.ca·
The Culture & Technology Podcast
How is technology changing culture? From exhibition design to the performing arts, we invite leading curators, researchers, artists and cultural experts to explore how technology is shaping the future of cultural experiences and sparking new opportunities in the process. The Culture & Technology Podcast is a virtual salon — hosted by the Vienna Business Agency together with Severin Matusek. Each conversation pairs Viennese creatives with an international expert to discuss a topic, entertain a thought and share their knowledge through conversation.  The future of cultural experiences is up to us to create. We hope you’ll join us by subscribing to The Culture and Technology Podcast wherever you listen to podcasts.
·culture-technology.podigee.io·