Episode #183 ... Is ChatGPT really intelligent?

Mystery AI Hype Theater 3000 | DAIR
The Distributed AI Research Institute is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.
Me, Myself, and AI
Podcast · MIT Sloan Management Review and Boston Consulting Group (BCG) · Why do only 10% of companies succeed with AI? In this series by MIT SMR and BCG, we talk to the leaders who've achieved big wins with AI in their companies and learn how they did it. Hear what gets experts from companies like NASA, GitHub, and others excited to do their jobs every day and what they consider the keys to their success.
In Machines We Trust
Podcast · In Machines We Trust · "In Machines We Trust" is a captivating podcast that delves deep into the world of technology and innovation. Each episode explores a range of cutting-edge topics and current news, offering listeners insights into the rapidly evolving digital landscape. Our discussions focus on how these technological advancements impact society, the economy, and our daily lives. Join us as we navigate the intricate and fascinating world of machines, unraveling the mysteries and possibilities they hold.
Data Skeptic
Podcast · Kyle Polich · Data Skeptic is your source for a perspective of scientific skepticism on topics in statistics, machine learning, big data, artificial intelligence, and data science. Our weekly podcast and blog bring you stories and tutorials to help understand our data-driven world.
Gods and Robots
Eryk Salvaggio & Caroline Sinders: Glitching AI, Algorithmic Resistance, Labor Activism, Art as Research, & Feminist Technology | Urgent Futures #13
Urgent Futures with Jesse Damiani · Episode
Shrey Jain: Applied Scientist at Microsoft Research Special Projects | RadicalxChange(s)
Shrey Jain, an applied scientist at Microsoft Research Special Projects, speaks with Matt Prewitt on a very timely and topical subject: AI and, more specifically, the dangers it poses to natural human communication ("context collapse"). They take a deep dive into current threats to privacy, expanding beyond the often-discussed cryptographic sense into "privacy as contextual integrity", and into the immediate opportunity to embed ethical guardrails into the ever-changing realm of generative AI through possible solutions such as designated verified signatures in "plural publics".
Shrey’s recently published paper co-authored with Divya Siddarth and E. Glen Weyl “Plural Publics” is linked in the episode notes.
Computer Says Maybe: The Age of Noise w/ Eryk Salvaggio
Tech Won't Save Us
Podcast · Paris Marx · Silicon Valley wants to shape our future, but why should we let it? Every Thursday, Paris Marx is joined by a new guest to critically examine the tech industry, its big promises, and the people behind them. Tech Won’t Save Us challenges the notion that tech alone can drive our world forward by showing that separating tech from politics has consequences for us all, especially the most vulnerable. It’s not your usual tech podcast.
AI Snake Oil: Separating Hype from Reality
Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.
In the loop by Dreaming Beyond AI
Podcast · Dreaming Beyond AI · Dreaming Beyond AI is a platform for critical and constructive knowledge, visionary fiction, speculative art, and community organizing around Artificial Intelligence. In this podcast series, we're all about opening up new dimensions of interaction and challenging the current perceptions and narratives around AI. We want to reclaim technology as a space for marginalized bodies and build an archive of alternative realities.
Knowing Machines - Podcast
Knowing Machines is a research project tracing the histories, practices, and politics of how machine learning systems are trained to interpret the world.
What is AI? — MIT Technology Review Narrated
Artificial intelligence is the hottest technology of our time. But what is it? It sounds like a stupid question, but it’s one that’s never been more urgent. MIT Technology Review takes a deep dive into the competing answers from titans of industry and helps us understand how we got here—and why you should care, no matter who you are.
Dr Abeba Birhane on what we need to know about AI
Computer Says Maybe
Interpreting Technology with AIxDESIGN — Nadia Piet
A plea for modest AI - Supervision (AI-generated podcast from interview)
Illuminate
Transform your content into engaging AI-generated audio discussions.
Code Green - AI & Climate Action Research & Initiatives in Asia
Code Green is a media series that separates the heat from the hype around climate tech in Asia.
AI and I
Learn how the smartest people in the world are using AI to think, create, and relate. Each week I interview founders, filmmakers, writers, investors, and others about how they use AI tools like ChatGPT, Claude, and Midjourney in their work and in their lives. We screen-share through their historical chats and then experiment with AI live on the show. Join us to discover how AI is changing how we think about our world—and ourselves.
For more essays, interviews, and experiments at the forefront of AI: https://every.to/chain-of-thought?sort=newest.
Outside institutions can play a role in evaluating and ensuring fairness in AI systems: with Ploipailin Flynn of AIxD (Part 1) | Digital Health Review: Conversations with a Black Health Tech Nerd
In this two-part episode, we sit down with Ploipailin Flynn, founding member of AI x Design, a global community and decentralized studio for critical and creative AI research & design.
In this conversation, we discuss why understanding how AI systems are built is crucial for creating better outcomes, and why measuring the outcomes of AI in communities and tracking deployed models is essential. Ploi shares her expertise on how designing user interfaces for different stakeholders can facilitate feedback and uphold data rights, and on why involving outside organizations can help evaluate the fairness of AI systems while also creating defensible positions on a company's responsibility with AI.
What the FAccT?: Reformers and Radicals [Computer Says Maybe]
DEEP-DIVE: AI & Psychedelics
FUTURE-PROOF · DEEP-DIVE is a series of public talks, each delving deeper into a specific topic already highlighted by a project at Blessed Foundation, enabling the nuances of important questions to be explored. The first DEEP-DIVE episode coincides with the exhibition currently on display at Blessed Foundation, RAPTURE by Andrea Khôra. We explore key themes in Andrea's work, focusing on psychedelics and AI. Hear from Shaneihu Yawanawá, Utxi Yawanawá, Yawatume Yawanawá and Maria Fernanda Gebara, who share their views on the psychedelics boom from the perspective of indigenous traditions and ethics. We're also joined by Neşe Devenot, whose research was a major influence on Andrea's work, offering a critical assessment of the collision of psychedelics and capitalism. With further insights from Andrea Khôra and Sylwia Serafinowicz (Managing Director at Blessed Foundation), dive into this episode for an inspiring and thought-provoking exploration of ancestral intelligence vs artificial intelligence. RAPTURE by Andrea Khôra is showing at Blessed Foundation until 27th June 2024. Contact info@blessed-foundation.org for more information.
The Good Robot
Join Dr Eleanor Drage and Dr Kerry McInerney as they ask the experts: what is good technology? Is ‘good’ technology even possible? And how can feminism help us work towards it? Each week, they invite scholars, industry practitioners, activists, and more to provide their unique perspective on what feminism can bring to the tech industry and the way that we think about technology. With each conversation, The Good Robot asks how feminism can provide new perspectives on technology’s biggest problems.
Tactics&Practice #15: (Un)real Data – Real Effects
The 15th edition of Tactics&Practice explores how the ambiguous quality of data can be used as a tool to produce real-world outcomes. Can the act of purposely creating data provide agency within data-driven systems? Is it possible to manipulate data to create specific effects? (Un)real Data – Real Effects is a programme by !Mediengruppe Bitnik […]
Generationship | Ep. #11, Ghost Workers with Adio Dinika of DAIR Institute | Heavybit
Rachel and Adio Dinika of DAIR Institute discuss the challenges faced by platform laborers around the world, including unfair compensation and job insecurity.
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21), pp. 610–623.