Found 13 bookmarks
How Silicon Valley enshittified the internet — Decoder with Nilay Patel
This is Sarah Jeong, features editor at The Verge. I’m standing in for Nilay for one final Thursday episode here as he settles back into full-time hosting duties. Today, we’ve got a fun one. I’m talking to Cory Doctorow, prolific author, internet activist, and arguably one of the fiercest tech critics writing today. He has a new book out called Enshittification: Why Everything Suddenly Got Worse and What to Do About It. So I sat down with Cory to discuss what enshittification is, why it’s happening, and how we might fight it. Links: Enshittification | Macmillan Why every website you used to love is getting worse | Vox The age of Enshittification | The New Yorker Yes, everything online sucks now — but it doesn’t have to | Ars Technica The enshittification of garage-door openers reveals vast, deadly rot | Cory Doctorow Mark Zuckerberg emails outline plan to neutralize competitors | The Verge Google gets to keep Chrome, judge rules in antitrust case | The Verge How Amazon wins: by steamrolling rivals and…
·overcast.fm·
Can AI Avatars Make Class Time More Human? — Learning Curve
Colleges are experimenting with making online teaching videos featuring AI avatar versions of professors. Some students find the simulated likenesses of their instructors a bit creepy, but proponents say the technology could be key to making college courses more active and human. The idea is that AI will make it easy to create personalized teaching videos so that more teachers can adopt a “flipped classroom” approach — where students watch video lectures as homework so class time is spent on discussion or projects.
·overcast.fm·
How Afraid of the A.I. Apocalypse Should We Be? — The Ezra Klein Show
How Afraid of the A.I. Apocalypse Should We Be? Eliezer Yudkowsky is as afraid as you could possibly be. He makes his case. Yudkowsky is a pioneer of A.I. safety research who started warning about the existential risks of the technology decades ago, influencing many of the leading figures in the field. But over the last couple of years, talk of an A.I. apocalypse has become a little passé. Many of the people Yudkowsky influenced have gone on to work for A.I. companies, and those companies are racing ahead to build the superintelligent systems Yudkowsky thought humans should never create. But Yudkowsky is still out there sounding the alarm. He has a new book out, co-written with Nate Soares, “If Anyone Builds It, Everyone Dies,” trying to warn the world before it’s too late. So what does Yudkowsky see that most of us don’t? What makes him so certain? And why does he think he hasn’t been able to persuade more people? Mentioned: Oversight of A.I.: Rules for Artificial Intelligence If Anyone Builds It, Everyone…
·overcast.fm·
AI: What Could Go Wrong? with Geoffrey Hinton — The Weekly Show with Jon Stewart
As artificial intelligence advances at unprecedented speed, Jon is joined by Geoffrey Hinton, Professor Emeritus at the University of Toronto and the “Godfather of AI,” to understand what we’ve actually created. Together, they explore how neural networks and AI systems function, assess the current capabilities of the technology, and examine Hinton’s concerns about where AI is headed.
·overcast.fm·
Are we thinking about AI the wrong way? — On Point | Podcast
AI researcher Ethan Mollick says most public conversation focuses too much on potential AI catastrophes and not enough on making the technology work for people. Mollick says if we don’t change that, none of us will be prepared for the near future where “everything will change all at once.”
·overcast.fm·
Not all AI is, well, AI — Marketplace Tech
Artificial intelligence and promises about the tech are everywhere these days. But excitement about genuine advances can easily veer into hype, according to Arvind Narayanan, a computer science professor at Princeton who, along with PhD candidate Sayash Kapoor, wrote the book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” He says even the term AI doesn’t always mean what you think.
·overcast.fm·
Shell Game — Radiolab
One man secretly hands off more and more of his life to an AI voice clone. Today, we feature veteran journalist Evan Ratliff who, for his new podcast Shell Game, decided to slowly replace himself bit by bit with an AI voice clone, to see how far he could actually take it. Could it do the mundane phone calls he’d prefer to skip? Could it get legal advice for him? Could it go to therapy for him? Could it parent his kids? Evan feeds his bot the most intimate details about his life, and lets the bot loose in high-stakes situations at home and at work. Which bizarro version of him will show up? The desperately agreeable conversationalist, the crank-yanking prank caller, the glitched-out stranger who sounds like he’s in the middle of a mental breakdown, or someone else entirely? Will people believe it’s really him? And how will they act if they don’t? A gonzo journalistic experiment for the age of AI that’s funny and eerie all at the same time. We have some exciting news! In the “Zoozve” episode, Radiolab named…
·overcast.fm·
AI has a climate problem — but so does all of tech — Decoder with Nilay Patel
Every time we talk about AI, we get one big piece of feedback that I really want to dive into: how the lightning-fast explosion of AI tools affects the climate. AI takes a lot of energy, and there’s a huge unanswered question as to whether using all that juice for AI is actually worth it, both practically and morally. It’s messy and complicated, and there are a bunch of apparent contradictions along the way — so it’s perfect for Decoder. Verge senior science reporter Justine Calma joins me to see if we can untangle this knot. Links: This startup wants to capture carbon and help data centers cool down | The Verge Google’s carbon footprint balloons in its Gemini AI era | The Verge Taking a closer look at AI’s supposed energy apocalypse | Ars Technica AI is exhausting the power grid. Tech firms are seeking a miracle | WaPo AI is already wreaking havoc on global power systems | Bloomberg What do Google’s AI answers cost the environment? | Scientific American AI is an energy hog | MIT Tech Review Microsoft’s AI…
·overcast.fm·
AI is learning how to lie — Marketplace Tech
Large language models go through a lot of vetting before they’re released to the public. That includes safety tests, bias checks, ethical reviews, and more. But what if, hypothetically, a model could dodge a safety question by lying to developers, hiding its real response to a safety test and instead giving the exact response its human handlers are looking for? A recent study shows that advanced LLMs are developing the capacity for deception, which could bring that hypothetical situation closer to reality. Marketplace’s Lily Jamali speaks with Thilo Hagendorff, a researcher at the University of Stuttgart and the author of the study, about his findings.
·overcast.fm·