Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing
Generative artificial-intelligence chatbots like ChatGPT are known to sometimes get things wrong, a phenomenon known as “hallucinating.” But can anyone be held liable if those incorrect responses are damaging in some way?
Host Zoe Thomas talks to a legal expert and an AI ethicist to explore the legal landscape for generative AI technology, and the tactics companies are employing to improve their products. This is the fourth episode of Tech News Briefing’s special series on generative AI, “Artificially Minded.”
0:00 Why Australian mayor Brian Hood is considering suing OpenAI for defamation over ChatGPT's false claims about him
4:44 Why generative AI programs like OpenAI’s ChatGPT can get facts wrong
6:26 What the 1996 Communications Decency Act could tell us about laws around generative AI
10:20 How generative AI blurs the line between creator and platform
12:56 How lawmakers around the world are handling AI regulation
14:13 Why AI hallucinations happen
17:16 How Google is taking steps to create a more factual chatbot with Bard
18:34 How tech companies work with AI ethicists: what is red teaming?
Tech News Briefing
WSJ’s tech podcast featuring breaking news, scoops and tips on tech innovations and policy debates, plus exclusive interviews with movers and shakers in the industry.
For more episodes of WSJ’s Tech News Briefing: https://link.chtbl.com/WSJTechNewsBriefing
#AI #Regulation #WSJ