Found 3549 bookmarks
Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing
Generative artificial-intelligence chatbots like ChatGPT are known to get things wrong sometimes, a phenomenon known as “hallucinating.” But can anyone be held liable if those incorrect responses are damaging in some way? Host Zoe Thomas talks to a legal expert and an AI ethicist to explore the legal landscape for generative AI technology, and the tactics companies are employing to improve their products. This is the fourth episode of Tech News Briefing’s special series on generative AI, “Artificially Minded.”
0:00 Why Australian mayor Brian Hood is thinking about suing OpenAI's ChatGPT for defamation
4:44 Why generative AI programs like OpenAI’s ChatGPT can get facts wrong
6:26 What the 1996 Communications Decency Act could tell us about laws around generative AI
10:20 How generative AI blurs the line between creator and platform
12:56 How lawmakers around the world are handling regulation around AI
14:13 Why AI hallucinations happen
17:16 How Google is taking steps to create a more factual chatbot with Bard
18:34 How tech companies work with AI ethicists: what is red teaming?
Tech News Briefing: WSJ’s tech podcast featuring breaking news, scoops and tips on tech innovations and policy debates, plus exclusive interviews with movers and shakers in the industry. For more episodes of WSJ’s Tech News Briefing: https://link.chtbl.com/WSJTechNewsBriefing
·youtube.com·
Will we run out of data? An analysis of the limits of scaling...
We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these trends using two methods: the historical growth rate and...
·arxiv.org·
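The first method named in that abstract (projecting the historical growth rate) is simple enough to sketch: fit an exponential trend to past dataset sizes, then solve for the year it crosses an estimated total stock of usable data. A minimal sketch, where the dataset sizes and the data-stock figure are invented placeholders, not numbers from the paper:

```python
# Extrapolation by historical growth rate: fit exponential growth
# (a line in log space) and project it against an assumed data stock.
# All figures below are illustrative assumptions, not the paper's.
import numpy as np

years = np.array([2018, 2019, 2020, 2021, 2022])            # assumed observation years
tokens = np.array([1e11, 1.6e11, 2.5e11, 4e11, 6.3e11])     # assumed dataset sizes (tokens)

# Linear fit in log space = exponential growth in token counts.
slope, intercept = np.polyfit(years, np.log(tokens), 1)
print(f"fitted annual growth: {np.exp(slope) - 1:.0%}")

data_stock = 5e14  # assumed total stock of usable text, in tokens
# Solve slope * year + intercept = log(data_stock) for the crossing year.
crossing_year = (np.log(data_stock) - intercept) / slope
print(f"trend crosses the assumed stock around {crossing_year:.0f}")
```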
When AI Overrules the Nurses Caring for You
Artificial intelligence raises difficult questions about who makes the call in a health crisis: the human or the machine?
·wsj.com·
Amp
·www-technologyreview-com.cdn.ampproject.org·
The AI Arms Race: Using AI to avoid AI-detection tools
The use of artificial intelligence (AI) to detect AI-generated content has gained increasing attention in recent years as the capabilities…
·drpontus.medium.com·
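One concrete shape this arms race takes: many detectors score text by its perplexity under a language model (model-generated text tends to be more predictable than human prose), and evasion tools paraphrase text precisely to raise that score. A minimal sketch of the detection side, not any specific product's method; the threshold is an arbitrary placeholder that a real detector would calibrate on labeled data:

```python
# Perplexity-thresholding detector: score text by how predictable it is
# to GPT-2. Low perplexity is (weak) evidence of machine generation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

THRESHOLD = 40.0  # assumed cutoff, for illustration only
text = "The quick brown fox jumps over the lazy dog."
verdict = "possibly AI-generated" if perplexity(text) < THRESHOLD else "likely human"
print(verdict)
```

Paraphrasing attacks work because they change surface wording without changing meaning, pushing perplexity back into the "human" range, which is why detection-by-statistics keeps losing ground.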
Virginia Dignum on LinkedIn: EuroGPT call for action
I had the honor of co-organising a meeting of leading EU networks, organisations and researchers with the European Parliament on 25 May to discuss…
·linkedin.com·
Let Me Take Over: Variable Autonomy for Meaningful Human Control
As Artificial Intelligence (AI) continues to expand its reach, so does the demand for human control and for AI systems that adhere to our legal, ethical, and social values. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
·frontiersin.org·
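A minimal sketch of what "dynamically adjustable levels of autonomy" could look like in code. The levels, the confidence field, and the 0.9 threshold are illustrative choices, not the paper's formal model:

```python
# Variable autonomy: the autonomy level is a runtime parameter, and
# actions that exceed the current level are routed to a human instead
# of executed. Either party can move the dial ("let me take over").
from dataclasses import dataclass
from enum import IntEnum

class AutonomyLevel(IntEnum):
    MANUAL = 0        # human performs every action
    SUPERVISED = 1    # system proposes, human approves
    AUTONOMOUS = 2    # system acts, human can take over

@dataclass
class Action:
    name: str
    confidence: float  # system's self-assessed confidence (assumed field)

class VariableAutonomyController:
    def __init__(self, level: AutonomyLevel = AutonomyLevel.SUPERVISED):
        self.level = level

    def adjust(self, level: AutonomyLevel) -> None:
        self.level = level  # dynamically raise or lower autonomy

    def handle(self, action: Action) -> str:
        if self.level == AutonomyLevel.MANUAL:
            return f"defer '{action.name}' to human"
        if self.level == AutonomyLevel.SUPERVISED or action.confidence < 0.9:
            return f"propose '{action.name}', await human approval"
        return f"execute '{action.name}' autonomously"

controller = VariableAutonomyController(AutonomyLevel.AUTONOMOUS)
print(controller.handle(Action("reroute traffic", confidence=0.97)))
controller.adjust(AutonomyLevel.MANUAL)  # human takes over
print(controller.handle(Action("reroute traffic", confidence=0.97)))
```

The point of the pattern is that accountability is preserved at every level: the dial's position, not mere human presence, determines who is responsible for each action.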
'Thirsty' AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor's Cooling Tower, Study Finds
Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy-intensive, requiring massive server farms to train the powerful programs. Cooling those same data centers also makes the AI chatbots incredibly thirsty. New research suggests training for GPT-3 alone consumed 185,000 gallons (700,000 liters) of water. An average user’s conversational exchange with ChatGPT basically amounts to dumping a large bottle of fresh water out on the ground, according to the researchers.
·news.yahoo.com·
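A back-of-the-envelope check on those figures. The training total is quoted above; the bottle size and prompts-per-conversation are assumptions, since the excerpt doesn't pin them down:

```python
# Rough per-prompt water cost implied by the article's numbers.
TRAINING_WATER_L = 700_000   # GPT-3 training estimate quoted above
BOTTLE_L = 0.5               # "a large bottle" (assumption)
PROMPTS_PER_CONVO = 25       # assumed length of one exchange

per_prompt_ml = BOTTLE_L / PROMPTS_PER_CONVO * 1000
print(f"~{per_prompt_ml:.0f} ml of cooling water per prompt")

# How many such conversations equal the training footprint?
print(f"training ≈ {TRAINING_WATER_L / BOTTLE_L:,.0f} conversations")
```

Under these assumptions that's roughly 20 ml per prompt, and the training run alone equals about 1.4 million such conversations.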
Tech Elite's AI Ideologies Have Racist Foundations, Say AI Ethicists
More and more prominent tech figures are voicing concerns about superintelligent AI and risks to the future of humanity. But as leading AI ethicist Timnit Gebru and researcher Émile P. Torres point out, these ideologies have deeply racist foundations. TESCREAL: “So another ‘godfather’ of AI, Turing Award winner Yoshua Bengio, has decided to FULLY align […]”
·peopleofcolorintech.com·
Black men were likely underdiagnosed with lung problems because of bias in software, study suggests
A new study suggests racial bias built into a common medical test for lung function is likely leading to fewer Black patients getting care for breathing problems. The study, released Thursday in JAMA Network Open, found that as many as 40% more Black men might be diagnosed with breathing problems if current diagnosis-assisting computer software were changed.
·apnews.com·
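The mechanism at issue is concrete: spirometry software flags a result as abnormal when measured lung function falls below a fraction of a predicted "normal," and legacy equations scale that prediction down for Black patients, so the same measurement looks closer to normal. A sketch with illustrative round numbers; the 0.85 race factor and 80% cutoff are assumptions, not the study's coefficients:

```python
# Race correction in lung-function software: scaling the predicted
# "normal" down for Black patients raises the bar for a positive
# diagnosis, so identical measurements can go unflagged.
def is_flagged(measured_fev1_l: float, predicted_fev1_l: float,
               race_factor: float = 1.0, cutoff: float = 0.80) -> bool:
    adjusted_prediction = predicted_fev1_l * race_factor
    return measured_fev1_l / adjusted_prediction < cutoff

measured, predicted = 3.1, 4.0  # liters; hypothetical patient
print(is_flagged(measured, predicted))                    # True: 77.5% of predicted
print(is_flagged(measured, predicted, race_factor=0.85))  # False: adjusted to ~91%
```

The same 3.1 L measurement is flagged without the race factor but passes with it, which is the underdiagnosis pattern the study quantifies.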