The AI Apocalypse: A Scorecard
How worried are top AI experts about the threat posed by large language models like GPT-4?
·spectrum-ieee-org.cdn.ampproject.org·
The 'toxification' of AI
As (Gen)AI's real-world utility abounds, AI is becoming a dirty word
·ninaschick.substack.com·
ChatGPT’s Electricity Consumption
ChatGPT may have consumed as much electricity as 175,000 people in January 2023.
·towardsdatascience.com·
AI Users Are Neither AI Nor Users
As the excitement grows around machine learning and things called “AI,” it’s only natural that we would see companies, products, and…
·rbefored.com·
🤦🏽‍♂️ No, GPT4 can’t ace MIT
What follows is a critical analysis of “Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models”
·flower-nutria-41d.notion.site·
Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing
Generative artificial-intelligence chatbots like ChatGPT are known to get things wrong sometimes, a phenomenon known as “hallucinating.” But can anyone be held liable if those incorrect responses are damaging in some way? Host Zoe Thomas talks to a legal expert and an AI ethicist to explore the legal landscape for generative AI technology, and the tactics companies are employing to improve their products. This is the fourth episode of Tech News Briefing’s special series on generative AI, “Artificially Minded.”
0:00 Why Australian mayor Brian Hood is thinking about suing OpenAI's ChatGPT for defamation
4:44 Why generative AI programs like OpenAI’s ChatGPT can get facts wrong
6:26 What the 1996 Communications Decency Act could tell us about laws around generative AI
10:20 How generative AI blurs the line between creator and platform
12:56 How lawmakers around the world are handling regulation around AI
14:13 Why AI hallucinations happen
17:16 How Google is taking steps to create a more factual chatbot with Bard
18:34 How tech companies work with AI ethicists: what is red teaming?
·youtube.com·
Will we run out of data? An analysis of the limits of scaling...
We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these using two methods: using the historical growth rate and...
·arxiv.org·
When AI Overrules the Nurses Caring for You
Artificial intelligence raises difficult questions about who makes the call in a health crisis: the human or the machine?
·wsj.com·
Amp
·www-technologyreview-com.cdn.ampproject.org·
The AI Arms Race: Using AI to avoid AI-detection tools
The use of artificial intelligence (AI) to detect AI-generated content has gained increasing attention in recent years as the capabilities…
·drpontus.medium.com·
Virginia Dignum on LinkedIn: EuroGPT call for action
I had the honor of co-organising a meeting of leading EU networks, organisations and researchers with the European Parliament on 25 May to discuss…
·linkedin.com·
Let Me Take Over: Variable Autonomy for Meaningful Human Control
As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making the effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
·frontiersin.org·