Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance

Digital Ethics
The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content
As a generative AI model is trained on more AI-generated data, its performance degrades and it produces more errors, eventually leading to model collapse.
The Taliban Government Runs on WhatsApp. There’s Just One Problem.
The Taliban administration is stuck in a cat-and-mouse game with WhatsApp, which is off-limits to the nascent government because of U.S. sanctions.
Congress is racing to regulate AI. Silicon Valley is eager to teach them how.
Lawmakers are flocking to private meetings, dinners and briefings with AI experts — including the CEOs of the companies they’re trying to regulate.
Who's Liable for AI Misinformation With Chatbots Like ChatGPT? | WSJ Tech News Briefing
Generative artificial-intelligence chatbots like ChatGPT are known to get things wrong sometimes, a process known as “hallucinating.” But can anyone be held liable if those incorrect responses are damaging in some way?
Host Zoe Thomas talks to a legal expert and an AI ethicist to explore the legal landscape for generative AI technology, and the tactics companies are employing to improve their products. This is the fourth episode of Tech News Briefing’s special series on generative AI, “Artificially Minded.”
0:00 Why Australian mayor Brian Hood is thinking about suing OpenAI's ChatGPT for defamation
4:44 Why generative AI programs like OpenAI’s ChatGPT can get facts wrong
6:26 What the 1996 Communications Decency Act could tell us about laws around generative AI
10:20 How generative AI blurs the line between creator and platform
12:56 How lawmakers around the world are handling regulation around AI
14:13 Why AI hallucinations happen
17:16 How Google is taking steps to create a more factual chatbot with Bard
18:34 How tech companies work with AI ethicists: what is red teaming?
Tech News Briefing
WSJ’s tech podcast featuring breaking news, scoops and tips on tech innovations and policy debates, plus exclusive interviews with movers and shakers in the industry.
Will we run out of data? An analysis of the limits of scaling...
We analyze the growth of dataset sizes used in machine learning for natural language processing and computer vision, and extrapolate these using two methods: the historical growth rate and...
When AI Overrules the Nurses Caring for You
Artificial intelligence raises difficult questions about who makes the call in a health crisis: the human or the machine?
Tesla’s “Self-Driving” System Never Should Have Been Allowed on the Road
Elon Musk’s automatic driving technology seems to be roughly an order of magnitude more deadly than human drivers.
The AI Arms Race: Using AI to avoid AI-detection tools
The use of artificial intelligence (AI) to detect AI-generated content has gained increasing attention in recent years as the capabilities…
3D Printer Does Homework ChatGPT Wrote
This is our next project.
ICO warns of fines for companies who do not get cookie banners right
Stephen Bonner announced that the Information Commissioner's Office (ICO) is "paying attention" to how companies use cookies on websites and how they allow users to configure their settings.
The AI revolution is powered by these contractors making $15 an hour
Two OpenAI contractors spoke to NBC News about their work training the system behind ChatGPT.
The Awkward Partnership Leading the AI Boom
As the companies lead the AI boom, their unconventional arrangement sometimes causes conflict.
MEPs ready to negotiate first-ever rules for safe and transparent AI | News | European Parliament
The rules aim to promote the uptake of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.
'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases
Beth Kanter on LinkedIn: Virginia Dignum – Responsible artificial intelligence
Virginia Dignum is a professor and member of the EU’s High-Level Expert Group on AI who has been looking into the many questions that come up across society…
Virginia Dignum on LinkedIn: EuroGPT call for action
I had the honor of co-organising a meeting of leading EU networks, organisations and researchers with the European Parliament on 25 May to discuss…
Let Me Take Over: Variable Autonomy for Meaningful Human Control
As Artificial Intelligence (AI) continues to expand its reach, the demand for human control and the development of AI systems that adhere to our legal, ethical, and social values also grows. Many (international and national) institutions have taken steps in this direction and published guidelines for the development and deployment of responsible AI systems. These guidelines, however, rely heavily on high-level statements that provide no clear criteria for system assessment, making the effective control over systems a challenge. “Human oversight” is one of the requirements being put forward as a means to support human autonomy and agency. In this paper, we argue that human presence alone does not meet this requirement and that such a misconception may limit the use of automation where it can otherwise provide so much benefit across industries. We therefore propose the development of systems with variable autonomy—dynamically adjustable levels of autonomy—as a means of ensuring meaningful human control over an artefact by satisfying all three core values commonly advocated in ethical guidelines: accountability, responsibility, and transparency.
ACROCPoLis: A Descriptive Framework for Making Sense of Fairness | Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
Another Warning Letter from A.I. Researchers and Executives
We are committed to doing everything in our letter-writing power to write a strongly worded letter.
How companies use dark patterns to keep you subscribed
Unsubscribing should be easy. It’s not.
'Thirsty' AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor's Cooling Tower, Study Finds
Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy-intensive, requiring massive server farms to train the powerful programs. Cooling those same data centers also makes the AI chatbots incredibly thirsty. New research suggests training GPT-3 alone consumed 185,000 gallons (700,000 liters) of water, and an average user’s conversational exchange with ChatGPT basically amounts to dumping a large bottle of fresh water out on the ground.
The rush toward ethical AI is leaving many of us behind
India’s religious AI chatbots are speaking in the voice of god — and condoning violence
Claiming wisdom based on the Bhagavad Gita, the bots frequently go way off script.
Rethinking Authenticity in the Era of Generative AI
Opinion | The latest technology exploits people’s reflexive assumptions. It's time to recalibrate how authenticity is judged.
Dark Patterns that Mislead Consumers Are All Over the Internet – The Markup
Think you can tell a dark pattern from an ethically designed prompt? Take our quiz to find out
The AI takeover of Google Search starts now
The 10 blue links aren’t gone, but AI is pushing them down the page.