Digital Ethics

3547 bookmarks
Debunking AGI inevitability claims
Have you heard these claims? “Artificial General Intelligence (AGI) is imminent!” or “At current rate of progress, AGI is inevitable!” In a recent preprint, my co-authors an…
·irisvanrooijcogsci.com·
Autonomous Vehicles, Artificial Intelligence, Risk and Colliding
Autonomous vehicles (AVs) are often claimed to offer many societal benefits. Perhaps the most important is the potential to save countless lives through anticipated improvements in safety by replacing human drivers with AI drivers. AVs will also dramatically...
·link.springer.com·
What's "up" with vision-language models? Investigating their struggle with spatial reasoning
Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What'sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56% accuracy on our benchmarks vs. humans at 99%. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup_vlms.
·arxiv.org·
Sasha Luccioni: AI is dangerous, but not for the reasons you think
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
·ted.com·
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House
Today, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and…
·whitehouse.gov·
The Twisted Eye in the Sky Over Buenos Aires
A scandal unfolding in Argentina shows the dangers of implementing facial recognition—even with laws and limits in place.
·wired.com·
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
·www-technologyreview-com.cdn.ampproject.org·
DVA forms for compensation cause unnecessary suffering
The Department of Veterans' Affairs system is complex, but it is the application form that turns that complexity into something unmanageable.
·themandarin.com.au·