Digital Ethics

3328 bookmarks
Autonomous Vehicles, Artificial Intelligence, Risk and Colliding
Autonomous vehicles (AVs) are often claimed to offer many societal benefits. Perhaps the most important is the potential to save countless lives through anticipated improvements in safety by replacing human drivers with AI drivers. AVs will also dramatically...
·link.springer.com·
What's "up" with vision-language models? Investigating their struggle with spatial reasoning
Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What'sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56% accuracy on our benchmarks vs. humans at 99%. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup_vlms.
·arxiv.org·
Sasha Luccioni: AI is dangerous, but not for the reasons you think
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
·ted.com·
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House
Today, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and…
·whitehouse.gov·
The Twisted Eye in the Sky Over Buenos Aires
A scandal unfolding in Argentina shows the dangers of implementing facial recognition—even with laws and limits in place.
·wired.com·
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
·technologyreview.com·
DVA forms for compensation cause unnecessary suffering
The Department of Veterans' Affairs system is complex, but it is the application form that turns that complexity into something unmanageable.
·themandarin.com.au·
America Is Using Up Its Groundwater Like There’s No Tomorrow
Unchecked overuse is draining and damaging aquifers nationwide, a data investigation by The New York Times revealed, threatening millions of people and America's status as a food superpower.
·nytimes.com·
The Future of Farming: Artificial Intelligence and Agriculture
While artificial intelligence (AI) seemed until recently to be science fiction, countless corporations across the globe are now researching ways to implement this technology in everyday life. AI works by processing large quantities of data, interpreting patterns in that data...
·hir.harvard.edu·
Testing ChatGPT-4 for ‘UX Audits’ Shows an 80% Error Rate & 14–26% Discoverability Rate – Articles – Baymard Institute
We tested ChatGPT-4's ability to perform a UX audit of 12 webpages and compared its results to those of 6 human UX professionals. GPT-4 had a 20% accuracy rate, an 80% error rate, and discovered just 14–26% of the actual UX issues.
·baymard.com·
Chatbot Hallucinations Are Poisoning Web Search
Untruths spouted by chatbots ended up on the web—and Microsoft's Bing search engine served them up as facts. Generative AI could make search harder to trust.
·wired.com·