Tech Policy Review — Localization Lab
Digital Ethics
Data Protection and Digital Agency for Refugees
How the vast amount of data collected from refugees is gathered, stored and shared today
AI-synthesized faces are indistinguishable from real faces and more trustworthy | Proceedings of the National Academy of Sciences
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, fin...
Autonomous Vehicles, Artificial Intelligence, Risk and Colliding
Autonomous vehicles (AVs) are often claimed to offer many societal benefits. Perhaps the most important is the potential to save countless lives through anticipated improvements in safety by replacing human drivers with AI drivers. AVs will also dramatically...
What's "up" with vision-language models? Investigating their struggle with spatial reasoning
Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What'sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56% accuracy on our benchmarks vs. humans at 99%. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup_vlms.
Conversation on AI should not just be driven by the technology
One reason AI is tricky to regulate is that it cuts across everything
Adobe Caught Selling AI-Generated Images of Israel-Palestine Violence
Software giant Adobe has been caught selling AI-generated images of the Israel-Hamas war, as spotted by Australian news outlet Crikey.
Sasha Luccioni: AI is dangerous, but not for the reasons you think
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
Key takeaways from the Biden administration executive order on AI
President Biden issued an Executive Order on AI with the goal of promoting “safe, secure, and trustworthy development and use of artificial intelligence.”
Publishing associations urge UK government to protect copyrighted works from AI
Statement asks government to help stop AI tools ‘using copyright-protected works with impunity’
These fake images reveal how AI amplifies our worst stereotypes
AI image generators like Stable Diffusion and DALL-E amplify bias in gender, race and beyond, despite efforts to detoxify the data fueling these results.
Your Personal Information Is Probably Being Used to Train Generative AI Models - Scientific American
Companies are training their generative AI models on vast swathes of the Internet—and there’s no real way to stop them
Against Empathy by Paul Bloom; The Empathy Instinct by Peter Bazalgette – review
Is empathy the bedrock of morality? Two new studies suggest there is confusion around its meaning – and its usefulness in creating a more caring society
Russian hacking tool floods social networks with bots, researchers say
Low-skill cybercriminals are using a new tool to create hundreds of fake social media accounts in just a few seconds.
DataCamp on LinkedIn: How can we ensure AI is free of bias? Dr. Joy Buolamwini, one of TIME’s…
How can we ensure AI is free of bias? Dr. Joy Buolamwini, one of TIME’s Top 100 Most Influential People in AI and founder of The Algorithmic Justice League…
Los Angeles is using AI to predict who might become homeless and help before the…
I felt numb – not sure what to do. How did deepfake images of me end up on a porn site?
I hadn’t ever had cause to think about how manipulated online content could impact my life. Then, one winter morning, someone knocked at my door …
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House
Today, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and…
The “Boring Apocalypse” Of Today’s AI
Machines writing dreary text, to be read by other machines
The Twisted Eye in the Sky Over Buenos Aires
A scandal unfolding in Argentina shows the dangers of implementing facial recognition—even with laws and limits in place.
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
DVA forms for compensation cause unnecessary suffering
The Department of Veterans' Affairs system is complex, but it is the application form that turns that complexity into something unmanageable.
AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’
America Is Using Up Its Groundwater Like There’s No Tomorrow
Unchecked overuse is draining and damaging aquifers nationwide, a data investigation by the New York Times revealed, threatening millions of people and America’s status as a food superpower.
ChatGPT is landing kids in the principal’s office, survey finds
While educators worry that students are using generative AI to cheat, a new report finds that students are turning to the tool more for personal problems.
The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30
The Biden Administration will reportedly unveil a broad executive order on artificial intelligence next week. It’s allegedly scheduled for Monday, October 30.
The Future of Farming: Artificial Intelligence and Agriculture
While artificial intelligence (AI) seemed until recently to be science fiction, countless corporations across the globe are now researching ways to implement this technology in everyday life. AI works by processing [https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html] large quantities of data, interpreting patterns in that data, …
Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’
This summer, Meta began taking requests to delete data from its AI training. Artists say this new system is broken and fake. Meta says there is no opt-out program.
Testing ChatGPT-4 for ‘UX Audits’ Shows an 80% Error Rate & 14–26% Discoverability Rate – Articles – Baymard Institute
We tested ChatGPT-4’s ability to do a UX Audit of 12 webpages, and compared it to the results of 6 human UX professionals. GPT-4 had a 20% accuracy rate, 80% error rate, and discovered just 14–26% of the actual UX issues.
MIT, Cohere for AI, others launch platform to track and filter audited AI datasets
Researchers from MIT, Cohere for AI and 11 other institutions launched the Data Provenance Platform today in order to "tackle the data transparency crisis in the AI space."