Algorithms Hijacked My Generation. I Fear For Gen Alpha.

Digital Ethics
Debunking AGI inevitability claims
Have you heard these claims? “Artificial General Intelligence (AGI) is imminent!” or “At current rate of progress, AGI is inevitable!” In a recent preprint, my co-authors an…
When AI Systems Fail: The Toll on the Vulnerable Amidst Global Crisis
Technical “errors” in Meta products resulted in dehumanizing misrepresentations of Palestinians, writes Nadah Feteih.
OpenAI blames DDoS attack for ongoing ChatGPT outage | TechCrunch
OpenAI has said a DDoS attack is behind “periodic outages” affecting ChatGPT and its developer tools
Unemployed Man Uses AI to Apply for 5,000 Jobs, Gets 20 Interviews
Software engineer Julian Joseph applied to 5,000 jobs using an AI tool called LazyApply. He got 20 interviews.
Google, OpenAI, and Microsoft want users held responsible when generative-AI tools show copyrighted material
In comments to the US Copyright Office, some companies said users were responsible when a tool such as ChatGPT answered prompts with infringing material.
Tech Policy Review — Localization Lab
Data Protection and Digital Agency for Refugees
How the vast amount of data collected from refugees is gathered, stored and shared today
AI-synthesized faces are indistinguishable from real faces and more trustworthy | Proceedings of the National Academy of Sciences
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, fin...
Autonomous Vehicles, Artificial Intelligence, Risk and Colliding
Autonomous vehicles (AVs) are often claimed to offer many societal benefits. Perhaps the most important is the potential to save countless lives through anticipated improvements in safety by replacing human drivers with AI drivers. AVs will also dramatically...
What's "up" with vision-language models? Investigating their struggle with spatial reasoning
Recent vision-language (VL) models are powerful, but can they reliably distinguish "right" from "left"? We curate three new corpora to quantify model comprehension of such basic spatial relations. These tests isolate spatial reasoning more precisely than existing datasets like VQAv2, e.g., our What'sUp benchmark contains sets of photographs varying only the spatial relations of objects, keeping their identity fixed (see Figure 1: models must comprehend not only the usual case of a dog under a table, but also, the same dog on top of the same table). We evaluate 18 VL models, finding that all perform poorly, e.g., BLIP finetuned on VQAv2, which nears human parity on VQAv2, achieves 56% accuracy on our benchmarks vs. humans at 99%. We conclude by studying causes of this surprising behavior, finding: 1) that popular vision-language pretraining corpora like LAION-2B contain little reliable data for learning spatial relationships; and 2) that basic modeling interventions like up-weighting preposition-containing instances or fine-tuning on our corpora are not sufficient to address the challenges our benchmarks pose. We are hopeful that these corpora will facilitate further research, and we release our data and code at https://github.com/amitakamath/whatsup_vlms.
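The abstract above describes a contrastive evaluation: a model is shown an image and must prefer the caption with the correct spatial preposition. The paper's own data and code live at the linked repository; the snippet below is only an illustrative sketch of that style of probe, assuming a CLIP checkpoint loaded via Hugging Face transformers and a hypothetical placeholder image file (dog_under_table.jpg) that is not part of the What'sUp release.

# Illustrative sketch only, not the paper's evaluation code: probe whether a
# CLIP-style model prefers the caption with the correct spatial preposition.
# "dog_under_table.jpg" is a hypothetical placeholder image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dog_under_table.jpg")  # placeholder: a photo of a dog under a table
captions = ["a dog under a table", "a dog on top of a table"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity, shape (1, 2)

probs = logits.softmax(dim=-1).squeeze(0)
for caption, p in zip(captions, probs):
    print(f"{p:.2f}  {caption}")
# A model that actually encodes "under" vs. "on top of" should rank the
# correct caption first; the paper reports that many models do not.

Scoring caption pairs that differ only in the preposition, while the objects stay fixed, is the core idea behind the What'sUp contrast sets: it isolates spatial reasoning from object recognition.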
Conversation on AI should not just be driven by the technology
One reason AI is tricky to regulate is that it cuts across everything
Adobe Caught Selling AI-Generated Images of Israel-Palestine Violence
Software giant Adobe has been caught selling AI-generated images of the Israel-Hamas war, as spotted by Australian news outlet Crikey.
Sasha Luccioni: AI is dangerous, but not for the reasons you think
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.
Key takeaways from the Biden administration executive order on AI
President Biden issued an Executive Order on AI with the goal of promoting “safe, secure, and trustworthy development and use of artificial intelligence.”
Publishing associations urge UK government to protect copyrighted works from AI
Statement asks government to help stop AI tools ‘using copyright-protected works with impunity’
These fake images reveal how AI amplifies our worst stereotypes
AI image generators like Stable Diffusion and DALL-E amplify bias in gender, race and beyond, despite efforts to detoxify the data fueling these results.
Your Personal Information Is Probably Being Used to Train Generative AI Models - Scientific American
Companies are training their generative AI models on vast swathes of the Internet—and there’s no real way to stop them
Against Empathy by Paul Bloom; The Empathy Instinct by Peter Bazalgette – review
Is empathy the bedrock of morality? Two new studies suggest there is confusion around its meaning – and its usefulness in creating a more caring society
Russian hacking tool floods social networks with bots, researchers say
Low-skill cybercriminals are using a new tool to create hundreds of fake social media accounts in just a few seconds.
DataCamp on LinkedIn: How can we ensure AI is free of bias?
How can we ensure AI is free of bias? Dr. Joy Buolamwini, one of TIME’s Top 100 Most Influential People in AI and founder of The Algorithmic Justice League…
Los Angeles is using AI to predict who might become homeless and help before the…
I felt numb – not sure what to do. How did deepfake images of me end up on a porn site?
I hadn’t ever had cause to think about how manipulated online content could impact my life. Then, one winter morning, someone knocked at my door …
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence | The White House
Today, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and…
The “Boring Apocalypse” Of Today’s AI
Machines writing dreary text, to be read by other machines
The Twisted Eye in the Sky Over Buenos Aires
A scandal unfolding in Argentina shows the dangers of implementing facial recognition—even with laws and limits in place.
Joy Buolamwini: “We’re giving AI companies a free pass”
The pioneering AI researcher and activist shares her personal journey in a new book, and explains her concerns about today’s AI systems.
DVA forms for compensation cause unnecessary suffering
The Department of Veterans' Affairs system is complex, but it is the application form that turns that complexity into something unmanageable.
AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’
America Is Using Up Its Groundwater Like There’s No Tomorrow
Unchecked overuse is draining and damaging aquifers nationwide, a data investigation by the New York Times revealed, threatening millions of people and America’s status as a food superpower.