In 2019, the Public Policy Programme, in collaboration with the UK's Office for Artificial Intelligence and the Government Digital Service, published the UK Government's official Public Sector Guidance on AI Ethics and Safety. This document provides public sector organisations with the Process-Based Governance (PBG) Framework for applying principles of AI ethics and safety to the design, development, and deployment of algorithmic systems.
‘We Will Coup Whoever We Want’: Elon Musk and the Overthrow of Democracy in Bolivia
On July 24, 2020, Tesla’s Elon Musk wrote on Twitter that a second U.S. “government stimulus package is not in the best interests of the people.” Someone responded to Musk soon after, “You know what wasn’t in the best interest of people? The U.S. government organizing a coup against Evo Morales in Bolivia so you could obtain the lithium there.” Musk then wrote: “We will coup whoever we want! Deal with it.”
Portugal orders Sam Altman's Worldcoin to halt data collection
Portugal's data regulator has ordered Sam Altman's iris-scanning project Worldcoin to stop collecting biometric data for 90 days, it said on Tuesday, in the latest regulatory blow to a venture that has raised privacy concerns in multiple countries.
In a Q&A, MIT PhD student and recent Design Fellow Jonathan Zong discusses a new proposed framework to map how individuals can say “no” to technology misuses.
Ravit Dotan, PhD on LinkedIn: Africa AI regulation
New report on AI regulation in Africa! It’s the first I've seen and it's excellent. Some highlights and reflections. ➤ The report, by the Tech Hive Advisory,…
Leading AI Companies OpenAI and Anthropic Are Not Keeping Their Election Promises
The discrepancies between the companies’ public promises and their execution raise questions about their commitment to providing accurate information during this high-stakes election year.
Karen Hao on LinkedIn: For years I’ve been interviewing data annotation workers who are the…
For years I’ve been interviewing data annotation workers who are the lifeblood of the AI industry. For years I’ve heard the same story: the platforms they work…
Dismantling Public Values, One Data Center at the Time - NordMedia Network
“Nordic states are letting go of values and infrastructure resources that are dear to the welfare state,” writes Julia Velkova, adding: “Rather than bending to Big Tech values and modes of operation, we should have them bend to comply with our Nordic, public values, if they are to operate in the region.”
Top AI researchers say OpenAI, Meta and more hinder independent evaluations
Firms like OpenAI and Meta use strict protocols to keep bad actors from abusing AI systems. But researchers argue these rules are chilling independent evaluations.
World-making technology entangled with coloniality, race and gender: Ecomodernist and degrowth perspectives - Susan Paulson, 2024
Impelled by the intertwined expansion of capitalist institutions and fossil-fueled industry, human activity has made devastating impacts on ecosystems and earth...
Former Public Utilities Commissioner from Paonia sends up warning flags about legislation, construction of hyperscale data centers & the sharp rise in electricity consumption
Silicon Valley is pricing academics out of AI research
A growing chorus of academics say the sky-high cost of working with AI models is boxing researchers out of the field, compromising independent study of the technology.
Google Used a Black, Deaf Worker to Tout Its Diversity. Now She’s Suing for Discrimination
Jalon Hall was featured on Google’s corporate social media accounts “for making #LifeAtGoogle more inclusive!” She says the company discriminated against her on the basis of her disability and race.
A Hard Energy Use Limit of Artificial Superintelligence
We argue that the high energy use by present-day semiconductor computing technology will prevent the emergence of an artificial intelligence system that could reasonably be described as a “superintelligence”. This hard limit on artificial superintelligence…