The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed may lead to neglecting ethical guidelines, bias detection, and safety measures. Known and emerging concerns associated with AI in the workplace include the spread of misinformation, copyright and intellectual property concerns, cybersecurity, and data privacy, as well as navigating rapidly evolving and ambiguous regulations. To mitigate these risks, we propose thirteen principles for responsible AI at work.
PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news
We will show in this article how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face to make it spread misinformation while being undetected by standard benchmarks.
Men are hunters, women are gatherers. That was the assumption. A new study upends it.
The implications are potentially enormous, says history professor Kimberly Hamlin: "The myth that man is the hunter and woman is the gatherer ... naturalizes the inferiority of women."
AI health in the Nordic countries: privatisation, unmet promises, and limited participation
Are recent technological developments in artificial intelligence (AI) revolutionary, calamitous, or something in between? Are they inevitable, spontaneous, unpredictable? Jason Tucker examines states that actively shape developments in AI health. The challenge is bringing the public back into decision-making in these developments.
The carbon impact of AI vs search engines | Insights | Yard
Although the industry is still getting to grips with the range of solutions offered by AI, there is no better time to understand the carbon impact of the technology.
“I Have a Problem With the Stealing of My Material”: A Common Rallying Cry Emerges On AI
As Hollywood execs begin to test artificial intelligence, from using the tech to de-age actors to partnering with companies in the field to create AI-composed music, key players in the industry are pushing for regulations — or lawsuits.
Translating AI Risk Management Into Practice - Center for Security and Emerging Technology
CSET's AI Assessment team provides a template that helps organizations create profiles to guide the management and deployment of AI systems in line with NIST's AI Risk Management Framework.
Seismic advances in generative AI algorithms for imagery, text, and other data types have led to the temptation to use synthetic data to train next-generation models. Repeating this process creates...
In the past few months we have been deluged with headlines about new AI tools and how much they are going to change society. Some reporters have done amazing work holding the companies developing AI accountable, but many struggle to report on this new technology in a fair and accurate way. We — an investigative […]
Get a clue, says panel about buzzy AI tech: it's being "deployed as surveillance"
Earlier today at a Bloomberg conference in San Francisco, some of the biggest names in AI turned up, including, briefly, Sam Altman of OpenAI, who just ended his two-month world tour, and Stability AI founder Emad Mostaque. Still, one of the most compelling conversations happened later in the afternoon, in a panel discussion about AI […]
The United Nations is convening a global gathering this week to try to map out the frontiers of artificial intelligence and to harness its potential for empowering humanity.
The recent sale of an artificial intelligence (AI)-generated portrait for $432,000 at Christie's art auction has raised questions about how credit and responsibility should be allocated to individuals involved and how the anthropomorphic perception of ...
Inside the secret list of websites that make AI like ChatGPT sound smart
An analysis of a chatbot data set by The Washington Post reveals the proprietary, personal, and often offensive websites that go into an AI’s training data.
Visual misinformation is widespread on Facebook – and often undercounted by researchers
The flood of misinformation on social media could actually be worse than many researchers have reported. The problem is that many studies analyzed only text, leaving visual misinformation uncounted.