Character.ai: Young people turning to AI therapist bots
Test Information Space
Marine Corps’ Mahlock takes reins of Cyber National Mission Force
U.S. Moves Closer to Filing Sweeping Antitrust Case Against Apple
How crowded are the oceans? New maps show what flew under the radar until now
These AI-powered apps can hear the cause of a cough
Thousands of AI Authors on the Future of AI
OpenAI to launch ChatGPT app store next week
Inside the U.S. Military’s New Drone Warfare School | WSJ
Shaping the future of advanced robotics
WSJ News Exclusive | Jeff Bezos Bets on a Google Challenger Using AI to Try to Upend Internet Search
Vint Cerf, Vice President and Chief Internet Evangelist of Google
What’s next for AI in 2024
Global Governance of AI – on the Interim Report of the UN AI Advisory Body
Governing AI for Humanity
AI Advisory Body | United Nations
Here's what Google Assistant with Bard will look like
Political, Economic and Legal Effects of Artificial Intelligence - Google Books
The Political Philosophy of AI - Google Books
Fueling the future of space travel with robots that mine resources on the moon
AI May Not Steal Your Job, but It Could Stop You Getting Hired
More than a third of state agencies are using AI. Texas is beginning to examine its potential impact.
A global watermarking standard could help safeguard elections in the ChatGPT era
Forget technology — politicians pose the gravest misinformation threat
This is set to be a big election year, including in India, Mexico, the US, and probably the UK. People will rightly be on their guard for misinformation, but much of the policy discussion on the topic ignores the most important source: members of the political elite.

As a social scientist working on political communication, I have spent years in these debates — which continue to be remarkably disconnected from what we know from research. Academic findings repeatedly underline the actual impact of politics, while policy documents focus persistently on the possible impact of new technologies.

Most recently, Britain’s National Cyber Security Centre (NCSC) has warned of how “AI-created hyper-realistic bots will make the spread of disinformation easier and the manipulation of media for use in deepfake campaigns will likely become more advanced”. This is similar to warnings from many other public authorities, which ignore the misinformation from the most senior levels of domestic politics. In the US, the Washington Post stopped counting after documenting at least 30,573 false or misleading claims made by Donald Trump as president. In the UK, the non-profit Full Fact has reported that as many as 50 MPs — including two prime ministers, cabinet ministers and shadow cabinet ministers — failed to correct false, unevidenced or misleading claims in 2022 alone, despite repeated calls to do so.

These are actual problems of misinformation, and the phenomenon is not new. Both George W Bush’s and Barack Obama’s administrations obfuscated on Afghanistan. Bush’s government and that of his UK counterpart Tony Blair advanced false and misleading claims in the run-up to the Iraq war. Prominent politicians have, over the years, denied the reality of human-induced climate change, proposed quack remedies for Covid-19, and so much more. These are examples of misinformation, and, at their most egregious, of disinformation — defined as spreading false or misleading information for political advantage or profit.

This basic point is strikingly absent from many policy documents — the NCSC report, for example, has nothing to say about domestic politics. It is not alone. Take the US Surgeon General’s 2021 advisory on confronting health misinformation, which calls for a “whole-of-society” approach — and yet contains nothing on politicians and curiously omits the many misleading claims made by the sitting president during the pandemic, including touting hydroxychloroquine as a potential treatment.

This oversight is problematic because misinformation coming from the top is likely to have a far greater impact than that from most other sources, whether social media posts by ordinary people, hostile actors, or commercial scammers. People pay more attention to what prominent politicians say, and supporters of those politicians are more inclined to believe it and act on it.

We know this from years of research. Millions of Americans believed there was systematic voter fraud in the 2020 elections, that weapons of mass destruction were found in Iraq, that human activity played little role in climate change, and that the risks and side effects of Covid-19 vaccines outweighed the health benefits. What all these misleading beliefs have in common is that they have been systematically advanced by political actors — by the right in the US. But in, for example, Mexico, there is plenty of misinformation coming from the left.

Meanwhile, the policy discussion remains bogged down with how to police AI-generated content, while distracting us from how some politicians — perhaps conscious of how tech companies eventually blocked Trump in the dying days of his presidency — are pushing for legal exemptions from content moderation.

Of course there will be examples of AI-generated misinformation, bots, and deepfakes during various elections next year. But the key question is how politicians will be using these tools. A pro-Ron DeSantis political action committee has already used an AI version of Trump’s voice in a campaign ad. This is not some unnamed “malicious actor”, but a team working on behalf of the governor of a state with a population larger than all but five EU member states. We have seen examples of similar activity in elections in Argentina and New Zealand too.

When it comes to the most serious misinformation, the calls tend to come from inside the house. Technology will not change that, so let’s stop gaslighting the public and admit clearly, as we head into a big election year, that misinformation often comes from the top.
Midjourney Leaps into AI Video Creation - Decrypt
January 2024 Issue - IEEE Spectrum
AR Glasses Spawn a Whole New Social Dynamic
Opinion | How the Federal Government Can Rein In A.I. in Law Enforcement
Turing's Test, a Beautiful Thought Experiment
Mobile industry looks ahead to 6G as 5G evolution continues
"The goal is to halve the total energy consumption of mobile networks with 6G, which means that energy efficiency will have to improve by a factor of 40, if network traffic goes up by a factor of 20."
"This requires some foundational research, and even if there are people that say we don't need 6G, we are saying 'yes, you do need 6G' because traffic analysis shows that 5G is going to run out of steam by the end of the decade," Vetter said. This, he emphasized, means 5G networks won't have the capacity to cope with the increase in traffic.
Artificial Intelligence and Human Enhancement: Can AI Technologies Make Us More (Artificially) Intelligent? | Cambridge Quarterly of Healthcare Ethics | Cambridge Core
UMass Memorial to require masks for employees starting Tuesday, encourage visitor masking