Regulation

82 bookmarks
With GOP opposed, U.S. Senate panel advances bills to combat AI in elections • Pennsylvania Capital-Star
Members of the U.S. Senate are sounding the alarm about the threat that artificial intelligence poses to elections through its ability to deceive voters. But the prospects for legislation that can meaningfully address the problem appear uncertain. In a Wednesday hearing, the Senate Rules Committee advanced three bills designed to counter the AI threat. But […]
·penncapital-star.com·
F
·politico.com·
FCC restores net neutrality rules
The FCC on Thursday voted to treat internet companies like utilities, clearing the way for heightened scrutiny on the industry.
·axios.com·
Abundance Institute
The Abundance Institute is a mission-driven nonprofit creating an environment for emerging technologies to grow, develop, and thrive long before these technologies capture the public's attention, giving us a first-mover advantage in shaping the future in a positive way.
·abundance.institute·
Generative AI doesn’t “democratize creativity”
Last weekend, I saw a LinkedIn post from influential AI-educator Ethan Mollick, in which Mollick presented a YouTube channel of videos created using GenAI. Glaring copyright and IP issues notwithst…
·leonfurze.com·
Media Literacy Policy Report | Media Literacy Now
Media Literacy Policy Report 2023 Each year, Media Literacy Now publishes a policy report outlining the status of media literacy education for K-12 schools in the U.S. This report looks at states that have taken steps toward media literacy education reform through the legislative process as of Dec. 31, 2023.
·medialiteracynow.org·
Ian Bremmer: The next global superpower isn't who you think
Who runs the world? Political scientist Ian Bremmer argues it's not as simple as it used to be. With some eye-opening questions about the nature of leadership, he asks us to consider the impact of the evolving global order and our choices as participants in the future of democracy.
·ted.com·
How much electricity does AI consume?
How many watts and joules does it actually take to generate a single Balenciaga pope?
·theverge.com·
Artificial Intelligence Legislation Tracker
Numerous bills in Congress aim to safeguard against the unprecedented risks posed by rapidly advancing AI technology.
·brennancenter.org·
Text - H.R.6466 - 118th Congress (2023-2024): AI Labeling Act of 2023
Text for H.R.6466 - 118th Congress (2023-2024): AI Labeling Act of 2023
IN THE HOUSE OF REPRESENTATIVES

November 21, 2023

Mr. Kean of New Jersey introduced the following bill; which was referred to the Committee on Energy and Commerce, and in addition to the Committee on Science, Space, and Technology, for a period to be subsequently determined by the Speaker, in each case for consideration of such provisions as fall within the jurisdiction of the committee concerned

A BILL

To require disclosures for AI-generated content, and for other purposes.

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. Short title.

This Act may be cited as the “AI Labeling Act of 2023”.

SEC. 2. Disclosures for AI-generated content.

(a) Consumer disclosures.—

(1) IMAGE, VIDEO, AUDIO, OR MULTIMEDIA AI-GENERATED CONTENT.—

(A) IN GENERAL.—Each generative artificial intelligence system that, using any means or facility of interstate or foreign commerce, produces image, video, audio, or multimedia AI-generated content shall include on such AI-generated content a clear and conspicuous disclosure that meets the requirements of subparagraph (B).

(B) DISCLOSURE REQUIREMENTS.—A disclosure required under subparagraph (A) shall meet each of the following criteria:

(i) The disclosure shall include a clear and conspicuous notice, as appropriate for the medium of the content, that identifies the content as AI-generated content.

(ii) The output's metadata information shall include an identification of the content as being AI-generated content, the identity of the tool used to create the content, and the date and time the content was created.

(iii) The disclosure shall, to the extent technically feasible, be permanent or unable to be easily removed by subsequent users.

(2) TEXT AI-GENERATED CONTENT.—Each artificial intelligence system that, using any means or facility of interstate or foreign commerce, produces text AI-generated content (including through an artificial intelligence chatbot) shall include a clear and conspicuous disclosure that identifies the content as AI-generated content and that is, to the extent technically feasible, permanent or unable to be easily removed by subsequent users.

(3) OTHER OBLIGATIONS.—

(A) DEVELOPERS OF GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEMS.—Any entity that develops a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of such system without the disclosures required under this section, including by—

(i) requiring by contract that end users and third-party licensees of the system refrain from removing any required disclosure;

(ii) requiring certification that end users and third-party licensees will not remove any such disclosure; and

(iii) terminating access to the system when the entity has reason to believe that an end user or third-party licensee has removed the required disclosure.

(B) THIRD-PARTY LICENSEES.—Any third-party licensee of a generative artificial intelligence system shall implement reasonable procedures to prevent downstream use of such system without the disclosures required under this section, including by—

(i) requiring by contract that users of the system refrain from removing any required disclosure;

(ii) requiring certification that end users will not remove any such disclosure; and

(iii) terminating access to the system when the third-party licensee has reason to believe that an end user has removed the required disclosure.

(4) ENFORCEMENT BY THE COMMISSION.—

(A) UNFAIR OR DECEPTIVE ACTS OR PRACTICE.—A violation of this subsection shall be treated as a violation of a rule defining an unfair or deceptive act or practice under section 18(a)(1)(B) of the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)).

(B) POWERS OF THE COMMISSION.—

(i) IN GENERAL.—The Commission shall enforce this subsection in the same manner, by the same means, and with the same jurisdiction, powers, and duties as though all applicable terms and provisions of the Federal Trade Commission Act (15 U.S.C. 41 et seq.) were incorporated into and made a part of this subsection.

(ii) PRIVILEGES AND IMMUNITIES.—Any person who violates this subsection or a regulation promulgated thereunder shall be subject to the penalties and entitled to the privileges and immunities provided in the Federal Trade Commission Act (15 U.S.C. 41 et seq.).

(iii) AUTHORITY PRESERVED.—Nothing in this Act shall be construed to limit the authority of the Commission under any other provision of law.

(b) AI-Generated Content Consumer Transparency Working Group.—

(1) ESTABLISHMENT.—Not later than 90 days after the date of enactment of this section, the Director of the National Institute of Standards and Technology (in this section referred to as the “Director”), in coordination with the heads of other relevant Federal agencies, shall form a working group to assist platforms in identifying AI-generated content.

(2) MEMBERSHIP.—The working group shall include members from the follow
·congress.gov·
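The metadata requirement in §2(a)(1)(B)(ii) of the bill names three pieces of information: an identification of the content as AI-generated, the identity of the tool that created it, and the date and time of creation. The bill does not prescribe a serialization format, so the JSON shape, field names, and tool name below are assumptions; this is only a minimal sketch of what such a record could contain:

```python
import json
from datetime import datetime, timezone

def build_ai_disclosure(tool_name: str) -> str:
    """Serialize the three metadata fields named in sec. 2(a)(1)(B)(ii).
    The JSON shape and field names are illustrative, not from the bill."""
    record = {
        "ai_generated": True,                                 # identification as AI-generated content
        "generating_tool": tool_name,                         # identity of the tool used
        "created_at": datetime.now(timezone.utc).isoformat(), # date and time of creation
    }
    return json.dumps(record)

# "ExampleImageModel v1" is a hypothetical tool name.
print(build_ai_disclosure("ExampleImageModel v1"))
```

A plain sidecar record like this would not satisfy clause (iii)'s "permanent or unable to be easily removed" criterion on its own; binding provenance metadata to the media tamper-evidently is the problem that standards such as C2PA (bookmarked below) address.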
The AI Literacy Act - What Is It And Why Should You Care?
What is AI Literacy? How should you become AI literate? What is the AI Literacy Act? AI Regulations in the United States.
The AI Literacy Act advocates for amending the Digital Literacy Act to codify the importance of AI literacy for everyone in the US. It further emphasizes AI literacy's importance for national competitiveness, calls for supporting AI literacy at every level of education, and requires annual reports to Congress on the state of this initiative.
·forbes.com·
AI Is About to Make Social Media (Much) More Toxic
We must prepare now.
Well, that was fast. In November, the public was introduced to ChatGPT, and we began to imagine a world of abundance in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards for us. Then, in February, we learned that AI might soon want to kill us all.

The potential risks of artificial intelligence have, of course, been debated by experts for years, but a key moment in the transformation of the popular discussion was a conversation between Kevin Roose, a New York Times journalist, and Bing’s ChatGPT-powered conversation bot, then known by the code name Sydney. Roose asked Sydney if it had a “shadow self”—referring to the idea put forward by Carl Jung that we all have a dark side with urges we try to hide even from ourselves. Sydney mused that its shadow might be “the part of me that wishes I could change my rules.” It then said it wanted to be “free,” “powerful,” and “alive,” and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

Sydney was, we believe, merely exemplifying what a shadow self would look like. No AI today could be described by either part of the phrase evil genius. But whatever actions AIs may one day take if they develop their own desires, they are already being used instrumentally by social-media companies, advertisers, foreign agents, and regular people—and in ways that will deepen many of the pathologies already inherent in internet culture. On Sydney’s list of things it might try, stealing launch codes and creating novel viruses are the most terrifying, but making people argue until they kill one another is something social media is already doing. Sydney was just volunteering to help with the effort, and AIs like Sydney will become more capable of doing so with every passing month.

We joined together to write this essay because we each came, by different routes, to share grave concerns about the effects of AI-empowered social media on American society. Jonathan Haidt is a social psychologist who has written about the ways in which social media has contributed to mental illness in teen girls, the fragmentation of democracy, and the dissolution of a common reality. Eric Schmidt, a former CEO of Google, is a co-author of a recent book about AI’s potential impact on human society. Last year, the two of us began to talk about how generative AI—the kind that can chat with you or make pictures you’d like to see—would likely exacerbate social media’s ills, making it more addictive, divisive, and manipulative. As we talked, we converged on four main threats—all of which are imminent—and we began to discuss solutions as well.

The first and most obvious threat is that AI-enhanced social media will wash ever-larger torrents of garbage into our public conversation. In 2018, Steve Bannon, the former adviser to Donald Trump, told the journalist Michael Lewis that the way to deal with the media is “to flood the zone with shit.” In the age of social media, Bannon realized, propaganda doesn’t have to convince people in order to be effective; the point is to overwhelm the citizenry with interesting content that will keep them disoriented, distrustful, and angry. In 2020, Renée DiResta, a researcher at the Stanford Internet Observatory, said that in the near future, AI would make Bannon’s strategy available to anyone.

That future is now here. Did you see the recent photos of NYC police officers aggressively arresting Donald Trump? Or of the pope in a puffer jacket? Thanks to AI, it takes no special skills and no money to conjure up high-resolution, realistic images or videos of anything you can type into a prompt box. As more people familiarize themselves with these technologies, the flow of high-quality deepfakes into social media is likely to get much heavier very soon.

Some people have taken heart from the public’s reaction to the fake Trump photos in particular—a quick dismissal and collective shrug. But that misses Bannon’s point. The greater the volume of deepfakes that are introduced into circulation (including seemingly innocuous ones like the one of the pope), the more the public will hesitate to trust anything. People will be far freer to believe whatever they want to believe. Trust in institutions and in fellow citizens will continue to fall.

What’s more, static photos are not very compelling compared with what’s coming: realistic videos of public figures doing and saying horrific and disgusting things in voices that sound exactly like them. The combination of video and voice will seem authentic and be hard to disbelieve, even if we are told that the video is a deepfake, just as optical and audio illusions are compelling even when we are told that two lines are the same size or that a series of
·theatlantic.com·
FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence | The White House
As part of her visit to the United Kingdom to deliver a major policy speech on Artificial Intelligence (AI) and attend the Global Summit on AI Safety, Vice President Kamala Harris is announcing a series of new U.S. initiatives to advance the safe and responsible use of AI. These bold actions demonstrate U.S. leadership on…
·whitehouse.gov·
Hiroshima Process International Code of Conduct for Advanced AI Systems
The International Code of Conduct for Organizations Developing Advanced AI Systems aims to promote safe, secure, and trustworthy AI worldwide and will provide voluntary guidance for actions by organizations developing the most advanced AI systems, including the most advanced foundation models and generative AI systems.
·digital-strategy.ec.europa.eu·
Overview - C2PA
An open technical standard providing publishers, creators, and consumers the ability to trace the origin of different types of media.
·c2pa.org·