America Is Using Up Its Groundwater Like There’s No Tomorrow
Unchecked overuse is draining and damaging aquifers nationwide, a data investigation by the New York Times revealed, threatening millions of people and America’s status as a food superpower.
ChatGPT is landing kids in the principal’s office, survey finds
While educators worry that students are using generative AI to cheat, a new report finds that students are turning to the tool more for personal problems.
The White House will reportedly reveal a ‘sweeping’ AI executive order on October 30
The Biden Administration will reportedly unveil a broad executive order on artificial intelligence next week, said to be scheduled for Monday, October 30.
The Future of Farming: Artificial Intelligence and Agriculture
While artificial intelligence (AI) until recently seemed like science fiction, countless corporations across the globe are now researching ways to implement the technology in everyday life. AI works by processing large quantities of data and interpreting patterns in that data (see: https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html).
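The idea of "processing data and interpreting patterns" can be sketched with a toy example: fitting a straight line to a handful of observations and then using the learned pattern to predict a new value. The data below (irrigation levels versus crop yields) is invented purely for illustration; real agricultural AI systems work with vastly larger datasets and more complex models.

```python
# Minimal sketch of "processing data and interpreting patterns":
# ordinary least-squares fit of a line, then prediction from the
# learned pattern. All numbers here are hypothetical.

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Data": crop yield (tons/ha) observed at increasing irrigation levels.
irrigation = [1.0, 2.0, 3.0, 4.0]
yields = [2.1, 4.0, 6.1, 7.9]

slope, intercept = fit_line(irrigation, yields)
# Apply the learned pattern to an unseen irrigation level.
predicted = slope * 5.0 + intercept
print(round(slope, 2), round(predicted, 1))  # → 1.95 9.9
```

The point is only the shape of the workflow: ingest data, extract a pattern, apply the pattern to new inputs. Modern AI replaces the hand-written line fit with models that learn far richer patterns automatically.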
Artists Allege Meta’s AI Data Deletion Request Process Is a ‘Fake PR Stunt’
This summer, Meta began taking requests to delete data from its AI training. Artists say this new system is broken and fake. Meta says there is no opt-out program.
Testing ChatGPT-4 for ‘UX Audits’ Shows an 80% Error Rate & 14–26% Discoverability Rate – Articles – Baymard Institute
We tested ChatGPT-4’s ability to do a UX Audit of 12 webpages, and compared it to the results of 6 human UX professionals. GPT-4 had a 20% accuracy rate, 80% error rate, and discovered just 14–26% of the actual UX issues.
MIT, Cohere for AI, others launch platform to track and filter audited AI datasets
Researchers from MIT, Cohere for AI and 11 other institutions launched the Data Provenance Platform today in order to "tackle the data transparency crisis in the AI space."
Untruths spouted by chatbots ended up on the web—and Microsoft's Bing search engine served them up as facts. Generative AI could make search harder to trust.
Governing Artificial Intelligence: A Conversation with Rumman Chowdhury
Artificial intelligence, and its risks and benefits, has rapidly entered the popular consciousness in the past year. Kat Duffy and Dr. Rumman Chowdhury discuss how society can mitigate problems and e…
CA's New Delete Act Is One of the World’s Most Powerful Privacy Laws
A new law in California gives consumers real power to hit back at the companies buying and selling their data for the very first time. The Delete Act lets Californians force every data broker to delete the fruits of their data harvest with one, single click.
Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal - Amnesty International
Social security enforcement agencies worldwide are increasingly automating their processes in the hope of detecting fraud. The Netherlands is at the forefront of this development. The Dutch tax authorities adopted an algorithmic decision-making system to create risk profiles of individuals applying for childcare benefits in order to detect inaccurate and potentially fraudulent applications at an […]
Legal and technical challenges of large generative AI models | ML Workshop
Large Generative AI Models, such as ChatGPT, GPT-4 or Stable Diffusion, are revolutionizing the way we communicate, create, and work. They are rapidly and profoundly impacting all sectors of society, from business development to medicine, from education to research, and from coding to the arts. Like many other transformative technologies, they offer enormous potential, but may also carry significant risks regarding, inter alia, opacity, bias, or fake news.
Against this background, the workshop brings together leading scholars to discuss the far-reaching technical, legal, regulatory and social implications of recent advances in generative AI systems. The organizers acknowledge the generous support of the Dieter Schwarz Stiftung.
Curated by:
Philipp Hacker
Chair for Law and Ethics of the Digital Society, European New School of Digital Studies
European University Viadrina
Sarah Hammer
Executive Director
Wharton School
Speakers: 🎙
Rasmus Rothe
(Co-Founder & CTO, Merantix)
Frederik Zuiderveen Borgesius
(Professor of ICT and Law, Radboud University; Interdisciplinary Hub for Security, Privacy and Data Governance, Radboud University)
Lilian Edwards
(Professor of Law, Innovation & Society, Newcastle University)
Miriam Vogel
(Chair, US National AI Advisory Council; President and CEO, EqualAI)
Jeremias Adams-Prassl
(Professor of Law, Magdalen College, University of Oxford) (TBC)
Alexandre Zavaglia Coelho
(Fundação Getúlio Vargas, São Paulo)
Andreas Engel
(Postdoctoral Research Fellow, Faculty of Laws, University of Heidelberg)
Rishi Bommasani
(Stanford Center for Research on Foundation Models)
Ramayya Krishnan
(Dean, Heinz College of Information Systems and Public Policy; William W. and Ruth F. Cooper Professor of Management Science and Information Systems, CMU; Member, US National AI Advisory Council)
Rema Padman
(Trustees Professor of Management Science and Healthcare Informatics, Heinz College of Information Systems and Public Policy, Carnegie Mellon University; Thrust Leader of Healthcare Informatics Research, iLab; Research Area Director for Operations and Informatics, Center for Health Analytics, Heinz College, CMU; Adjunct Professor, Department of Biomedical Informatics at the University of Pittsburgh School of Medicine)
Sue Hendrikson
(Global Technology Governance Fellow, Berkman Klein Center for Internet & Society, Harvard; Lecturer, Harvard Law School)
Kai Zenner
(European Parliament)
Dora Kaufman
(Professor of Law; Instituto de Estudos Avançados; Intelligence Technologies and Digital Design Program, Universidade de São Paulo and Pontifícia Universidade Católica de São Paulo)
Orly Lobel
(Warren Distinguished Professor of Law, University of San Diego; Director, Center for Employment and Labor Policy)
Sandra Wachter
(Professor of Technology and Regulation, Oxford Internet Institute, University of Oxford)
Jonas Andrulis
(Founder & CEO/AI Customer, Aleph Alpha)
Brent Mittelstadt
(Oxford Internet Institute)
Michael Veale
(Associate Professor, Faculty of Laws, UCL)
Herbert Zech
(Chair of Civil Law, Technology and IT Law, Humboldt University of Berlin; Director, Weizenbaum Institute for the Networked Society)
Timestamps ⏱:
00:00 Intro
00:07 Welcome Statement
1:00 Opening Remarks
6:09 Social and Legal Challenges of Generative AI: A Research Agenda
24:18 The AI Value Chain: Building Businesses with LLMs
53:10 Q&A Session
1:16:54 LGAIMs and Non-Discrimination Law
1:43:32 Contestation of LGAIM Output
2:09:02 Q&A Session
2:15:12 International Efforts to Regulate Generative AI: A conversation with Philipp Hacker and Brian Williams
2:35:07 Platforms and Worker Rights in the Age of LGAIMs
2:52:32 Generative AI and Legal Tech
3:14:23 Generative AI and Copyright
3:35:22 Q&A Session
3:58:08 Welcome Statement
4:01:05 Rishi Bommasani
4:20:48 The Future of Knowledge and Creative Work
4:46:58 Generative Medical AI
5:11:12 Q&A Session
5:39:41 Unraveling the Legal Landscape - Navigating AI Governance in the United States
6:05:06 The EU Regulatory Perspective on LGAIMs
6:33:00 Generative AI Regulation in Brazil
7:00:05 Q&A Session
7:29:18 Generative AI and Equality
8:01:00 LGAIMs and Transparency
8:28:36 Technical Transparency and the AtMan Model
8:42:43 Q&A Session
9:18:46 (Generative) AI and Finance
9:39:18 Generative AI Ethics
10:02:41 Q&A Session
10:21:09 Data Protection and LGAIMs
10:43:51 Generative AI and Patents
10:59:06 Q&A Session
11:23:39 Closing Statement: The Future of Generative AI and Regulation
🌎 Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/AIforGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood
Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.