Inside the largest-ever A.I. chatbot hack fest, where hackers tried to outsmart OpenAI, Microsoft, Google
Last weekend, the White House challenged thousands of hackers to try to outsmart top generative AI models from OpenAI, Google, Microsoft, Meta, Nvidia and more.
Can’t lose what you never had: Claims about digital ownership and creation in the age of generative AI
Let’s say someone walks into an old-fashioned record store looking for the Bright Eyes song “False Advertising.” Upon finding and buying the album, she’d have little reason to fear that store employees might sneak into her house later and take it back from her. She’d also have no cause to think that the album was counterfeit and not by the band at all. Now let’s say instead that the same song inspires an artist to create a mural depicting the FTC’s greatest false ad cases, and the mural gets displayed in a local gallery. The artist might be surprised if the gallery later shuts its doors and refuses to return the mural . . . or if some other company secretly reuses bits of it to make something else.
PsyArXiv Preprints | Reclaiming AI as a theoretical tool for cognitive science
The idea that human cognition is, or can be understood as, a form of computation is a useful conceptual tool for cognitive science. It was a foundational assumption during the birth of cognitive science as a multidisciplinary field, with Artificial Intelligence (AI) as one of its contributing fields. One conception of AI in this context is as a provider of computational tools (frameworks, concepts, formalisms, models, proofs, simulations, etc.) that support theory building in cognitive science. The contemporary field of AI, however, has taken the theoretical possibility of explaining human cognition as a form of computation to imply the practical feasibility of realising human(-like or -level) cognition in factual computational systems; and, the field frames this realisation as a short-term inevitability. Yet, as we formally prove herein, creating systems with human(-like or -level) cognition is intrinsically computationally intractable. This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. The situation could be remediated by releasing the grip of the currently dominant view on AI and by returning to the idea of AI as a theoretical tool for cognitive science. In reclaiming this older idea of AI, however, it is important not to repeat conceptual mistakes of the past (and present) that brought us to where we are today.
85. Timnit Gebru Looks at Corporate AI and Sees a Lot of Bad Science - Initiative for Digital Public Infrastructure
Timnit Gebru is not just a pioneering critic of dangerous AI datasets who calls bullshit on bad science pushed by the likes of OpenAI, or a tireless champion of racial, gender, and climate justice in computing. She's also someone who wants to build something different. This week on Reimagining, we…
AI Image Statistics: How Much Content Was Created by AI
Discover AI image statistics: the total number of AI images, the number of images created with Stable Diffusion, Adobe Firefly, Midjourney, DALL-E 2, and more.
How Dr. Joy Buolamwini is Working Towards Equitable and Accountable Technology - Heising-Simons Foundation
Several years ago, while still a graduate student at MIT’s Media Lab, Joy Buolamwini began to notice a troubling pattern in facial recognition technology––an inability to detect a wide range of skin tones and facial structures, even in widely available systems employed by Big Tech, government agencies, and law enforcement.
LGBTQ+ people in Ethiopia blame attacks on their community on inciteful and lingering TikTok videos
Members of Ethiopia’s LGBTQ+ community say they face a wave of online harassment and physical attacks and blame much of it on the social media platform TikTok.
Katharina Koerner on LinkedIn: Framework for Social Impact Evaluation of GenAI Systems
Generative AI systems are increasingly evaluated for their social impact, but there's no standardized approach yet. This paper from June 2023 presents a…
A New Anti-Bias A.I. Hiring Law Is Now in Effect. How to Know If You're in Compliance
The law, known as NYC Local Law 144, requires that employers hiring for jobs located in New York City comply with an annual bias audit. New Jersey, California, and D.C. are exploring similar legislation.
Katharina Koerner on LinkedIn: Decision Tree for the Responsible Application of AI (v1.0)
The Decision Tree for Responsible AI is a guide developed by AAAS (American Association for the Advancement of Science) to help put ethical principles into…
The Algorithmic Justice League on LinkedIn: "Even if these systems were flawless, if they functioned perfectly, we…
"Even if these systems were flawless, if they functioned perfectly, we still have an issue," said AJL founder Dr. Joy Buolamwini in a recent appearance on…
School district uses ChatGPT to help remove library books
Faced with new legislation, Iowa's Mason City Community School District asked ChatGPT if certain books 'contain a description or depiction of a sex act.'
Andreas Birnik on LinkedIn: Statement from National Security Advisor Jake Sullivan and Principal…
Section 702 of the Foreign Intelligence Surveillance Act (FISA) gives US intelligence services backdoor access to data hosted on servers owned by American…
Need help navigating the AI policy sphere? Here's a map of national and international strategies, a list of challenges and recommendations, and a set of resources.
The 2023 legislative session has seen a surge in state AI laws proposed across the U.S., surpassing the number of AI laws proposed or passed in past legislative sessions.
The Global AI Standards Repository is the world’s first centralized, transparent notification system that captures AI and Autonomous and Intelligent Systems standards, both published and in progress. If you would like to submit an entry, please use the submit button below. *This is a compilation of user-submitted entries. The submitter is responsible for any and […]
Deutsche Telekom: Sharenting • Ads of the World™
A new campaign from Deutsche Telekom and creative agency adam&eveBERLIN demonstrates the growing risks parents face from data misuse and artificial intelligence (AI). In a hero film, AI itself is used to warn viewers about the risks of AI. Alongside the opportunities of digitalization, Telekom wants to highlight the urgency of handling personal data responsibly in the digital world.

Pictures of holidays, family celebrations, or weekend trips with the whole family: these are emotional moments that friends and relatives want to share immediately. Often this happens all too carelessly. Once posted on the internet, this personal data is available worldwide and without limits. With the #ShareWithCare campaign, Deutsche Telekom wants to raise awareness of responsible handling of photos and data. The communication kicks off with the unsettling deepfake spot "A Message from Ella," which uses the example of one family to show the consequences of sharing children's photos on the internet. Telekom draws attention to so-called "sharenting," a much-criticized practice in which parents share photos, videos, and details of their children's lives online.

"Telekom offers the best and most secure network," says Uli Klenke, Chief Brand Officer at Deutsche Telekom. "But in addition to access to this network, we also need the knowledge and tools for safe and responsible handling of data on the internet, because the development of artificial intelligence holds both opportunities and risks. In the spot, we let the AI warn us about itself, underlining fascination and awe at the same time. We have to learn to deal with both appropriately."

Deepfake spot raises awareness of the issue of sharenting

The film stages and exaggerates a social experiment that could have taken place exactly as shown, because the technology for it is already available today. A 9-year-old actress, "Ella," is the film's protagonist. With the help of the latest AI technology, a deepfake of the girl was created. Deepfakes are videos, images, or even sounds artificially generated by machine learning. In the video, the "grown-up Ella" turns to her surprised parents, sends a warning from the future, and confronts her mother and father with the consequences of sharing pictures of their child on the internet. For the first time, a virtually aged deepfake of a 9-year-old child has been created so that she can act and argue like an adult woman. Ella stands in for an entire generation of children.