Stability AI releases Stable Animation SDK, a powerful text-to-animation tool for developers — Stability AI
Stability AI has released the Stable Animation SDK, a powerful tool that allows artists and developers to create stunning animations using advanced Stable Diffusion models. Animations can be generated from text prompts, source images, or source videos, giving users full access to the Stable Diffusion models.
It's now possible to generate videos with nothing but words - Filmmaker's guide to GEN-2 Runway ACCESS. Anyone who signs up for Runway (free plan and up) will…
AI video generators are nearing a crucial tipping point
You may have noticed some impressive video memes made with AI in recent weeks. Harry Potter reimagined as a Balenciaga commercial and nightmarish footage of Will Smith eating spaghetti both recently went viral. They highlight how quickly AI’s ability to create video is advancing, as well as how problematic some uses of the technology may be.
Open Call for Proposals for Pulitzer Center’s AI Accountability Fellowships
A new partnership with the Digital Witness Lab at Princeton University will support a journalist reporting on the influence of messaging platforms on public discourse. (Also announced in Spanish: the call for proposals for the algorithm accountability fellowships is now open; and in French: call for applications for the Pulitzer Center's AI Accountability fellowships.)

The Pulitzer Center is accepting applications for the second cohort of its Artificial Intelligence Accountability Fellowships. Governments and corporations use AI to make life-changing decisions in policing, criminal justice, social welfare, hiring, and more. If unchecked, these systems can harm some of the most vulnerable members of society, deepening economic gaps and amplifying the effects of racial, gender, and ability biases. The Pulitzer Center's AI Accountability Fellowships support critical, in-depth reporting on the impact of AI systems in communities around the world and nurture a global network of journalists who report and learn together about this urgent, underreported issue.

"This is not just a technology story but an equity and accountability one, too," said Marina Walker Guevara, the Pulitzer Center's executive editor. "At a time when AI is creating both hype and despair, we are building a global community of journalists dedicated to reporting on this fast-evolving issue with skill, nuance, and impact."

In its first year, the Fellowship supported 10 Fellows reporting in 10 countries. The 2022 cohort of AI Accountability Fellows reported on themes crucial to equity and human rights, such as the impact of AI on the gig economy, social welfare, policing, migration, and border control.

AI Accountability Fellowships

The Pulitzer Center is recruiting six to eight journalists from anywhere in the world to report on the impacts of algorithms and automated systems in their communities. The 10-month Fellowship starts in September and provides each journalist with up to $20,000 to pursue their project. Funds can be used to pay for records requests, travel expenses, data analysis, and stipends, among other costs. In addition, Fellows will have access to mentors from different fields and relevant training with a group of peers, which will help strengthen their reporting projects.

While we welcome projects on a broad range of issues related to the impact of AI on society, this year we are also placing special emphasis on certain topics. We are seeking to support at least one project that examines the intersection of AI and conflict, war, and peace. In partnership with the Digital Witness Lab at Princeton University, we are also recruiting one project that focuses on the role the messaging platform WhatsApp plays in influencing public discourse in a particular community. The journalist selected for the shared fellowship with the Digital Witness Lab will have the opportunity to be mentored by renowned investigative data journalist Surya Mattu, formerly of the tech news organization The Markup, and his team, and to explore projects of common interest with the Digital Witness Lab. To learn more about the shared fellowship with the Digital Witness Lab, please click here.

Applications for the 2023-2024 AI Accountability Fellowships are now open. The deadline is July 1, 2023. Find more information here. Apply here.

The AI Accountability Network launched in 2022 to expand and diversify the field of journalists reporting on AI and with AI in the public interest.
Through its Machine Learning Reporting Grants, the initiative also supports journalists using AI to tackle big data investigations. The Network is managed by Pulitzer Center Senior Editor Boyoung Lim, with the support of Executive Editor Marina Walker Guevara and the Pulitzer Center’s Editorial team. The AI Accountability Network is funded with the support of the Open Society Foundations (OSF), Wellspring Philanthropic Fund, and individual donors and foundations who support our work more broadly. Other funders may join during 2023/2024.
microsoft/LoRA: Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught i...
AI for Good #4: Artificial Well Being - Felix Meritis
During AI for Good, creative makers and entrepreneurs who work with AI present their own work and best practices, and take us through current developments in artificial intelligence.
Critical Topics: AI Images was an undergraduate class delivered at Bradley University in Spring 2023. It offered an overview of the emerging context of AI art-making tools, connecting media studies and the history of new media art with data ethics and critical data studies. Through this multidisciplinary lens, we examined current events and debates in AI and generative art, with students thinking critically about these tools as they learned to use them. They were encouraged to make work that reflected the context and longer history of these tools.
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies.
AI Takeoff refers to the process of an artificial general intelligence going from a certain threshold of capability (often discussed as "human level") to being super-intelligent and capable enough to control the fate of civilization.
There has been much debate about whether AI takeoff is more likely to be "slow" or "fast".
AI takeoff is sometimes casually referred to as AI FOOM.
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example: deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
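The core mechanism described above is easy to picture for a single layer: the pre-trained weight matrix is frozen, and only a low-rank update is trained alongside it. The sketch below illustrates that idea for one linear layer in PyTorch. It is a minimal, illustrative example rather than the loralib implementation; the class name LoRALinear and the default rank r=8 and scaling alpha=16 are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = x W^T + (alpha/r) * x A^T B^T."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pre-trained weight: frozen, never updated during fine-tuning.
        # (Randomly initialized here as a stand-in for loaded pre-trained weights.)
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # Trainable rank-decomposition matrices A (r x in) and B (out x r).
        # B starts at zero so the adapted layer initially matches the pre-trained one.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update; after training, B @ A can be
        # merged into the frozen weight, so inference adds no extra latency.
        return x @ self.weight.T + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # 12,288 trainable vs. 602,112 total
```

Because only the two small matrices require gradients, optimizer state and per-task checkpoints shrink dramatically relative to full fine-tuning, which is the parameter and memory saving the abstract quantifies for GPT-3 175B.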
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The premise of the document is that while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community.