AIxDesign Library

1643 bookmarks
The Hmm ON Co-Creation with AI - The Hmm
AI technologies have been around for a long time, but it is only recently that we’ve had a chance to experience some of these tools ourselves in our daily lives. ChatGPT, an advanced and exponentially popular AI chatbot, has helped us write code, make crochet patterns, and write cover letters for job applications, and given us ...
·thehmm.nl·
The Extra Nice Fund | It's Nice That
Creative Lives in Progress is an inclusive creative careers resource, on a mission to transform the way emerging talent access, understand and connect with the industry.
·creativelivesinprogress.com·
Design+Science Summer School Ljubljana 2023
An estimated 8.7 million species populate planet Earth today. We humans are the sole survivors among the at least nine human species, our own and eight others, that coexisted 300,000 years ago. Due to our imperative of progress, we are wiping out one species after another, making us humans the dominant species on this planet.
·designscience.school·
Piercing the Algorithmic Veil
“Piercing the corporate veil” is a legal decision to hold a company’s shareholders or directors responsible for the…
·medium.com·
Stability AI releases Stable Animation SDK, a powerful text-to-animation tool for developers — Stability AI
Stability AI has released Stable Animation SDK, a powerful tool that allows artists and developers to create stunning animations using advanced Stable Diffusion models. With the ability to create animations from prompts, source images, or source videos, users can fully utilize all the Stable Diffusion…
·stability.ai·
Open Call for Proposals for Pulitzer Center’s AI Accountability Fellowships
A new partnership with the Digital Witness Lab at Princeton University will support a journalist reporting on the influence of messaging platforms on public discourse.

The Pulitzer Center is accepting applications for the second cohort of its Artificial Intelligence Accountability Fellowships. Governments and corporations use AI to make life-changing decisions in policing, criminal justice, social welfare, hiring, and more. If unchecked, these systems can harm some of the most vulnerable members of society, deepening economic gaps and amplifying the effects of racial, gender, and ability biases. The Pulitzer Center’s AI Accountability Fellowships support critical, in-depth reporting on the impact of AI systems in communities around the world and nurture a global network of journalists who report and learn together about this urgent, underreported issue.

“This is not just a technology story but an equity and accountability one, too,” said Marina Walker Guevara, the Pulitzer Center’s executive editor. “At a time when AI is creating both hype and despair, we are building a global community of journalists dedicated to reporting on this fast-evolving issue with skill, nuance, and impact.”

In its first year, the Fellowship supported 10 Fellows reporting in 10 countries. The 2022 cohort of AI Accountability Fellows reported on themes crucial to equity and human rights, such as the impact of AI on the gig economy, social welfare, policing, migration, and border control.

This year, the Pulitzer Center is recruiting six to eight journalists from anywhere in the world to report on the impacts of algorithms and automated systems in their communities. The 10-month Fellowship starts in September and provides each journalist with up to $20,000 to pursue a project. Funds can be used to pay for records requests, travel expenses, data analysis, and stipends, among other costs. Fellows will also have access to mentors from different fields and to relevant training with a group of peers to help strengthen their reporting projects.

While the Center welcomes projects on a broad range of issues related to AI’s impact on society, it is placing special emphasis on two topics this year: it seeks at least one project that examines the intersection of AI and conflict, war, and peace, and, in partnership with the Digital Witness Lab at Princeton University, one project that focuses on the role the messaging platform WhatsApp plays in influencing public discourse in a particular community. The journalist selected for the shared fellowship with the Digital Witness Lab will be mentored by renowned investigative data journalist Surya Mattu, formerly of the tech news organization The Markup, and his team, and will explore projects of common interest with the Lab.

Applications for the 2023-2024 AI Accountability Fellowships are now open; the deadline is July 1, 2023.

The AI Accountability Network launched in 2022 to expand and diversify the field of journalists reporting on AI, and with AI, in the public interest. Through its Machine Learning Reporting Grants, the initiative also supports journalists using AI to tackle big-data investigations. The Network is managed by Pulitzer Center Senior Editor Boyoung Lim, with the support of Executive Editor Marina Walker Guevara and the Pulitzer Center’s editorial team. It is funded with the support of the Open Society Foundations (OSF), the Wellspring Philanthropic Fund, and individual donors and foundations who support the Center’s work more broadly; other funders may join during 2023-2024.
·pulitzercenter.org·
Inflection
We are an AI studio creating a personal AI for everyone.
·inflection.ai·
The A.I. Dilemma - March 9, 2023
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, and how A.I. companies are caught in…
·youtube.com·
Varia
The Center of Everyday Technology is a Rotterdam-based initiative that collects, conducts, and instigates research into everyday technology.
·varia.zone·
AI for Good #4: Artificial Well Being - Felix Meritis
During AI for Good, creative makers and entrepreneurs who work with AI present their own work and best practices, and take us through the latest developments in artificial intelligence.
·felixmeritis.nl·
HuggingChat
The first open source alternative to ChatGPT. 💪
·huggingface.co·
Critical Topics: AI Images — Cybernetic Forests
Critical Topics: AI Images was an undergraduate class delivered for Bradley University in Spring 2023. It was an overview of the emerging contexts of AI art-making tools, connecting media studies and histories of new media art with data ethics and critical data studies. Through this multidisciplinary lens, we examined current events and debates in AI and generative art, with students thinking critically about these tools as they learned to use them. They were encouraged to make work that reflected the context and longer history of these tools.
·cyberneticforests.com·
Releases · invoke-ai/InvokeAI
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The…
·github.com·
AI Takeoff - LessWrong
AI Takeoff refers to the process of an artificial general intelligence going from a certain threshold of capability (often discussed as "human level") to being super-intelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be "slow" or "fast". AI takeoff is sometimes casually referred to as AI FOOM.
·lesswrong.com·
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
·arxiv.org·
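The abstract above contains the whole mechanism: freeze the pre-trained weight matrix W and train only a pair of low-rank factors whose product, scaled and added to W's output, adapts the layer. Below is a minimal PyTorch sketch of that idea; the class name and hyperparameter defaults are illustrative choices (the zero-initialization of B and the alpha/r scaling do follow the paper), and this is not the API of the released microsoft/LoRA package.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a pre-trained nn.Linear: W stays frozen, and only the
    low-rank factors A and B of the update (alpha/r) * B @ A train."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pre-trained weights
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        # B starts at zero so training begins exactly at the pre-trained
        # model's behaviour; A gets a small random initialization.
        self.lora_A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank path. After training, B @ A can
        # be merged into W, which is why LoRA adds no inference latency.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# For a 768x768 projection, full fine-tuning trains 589,824 weights;
# LoRA with r=8 trains only 8 * (768 + 768) = 12,288, a ~48x reduction.
layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12288
```

Because B starts at zero, the adapted model is initially identical to the pre-trained one; the paper reaches its quoted 10,000-fold reduction in trainable parameters by applying such adapters with small r to only a subset of the Transformer's weight matrices.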
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The premise of the paper is that while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community.
·simonwillison.net·