AIxDESIGN Bookmark Library


2043 bookmarks
Open Call for Proposals for Pulitzer Center’s AI Accountability Fellowships
A new partnership with the Digital Witness Lab at Princeton University will support a journalist reporting on the influence of messaging platforms on public discourse.

The Pulitzer Center is accepting applications for the second cohort of its Artificial Intelligence Accountability Fellowships.

Governments and corporations use AI to make life-changing decisions in policing, criminal justice, social welfare, hiring, and more. If unchecked, these systems can harm some of the most vulnerable members of society, deepening economic gaps and amplifying the effects of racial, gender, and ability biases. The Pulitzer Center’s AI Accountability Fellowships support critical, in-depth reporting on the impact of AI systems in communities around the world and nurture a global network of journalists who report and learn together about this urgent, underreported issue.

“This is not just a technology story but an equity and accountability one, too,” said Marina Walker Guevara, the Pulitzer Center’s executive editor. “At a time when AI is creating both hype and despair, we are building a global community of journalists dedicated to reporting on this fast-evolving issue with skill, nuance, and impact.”

In its first year, the Fellowship supported 10 Fellows reporting in 10 countries. The 2022 cohort of AI Accountability Fellows covered themes crucial to equity and human rights, such as the impact of AI on the gig economy, social welfare, policing, migration, and border control.

AI Accountability Fellowships

The Pulitzer Center is recruiting six to eight journalists from anywhere in the world to report on the impacts of algorithms and automated systems in their communities. The 10-month Fellowship starts in September and provides each journalist with up to $20,000 to pursue a project. Funds can be used to pay for records requests, travel expenses, data analysis, and stipends, among other costs. Fellows will also have access to mentors from different fields and to relevant training with a group of peers, both of which will help strengthen their reporting projects.

While the Center welcomes projects on a broad range of issues related to the impact of AI on society, this year it is placing special emphasis on certain topics. It seeks to support at least one project that examines the intersection of AI and conflict, war, and peace. In partnership with the Digital Witness Lab at Princeton University, it is also recruiting one project that focuses on the role the messaging platform WhatsApp plays in influencing public discourse in a particular community.

The journalist selected for the shared fellowship with the Digital Witness Lab will be mentored by renowned investigative data journalist Surya Mattu, formerly of the tech news organization The Markup, and his team, and will be able to explore projects of common interest with the lab.

Applications for the 2023-2024 AI Accountability Fellowships are now open; the deadline is July 1, 2023.

The AI Accountability Network launched in 2022 to expand and diversify the field of journalists reporting on AI, and with AI, in the public interest. Through its Machine Learning Reporting Grants, the initiative also supports journalists using AI to tackle big-data investigations. The Network is managed by Pulitzer Center Senior Editor Boyoung Lim, with the support of Executive Editor Marina Walker Guevara and the Pulitzer Center’s editorial team.

The AI Accountability Network is funded with the support of the Open Society Foundations (OSF), the Wellspring Philanthropic Fund, and individual donors and foundations who support the Center’s work more broadly. Other funders may join during 2023/2024.
·pulitzercenter.org·
Inflection
We are an AI studio creating a personal AI for everyone.
·inflection.ai·
The A.I. Dilemma - March 9, 2023
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught i...
·youtube.com·
Varia
The Center of Everyday Technology is a Rotterdam-based initiative that collects, conducts, and instigates research into everyday technology.
·varia.zone·
AI for Good #4: Artificial Well Being - Felix Meritis
During AI for Good, creative makers and entrepreneurs who work with AI present their own work and best practices, and take us through current developments in artificial intelligence.
·felixmeritis.nl·
HuggingChat
The first open source alternative to ChatGPT. 💪
·huggingface.co·
Critical Topics: AI Images — Cybernetic Forests.
Critical Topics: AI Images was an undergraduate class delivered for Bradley University in Spring 2023. It was an overview of the emerging contexts of AI art-making tools, connecting media studies and histories of new media art with data ethics and critical data studies. Through this multidisciplinary lens, we examined current events and debates in AI and generative art, with students thinking critically about these tools as they learned to use them. They were encouraged to make work that reflected the context and longer history of these tools.
·cyberneticforests.com·
Releases · invoke-ai/InvokeAI
InvokeAI is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies.
·github.com·
AI Takeoff - LessWrong
AI Takeoff refers to the process of an artificial general intelligence going from a certain threshold of capability (often discussed as "human level") to being super-intelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be "slow" or "fast". AI takeoff is sometimes casually referred to as AI FOOM.
·lesswrong.com·
LoRA: Low-Rank Adaptation of Large Language Models
An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.
·arxiv.org·
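The mechanism the abstract above describes is compact enough to sketch. Below is a minimal, illustrative PyTorch version of a LoRA-wrapped linear layer; it is not Microsoft's released package, and the class name `LoRALinear` and the `rank`/`alpha` hyperparameter names are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a trainable low-rank update.

    Computes W x + (alpha / r) * B A x, where the pretrained W is frozen
    and only A (r x d_in) and B (d_out x r) are trained.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # A gets a small random init; B starts at zero, so the update B A
        # is zero at first and the layer initially matches the base model.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Usage sketch: wrap one projection matrix and train only the low-rank factors.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable parameters vs. 590,592 in the base layer
```

Initialising `lora_B` to zero means the wrapped layer behaves exactly like the frozen base layer at the start of training, and because the update is a plain matrix product, it can be merged into the base weights after training, which is how LoRA avoids the extra inference latency the abstract mentions.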
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The premise of the paper is that while OpenAI and Google continue to race to build the most powerful language models, their efforts are rapidly being eclipsed by the work happening in the open source community.
·simonwillison.net·
Perplexity AI
Perplexity AI unlocks the power of knowledge with information discovery and sharing.
·perplexity.ai·
reflection of a reflection of a reflection | Trailer
synthographic motion picture generated with Stable Diffusion - 5.5.2023 - a story inspired by "I, We, Waluigi: a Post-Modern analysis of Waluigi" by Franck Ribery https://theemptypage.wordpress.com/2013/05/20/critical-perspectives-on-waluigi/ #roaroar
·youtube.com·
Blaize AI Studio
Bringing sharper focus to AI modeling. A user-guided interface that focuses on the problem space rather than the tools.
·argodesign.com·
Public Stack
Towards open, democratic and sustainable digital public spaces
·publicstack.net·
The Public Stack: a Model to Incorporate Public Values in Technology - Amsterdam Smart City
*Public administrators, public tech developers, and public service providers face the same challenge: how to develop and use technology in accordance with public values like openness, fairness, and inclusivity? The question is urgent as we continue to rely on proprietary technology that is developed within a surveillance-capitalist context and is incompatible with the goals and missions of our democratic institutions. This problem has been a driving force behind the development of the [public stack](https://publicstack.net/), a conceptual model developed by [Waag](https://waag.org/en/) through ACROSS and other projects, which roots technical development in public values.*

The idea behind the public stack is simple: there are unseen [layers](https://publicstack.net/layers/) behind the technology we use, including hardware, software, design processes, and business models. All of these layers affect the relationship between people and technology, whether as consumers, subjects, or (as the public stack model advocates) citizens and human beings in a democratic society. The public stack challenges developers, funders, and other stakeholders to develop technology based on shared public values by using participatory design processes and open technology. The goal is to position people and the planet as democratic agents and as more equal stakeholders in deciding how technology is developed and implemented.

ACROSS is a Horizon 2020 European project that develops open source resources to protect digital identity and personal data across European borders. In this context, Waag is developing the public stack model into a service design approach: a resource to help others reflect upon and improve the extent to which their own ‘stack’ reflects public values. In late 2022, Waag developed a method using the public stack as a lens to prompt reflection among developers. A more extensive public stack reflection process is now underway in ACROSS; resources to guide other developers through this same process will be made available later in 2023.

The public stack is a useful model for anyone involved in technology, whether as a developer, funder, active, or even passive user. In the case of ACROSS, its adoption helped project partners implement decentralised privacy-by-design technology based on values like privacy and user control. The model lends itself just as well to other use cases:

* Municipalities can use the public stack to maintain democratic approaches to technology development and adoption in cities.
* Developers of both public and private tech can use the public stack to reflect on which values are embedded in their technology.
* Researchers can use the public stack as a way to ethically assess technology.
* Policymakers can use the public stack as a way to understand, communicate, and shape the context in which technology development and implementation occurs.

***Are you interested in using the public stack in your own project, initiative, or development process? We’d love to hear about it. Let us know more by emailing us at publicstack@waag.org.***
·amsterdamsmartcity.com·