How elite schools like Stanford became fixated on the AI apocalypse
"More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats. Open Philanthropy alone has funneled nearly half a billion dollars into developing a pipeline of talent to fight rogue AI, building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding and scholarships — as well as a new fellowship that can pay student leaders as much as $80,000 a year, plus tens of thousands of dollars in expenses."
Waag | Abdo Hassan on cultivating joyful resistance
The countdown has begun for The PublicSpaces Conference: For a Collective Internet, set to take place on June 27 and 28. Emma Yedema, editor and producer at PublicSpaces, sat down with Abdo Hassan for an interview.
Risk and Harm: Unpacking Ideologies in the AI Discourse
Last March, there was a very interesting back-and-forth on AI regulation between the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. We analyzed the language and rhetoric in the two letters and teased out fundamental ideological differences between DAIR and FLI. We are concerned about problematic ethical views connected with longtermism entering the mainstream by means of public campaigns such as FLI's, and we offer two analytical lenses (Existential Risk, Ongoing Harm) for assessing them.
Design for AI
Part 1: Models, Views, Controllers. Model-View-Controller (MVC) is a software architecture pattern invented at Xerox PARC in the late 1970s. Initially a system for implementing graphical user interfaces on desktop machines, it is now at the core of practically...
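As a rough sketch of the pattern the excerpt describes (the class names below are illustrative, not taken from the article), the model owns application state, the view renders it, and the controller maps user actions onto model updates:

```python
# Minimal Model-View-Controller sketch (illustrative names, not from the article):
# the model owns state, the view renders it, and the controller translates
# user actions into model updates.

class CounterModel:
    def __init__(self) -> None:
        self.value = 0


class CounterView:
    def render(self, model: CounterModel) -> None:
        print(f"count = {model.value}")


class CounterController:
    def __init__(self, model: CounterModel, view: CounterView) -> None:
        self.model, self.view = model, view

    def handle(self, action: str) -> None:
        if action == "increment":
            self.model.value += 1
        elif action == "decrement":
            self.model.value -= 1
        self.view.render(self.model)  # the view reflects the updated model


controller = CounterController(CounterModel(), CounterView())
for action in ["increment", "increment", "decrement"]:
    controller.handle(action)
```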
9 ways to see a Dataset
To further the understanding of training data, the Knowing Machines Project developed SeeSet, an investigative tool for examining the training datasets for AI. Here you will find nine essays from individual members of our team. Each one uses SeeSet to explore a key AI dataset and its role in the construction of 'ground truth.'
Parables of AI in/from the Majority World: An Anthology
This anthology was curated from stories of living with data and AI in/from the majority world, narrated at a storytelling workshop in October 2021 organized by Data & Society Research Institute.
EKILA: Synthetic Media Provenance and Attribution for Generative Art
We present EKILA, a decentralized framework that enables creatives to receive recognition and reward for their contributions to generative AI (GenAI). EKILA proposes a robust visual attribution technique and combines this with an emerging content provenance standard (C2PA) to address the problem of synthetic image provenance: determining the generative model and training data responsible for an AI-generated image. Furthermore, EKILA extends the non-fungible token (NFT) ecosystem to introduce a tokenized representation for rights, enabling a triangular relationship between an asset's Ownership, Rights, and Attribution (ORA). Leveraging the ORA relationship enables creators to express agency over training consent and, through our attribution model, to receive apportioned credit, including royalty payments for the use of their assets in GenAI.
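A loose sketch of how the Ownership/Rights/Attribution (ORA) triangle from the abstract could be represented as plain data; every name below is hypothetical and not drawn from the EKILA paper:

```python
# Hypothetical data shapes for the ORA (Ownership, Rights, Attribution) triangle;
# all field names are illustrative, not EKILA's actual schema.
from dataclasses import dataclass


@dataclass
class Ownership:
    asset_id: str           # e.g. a content hash identifying the training image
    owner: str              # creator identity (or wallet address in an NFT setting)


@dataclass
class Rights:
    asset_id: str
    training_consent: bool  # whether the creator consents to GenAI training use
    royalty_share: float    # apportioned royalty share for this asset


@dataclass
class Attribution:
    generated_image_id: str
    asset_id: str           # training asset matched by visual attribution
    weight: float           # estimated contribution to the generated image

# A generated image can then be traced Attribution -> Rights -> Ownership to
# check training consent and split royalty payments among contributing creators.
```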
The Curse of Recursion: Training on Generated Data Makes Models Forget
Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, data about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
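A toy illustration of the effect the abstract describes, assuming each generation fits a plain Gaussian to the previous generation's samples (a minimal analogue, not the paper's LLM experiments): the fitted spread drifts toward zero over generations, so the tails of the original distribution disappear.

```python
# Toy model-collapse demo: repeatedly fit a Gaussian to samples drawn from the
# previous generation's fitted Gaussian. Estimation error compounds, the fitted
# standard deviation drifts toward zero, and the original tails vanish.
import numpy as np

rng = np.random.default_rng(0)

n = 50  # samples per generation (kept small so estimation error compounds quickly)
data = rng.normal(loc=0.0, scale=1.0, size=n)  # generation 0: "real" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()    # "train" a Gaussian on the current data
    data = rng.normal(mu, sigma, size=n)   # next generation sees only generated data
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```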
AI Is a Lot of Work
How many humans does it take to make tech seem human? Millions.
Decolonial AI: Decolonial Theory as Sociotechnical Foresight in Artificial Intelligence
This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial Intelligence (AI) is viewed as amongst the technological advances that will reshape modern societies and their relations. Whilst the design and deployment of systems that continually adapt holds the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories use historical hindsight to explain patterns of power that shape our intellectual, political, economic, and social world. By embedding a decolonial critical approach within its technical practice, AI communities can develop foresight and tactics that can better align research and technology development with established ethical principles, centring vulnerable peoples who continue to bear the brunt of negative impacts of innovation and scientific progress. We highlight problematic applications that are instances of coloniality, and using a decolonial lens, submit three tactics that can form a decolonial field of artificial intelligence: creating a critical technical practice of AI, seeking reverse tutelage and reverse pedagogies, and the renewal of affective and political communities. The years ahead will usher in a wave of new scientific breakthroughs and technologies driven by AI research, making it incumbent upon AI communities to strengthen the social contract through ethical foresight and the multiplicity of intellectual perspectives available to us; ultimately supporting future technologies that enable greater well-being, with the goal of beneficence and justice for all.
Positive AI: Key Challenges for Designing Wellbeing-aligned Artificial Intelligence
Artificial Intelligence (AI) is transforming the world as we know it, implying that it is up to the current generation to use the technology for "good." We argue that making good use of AI constitutes aligning it with the wellbeing of conscious creatures. However, designing wellbeing-aligned AI systems is difficult. In this article, we investigate a total of twelve challenges that can be categorized as related to a lack of knowledge (how to contextualize, operationalize, optimize, and design AI for wellbeing), and a lack of motivation (designing AI for wellbeing is seen as risky and unrewarding). Our discussion can be summarized into three key takeaways: 1) our understanding of the impact of systems on wellbeing should be advanced, 2) systems should be designed to promote and sustain wellbeing intentionally, and 3), above all, Positive AI starts with believing that we can change the world for the better and that it is profitable.
Instagram effect in ChatGPT — Abhishek Gupta | Responsible AI | ACI
A lot is required to achieve those shareable trophies of taming ChatGPT into producing what you want: tinkering, rejected drafts, invocations of the right spells (I mean prompts!), and learning from observing Twitter threads and Reddit forums. But those early efforts remain hidden, a kind of survivorship bias.
Mechanisms of Techno-Moral Change: A Taxonomy and Overview
Ethical Theory and Moral Practice - The idea that technologies can change moral beliefs and practices is an old one. But how, exactly, does this happen? This paper builds on an emerging field of...