These Women Tried to Warn Us About AI
Rumman Chowdhury, Timnit Gebru, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini open up about their artificial intelligence fears.

AI By the People, For the People
This startup wants to help millions of people whose languages are marginalized online gain better access to AI tools.

Datasheets for Datasets
Documentation to facilitate communication between dataset creators and consumers.

What AI Teaches Us About Good Writing | NOEMA
While AI can speed up the writing process, it doesn’t optimize quality, and it endangers our sense of connection to ourselves and others.

How elite schools like Stanford became fixated on the AI apocalypse
"More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats. Open Philanthropy alone has funneled nearly half a billion dollars into developing a pipeline of talent to fight rogue AI, building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding and scholarships — as well as a new fellowship that can pay student leaders as much as $80,000 a year, plus tens of thousands of dollars in expenses."

Waag | Abdo Hassan on cultivating joyful resistance
The countdown has begun for The PublicSpaces Conference: For a Collective Internet, set to take place on June 27 and 28. Emma Yedema, editor and producer at PublicSpaces, sat down with Abdo Hassan for an interview.

Risk and Harm: Unpacking Ideologies in the AI Discourse
Last March, there was a notable back-and-forth on AI regulation between the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. We analyzed the language and rhetoric of the two letters and teased out fundamental ideological differences between DAIR and FLI. We are concerned about problematic ethical views connected with longtermism entering the mainstream through public campaigns such as FLI's, and we offer two analytical lenses (Existential Risk, Ongoing Harm) for assessing them.

Design for AI
Part 1: Models, Views, Controllers. Model View Controller is a software architecture pattern invented at Xerox PARC in the late 70s. Initially a system for implementing graphical user interfaces on desktop machines, it is now the core of practically...

9 ways to see a Dataset
To further the understanding of training data, the Knowing Machines Project developed SeeSet, an investigative tool for examining AI training datasets. Here you will find nine essays from individual members of our team. Each one uses SeeSet to explore a key AI dataset and its role in the construction of 'ground truth.'

Parables of AI in/from the Majority World: An Anthology
This anthology was curated from stories of living with data and AI in/from the majority world, narrated at a storytelling workshop organized by the Data & Society Research Institute in October 2021.

EKILA: Synthetic Media Provenance and Attribution for Generative Art
We present EKILA, a decentralized framework that enables creatives to receive recognition and reward for their contributions to generative AI (GenAI). EKILA proposes a robust visual attribution technique and combines it with an emerging content provenance standard (C2PA) to address the problem of synthetic image provenance: determining the generative model and training data responsible for an AI-generated image. Furthermore, EKILA extends the non-fungible token (NFT) ecosystem to introduce a tokenized representation for rights, enabling a triangular relationship between an asset’s Ownership, Rights, and Attribution (ORA). Leveraging the ORA relationship enables creators to express agency over training consent and, through our attribution model, to receive apportioned credit, including royalty payments for the use of their assets in GenAI.

HOLO 3: Mirror Stage
Nora N. Khan assembles a cast of luminaries to consider the far-reaching implications of AI and computational culture.