AIxDESIGN Bookmark Library

1855 bookmarks
Datasheets for Datasets
Documentation to facilitate communication between dataset creators and consumers.
A Human Rights-Based Approach to Responsible AI
"We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights, essentially helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated."
What AI Teaches Us About Good Writing | NOEMA
While AI can speed up the writing process, it doesn’t optimize quality — and it endangers our sense of connection to ourselves and others.
DAIR (Distributed AI Research Institute)
The Distributed AI Research Institute is a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.
Bad Input - Consumer Reports
Three short films by Consumer Reports on biases in algorithms that result in unfair practices towards communities of color. Directed by Alice Gu.
How elite schools like Stanford became fixated on the AI apocalypse
"More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats. Open Philanthropy alone has funneled nearly half a billion dollars into developing a pipeline of talent to fight rogue AI, building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding and scholarships — as well as a new fellowship that can pay student leaders as much as $80,000 a year, plus tens of thousands of dollars in expenses."
Join Us
Studio for Sonic Experiences
AI Consensus
Student Movement for the Ethical and Responsible Use of AI Tools in Education
Story Jam REVEAL: The secret life of media
Make social impact with stories that connect at the intersection of storytelling and technology. Collaborate with storytellers from different disciplines. Develop innovative stories under healthy time pressure, with inspiring input and professional guidance.
Waag | Abdo Hassan on cultivating joyful resistance
The countdown has begun for The PublicSpaces Conference: For a Collective Internet, set to take place on June 27 and 28. Emma Yedema, editor and producer at PublicSpaces, sat down with Abdo Hassan for an interview.
Mentors — NEW INC
Our community is supported by some of NYC's top professionals in the fields of art, design, technology, and entrepreneurship. Through monthly meetings and ongoing guidance, mentors help our members develop as creatives and future leaders.
Dear Ai
Use the power of Artificial Intelligence to generate intimate, thoughtful, and beautiful letters.
Intersectional AI Toolkit
The Intersectional AI Toolkit gathers ideas, ethics, and tactics for more ethical, equitable tech. It shows how established queer, antiracist, antiableist, neurodiverse, feminist communities contribute needed perspectives to reshape digital systems. The toolkit also offers approachable guides to both intersectionality and AI. This endeavor works from the hope that code can feel approachable for everyone, can move us toward care and repair—rather than perpetuating power imbalances—and can do so by embodying lessons from intersectionality.
Risk and Harm: Unpacking Ideologies in the AI Discourse
Last March, there was a very interesting back-and-forth on AI regulation between the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. We analyzed the language and rhetoric of the two letters and teased out fundamental ideological differences between DAIR and FLI. We are concerned about problematic ethical views connected with longtermism entering the mainstream by means of public campaigns such as FLI's, and we offer two analytical lenses (Existential Risk, Ongoing Harm) for assessing them.
AI Designer @ Microsoft, Amsterdam
Do you enjoy interfacing with customers and working on complex problems involving digitally enabled experiences? Are you excited by the idea of applying your broad experience design skillset to the holistic project lifecycle, including discovery, ideation, design, and definition of AI-enabled digital experiences?