AIxDESIGN Bookmark Library


2033 bookmarks
AI_Commons – Open Future
This project, now completed, explored how AI training datasets, and the openly licensed works included in them, can be better governed and shared as a commons.
openfuture.eu
A Human Rights-Based Approach to Responsible AI
"We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights, essentially helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated."
arxiv.org
Why ‘good’ AI systems aren’t actually good for anyone
“The technologies built by a few techies from Silicon Valley run everything and none of us know how they work.” - David Middleback, Re-inventing Education for the Digital Age. A few months ago I was struck with the genius idea of embarking on a PhD program. My project aimed at figuring out how we can make some of those Silicon Valley AI technologies ‘better’ for everyone affected by them. It sounded simple enough, right? Well, it turns out that this simple description I was using to explain to fr…
synapse-analytics.io
Glaze: Protecting Artists from Style Mimicry
We are an academic research group of PhD students and CS professors interested in protecting Internet users from invasive uses of machine learning.
glaze.cs.uchicago.edu