ODI Fridays: can we please stop talking about AI
'Machine learning is revolutionising healthcare provision and delivery, from mobilising previously inaccessible data sources to generating increasingly powerful algorithmic constructs for prognostic modelling. However, it is becoming increasingly obvious that if we do not learn from the mistakes of our past, that we are doomed to repeat them; if it isn’t already too late'
·theodi.org·
Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms | NEJM
"By embedding race into the basic data and decisions of health care, these algorithms propagate race-based medicine. Many of these race-adjusted algorithms guide decisions in ways that may direct more attention or resources to white patients than to members of racial and ethnic minorities"
tgyateng69·nejm.org·
3 mantras for women in data | MIT Sloan
“It’s almost an imperative, I think, to drive that diversity,” she said. “Diversity from a gender perspective, but also from other perspectives such as age, race, ethnicity, geography, and many others, because we’re seeing AI is such a powerful technology, and we need to make sure it is equitable.”
·mitsloan.mit.edu·
Gender Shades / Joy Buolamwini (US) Timnit Gebru (ETH) | Flickr
Joy Buolamwini and Timnit Gebru investigated bias in AI facial recognition programs. Their study reveals that popular, already-deployed applications display obvious discrimination on the basis of gender or skin color. One reason for the unfair results can be found in erroneous or incomplete data sets on which the programs are trained. In areas like medical applications, this can be a problem: simple convolutional neural nets are already as capable as experts of detecting melanoma (malignant skin changes). However, skin color information is crucial to this p...
·flickr.com·
Algorithmic Justice League (AJL) / Joy Buolamwini | This pro… | Flickr
This project is part of the Ars Electronica CyberArts Exhibition. Algorithms have become essential elements of our daily lives, used in almost all areas of society: in online searches and navigation, in ratings systems and smart devices or bots, but also in banking, speech and facial recognition, health care, policing, and so on. However, the systems that are developed are never neutral, which means that algorithms may be biased and discriminatory. The Algorithmic Justice League (AJL) is an organization that combines art and research to increase public awareness of the social implications a...
·flickr.com·
AI Ethics: Global Perspectives
AI Ethics: Global Perspectives is a free, online course designed to raise awareness and help institutions work toward a more responsible use of AI. It brings together leading experts in the field of AI from around the world to consider the ethical ramifications of AI and rectify initiatives that might be harmful to particular people and groups in society. We will release videos on a rolling basis. Sign up at bit.ly/ai-ethics-course to get notified about the launch of the course and about future releases.
·vimeo.com·
How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use? § Sometimes, reckoning with the effects of biased training data means realizing that the app shouldn't be built: that without human supervision there is no way to stop it from saying problematic things to its users, and that it's unacceptable to let it do so.
·technologyreview.com·
OPEN-ODI-2020-01_Monitoring-Equality-in-Digital-Public-Services-report-1.pdf
Many of the public and private services we use are now digital. The move to digital is likely to increase as technology becomes more embedded in our lives. But what does this mean for how essential public services understand who is using, or indeed not using, them and why? § Data about the protected characteristics of people using these services isn't currently collected and statistics aren't published in a consistent or collective way. This means it is harder to find out who is excluded from using these services and why.
·up.raindrop.io·
Co-op is using facial recognition tech to scan and track shoppers | WIRED UK
Branches of Co-op in the south of England have been using real-time facial recognition cameras to scan shoppers entering stores. § See US-based studies on FRT, which show the technology can be unreliable for black people, especially black women.
·wired.co.uk·
The dividing line: how we represent race in data – The ODI
The point of this essay is to encourage a critical approach to the relationship between race and data. It points to three questions that anyone working with data should ask if they are going to be collecting and using data about race. § If we are not careful, data can divide and sort us in exactly the sort of essentialising ways that the colonial idea of race supported. But if researchers ask the right questions, and know their history, we can use data to advocate for racial justice.
·theodi.org·