Found 27 bookmarks
Study finds gender and skin-type bias in commercial artificial-intelligence systems
A new paper from the MIT Media Lab’s Joy Buolamwini shows that three commercial facial-analysis programs demonstrate gender and skin-type biases, and suggests a new, more accurate method for evaluating the performance of such machine-learning systems.
·news.mit.edu·
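As a rough illustration of the evaluation approach this article describes, the sketch below reports error rates per intersectional subgroup (gender × skin type) instead of a single aggregate accuracy figure. It is a minimal sketch, not the paper's code; the field names and sample data are assumptions.

```python
# Illustrative sketch (not the paper's code): disaggregated evaluation of a
# classifier, reporting error rates per intersectional subgroup
# (gender x skin type) rather than one aggregate accuracy number.
from collections import defaultdict

def subgroup_error_rates(records):
    """records: iterable of dicts with 'gender', 'skin_type',
    'true_label', 'predicted_label' (all strings)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_type"])
        totals[key] += 1
        if r["predicted_label"] != r["true_label"]:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# A high overall accuracy can hide a much higher error rate for one
# subgroup (e.g. darker-skinned women) than for another.
sample = [
    {"gender": "female", "skin_type": "darker",  "true_label": "female", "predicted_label": "male"},
    {"gender": "female", "skin_type": "darker",  "true_label": "female", "predicted_label": "female"},
    {"gender": "male",   "skin_type": "lighter", "true_label": "male",   "predicted_label": "male"},
]
for group, rate in subgroup_error_rates(sample).items():
    print(group, f"{rate:.0%}")
```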
The Shuri Network Achievements Summary 2020
How many times have you seen an all-female, black and ethnic minority (BME) panel talking about technology? For many people, their first time would have been the Shuri Network launch last July. The Shuri Network was launched in 2019 to support women of colour in NHS digital health to develop the skills and confidence to progress into senior leadership positions, and to help NHS leadership teams more closely represent the diversity of their workforce.
·up.raindrop.io·
Artificial Intelligence in Hiring: Assessing Impacts on Equality
The use of artificial intelligence (AI) presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds that these tools are often inadequate for ensuring compliance with UK equality law, good governance and best practice. We argue in this report that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work. Here, we present first steps that could be taken to achieve this. We also publish a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
·up.raindrop.io·
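For context on what a narrow technical audit of a hiring tool looks like, here is a minimal sketch of one common check: the selection-rate ("adverse impact") ratio between demographic groups. This is exactly the kind of isolated metric the report argues is insufficient on its own; the data and group labels are hypothetical.

```python
# Minimal sketch of one common technical audit check for hiring tools:
# the selection-rate ("adverse impact") ratio between demographic groups.

def selection_rates(outcomes):
    """outcomes: list of (group, was_selected) tuples."""
    counts, selected = {}, {}
    for group, was_selected in outcomes:
        counts[group] = counts.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / counts[g] for g in counts}

def adverse_impact_ratios(outcomes, reference_group):
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items() if g != reference_group}

# Hypothetical data: group B is selected at half the rate of group A,
# well under the 80% threshold many auditors treat as a red flag
# (the "four-fifths rule").
data = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 30 + [("B", False)] * 70
print(adverse_impact_ratios(data, reference_group="A"))  # {'B': 0.5}
```

Passing a check like this says nothing about governance, transparency or lawful processing, which is why the report calls for a broader equality impact assessment.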
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly) | by The GovLab | Data Stewards Network | Medium
Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive…
·medium.com·
The Dark Side of Digitisation and the Dangers of Algorithmic Decision-Making - Abeba Birhane
As we hand over decision-making regarding social issues to automated systems developed by profit-driven corporates, not only are we allowing our social concerns to be dictated by the profit incentive, but we are also handing over moral and ethical questions to the corporate world, argues ABEBA BIRHANE
·theelephant.info·
Between Antidiscrimination and Data: Understanding Human Rights Discourse on Automated Discrimination in Europe
Automated decision making threatens to disproportionately impact society’s most vulnerable communities living at the intersection of economic and social marginalization. The report discusses…
·eprints.lse.ac.uk·
Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms | NEJM
"By embedding race into the basic data and decisions of health care, these algorithms propagate race-based medicine. Many of these race-adjusted algorithms guide decisions in ways that may direct more attention or resources to white patients than to members of racial and ethnic minorities"
·nejm.org·
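The following sketch is purely illustrative and is not any specific published clinical formula: it only shows the mechanism the article critiques, where a race coefficient baked into a scoring function gives two patients with identical clinical measurements different scores, which can shift referral or treatment thresholds. The formula, coefficient and function names are all hypothetical.

```python
# Purely illustrative, hypothetical scoring function (not a real clinical
# formula): once a "race correction" coefficient is embedded, identical
# clinical inputs produce different outputs depending only on recorded race.

def risk_score(measurement, age, race, race_coefficient=1.15):
    base = measurement * (0.99 ** age)   # hypothetical base formula
    if race == "Black":
        base *= race_coefficient         # the embedded race adjustment
    return base

same_clinical_inputs = dict(measurement=80.0, age=60)
print(risk_score(**same_clinical_inputs, race="white"))
print(risk_score(**same_clinical_inputs, race="Black"))  # differs only because of race
```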
3 mantras for women in data | MIT Sloan
“It’s almost an imperative, I think, to drive that diversity,” she said. “Diversity from a gender perspective, but also from other perspectives such as age, race, ethnicity, geography, and many others, because we’re seeing AI is such a powerful technology, and we need to make sure it is equitable.”
·mitsloan.mit.edu·
OPEN-ODI-2020-01_Monitoring-Equality-in-Digital-Public-Services-report-1.pdf
Many of the public and private services we use are now digital. The move to digital is likely to increase as technology becomes more embedded in our lives. But what does this mean for how essential public services understand who is using, or indeed not using, them and why? § Data about the protected characteristics of people using these services isn’t currently collected and statistics aren’t published in a consistent or collective way. This means it is harder to find out who is excluded from using these services and why.
·up.raindrop.io·
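As a rough sketch of the kind of consistent monitoring the report calls for, the snippet below compares completion rates of a digital service across a protected characteristic. The field names and sample data are assumptions for illustration, not taken from the report.

```python
# Illustrative sketch: compare completion rates of a digital service across
# a protected characteristic to spot who may be excluded.
# Field names and data are assumptions, not from the ODI report.

def completion_rate_by_group(sessions, characteristic):
    """sessions: list of dicts like {'age_band': '65+', 'completed': True}."""
    totals, completed = {}, {}
    for s in sessions:
        group = s.get(characteristic, "not stated")
        totals[group] = totals.get(group, 0) + 1
        completed[group] = completed.get(group, 0) + int(s["completed"])
    return {g: completed[g] / totals[g] for g in totals}

sessions = [
    {"age_band": "18-34", "completed": True},
    {"age_band": "18-34", "completed": True},
    {"age_band": "65+",   "completed": False},
    {"age_band": "65+",   "completed": True},
]
print(completion_rate_by_group(sessions, "age_band"))
```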
How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use? § Sometimes, to reckon with the effects of biased training data is to realize that the app shouldn't be built. That without human supervision, there is no way to stop the app from saying problematic stuff to its users, and that it's unacceptable to let it do so.
·technologyreview.com·
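One mitigation the article discusses is never showing a generated reply directly: gate it behind a safety check and fall back to a refusal or a human reviewer when the check fails. The sketch below assumes placeholder `generate_reply` and `toxicity_score` callables standing in for a language model and a toxicity classifier (or a human reviewer); they are not real library calls.

```python
# Minimal sketch: gate model output behind a safety check before it reaches
# the user. `generate_reply` and `toxicity_score` are placeholders supplied
# by the caller, not real library APIs.

def safe_reply(user_message, generate_reply, toxicity_score, threshold=0.5):
    candidate = generate_reply(user_message)
    if toxicity_score(candidate) >= threshold:
        # Block the generation and escalate instead of shipping it.
        return "Sorry, I can't respond to that. A human will follow up."
    return candidate

# Toy stand-ins so the sketch runs end to end.
blocklist = {"slur", "hate"}
print(safe_reply(
    "hello",
    generate_reply=lambda msg: "hi there!",
    toxicity_score=lambda text: 1.0 if blocklist & set(text.lower().split()) else 0.0,
))
```

The article's stronger point stands regardless of the filter: if no combination of filtering and human supervision can stop the system from harming its users, the right decision may be not to ship it at all.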