Artificial Intelligence in Hiring: Assessing Impacts on Equality
The use of artificial intelligence (AI) presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds these tools are often inadequate for ensuring compliance with UK Equality Law, good governance, and best practice. We argue in this report that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work. Here, we present first steps that could be taken to achieve this. We also publish a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
·up.raindrop.io·
The coming war on the hidden algorithms that trap people in poverty
A growing group of lawyers are uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services. Increasingly, the fight over a client’s eligibility involves some kind of algorithm. “In some cases, it probably should just be shut down because there’s no way to make it equitable.”
·technologyreview.com·
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly) | The GovLab
Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive…
·medium.com·
Councils scrapping use of algorithms in benefit and welfare decisions
Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
·theguardian.com·
Meaningful Transparency and (in)visible Algorithms
Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems? High-profile retractions have taken place against a shift in public sentiment towards greater scepticism and mistrust of ‘black box’ technologies, evidenced by increasing awareness of the risks that potentially invasive profiling poses for citizens.
·adalovelaceinstitute.org·
Police Built an AI To Predict Violent Crime. It Was Seriously Flawed
A Home Office-funded project that used artificial intelligence to predict gun and knife crime was found to be wildly inaccurate. “Basing our arguments on inaccuracy is problematic because the tech deficiencies are solvable through time. Even if the algorithm was set to be 100 percent accurate, there would still be bias in this system.”
·wired.co.uk·
Algorithmic Colonisation of Africa - Abeba Birhane
Colonialism in the age of Artificial Intelligence takes the form of “state-of-the-art algorithms” and “AI driven solutions” unsuited to African problems, and hinders the development of local products, leaving the continent dependent on Western software and infrastructure.
·theelephant.info·
The Dark Side of Digitisation and the Dangers of Algorithmic Decision-Making - Abeba Birhane
As we hand over decision-making on social issues to automated systems developed by profit-driven corporations, not only are we allowing our social concerns to be dictated by the profit incentive, but we are also handing over moral and ethical questions to the corporate world, argues Abeba Birhane.
·theelephant.info·
Black programmers and technologists who inspire us
This year, in honor of Black History Month, the Codecademy Team is celebrating Black leaders who are working to build a more inclusive, more welcoming, and more diverse tech industry. It's important to celebrate Black people in all our roles and diversity. For UK Black History Month (BHM), we're keen to see similar profiling of technologists who want to raise their visibility, so we can celebrate their work.
·news.codecademy.com·
Between Antidiscrimination and Data: Understanding Human Rights Discourse on Automated Discrimination in Europe
Automated decision making threatens to disproportionately impact society’s most vulnerable communities living at the intersection of economic and social marginalization.
·eprints.lse.ac.uk·
ODI Fridays: Can we please stop talking about AI?
'Machine learning is revolutionising healthcare provision and delivery, from mobilising previously inaccessible data sources to generating increasingly powerful algorithmic constructs for prognostic modelling. However, it is becoming increasingly obvious that if we do not learn from the mistakes of our past, that we are doomed to repeat them; if it isn’t already too late'
·theodi.org·
Hidden in Plain Sight — Reconsidering the Use of Race Correction in Clinical Algorithms | NEJM
"By embedding race into the basic data and decisions of health care, these algorithms propagate race-based medicine. Many of these race-adjusted algorithms guide decisions in ways that may direct more attention or resources to white patients than to members of racial and ethnic minorities"
·nejm.org·
3 mantras for women in data | MIT Sloan
“It’s almost an imperative, I think, to drive that diversity,” she said. “Diversity from a gender perspective, but also from other perspectives such as age, race, ethnicity, geography, and many others, because we’re seeing AI is such a powerful technology, and we need to make sure it is equitable.”
·mitsloan.mit.edu·