The UK is secretly testing a controversial web snooping tool | WIRED UK
Black Tech Employees Rebel Against ‘Diversity Theater’
Companies pledged money and support for people of color. But some say they still face a hostile work environment for speaking out or simply doing their jobs.
How Apps Can Help Migrants Achieve Better Health Outcomes
Mobile applications aided by artificial intelligence may help migrants better address their physical and mental health.
How to poison the data that Big Tech uses to surveil you
Algorithms are meaningless without good data. The public can exploit that to demand change.
Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further. Researchers experimenting with GPT-3, the AI text-generation model, found that it is not ready to replace human clinicians in a chat setting: in one test conversation, the chatbot told a simulated patient to kill themselves.
The Shuri Network Achievements Summary 2020
How many times have you seen an all-female, Black and minority ethnic (BME) panel talking about technology? For many people, the first time was the Shuri Network launch last July. The Shuri Network was launched in 2019 to support women of colour in NHS digital health in developing the skills and confidence to progress into senior leadership positions, and to help NHS leadership teams more closely represent the diversity of their workforce.
Artificial Intelligence Poses New Threat to Equal Employment Opportunity
Just when we thought it was safe to go back in the water, a new threat to equal employment opportunity has emerged: employers basing hiring decisions on AI-powered video and game-based “pre-employment” assessments of job candidates.
Clearview AI's plan for invasive facial recognition is worse than you think
Clearview AI's latest patent application reveals the firm's ongoing plans to use surveillance against vulnerable individuals. According to BuzzFeed News, the patent, filed in August, describes in detail how applications of facial recognition could range from governmental to social uses such as dating and professional networking. The patent also claims that users could identify individuals who are unhoused or who use drugs simply by accessing the company's face-matching system.
Gender Shades
Commercial facial-analysis systems have markedly lower accuracy rates for darker-skinned women than for lighter-skinned men.
Couriers say Uber’s ‘racist’ facial identification tech got them fired
BAME couriers working for Uber Eats and Uber claim that the company’s flawed identification technology is costing them their livelihoods.
Universities are using surveillance software to spy on students
With remote learning now the norm, universities are using surveillance software to keep tabs on students who are not engaging with their studies.
THE UK’S PRIVATISED MIGRATION SURVEILLANCE REGIME: A rough guide for civil society
A guide to how the UK’s borders, immigration, and citizenship system tracks and spies on people, and which companies profit from it.
Artificial Intelligence in Hiring: Assessing Impacts on Equality
The use of artificial intelligence (AI) presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds that these tools are often inadequate for ensuring compliance with UK equality law, good governance, and best practice.
In this report we argue that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work, and we present first steps that could be taken to achieve this. We also publish a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
The coming war on the hidden algorithms that trap people in poverty
A growing group of lawyers is uncovering, navigating, and fighting the automated systems that deny poor people housing, jobs, and basic services. Increasingly, the fight over a client’s eligibility involves some kind of algorithm. As one source quoted in the piece puts it: “In some cases, it probably should just be shut down because there’s no way to make it equitable.”
How Data Can Map and Make Racial Inequality More Visible (If Done Responsibly) | The GovLab, Data Stewards Network
Racism is a systemic issue that pervades every aspect of life in the United States and around the world. In recent months, its corrosive…
Ongoing Data-Driven Efforts to Address Racial Inequality
Many of the issues discussed around data and racial inequality have been the focus of numerous organisations. This is a list of those organisations (largely US-based).
Councils scrapping use of algorithms in benefit and welfare decisions
Call for more transparency on how such tools are used in public services as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
Objective or Biased
On the questionable use of Artificial Intelligence for job applications. An exclusive data analysis by BR (Bavarian Broadcasting) data journalists shows that an AI for personality assessment can be swayed by appearances. This might perpetuate stereotypes while potentially costing candidates the job.
Center for Critical Race and Digital Studies - Publications & Public Works
A list of the Centre for Critical Race and Digital Studies’ publications, books, reports and videos on the intersection of race and digital technology, data and tech.
Meaningful Transparency and (in)visible Algorithms
Can transparency bring accountability to public-sector algorithmic decision-making (ADM) systems? High-profile retractions have taken place against a shift in public sentiment towards greater scepticism and mistrust of ‘black box’ technologies, reflected in growing awareness of the risks that potentially invasive profiling poses to citizens.
Police Built an AI To Predict Violent Crime. It Was Seriously Flawed
A Home Office-funded project that used artificial intelligence to predict gun and knife crime was found to be wildly inaccurate. “Basing our arguments on inaccuracy is problematic because the tech deficiencies are solvable through time. Even if the algorithm was set to be 100 percent accurate, there would still be bias in this system.”
New Algorithms Could Reduce Racial Disparities in Health Care
Machine learning programs trained with patients’ own reports find problems that doctors miss—especially in Black people. Full study here: https://go.nature.com/3pnOwWP
Algorithmic Colonisation of Africa - Abeba Birhane
Colonialism in the age of Artificial Intelligence takes the form of “state-of-the-art algorithms” and “AI driven solutions” unsuited to African problems, and hinders the development of local products, leaving the continent dependent on Western software and infrastructure.
The Dark Side of Digitisation and the Dangers of Algorithmic Decision-Making - Abeba Birhane
As we hand over decision-making on social issues to automated systems developed by profit-driven corporations, we are not only allowing our social concerns to be dictated by the profit incentive, but also handing over moral and ethical questions to the corporate world, argues Abeba Birhane.
Listening to Black Women: The Innovation Tech Can't Crack | WIRED
Mind the Gap: The Final Report of the Equality Task Force - IFOW
Black programmers and technologists who inspire us
This year, in honor of Black History Month, the Codecademy Team is celebrating Black leaders who are working to build a more inclusive, more welcoming, and more diverse tech industry. It's important to celebrate Black people across all roles and in all their diversity. For UK Black History Month (BHM), we're keen to see similar profiles of technologists who want to raise their visibility, so we can celebrate their work.
Between Antidiscrimination and Data: Understanding Human Rights Discourse on Automated Discrimination in Europe
Automated decision making threatens to disproportionately impact society’s most vulnerable communities living at the intersection of economic and social marginalization. The report discusses