Rachel Coldicutt on Twitter
What do I mean by a cock-up? Well, @CDEIUK more tactfully call them “faulty or biased systems”. They include: 📌 Black women being twice as likely to have their passport photos rejected by the Home Office. 📌 The A-level saga 📌 The London Gangs Matrix— Rachel Coldicutt (@rachelcoldicutt) November 26, 2020
·twitter.com·
For Freedoms on Twitter
The NYPD uses biometric technologies such as facial recognition to identify crime suspects, but dark-skinned people are misidentified 40% of the time. To imagine a future where we don't deploy facial recognition & policing, #ForFreedoms & @ai4theppl launched #HeartRecognition: pic.twitter.com/5dnmxZAFLv— For Freedoms (@forfreedoms) June 21, 2021
·twitter.com·
With AI on the rise, is it time to join a union? | IT PRO
Workplace challenges posed by new technology could be answered by very traditional solutions. "Nevertheless, there is also substantial historical evidence that technological changes tend to affect lower-paid and lower-qualified workers more than others. This suggests there are likely to be unequal effects that cause disruption for some people or places more than others,"
·itpro.co.uk·
Study finds gender and skin-type bias in commercial artificial-intelligence systems
A new paper from the MIT Media Lab’s Joy Buolamwini shows that three commercial facial-analysis programs demonstrate gender and skin-type biases, and suggests a new, more accurate method for evaluating the performance of such machine-learning systems.
·news.mit.edu·
Deb Raji on Twitter
These are the four most popular misconceptions people have about race & gender bias in algorithms.I'm wary of wading into this conversation again, but it's important to acknowledge the research that refutes each point, despite it feeling counter-intuitive.Let me clarify.👇🏾 https://t.co/WdzmnGLaFm— Deb Raji (@rajiinio) March 27, 2021
·twitter.com·
Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further. Researchers experimenting with GPT-3, the AI text-generation model, found that it is not ready to replace human respondents in the chatbot.
·artificialintelligence-news.com·
The Shuri Network Achievements Summary 2020
How many times have you seen an all-female, black and ethnic minority (BME) panel talking about technology? For many people, their first time would have been the Shuri Network launch last July. The Shuri Network was launched in 2019 to support women of colour in NHS digital health in developing the skills and confidence to progress into senior leadership positions, and to help NHS leadership teams more closely represent the diversity of their workforce.
·up.raindrop.io·
Clearview AI's plan for invasive facial recognition is worse than you think
Clearview AI's latest patent application reveals the firm's ongoing plans to use surveillance against vulnerable individuals. According to BuzzFeed News, a patent filed in August describes in detail how the applications of facial recognition can range from governmental uses to social ones, such as dating and professional networking. Clearview AI's patent claims that people will be able to identify individuals who are unhoused or who use drugs simply by accessing the company's face-matching system.
·inputmag.com·
Artificial Intelligence in Hiring: Assessing Impacts on Equality
The use of artificial intelligence (AI) in hiring presents risks to equality, potentially embedding bias and discrimination. Auditing tools are often promised as a solution. However, our new research, which examines tools for auditing AI used in recruitment, finds that these tools are often inadequate for ensuring compliance with UK equality law, good governance and best practice. We argue in this report that a more comprehensive approach than technical auditing is needed to safeguard equality in the use of AI for hiring, which shapes access to work. Here, we present first steps that could be taken to achieve this. We also publish a prototype AI Equality Impact Assessment, which we plan to develop and pilot.
·up.raindrop.io·