Found 9 bookmarks (newest first)
For Freedoms on Twitter
The NYPD uses biometric technologies such as facial recognition to identify crime suspects, but dark-skinned people are misidentified 40% of the time. To imagine a future where we don't deploy facial recognition & policing, #ForFreedoms & @ai4theppl launched #HeartRecognition: pic.twitter.com/5dnmxZAFLv — For Freedoms (@forfreedoms) June 21, 2021
·twitter.com·
Training data that is meant to make predictive policing less biased is still racist
Arrest data biases predictive tools because police are known to arrest more people in Black and other minority neighborhoods, which leads algorithms to direct more policing to those areas, which leads to more arrests. The result is that predictive tools misallocate police patrols: some neighborhoods are unfairly designated crime hot spots while others are underpoliced.
·technologyreview.com·
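A minimal sketch of the feedback loop that excerpt describes, not code from the article: two hypothetical neighborhoods with the same underlying crime rate, where patrols are allocated in proportion to previously recorded arrests. All names and parameters (TRUE_CRIME_RATE, PATROLS_PER_DAY, the starting counts) are assumptions chosen only to make the dynamic visible.

```python
import random

random.seed(0)

TRUE_CRIME_RATE = 0.3      # assumed identical in both neighborhoods
PATROLS_PER_DAY = 10
CHECKS_PER_PATROL = 5      # chances to record an arrest on each patrol

# Initial bias: neighborhood A starts with more recorded arrests
# simply because it was patrolled more heavily in the past.
recorded_arrests = {"A": 30, "B": 10}

for day in range(30):
    for _ in range(PATROLS_PER_DAY):
        total = recorded_arrests["A"] + recorded_arrests["B"]
        # "Predictive" allocation: send the patrol where past arrests were recorded.
        neighborhood = "A" if random.random() < recorded_arrests["A"] / total else "B"
        # Arrests can only be recorded where officers actually are,
        # even though the true crime rate is the same in both places.
        for _ in range(CHECKS_PER_PATROL):
            if random.random() < TRUE_CRIME_RATE:
                recorded_arrests[neighborhood] += 1

# The recorded-arrest gap widens over time even though the underlying crime
# rates are identical: the data the predictor sees is a record of where
# police looked, not of where crime happened.
print(recorded_arrests)
```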
The Scottish Racism Project
The Scottish Racism Project does two things: (1) it takes deep dives into the various ways racism has manifested itself up north, explores courses of action to remedy this, and looks at how BAME communities can empower themselves in the face of adversity; (2) it offers and finds solidarity with BAME individuals who have shared real-life personal stories of racism and want the truth of their experiences to be known far and wide, often because the wider Scottish press was uninterested when approached.
·scottish-racism.blogspot.com·
Councils scrapping use of algorithms in benefit and welfare decisions
Calls for more transparency on how such tools are used in public services, as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained “entrenched racism”.
·theguardian.com·
Black Lives Matter: Why forming 'diversity' committees with Black staff, who have no expertise in D&I or Anti-Racism, is tokenising. | LinkedIn
Black people are experts in their lived experience of racism. That does not automatically mean Black people are experts in Diversity and Inclusion (D&I) or Anti-Racism. § Having a corridor chat with a random member of staff who happens to be Black to pick their brains about D&I and Anti-Racism policy is not strategic. It's tokenism. And tokenism is a byproduct of racism.
·linkedin.com·
How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use? § Sometimes, reckoning with the effects of biased training data means realizing that the app shouldn't be built: that without human supervision there is no way to stop the app from saying problematic stuff to its users, and that it's unacceptable to let it do so.
·technologyreview.com·
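One common mitigation pattern alluded to by that excerpt, sketched here as an assumption rather than the approach the article itself describes: screen each generated reply before it reaches the user, and refuse (or escalate to a human reviewer) when it is flagged. `generate_reply` and `toxicity_score` are hypothetical stand-ins for a language-model call and a learned toxicity classifier; neither is a real API.

```python
from typing import Callable

FALLBACK = "Sorry, I can't respond to that."

def moderated_reply(
    prompt: str,
    generate_reply: Callable[[str], str],
    toxicity_score: Callable[[str], float],
    threshold: float = 0.5,
) -> str:
    """Return the model's reply only if it passes an automated screening step."""
    reply = generate_reply(prompt)
    if toxicity_score(reply) >= threshold:
        # In a real deployment this is where a human reviewer could be looped in;
        # here we simply refuse, echoing the excerpt's point that unsupervised
        # output cannot be guaranteed safe.
        return FALLBACK
    return reply

# Trivial stand-ins, just to show the control flow:
if __name__ == "__main__":
    fake_model = lambda p: "echo: " + p
    fake_scorer = lambda text: 0.9 if "badword" in text else 0.1
    print(moderated_reply("hello", fake_model, fake_scorer))    # passes through
    print(moderated_reply("badword", fake_model, fake_scorer))  # blocked
```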