Found 6 bookmarks
For Freedoms on Twitter
The NYPD uses biometric technologies such as facial recognition to identify crime suspects, but dark-skinned people are misidentified 40% of the time. To imagine a future where we don't deploy facial recognition & policing, #ForFreedoms & @ai4theppl launched #HeartRecognition: pic.twitter.com/5dnmxZAFLv
— For Freedoms (@forfreedoms), June 21, 2021
·twitter.com·
Councils scrapping use of algorithms in benefit and welfare decisions
Calls for more transparency on how such tools are used in public services, as 20 councils stop using computer algorithms. The Home Office recently stopped using an algorithm to help decide visa applications after allegations that it contained "entrenched racism".
·theguardian.com·
How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use? § Sometimes, reckoning with the effects of biased training data means realizing that the app shouldn't be built at all: that without human supervision there is no way to stop it from saying problematic things to its users, and that it's unacceptable to let it do so.
·technologyreview.com·