"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
·up.raindrop.io·
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
The coming war on the hidden algorithms that trap people in poverty
A growing group of lawyers is uncovering, navigating, and fighting the automated systems that deny the poor housing, jobs, and basic services. Increasingly, the fight over a client's eligibility involves some kind of algorithm. "In some cases, it probably should just be shut down because there's no way to make it equitable."
·technologyreview.com·
How to make a chatbot that isn’t racist or sexist | MIT Technology Review
Tools like GPT-3 are stunningly good, but they feed on the cesspits of the internet. How can we make them safe for the public to actually use?

Sometimes, to reckon with the effects of biased training data is to realize that the app shouldn't be built – that without human supervision, there is no way to stop it from saying problematic things to its users, and that it's unacceptable to let it do so.
·technologyreview.com·