Seven intersectional feminist principles for equitable and actionable COVID-19 data
This essay offers seven intersectional feminist principles for equitable and actionable COVID-19 data, drawing on the authors' prior work on data feminism. Our book, Data Feminism (D'Ignazio and Klein, 2020), offers seven principles that suggest possible points of entry for challenging and changing power imbalances in data science. In this essay, we offer seven sets of examples, one inspired by each of our principles, both for identifying existing power imbalances in the impact of the novel coronavirus and the response to it, and for beginning the work of change.
journals.sagepub.com
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
up.raindrop.io
"Any data scientist working to automate issues of a social nature, in effect, is engaged in making moral and ethical decisions – they are not simply dealing with purely technical work but with a practice that actively impacts individual people." - Abeba Birhane
Medical chatbot using OpenAI’s GPT-3 told a fake patient to kill themselves
We’re used to medical chatbots giving dangerous advice, but one based on OpenAI’s GPT-3 took it much further. Researchers experimenting with GPT-3, the AI text-generation model, found that it is not ready to replace human respondents in the chat box.
artificialintelligence-news.com