Digital Ethics

3767 bookmarks
A.I. Can’t Detect Our Emotions
Emotion A.I., affective computing, and artificial emotional intelligence are all fields creating technology to understand, respond to, measure, and simulate human emotions. Hope runs so high for…
·onezero.medium.com·
Beware technical solutions to non-technical problems
Technical approaches are only one part of the trust relationship between AI and users. You may have heard of an AI method called Explainable Artificial Intelligence, or XAI. XAI refers to the discipline that aims to make the behaviour of AI models more understandable.
·linkedin.com·
Digital Ethics
Our society needs a constructive discourse around ethics in the digital realm, as well as widespread literacy in how to design for ethics in a digitalised environment.
·ri.se·
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation | International Data Privacy Law | Oxford Academic
Key Points: Since approval of the European Union General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a ‘right to exp
·academic.oup.com·
Bias Preservation in Machine Learning: The Legality of Fairness Metrics Under EU Non-Discrimination Law by Sandra Wachter, Brent Mittelstadt, Chris Russell :: SSRN
Western societies are marked by diverse and extensive biases and inequality that are unavoidably embedded in the data used to train machine learning. Algorithms
·papers.ssrn.com·
Laura Klein on Twitter
Here is what Slack should do: find the people in the org who warned you the DM feature could be abused by bad actors and LISTEN TO THEM NEXT TIME. Find the people who said that was an overreaction or too negative and make sure they understand they were wrong and why.— Laura Klein (@lauraklein) March 25, 2021
·twitter.com·
ρђ๏є๒є Շเςкєɭɭ on Twitter
In Hungarian, we don’t use he/she there is only one gender pronoun “Ö”. But it’s fascinating when this is fed through Google Translate, the algorithms highlight the biases that are there. Then imagine enacting any kind of change from those biases, encoded into computer code. pic.twitter.com/DygBtaHShU— ρђ๏є๒є Շเςкєɭɭ (@solarpunk_girl) March 20, 2021
·twitter.com·
Miss IG Geek (she/her) 🏳️‍🌈 on Twitter
Reading this paper on the applicability of the GDPR to ‘Affective AI’ and it strikes me that humanity has spent an inordinate amount of time, energy and money to merely replicate the efficacy and accuracy of temple prophets fondling sheep entrails for insights— Miss IG Geek (she/her) 🏳️‍🌈 (@MissIG_Geek) March 18, 2021
·twitter.com·