Digital Ethics

3325 bookmarks
Unknown Influence | Social Media, Democracy and Transparency
Is Instagram fueling eating disorders in teenagers? Does TikTok harm your mental health? Are Facebook groups encouraging people to take part in offline violence? The answer is… we don’t know for sure. And that’s a serious problem. Sign our petition calling on platform transparency: https://mzl.la/3zjd1fL
·youtube.com·
Morning Brew ☕️ on Twitter
This sweater developed by the University of Maryland is an invisibility cloak against AI. It uses "adversarial patterns" to stop AI from recognizing the person wearing it. pic.twitter.com/aJ8LlHixvX — Morning Brew ☕️ (@MorningBrew) October 25, 2022
·twitter.com·
Dutch anti-fraud system violates human rights, court rules
A Dutch court ruled Wednesday that a government system that uses artificial intelligence to identify potential welfare fraudsters is illegal because it violates laws that shield human rights and privacy.
·upi.com·
Jay Van Bavel on Twitter
A massive new study of over 400,000 college students finds that #Facebook exposure increased depression, especially among those who were most susceptible to mental illness, and reduced academic performance by fostering unfavorable social comparisons. https://t.co/0GddnmTcAc pic.twitter.com/cwjjI1kE50 — Jay Van Bavel (@jayvanbavel) October 20, 2022
·twitter.com·
Location data could be exposed in WhatsApp, Signal, and Threema
Security researchers have found a surprising method for exposing location data in the otherwise secure messaging apps WhatsApp, Signal, and Threema. While the method sounds imprecise, tests showed that it provided greater than 80% reliability … Restore Privacy reports. A team of researchers has found that it’s possible to infer the locations of users of popular […]
·9to5mac.com·
ssoɯpuosɐɾ@ 🇨🇦 on Twitter
A really great interview with @mer__edith, president of @SignalApp: "...we are not in the business of compromising on #privacy, and we are not in the business of handing people who want and need #Signal a compromised version of it..." @verge @reckless https://t.co/tkUJk9WDVi — ssoɯpuosɐɾ@ 🇨🇦 (@jasondmoss) October 20, 2022
·twitter.com·
Ethics, Bias, and Fairness in AI - A TWIML Playlist
The ML community has a unique responsibility to ensure the technologies we produce are fair and responsible and don’t reinforce racial and socioeconomic biases.
·twimlai.com·
Nobody Wants Touch-Screen Glove Box Latches And It Needs To Stop Now - The Autopian
I’ve been seeing some absolute nonsense online recently — nonsense showing some actual real-world car features — and I’ve realized it’s my duty to take a moment and let the whole world know that what’s going on here is very much not okay. It’s not okay. I’m not going to sit back and just let it happen, let …
·theautopian.com·
Sven Nyholm on Twitter
"What, if any, harm can a self-driving car do?" - blog post by Fiona Woollard (@f_woollard), which summarises the ideas in her great recent article in the Journal of Applied Philosophy about what she calls "the new trolley problem": https://t.co/tQxZ8z7ztu— Sven Nyholm (@SvenNyholm) October 12, 2022
·twitter.com·
Overperception of moral outrage in online social networks inflates beliefs about intergroup hostility
As individuals and political leaders increasingly interact in online social networks, it is important to understand how the affordances of social media shape social knowledge of morality and politics. Here, we propose that social media users overperceive levels of moral outrage felt by individuals and groups, inflating beliefs about intergroup hostility. Utilizing a Twitter field survey, we measured authors’ moral outrage in real time and compared authors’ reports to observers’ judgments of the authors’ moral outrage. We find that observers systematically overperceive moral outrage in authors, inferring more intense moral outrage experiences from messages than the authors of those messages actually reported. This effect was stronger in participants who spent more time on social media to learn about politics. Pre-registered confirmatory behavioral experiments found that overperception of individuals’ moral outrage causes overperception of collective moral outrage and inflates beliefs about hostile communication norms, group affective polarization and ideological extremity. Together, these results highlight how individual-level overperceptions of online moral outrage produce collective overperceptions that have the potential to warp our social knowledge of moral and political attitudes.
·osf.io·