Digital Ethics

3547 bookmarks
Scientists use deep learning algorithms to predict political ideology based on facial characteristics
A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports.
·psypost.org·
Time to deal with our ethical debt - Part I: Behold the disruption - Nitor
We’re currently witnessing a technological disruption that will have an unimaginable impact on our society. The only problem is that we haven’t dealt with the ethical debt racked up in the tech industry, and now we’re building the future on a shaky foundation. In this article you will find out why caring about ethics is risk management and brand protection.
·nitor.com·
Mirages: On Anthropomorphism in Dialogue Systems
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to transparency and trust issues, and high risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have begun to investigate factors that induce personification and develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be considered. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, arguing that it can reinforce stereotypes of gender roles and notions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.
·arxiv.org·
What we lose when we work with a ‘giant AI’ like ChatGPT
Giant AIs abstract away the ‘knowledge of the territory’ in favour of the atlas view of all that is present on the internet. Yet the territory can only be captured by the people doing the tasks that giant AIs are trying to replace.
·inkl.com·
Statement | No to AI-generated Images (Part I) – Iris Luckhaus | Illustration & Design
I’ve been learning a lot lately about how the currently popular AIs that generate images from text prompts work. Now that I can find virtually everything I’ve ever published in those databases, I think it’s time to explain this to others.
·irisluckhaus.de·
Affidavit Affidavit of Steven Schwartz – #32, Att. #1 in Mata v. Avianca, Inc. (S.D.N.Y., 1:22-cv-01461) – CourtListener.com
AFFIDAVIT of Peter LoDuca in Opposition re: 16 MOTION to Dismiss pursuant to Fed. R. Civ. P. 12(b)(6).. Document filed by Roberto Mata. (Attachments: # 1 Affidavit Affidavit of Steven Schwartz).(LoDuca, Peter) (Entered: 05/25/2023)
·storage.courtlistener.com·
EU official says Twitter abandons bloc's voluntary pact against disinformation
A top European Union official says Twitter has dropped out of the bloc's voluntary agreement to combat online disinformation. European Commissioner Thierry Breton tweeted Friday that Twitter had pulled out of the EU’s disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter’s “obligation” remained, referring to the EU’s tough new digital rules taking effect in August. San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment.
·apnews.com·
The Larger They Are, the Harder They Fail: Language Models do not...
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have...
·arxiv.org·
Aleksandr Tiulkanov on LinkedIn: #ai
An interesting thought from Edward Snowden at Consensus 2023 (at 29 min 50 sec, link in the comment), on #AI, understanding of and difference between utility…
·linkedin.com·
Thought experiment in the National Library of Thailand
With the advent of ChatGPT, large language models (LLMs) went from a relatively niche topic to something that many, many people have been…
·medium.com·
Humans and algorithms work together — so study them together
Adaptive algorithms have been linked to terrorist attacks and beneficial social movements. Governing them requires new science on collective human–algorithm behaviour.
·nature.com·