Digital Ethics
Reid Blackman, Ph.D. on LinkedIn: #ai #ethics #aiethics
People don’t like saying AI “ethics”. I think that’s a mistake. Name your problems accurately if you’re going to solve them effectively. #ai #ethics #aiethics
Who should take responsibility for evil UX design and digital ethics?
The fine line between persuasion and manipulation: who will protect the user?
The Efficiency Illusion: How Technology Can Create More Jobs Than It Replaces
One of the evergreen debates reignited by each technological advancement is the fear that it will lead to the wholesale replacement of jobs. However, historical data and trends show that technology has the potential to create more jobs and does not necessarily lead to mass unemployment.
uzayran (@uzayran@cyberplace.social)
How it started... how it's going (2 images attached)
The UX of AI Art Generators: Magical, Mystifying, and Macabre
AI-driven art generation can be fun and delightful, but it's also frustrating and disturbing. Two leading tools need a lot of UX work.
‘I do not think ethical surveillance can exist’: Rumman Chowdhury on accountability in AI
One of the leading thinkers on artificial intelligence discusses responsibility, ‘moral outsourcing’ and bridging the gap between people and technology
The Artificial Intelligence Revolution: Part 1 - Wait But Why
Part 1 of 2: "The Road to Superintelligence". Artificial Intelligence — the topic everyone in the world should be talking about.
The Web Won't Survive AI
The digital war of tomorrow pitches generative AI against digital ID
The Andy Warhol Copyright Case That Could Transform Generative AI
The US Supreme Court’s upcoming decision could shift the interpretation of fair use law—and all the people, and tools, that turn to it for protection.
Supreme Court sides against Andy Warhol Foundation in copyright infringement case
In its 7-2 ruling Thursday, the Supreme Court said the late artist infringed on a photographer's copyright when he created a series of works based on an image of the pop star Prince.
Scientists use deep learning algorithms to predict political ideology based on facial characteristics
A new study in Denmark used machine learning techniques on photographs of faces of Danish politicians to predict whether their political ideology is left- or right-wing. The accuracy of predictions was 61%. Faces of right-wing politicians were more likely to have happy and less likely to have neutral facial expressions. Women with attractive faces were more likely to be right-wing, while women whose faces showed contempt were more likely to be left-wing. The study was published in Scientific Reports. ...
Captcha Is Asking Users to Identify Objects That Don't Exist
Discord's captcha asked users to identify a 'Yoko,' a snail-like object that does not exist and was created by AI.
Time to deal with our ethical debt - Part I: Behold the disruption - Nitor
We’re currently witnessing a technological disruption that will have an unimaginable impact on our society. The only problem is that we haven’t dealt with the ethical debt racked up in the tech industry, and now we’re building the future on a shaky foundation. In this article you will find out why caring about ethics is risk management and brand protection.
Mirages: On Anthropomorphism in Dialogue Systems
Automated dialogue or conversational systems are anthropomorphised by developers and personified by users. While a degree of anthropomorphism is inevitable, conscious and unconscious design choices can guide users to personify them to varying degrees. Encouraging users to relate to automated systems as if they were human can lead to transparency and trust issues, and high-risk scenarios caused by over-reliance on their outputs. As a result, natural language processing researchers have begun to investigate factors that induce personification and to develop resources to mitigate such effects. However, these efforts are fragmented, and many aspects of anthropomorphism have yet to be considered. In this paper, we discuss the linguistic factors that contribute to the anthropomorphism of dialogue systems and the harms that can arise, arguing that it can reinforce stereotypes of gender roles and notions of acceptable language. We recommend that future efforts towards developing dialogue systems take particular care in their design, development, release, and description; and attend to the many linguistic cues that can elicit personification by users.
AI statement – Neil Clarke
How computer games encourage kids to spend cash
Many parents are frustrated with games that encourage children to spend money to advance.
Workers Are Terrified About AI. What Can They Do About It?
The Writers Guild of America has its work cut out in the latest struggle against the perennial problem of automation.
What we lose when we work with a ‘giant AI’ like ChatGPT
Giant AIs abstract away the ‘knowledge of the territory’ in favour of the atlas view of all that is present on the internet. Yet the territory can only be captured by the people doing the tasks that giant AIs are trying to replace.
Statement | No to AI-generated Images (Part I) – Iris Luckhaus | Illustration & Design
I’ve been learning a lot lately about how the currently popular AIs that generate images from text prompts work. Now that I can find virtually everything I’ve ever published in those databases, I think it’s time to explain this to others. Opening remarks: I regularly revise and add to this article when I learn something […]
FTC Warns About Misuses of Biometric Information and Harm to Consumers
The Federal Trade Commission today issued a warning that the increasing use of consumers’ biometric information and related technologies, including those powered by machine learning…
Graeme J. on LinkedIn: Mata v. Avianca, Inc., 1:22-cv-01461 - CourtListener.com | 45 comments
Intrigued by reports of the New York case in which lawyers cited non-existent case law produced by generative AI, I took a quick look at the public court… | 45 comments on LinkedIn
Affidavit Affidavit of Steven Schwartz – #32, Att. #1 in Mata v. Avianca, Inc. (S.D.N.Y., 1:22-cv-01461) – CourtListener.com
AFFIDAVIT of Peter LoDuca in Opposition re: 16 MOTION to Dismiss pursuant to Fed. R. Civ. P. 12(b)(6).. Document filed by Roberto Mata. (Attachments: # 1 Affidavit Affidavit of Steven Schwartz).(LoDuca, Peter) (Entered: 05/25/2023)
EU official says Twitter abandons bloc's voluntary pact against disinformation
A top European Union official says Twitter has dropped out of the bloc's voluntary agreement to combat online disinformation. European Commissioner Thierry Breton tweeted Friday that Twitter had pulled out of the EU’s disinformation “code of practice” that other major social media platforms have pledged to support. But he added that Twitter’s “obligation” remained, referring to the EU’s tough new digital rules taking effect in August. San Francisco-based Twitter responded with an automated reply, as it does to most press inquiries, and did not comment.
The Larger They Are, the Harder They Fail: Language Models do not...
Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming. Typical programming languages have...
Aleksandr Tiulkanov on LinkedIn: #ai
An interesting thought from Edward Snowden at Consensus 2023 (at 29 min 50 sec, link in the comment), on #AI, understanding of and difference between utility…
Marc Rotenberg on LinkedIn: #business #people #development #development #aigovernance
I hope all the people who are gushing over Sam Altman's recent statements about AI and regulation take a close look at the document that he and others at Open…
Ravit Dotan, PhD on LinkedIn: #ai #aiethics #responsibleai #openai | 13 comments
OpenAI's ethics washing: yesterday Altman threatened to stop operating in Europe due to the EU AI Act (far in the future, not now). Contrast with his last… | 13 comments on LinkedIn
Thought experiment in the National Library of Thailand
With the advent of ChatGPT, large language models (LLMs) went from a relatively niche topic to something that many, many people have been…
‘There was all sorts of toxic behaviour’: Timnit Gebru on her sacking by Google, AI’s dangers and big tech’s biases
The Ethiopian-born computer scientist lost her job after pointing out the inequalities built into AI. But after decades working with technology companies, she knows all too much about discrimination