Digital Ethics

3739 bookmarks
Google, AI firm must face lawsuit filed by a mother over suicide of son, US court says
Alphabet's Google and artificial-intelligence startup Character.AI must face a lawsuit from a Florida woman who said Character.AI's chatbots caused her 14-year-old son's suicide, a judge ruled on Wednesday.
Garcia sued both companies.
·reuters.com·
Who’s Afraid Of Agentic AI?
What if your doctor’s AI agent leaked your private health information, directly to the internet?
·disesdi.substack.com·
On "Vibe Coding"
The hype about the potential of AI (it’s always future potential, never real current use) has discarded its last cycle (“reasoning models”/“deep research”, both terms factually untrue and deeply deceptive at best) and moved on to a new double whammy of “agentic AI” and “Vibe Coding”. Now “agentic AI” basically just means that some LLM […]
·tante.cc·
The Copilot Delusion
Chapter 1: My Coworker, The Programmer. A shell of a man—more of a parrot than a person. My boss, a true believer in the sacred rite of Pair Programming, chained myself and this "programmer"-colleague together like conjoined twins from different planets. We shared a keyboard, but not a brain. Lord, not even close. "Hold up. I’ve got an idea. Gimme the keyboard real quick." An idea. Yes. The same way a toddler has “an idea” to stick a fork in a wall socket. I was halfway through construc…
·deplet.ing·
Teaching students to use Google and digital archives.
Everybody wants to talk about teaching students how to use AI responsibly in research projects, and nobody wants to talk about how to teach students to use Google advanced search tools to find results from a specific website or date range; how to save a webpage as a PDF so you can access it later if a paywall goes up; or how to explore a digital newspaper archive. Yet that's how I spent this week in my ninth-grade world history course, and I think maybe that was the better use of time. Kids need fewer intellectual black boxes in their lives, and more understanding of how digital tools can help them break free of them.
·linkedin.com·
By Default, Signal Doesn't Recall
Signal Desktop now includes support for a new “Screen security” setting that is designed to help prevent your own computer from capturing screenshots of your Signal chats on Windows. This setting is automatically enabled by default in Signal Desktop on Windows 11. If you’re wondering why we’re on...
·signal.org·
Veil of Ignorance - Ethics Unwrapped
The Veil of Ignorance is a device for helping people more fairly envision a fair society by pretending that they are ignorant of their personal circumstances.
·ethicsunwrapped.utexas.edu·
#noaiprotestpack #protestpack #starterpack #starterpacknoaii… | Iris Luckhaus
This Protest Starter Pack has been on my mind for weeks, so I've finally drawn it! The AI-generated starter packs so many people have been posting only exist because models were trained on our copyrighted and private data without consent or compensation, leading to AI companies trying to replace creators with their own work (!), wasting resources, reproducing bias, inviting fraud and congesting the internet. Their users support this with money and data. Hashtags featuring self-portraits have been around for decades, and it's been such a delight to see human creators reclaim the stolen trend with illustrations that are thought through down to the very detail, as unique as their creators! AI-free starter packs not only showcase the skill, style and inner world of human artists, but also their wonderful, deeply human sense of humour. So, here's my own take on the topic, the Protest Starter Pack:
"Creators and Allies unite, stand up and fight!
★ All major AI generators are trained on stolen content. That’s unethical and unfair.
★ They’re built to deskill and replace human creators. With their own work.
★ AI users support theft, data laundering and broligarchy. Don’t feed the bullies!
★ It’s not “just” artists. They steal your data as well, and will replace your job too.
★ We need consent, compensation and transparency. It’s called Opt-In.
★ AI enables fraud & destroys the web with false facts. Stop the slop!
★ If it’s in the training data, AI reproduces bias, stereotypes and prejudice.
★ Care about the climate? Stop AI wasting the resources of our planet!
★ It is not “time to adapt or die”. It is time to fight – or adapt and die.
★ So let’s unite, petition, protect, inform, explain and call out! Together, we are strong.
★ Thank you for fighting unethical generative AI! Or at least not using it.
Artists fight against generative AI, which – unlike technical or scientific AI – is trained on copyrighted and private data without consent or compensation, and used commercially, in unfair competition with our own work."
––– #NoAIProtestPack #ProtestPack #StarterPack #StarterPackNoAIi #NoAIStarterPack #BuchBrauchtMensch #ZusammenGegenKI #MachMitGegenKI #SupportHumanArtists #ArtistsAgainstAI #NoAIArt #CreateDontScrape #MyArtMyChoice #OptInStattOptOut #CreativesAgainstAI #IllustratorenGegenKI #StopAI #StopTheSlop
·linkedin.com·
Generalization bias in large language model summarization of scientific research | Royal Society Open Science
Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when ...
·royalsocietypublishing.org·
Combining Psychology with Artificial Intelligence: What could possibly go wrong?
The current AI hype cycle combined with Psychology's various crises makes for a perfect storm. Psychology, on the one hand, has a history of weak theoretical foundations, a neglect of computational and formal skills, and a hyperempiricist privileging of experimental tasks and testing for effects. Artificial Intelligence, on the other hand, has a history of mistaking artifacts for theories of cognition, or even for minds themselves, and its engineering offspring likes to move fast and break things. Many of our contemporaries now want to combine the worst of these two worlds. What could possibly go wrong? Quite a lot. Does this mean that Psychology and Artificial Intelligence had best part ways? Not at all. There are very fruitful ways in which the two disciplines can interact and theoretically inform the interdisciplinary study of cognition. But to reap the fruits, one needs to understand how to steer clear of the potential traps.
·osf.io·