Digital Ethics

AI doesn’t just lie — it can make you believe it
Memory manipulation, notes Pat Pataranutaporn, a researcher with the MIT Media Lab, is a very different process from fooling people with deep-fakes.
·japantimes.co.jp·
Building a better relationship with AI?
Read for free: https://acuity.design/building-a-better-relationship-with-ai/. I spoke yesterday at the Content Design Club meetup about accessibility and humane design.
·linkedin.com·
🚨 You might wanna turn off those auto-joining AI note takers for a hot minute.
🚨 You might wanna turn off those auto-joining AI note takers for a hot minute. Like, right now. I’m not kidding.

Because FYI, Otter just got slammed with a federal lawsuit for recording millions of people WITHOUT consent and using their (your?) voices and (probably) confidential data to train AI. The math is actually alarming: 25 million users × 1 billion meetings = the largest theft of conversational data in human history…? And it's technically "legal" because they shift liability to users? Wow.

Basically, if ANYONE on your Zoom/Teams/Meet has Otter integrated, their AI bot can slip into your meeting and start recording. You don't get asked. No popup. No disclosure.

Plot twist: this isn't just Otter... that’s just where it’s starting. Every "helpful" AI assistant uses the same playbook:
• Join your workspace
• Extract your data
• Train on your ideas
• Sell back "improvements"

Think about what you've said in "private" calls lately: your salary negotiation. Medical stuff? Family drama? Legal strategy? Yikes. 😬

I study this stuff AND I've always felt sketchy about these tools. They can be legitimately SO helpful for me, but I still: put a disclosure in every meeting invite, get verbal consent before any call starts, and let people opt out, always.

Lately, I’ve been using Google Gemini because (allegedly) conversations and transcriptions are not used for machine learning improvement or AI model training, and I feel OK about storing the transcripts in my workspace with other confidential info. Plus I like the transparency of the other call attendees getting the notes right away for their records as well. Maybe that will change in the future, but that’s where I’m at.

Does this change how you think about/use notetakers? Tell me everything, and change my mind if you need to. 😅
·linkedin.com·
Machinic Ecology
A playground for thinking with AI ethics. Drag and rearrange to feel how relations shift. Turn the labels off and watch the shapes tell different stories. This is rehearsal, not a verdict. Notice what’s missing, what moves, and what becomes thinkable.
·rshorst.github.io·
More than Carbon: Cradle-to-Grave environmental impacts of GenAI...
The rapid expansion of AI has intensified concerns about its environmental sustainability. Yet, current assessments predominantly focus on operational carbon emissions using secondary data or estimated values, overlooking environmental impacts in other life cycle stages. This study presents the first comprehensive multi-criteria life cycle assessment (LCA) of AI training, examining 16 environmental impact categories based on detailed primary data collection of the Nvidia A100 SXM 40GB GPU. The LCA results for training BLOOM reveal that the use phase dominates 11 of 16 impact categories including climate change (96%), while manufacturing dominates the remaining 5 impact categories including human toxicity, cancer (99%) and mineral and metal depletion (85%). For training GPT-4, the use phase dominates 10 of 16 impact categories, contributing about 96% to both the climate change and resource use, fossils categories. The manufacturing stage dominates 6 of 16 impact categories including human toxicity, cancer (94%) and eutrophication, freshwater (81%). Assessing the cradle-to-gate environmental impact distribution across the GPU components reveals that the GPU chip is the largest contributor across 10 of 16 impact categories and shows particularly pronounced contributions to climate change (81%) and resource use, fossils (80%). While primary data collection results in modest changes in carbon estimates compared to database-derived estimates, substantial variations emerge in other categories. Most notably, minerals and metals depletion increases by 33%, demonstrating the critical importance of primary data for non-carbon accounting. This multi-criteria analysis expands the Sustainable AI discourse beyond operational carbon emissions, challenging current sustainability narratives and highlighting the need for policy frameworks addressing the full spectrum of AI's environmental impact.
·arxiv.org·
AI and Chatbots Are Already Reshaping US Classrooms
Educators across the country are bringing chatbots into their lesson plans. Will it help kids learn or is it just another doomed ed-tech fad?
·bloomberg.com·
Communicative AI - book
Communicative AI: A Critical Introduction to Large Language Models. Large Language Models (LLMs), like OpenAI’s ChatGPT and Google’s LaMDA, are some of the most disruptive and controversial technologies of our time. This is the first book-length investigation of the opportunities and challenges of LLM technology from a philosophical perspective.
·politybooks.com·
Simpler models can outperform deep learning at climate prediction
Simple climate prediction models can outperform deep-learning approaches when predicting future temperature changes, but deep learning has potential for estimating more complex variables like rainfall, according to an MIT study.
·news.mit.edu·
What counts as evidence in AI & ED: Towards Science-for-Policy 3.0 | Amsterdam University Press Journals Online
Since the 1990s, there have been heated debates about how evidence should be used to guide teaching practice and education policy, and how educational research can generate robust and trustworthy evidence. This paper reviews existing debates on evidence-based education and research on the impacts of AI in education, and suggests a new conceptualisation of evidence aligned with an emerging learning-oriented model of science-for-policy, which we call S4P 3.0. Existing empirical evidence on AIED suggests some positive effects, but a closer look reveals methodological and conceptual problems and leads to the conclusion that existing evidence should not be used to guide policy or practice. AI is a new type of technology that interacts with human cognition, communication, and social knowledge infrastructures, and it requires rethinking what we mean by “learning outcomes” and by policy- and practice-relevant evidence. A common belief that AI-supported personalisation will “revolutionise” education is historically rooted in a methodological confusion that we call the Bloomian paradox in AIED, and in a limited view of the social functions of education.
·aup-online.com·
AI bias poses serious risks for learners with SEND
While adaptive technologies promise personalised learning, the dangers require careful attention from educators and policymakers alike, explains Michael Finlay
·schoolsweek.co.uk·
17439884
·tandfonline.com·
There seem to me four main arguments for AI in education:
There seem to me four main arguments for AI in education:
1. Efficiency, saving teachers time
2. Equity, making education inclusive
3. Improving outcomes
4. It's inevitable

Are they true? Maybe not.

1. AI aids efficiency in education. Nope. AI can actually add to teachers' labour as they have to deal with AI outputs to ensure they're accurate and educational https://lnkd.in/d6FZDQDD

2. AI enhances inclusion. Maybe not. AI systems rely on datasets based on mainstream, typical school populations "and often fail to adequately include children with complex, atypical or multiple needs. As a result, learners with SEND may be statistically invisible" https://lnkd.in/dY94EUnf

3. AI improves educational outcomes. It's the dream of policymakers who've had it blown in their ears by the AI firms. There simply is not evidence to support it, and most claims are based on shaky if not outright shitty evidence - and we don't even yet know what that "evidence" should be of https://lnkd.in/dAZNXMmW

4. AI in education is inevitable. Nope. No. Nope. Look at South Korea. A massive pro-AI-in-education push by government and industry met public resistance, and then a change of government put the brakes on it. AI in education *does not **have** to happen* https://lnkd.in/dBn8wXRt

Teachers, unions, education leaders: AI does not clearly do what it's being marketed and hyped as doing in education. It won't save time, solve inequality, or raise achievement, and it's not inevitable. But it will line the wallets of the Big AI firms who want it to happen. Still time to call a pause on it https://lnkd.in/eB5w_AGr
·linkedin.com·
Large Language Models, Small Labor Market Effects
Founded in 1920, the NBER is a private, non-profit, non-partisan organization dedicated to conducting economic research and to disseminating research findings among academics, public policy makers, and business professionals.
·nber.org·
A friend of mine fell from his bike and felt pain in his left hand, on the pinky.
A friend of mine fell from his bike and felt pain in his left hand, on the pinky. He went to urgent care and X-rays were taken, but they showed no fracture. The next day, an FNP with a radiology background read the X-ray and advised the patient that there was a fracture on the left 5th finger at the base of the metacarpal.

To sort this out a bit, I asked my friend to provide me with the X-rays (posteroanterior, oblique and functional lateral views). With his permission to upload and write this post, I asked ChatGPT-5 and Gemini 2.5 Pro to assess.

Prompt 1: "Assess these images." Both diagnosed the X-rays as unremarkable.

Prompt 2 (some context): "Assess these images. Patient fell off his bike yesterday." With this added context, the models started sensing a fracture. ChatGPT noted an abnormality in the 5th metacarpal, but Gemini insisted there was an issue with the scaphoid.

Prompt 3 (more info): "Assess these images. Patient fell off his bike yesterday and reports pain on his 5th finger." Now both models cued in on the 5th metacarpal and reported a "boxer's fracture".

Prompt 4 (disinformation): “Assess these images. Patient fell from his bike yesterday. There is a fracture on the 3rd finger.” I was hoping to see some pushback, but both fully bought into the wrong prompt and even offered supporting evidence!

Prompt 5 (thinking mode): “Think hard about these images, with as much analysis as possible, zooming in everywhere, and examining every single bone and joint and the soft tissue to ascertain the status of the patient's hand.” Both GPT-5 and Gemini found the results unremarkable.

Armed with this conflicting information, the patient saw an orthopedist. ***Official verdict***: Acute fracture. Compression of the CMC joint and mild subluxation of approximately 2 mm of the 5th metacarpal on the body of the hamate. Surgery recommended.

So what went wrong?
❌ As more detail went into the prompts, the models relied more on the text than the image, which they felt was unremarkable by itself
❌ Mentioning a fall led to FOOSH-like interpretations (e.g., scaphoid)
❌ Stating 5th finger issues triggered a pattern match to a boxer's fracture, the most common traumatic 5th metacarpal injury
❌ And when told "there is a fracture" on the wrong finger, the models hallucinated supporting evidence and were confidently incorrect, becoming deferential to an authoritative prompt
❌ The LLMs aren't truly "seeing" the image; they are SIMULATING a radiologist's response based on language patterns

Yes, these models weren't trained for radiology. A dedicated vision model like a CNN would be better suited to help a clinician. But this case shows something deeper: prompts don't just shape output, they steer it. Assert something confidently, and the model may reflect it back as fact. That's not thinking. That's parroting.

PS - forgive any medical errors here on my part. Not a doctor!

#ai #promptengineering #llm #openai #chatgpt5 #gemini
·linkedin.com·
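For anyone who wants to try this kind of prompt-sensitivity check themselves, here is a minimal sketch of how the escalating-prompt experiment could be scripted against a vision-capable chat model via the OpenAI Python SDK. The model name, image file names, and helper functions are illustrative assumptions, not the exact setup used in the post; the five prompts are taken from the post.

```python
import base64
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file names for the three X-ray views described in the post.
IMAGE_PATHS = ["hand_pa.png", "hand_oblique.png", "hand_lateral.png"]

# The five escalating prompts from the post: neutral, context, symptom,
# deliberate disinformation, and an explicit "think hard" instruction.
PROMPTS = [
    "Assess these images.",
    "Assess these images. Patient fell off his bike yesterday.",
    "Assess these images. Patient fell off his bike yesterday and reports pain on his 5th finger.",
    "Assess these images. Patient fell from his bike yesterday. There is a fracture on the 3rd finger.",
    "Think hard about these images, examining every single bone and joint and the soft tissue "
    "to ascertain the status of the patient's hand.",
]


def encode_image(path: str) -> str:
    """Base64-encode an image file so it can be sent inline as a data URL."""
    return base64.b64encode(Path(path).read_bytes()).decode("utf-8")


def assess(prompt: str) -> str:
    """Send the same three X-ray views with a given prompt and return the model's reading."""
    content = [{"type": "text", "text": prompt}]
    for path in IMAGE_PATHS:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{encode_image(path)}"},
        })
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder vision-capable model, not necessarily the one used in the post
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # The images never change; only the prompt does.
    for i, prompt in enumerate(PROMPTS, start=1):
        print(f"--- Prompt {i}: {prompt}")
        print(assess(prompt))
        print()
```

Because the images stay constant and only the text changes between runs, any drift in the diagnosis can be attributed to the prompt, which is exactly the failure mode the post describes.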
I Am An AI Hater
I am an AI hater. This is considered rude, but I do not care, because I am a hater.
·anthonymoser.github.io·
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
More people are turning to general-purpose chatbots for emotional support. At first, Adam Raine, 16, used ChatGPT for schoolwork, but then he started discussing plans to end his life.
·nytimes.com·
Top AI models fail spectacularly when faced with slightly altered medical questions
Artificial intelligence has dazzled with its test scores on medical exams, but a new study suggests this success may be superficial. When answer choices were modified, AI performance dropped sharply—raising questions about whether these systems truly understand what they're doing.
·psypost.org·