Digital Ethics

Digital Ethics

3749 bookmarks
Custom sorting
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory?
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory?
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory? How a leader in the field unilaterally ran a test to see if there’s any pushback to him amplifying his voice via an AI trained to defer to his opinions? What possible additional warning would you need? How can we have binders of law and regulation on TV and Radio and Print, for good reasons and … fail for decades to regulate platforms? How long can the cognitive dissonance between “there will be growth and this is all teething problems on the way to tech utopia” and the clear and present trajectory to civilization collapse before model collapse can be maintained? How much longer will we be forced to endure performative AI ethics summits talking about aligning a technology whose makers have taken control of societies control mechanism and are long beyond alignment? Are we so far down the drain that no global government dares to pull the off switch on X out of fear of the oligarch controlling it or the US government and the platform having zero economic upsides, barely any jobs anymore? Can we get serious now? Break the whole AI and Agent ketamine haze for a moment and actually talk about how we deal with this all before it’s too late? Can we talk about how fundamental flaws like prompt injection destroy any chance for the technology to be made useful in the far marjority of hinted usecases in the next few years? How it forces us to abandon cybersecurity and corporate sovereignty to reap the “benefits”, how the “AI Layoffs” are not because the technology is working out but because it’s expensive. - before talking about “AI adoption”. How it’s essentially outsourcing, with all warts like knowledge transfer and disintermediation risk, but you pay for failed results too, per token, and the vendor forces you to do all the QA and shoulders no responsibility? Can we have leaders who lead instead of endlessly trying to just keep things running? Because we’re going to need that. We need bold leaders with understanding and vision. Globally. Not managers who just try to keep the box from falling apart, because the box is on the train tracks. | 17 comments on LinkedIn
·linkedin.com·
I’m Is there a point at which the sane part of the world just goes “maybe, just maybe we should stop, take a breath and ask ourselves ‘Is this a direction we want to be travelling’” before enacting first principles based regulation to alter the default trajectory?
Jevons Paradox in action!
Jevons Paradox in action!
Jevons Paradox in action! 📈 NVIDIA likes talking about how each generation of AI hardware is getting more efficient -- with Blackwell using 50x less energy per token than Hopper in 2022 -- but LLM usage (conservatively) went up from 100B tokens per month to 2T tokens per month 𝗷𝘂𝘀𝘁 𝗶𝗻 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝘆𝗲𝗮𝗿, according to Open Router (https://lnkd.in/evAWESXX). This means that efficiency gains are being outpaced by the growth in usage, by far! Focusing on efficiency alone is missing the forest for the trees... | 13 comments on LinkedIn
·linkedin.com·
Jevons Paradox in action!
Morally corrupt innovations are the easiest innovations to create – It’s the lazy approach with dangerous consequences - The CEO Retort with Tim El-Sheikh
Morally corrupt innovations are the easiest innovations to create – It’s the lazy approach with dangerous consequences - The CEO Retort with Tim El-Sheikh
We live in an era where moral corruption is the current economic priority – an economy built on shortcuts, cutting corners, and breaking the law. All, of course…
·ceoretort.com·
Morally corrupt innovations are the easiest innovations to create – It’s the lazy approach with dangerous consequences - The CEO Retort with Tim El-Sheikh
AI coders think they’re 20% faster — but they’re actually 19% slower
AI coders think they’re 20% faster — but they’re actually 19% slower
Model Evaluation and Threat Research is an AI research charity that looks into the threat of AI agents! That sounds a bit AI doomsday cult, and they take funding from the AI doomsday cult organisat…
·pivot-to-ai.com·
AI coders think they’re 20% faster — but they’re actually 19% slower
AI Hallucinations and Reliability
AI Hallucinations and Reliability
The NYT published a fascinating article last month on the conundrum of AI accuracy and reliability. They found that even as AI models were getting more powerful, they generated more errors, not fewer. In OpenAI’s own tests, their newest models hallucinated at higher rates than their previous models. One of their benchmarks is called a
·marketoonist.com·
AI Hallucinations and Reliability
Peeling Back The Onion
Peeling Back The Onion
Ben Collins explains how The Onion is thriving by saying what others won’t—and why human-created satire matters in a media landscape increasingly saturated by noise and A.I. slop.
·status.news·
Peeling Back The Onion
Read Receipt
Read Receipt
One of the popular uses of "AI" that I truly do not understand involves "brainstorming" – and okay, I admit, I truly do not understanding using "AI" at all with what we know about its politically, psychologically, cognitively, and environmentally destructive effects. I've written before about how "brainstorming" is a Cold War invention, and how marketing has convinced us that we're lacking something that only its products or services can fulfill, that "creativity" is something special that few p
·2ndbreakfast.audreywatters.com·
Read Receipt
‘The vehicle suddenly accelerated with our baby in it’: the terrifying truth about why Tesla’s cars keep crashing
‘The vehicle suddenly accelerated with our baby in it’: the terrifying truth about why Tesla’s cars keep crashing
Elon Musk is obsessive about the design of his supercars, right down to the disappearing door handles. But a series of shocking incidents – from drivers trapped in burning vehicles to dramatic stops on the highway – have led to questions about the safety of the brand. Why won’t Tesla give any answers?
·theguardian.com·
‘The vehicle suddenly accelerated with our baby in it’: the terrifying truth about why Tesla’s cars keep crashing
Springer Nature book on machine learning is full of made-up citations
Springer Nature book on machine learning is full of made-up citations
Would you pay $169 for an introductory ebook on machine learning with citations that appear to be made up? If not, you might want to pass on purchasing Mastering Machine Learning: From Basics to Ad…
·retractionwatch.com·
Springer Nature book on machine learning is full of made-up citations
The rise of Whatever
The rise of Whatever
This was originally titled “I miss when computers were fun”. But in the course of writing it, I discovered that there is a reason computers became less fun, a dark thread woven through a number of events in recent history. Let me back up a bit.
·eev.ee·
The rise of Whatever
The rise of Whatever
The rise of Whatever
This was originally titled “I miss when computers were fun”. But in the course of writing it, I discovered that there is a reason computers became less fun, a dark thread woven through a number of events in recent history. Let me back up a bit.
·eev.ee·
The rise of Whatever
Opinion | The Monster Inside ChatGPT
Opinion | The Monster Inside ChatGPT
We discovered how easily a model’s safety training falls off, and below that mask is a lot of darkness.
·wsj.com·
Opinion | The Monster Inside ChatGPT
Against AI: An Open Letter From Writers to Publishers
Against AI: An Open Letter From Writers to Publishers
To Penguin Random House, HarperCollins, Simon & Schuster, Hachette Book Group, Macmillan, and all other publishers of America: We are standing on a precipice. At its simplest level, our job as …
·lithub.com·
Against AI: An Open Letter From Writers to Publishers
- VSD Lab
- VSD Lab
ENVISIONING CARDS. A VALUE SENSITIVE DESIGN TOOLKIT Use the Envisioning Cards to create ethical technology and improve your design practice. The 2nd Edition consists of 42 Envisioning Cards downloadable in PDF format.   DOWNLOAD & PRINT   The Envisioning Cards … Continued
·vsdesign.org·
- VSD Lab
I’m not anti-AI.
I’m not anti-AI.
I’m not anti-AI. I’m anti-bullshit. For those struggling to reconcile my work in AI product and strategy with the things I’ve been saying - let me make it plain. Yes, I help teams build real AI products. And yes, I refuse to prop up fantasies just because they’re lucrative. These aren’t contradictions. They’re what make me credible. I know what this tech can do. I also know what it can’t. And I know exactly how it’s being spun to look like something it isn’t - not to help people, but to sell illusions dressed up as inevitability. We’re deliberately engineering the illusion of cognition. Not building minds - just engineering machines that mimic well enough to blur the line. Augmentation is promised whilst replacement is sold. And when the flaws emerge - hallucinations, cascading failure modes, confidently wrong outputs at scale - the spin kicks in: “That’s just temporary.” “That’s just a data problem.” “That’s just a prompt away.” It’s not. It’s structural. It’s endemic to the heart of this technology. Calling that out doesn’t make me anti-AI. It makes me more qualified to work in this field - because I don’t have to lie to make it useful. That’s why I’m valuable in what I do. I don’t just know what to build - I know what not to build. I understand what we’re building toward. I understand the moral, ethical, philosophical, reputational, financial implications. I’m not high on my own supply. So no I won’t dress this tech in a halo. I won’t help gaslight the world into trusting a system that doesn’t understand a word it says. But if you want real clarity - the kind that holds up after the hype collapses - then yes, I’m someone worth talking to. This moment doesn’t need more AI evangelists. It needs realism. It needs judgment. It needs people who can filter the bullshit and advise with clarity, those who see the cracks and still deliver. And that’s exactly what I do. On here, on the frontline, and in the boardroom. | 124 comments on LinkedIn
·linkedin.com·
I’m not anti-AI.
ILO Live - Revolutionizing health and safety: The role of AI and digitalization at work
ILO Live - Revolutionizing health and safety: The role of AI and digitalization at work
AI and digital tools are revolutionizing occupational safety and health. Today, robots are operating in hazardous environments, doing the heavy lifting, managing toxic materials and working in extreme temperatures. They take on repetitive and monotonous tasks, while digital devices and sensors can detect hazards early on. At the same time, in the absence of adequate OSH measures, digital technologies can lead to accidents, ergonomic risks, work intensification, reduced job control and blurred boundaries. On the occasion of World Day for Safety and Health at Work 2025 this event brings together ILO constituents and international experts to explore how AI and digitalization are reshaping OSH systems across sectors and countries.
·live.ilo.org·
ILO Live - Revolutionizing health and safety: The role of AI and digitalization at work