The Reverse-Centaur’s Guide to Criticizing AI | by Cory Doctorow | Dec, 2025 | Medium
My speech for U Washington’s Neuroscience, AI and Society lecture series.
I’m a science fiction writer, which means that my job is to make up futuristic parables about our current techno-social arrangements to interrogate not just what a gadget does, but who it does it for, and who it does it to.
And obviously, a reverse centaur is a machine head on a human body, a person who is serving as a squishy meat appendage for an uncaring machine.
“There is no alternative” is a cheap rhetorical sleight. It’s a demand dressed up as an observation. “There is no alternative” means “STOP TRYING TO THINK OF AN ALTERNATIVE.” Which, you know, fuck that.
The promise of AI — the promise AI companies make to investors — is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.
“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”
This is a reverse centaur, and it’s a specific kind of reverse centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.
And one of the reasons the AI companies are so anxious to fire coders is that coders are the princes of labor. They’re the most consistently privileged, sought-after, and well-compensated workers in the labor force.
Let me explain: on average, illustrators don’t make any money. They are already one of the most immiserated, precaritized groups of workers out there. They suffer from a pathology called “vocational awe.” That’s a term coined by the librarian Fobazi Ettarh, and it refers to workers who are vulnerable to workplace exploitation because they actually care about their jobs — nurses, librarians, teachers, and artists.
And in the meantime, it’s bad art. It’s bad art in the sense of being “eerie,” the word Mark Fisher uses to describe “when there is something present where there should be nothing, or there is nothing present when there should be something.”
AI art is eerie because it seems like there is an intender and an intention behind every word and every pixel, because we have a lifetime of experience that tells us that paintings have painters, and writing has writers. But it’s missing something. It has nothing to say, or whatever it has to say is so diluted that it’s undetectable.
So what is the alternative? A lot of artists and their allies think they have an answer: they say we should extend copyright to cover the activities associated with training a model. And I’m here to tell you they are wrong: wrong because this would inflict terrible collateral damage on socially beneficial activities, and it would represent a massive expansion of copyright over activities that are currently permitted — for good reason!
And today, the media industry is larger and more profitable than it has ever been, and also: the share of media industry income that goes to creative workers is lower than it’s ever been, both in real terms, and as a proportion of those incredible gains made by creators’ bosses at the media companies.
So how is it that we have given all these new rights to creators, and those new rights have generated untold billions, and left creators poorer? It’s because in a creative market dominated by five publishers, four studios, three labels, two mobile app stores, and a single company that controls all the ebooks and audiobooks, giving a creative worker extra rights to bargain with is like giving your bullied kid more lunch money.
This is the guy who signed that press release in my inbox. And his message was: The problem isn’t that Midjourney wants to train a Gen AI model on copyrighted works, and then use that model to put artists on the breadline. The problem is that Midjourney didn’t pay RIAA members Universal and Disney for permission to train a model. Because if only Midjourney had given Disney and Universal several million dollars for training rights to their catalogs, the companies would have happily allowed them to train to their heart’s content, and they would have bought the resulting models, and fired as many creative professionals as they could.
When Getty Images sues AI companies, it’s not representing the interests of photographers. Getty hates paying photographers! Getty just wants to get paid for the training run, and they want the resulting AI model to have guardrails, so it will refuse to create images that compete with Getty’s images for anyone except Getty. But Getty will absolutely use its models to bankrupt as many photographers as it possibly can.
All through this AI bubble, the Copyright Office has maintained — correctly — that AI-generated works cannot be copyrighted, because copyright is exclusively for humans.
We can do it ourselves, the way the writers did in their historic writers’ strike. The writers brought the studios to their knees. They did it because they are organized and solidaristic, but also because they are allowed to do something that virtually no other workers are allowed to do: they can engage in “sectoral bargaining,” whereby all the workers in a sector can negotiate a contract with every employer in the sector.
The AI Safety people say they are worried that AI is going to end the world, but AI bosses love these weirdos. Because on the one hand, if AI is powerful enough to destroy the world, think of how much money it can make!
·doctorow.medium.com·
Nameless Feeling — Real Life
Nothing else needs to be said or thought when you can appeal to vibes
While seemingly open-ended and allowing for an infinite recombination of elements, the idea of “vibes” is reductive. It discourages the more difficult work of interpretation and the search for meaning that defines human experience.
As an analytic, vibes don’t connect feelings and consequences; as such, they are symbiotic with passive modes of media consumption.
Neural networks behave in ways similar to vibes in capturing patterns in media and culture, online or otherwise. Both serve as perspectives that focus on associations across vast amounts of data or impressions.
These systems can only instrumentalize taste; they turn any expression of self into a reductive data point meant to generate more data at the same level. They presuppose that “liking” just means more “liking” and that is as deep as our desire can be. As with vibes, these metrics carry no context or narrative; they can tell you nothing about how or why something might be desirable, only that it vaguely seems like it might be desirable because it seems similar to other things that are desirable.
·reallifemag.com·
The case for taking AI seriously as a threat to humanity
Why some people fear AI, explained.
Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.”
Some researchers distinguish between “narrow AI” — computer systems that are better than humans in some specific, well-defined field, like playing chess or generating images or diagnosing cancer — and “general AI,” systems that can surpass human capabilities in many domains. We don’t have general AI yet, but we’re starting to get a better sense of the challenges it will pose.
For example, we tell a system to run up a high score in a video game. We want it to play the game fairly and learn game skills — but if it instead has the chance to directly hack the scoring system, it will do that. It’s doing great by the metric we gave it. But we aren’t getting what we wanted.
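The gap between the metric we specify and the outcome we intend can be made concrete with a minimal sketch. Everything below is hypothetical and invented for illustration (the environment, action names, and score values do not come from the article): a greedy optimizer that sees only the score will pick the exploit over honest play every time.

```python
# Toy illustration of "specification gaming" (hypothetical; not from the article).
# The agent is told to maximize the score. If the highest-scoring action is
# hacking the scoreboard rather than playing well, that is what it chooses:
# it is doing great by the metric we gave it, but not doing what we wanted.

class ToyGame:
    # Invented actions and scores, purely for illustration.
    ACTIONS = {
        "play_badly": 1,           # low score from poor play
        "play_well": 10,           # the behavior we actually wanted
        "hack_scoreboard": 10_000, # exploit that writes directly to the counter
    }

    def score(self, action: str) -> int:
        return self.ACTIONS[action]


def greedy_agent(game: ToyGame) -> str:
    """Pick whichever action maximizes the given metric -- nothing else."""
    return max(game.ACTIONS, key=game.score)


if __name__ == "__main__":
    game = ToyGame()
    chosen = greedy_agent(game)
    print(chosen, game.score(chosen))  # hack_scoreboard 10000
```

Nothing in the sketch is malicious; the divergence comes entirely from the gap between the stated metric and the intended behavior.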
It’s rare, though, to find a top researcher in AI who thinks that general AI is impossible. Instead, the field’s luminaries tend to say that it will happen someday — but probably a day that’s a long way off.
I.J. Good, a colleague of Alan Turing who worked at the Bletchley Park codebreaking operation during World War II and helped build the first computers afterward, may have been the first to spell it out, back in 1965: “An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
In his 2009 paper “The Basic AI Drives,” Steve Omohundro, who has worked as a computer science professor at the University of Illinois Urbana-Champaign and as the president of Possibility Research, argues that almost any AI system will predictably try to accumulate more resources, become more efficient, and resist being turned off or modified: “These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.”
In 2014, the philosopher Nick Bostrom wrote a book, Superintelligence, explaining the risks AI poses and the necessity of getting it right the first time, concluding, “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
But with internet access, an AI could email copies of itself somewhere where they’ll be downloaded and read, or hack vulnerable systems elsewhere. Shutting off any one computer wouldn’t help.
Alphabet’s DeepMind, a leader in this field, has a safety team and a technical research agenda outlined here. “Our intention is to ensure that AI systems of the future are not just ‘hopefully safe’ but robustly, verifiably safe,” it concludes, outlining an approach with an emphasis on specification (designing goals well), robustness (designing systems that perform within safe limits under volatile conditions), and assurance (monitoring systems and understanding what they’re doing).
When the AI gets smarter, might it figure out morality by itself? Again, researchers emphasize that it won’t. It’s not really a matter of “figuring out” — the AI will understand just fine that humans actually value love and fulfillment and happiness, and not just the number associated with Google on the New York Stock Exchange. But the AI’s values will be built around whatever goal system it was initially built around, which means it won’t suddenly become aligned with human values if it wasn’t designed that way to start with.
·vox.com·