Read

Inside the AI Factory: the humans that make tech seem human - The Verge
I am shocked, shocked that capitalism would find another mechanism for devaluing labor.
the rise of AI will look like past labor-saving technologies, maybe like the telephone or typewriter, which vanquished the drudgery of message delivering and handwriting but generated so much new correspondence, commerce, and paperwork that new offices staffed by new types of workers — clerks, accountants, typists — were required to manage it
Imagine simplifying complex realities into something that is readable for a machine that is totally dumb
“The question is, who bears the cost for these fluctuations?” said Jindal of the Partnership on AI. “Because right now, it’s the workers.”
“I remember that someone posted that we will be remembered in the future,” he said. “And somebody else replied, ‘We are being treated worse than foot soldiers. We will be remembered nowhere in the future.’ I remember that very well. Nobody will recognize the work we did or the effort we put in.”
·theverge.com·
No, large language models aren’t like disabled people (and it’s problematic to argue that they are)
I wish I had been following some of this discussion a year ago. Great stuff, and really showing how software people tend to rush into a field that's new to them as if they can reason about anything from first principles. Good commentary on how ableist it can be to compare these models to certain humans, implicitly ranking them. I like the highlighted questions because they're very similar to what I ask in reviews.
If the point is to build effective and trustworthy technology, then I think these are the wrong questions to ask. In that scenario, we should be asking questions like: What do we want this system to do? How do we verify that it can carry out those tasks reliably? How can we make its affordances transparent to the humans that interact with it, so that they can appropriately contextualize its behavior? What are the system’s failure modes, who might be harmed by them and how? When the system is working as intended, who might be harmed and how?
·medium.com·