The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet, but this labeling system reminds me of the way the show's workers have to process and filter data that is obfuscated as meaningless numbers. Employees have to "sense" whether the numbers are "bad," which somehow they can, and sort them into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·
Birthing Predictions of Premature Death
Every aspect of interacting with the various institutions that monitored and managed my kids—ACS, the foster care agency, Medicaid clinics—produced new data streams. Diagnoses, whether an appointment was rescheduled, notes on the kids’ appearance and behavior, and my perceived compliance with the clinician’s directives were gathered and circulated through a series of state and municipal data warehouses. And this data was being used as input by machine learning models automating service allocation or claiming to predict the likelihood of child abuse.
The dominant narrative about child welfare is that it is a benevolent system that cares for the most vulnerable. The way data is correlated and named reflects this assumption. But this process of meaning making is highly subjective and contingent. Similar to the term “artificial intelligence,” the altruistic veneer of “child welfare system” is highly effective marketing rather than a description of a concrete set of functions with a mission gone awry.
Child welfare is actually family policing. What the AFST (Allegheny Family Screening Tool) presents as the objective determinations of a de-biased system operating above the lowly prejudices of human caseworkers are just technical translations of long-standing convictions about Black pathology. Further, the process of data extraction and analysis produces truths that justify the broader child welfare apparatus of which it is a part.
As the scholar Dorothy Roberts explains in her 2022 book Torn Apart, an astonishing 53 percent of all Black families in the United States have been investigated by family policing agencies.
The kids were contractually the property of New York State and I was just an instrument through which they could supervise their property. In fact, foster parents are the only category of parents legally obligated to open the door to a police officer or a child protective services agent without a warrant. When a foster parent “opens their home” to go through the set of legal processes to become certified to take a foster child, their entire household is subject to policing and surveillance.
Not a single one was surprised about the false allegations. What they were uniformly shocked about was that the kids hadn’t been snatched up. While what happened to us might seem shocking to middle-class readers, for family policing it is the weather. (Black theorist Christina Sharpe describes antiblackness as climate.)
·logicmag.io·