The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet, but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they somehow can, and sort them into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·
Birthing Predictions of Premature Death
Every aspect of interacting with the various institutions that monitored and managed my kids—ACS, the foster care agency, Medicaid clinics—produced new data streams. Diagnoses, whether an appointment was rescheduled, notes on the kids’ appearance and behavior, and my perceived compliance with the clinician’s directives were gathered and circulated through a series of state and municipal data warehouses. And this data was being used as input by machine learning models automating service allocation or claiming to predict the likelihood of child abuse.
The dominant narrative about child welfare is that it is a benevolent system that cares for the most vulnerable. The way data is correlated and named reflects this assumption. But this process of meaning making is highly subjective and contingent. Similar to the term “artificial intelligence,” the altruistic veneer of “child welfare system” is highly effective marketing rather than a description of a concrete set of functions with a mission gone awry.
Child welfare is actually family policing. What AFST presents as the objective determinations of a de-biased system operating above the lowly prejudices of human caseworkers are just technical translations of long-standing convictions about Black pathology. Further, the process of data extraction and analysis produces truths that justify the broader child welfare apparatus of which it is a part.
As the scholar Dorothy Roberts explains in her 2022 book Torn Apart, an astonishing 53 percent of all Black families in the United States have been investigated by family policing agencies.
The kids were contractually the property of New York State and I was just an instrument through which they could supervise their property. In fact, foster parents are the only category of parents legally obligated to open the door to a police officer or a child protective services agent without a warrant. When a foster parent “opens their home” to go through the set of legal processes to become certified to take a foster child, their entire household is subject to policing and surveillance.
Not a single one was surprised about the false allegations. What they were uniformly shocked about was that the kids hadn’t been snatched up. While what happened to us might seem shocking to middle-class readers, for family policing it is the weather. (Black theorist Christina Sharpe describes antiblackness as climate.)
·logicmag.io·
Embracing Being a Generalist.
Generalists can pursue broader themes, questions, and lenses which, across their interests, give them a deep perspective from breadth. For example, a specialist is someone who is obsessed with chess and spends their waking hours practicing, playing, and studying. A generalist is someone who is obsessed with the idea of game-play, and has researched and gone deep on sports, childhood psychology, board games, and philosophy.
Embracing being a coordinate on the map for a point in time is about allowing yourself to be seen as something specific. Generalists can feel trapped by that, but the truth is that being specific, and being on the map for others, is a way of being in service. If you never pin yourself down (just for a time), you miss the benefits of being connected or in service.
·caffeine.blog·
On the Internet, We’re Always Famous - The New Yorker
I’ve come to believe that, in the Internet age, the psychologically destabilizing experience of fame is coming for everyone. Everyone is losing their minds online because the combination of mass fame and mass surveillance increasingly channels our most basic impulses—toward loving and being loved, caring for and being cared for, getting the people we know to laugh at our jokes—into the project of impressing strangers, a project that cannot, by definition, sate our desires but feels close enough to real human connection that we cannot but pursue it in ever more compulsive ways.
It seems distant now, but once upon a time the Internet was going to save us from the menace of TV. Since the late fifties, TV has had a special role, both as the country’s dominant medium, in audience and influence, and as a bête noire for a certain strain of American intellectuals, who view it as the root of all evil. In “Amusing Ourselves to Death,” from 1985, Neil Postman argues that, for its first hundred and fifty years, the U.S. was a culture of readers and writers, and that the print medium—in the form of pamphlets, broadsheets, newspapers, and written speeches and sermons—structured not only public discourse but also modes of thought and the institutions of democracy itself. According to Postman, TV destroyed all that, replacing our written culture with a culture of images that was, in a very literal sense, meaningless. “Americans no longer talk to each other, they entertain each other,” he writes. “They do not exchange ideas; they exchange images. They do not argue with propositions; they argue with good looks, celebrities and commercials.”
·newyorker.com·