This time, it feels different
In the past several months, I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT, to solve use cases specific to their domains and problems specific to their personal workflows. This is unlike everyone using the same messaging tool or the same document editor. This is one tool, a single class of technology (LLM), whose multi-dimensionality has achieved widespread adoption across demographics where people are discovering how to solve a multitude of problems with no technical training, in the one way that is most natural to humans—via language and conversations.
I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.
there is significant substance beneath the hype. And that is what is worrying: the prospect of us starting to depend indiscriminately on poorly understood black boxes, currently offered by megacorps, that actually work shockingly well.
If a single dumb, stochastic, probabilistic, hallucinating, snake oil LLM with a chat UI offered by one organisation can have such a viral, organic, and widespread adoption—where large disparate populations, people, corporations, and governments are integrating it into their daily lives for use cases that they are discovering themselves—imagine what better, faster, more “intelligent” systems to follow in the wake of what exists today would be capable of doing.
A policy for “AI anxiety”
We ended up codifying this into an actual AI policy to bring clarity to the organisation.[10] It states that no one at Zerodha will lose their job if a technology implementation (AI or non-AI) directly renders their existing responsibilities and tasks obsolete. The goal is to prevent unexpected rug-pulls from underneath the feet of humans. Instead, there will be efforts to create avenues and opportunities for people to upskill and switch between roles and responsibilities.
To those who believe that new jobs will emerge at meaningful rates to absorb the losses and shocks, what exactly are those new jobs? To those who think that governments will wave magic wands to regulate AI technologies, one just has to look at how well governments have managed to regulate, and how well humanity has managed to self-regulate, human-made climate change and planetary destruction. It is not then a stretch to think that the unraveling of our civilisation and its socio-politico-economic systems that are built on extracting, mass producing, and mass consuming garbage, might be exacerbated. Ted Chiang’s recent essay is a grim, but fascinating exploration of this. Speaking of grim, we can always count on us to ruin nice things! Along the lines of Murphy’s Law,[11] I present: Anything that can be ruined, will be ruined — Grumphy’s law
I asked GPT-4 to summarise this post and write five haikus on it. I have always operated a piece of software, but never asked it anything—that is, until now. Anyway, here is the fifth one.
Future’s tangled web,
Offloading choices to black boxes,
Humanity’s voice fades
·nadh.in·
“I can’t make products just for 41-year-old tech founders”: Airbnb CEO Brian Chesky is taking it back to basics
Of course, you shouldn’t discriminate, but when we say belonging, it has to be more than just inclusion. It has to actually be the proactive manifestation of meeting people, creating connections in friendships. And Jony Ive said, “Well, you need to reframe it. It’s not just about belonging, it’s about human connection and belonging.” And that was, I think, a really big unlock. The next thing Jony Ive said is he created this book for me, a book of his ideas, and the book was called “Beyond Where and When,” and he basically said that Airbnb should shift from beyond where and when to who and what? Who are you and what do you want in your life? And that was a part of the inspiration behind Airbnb categories, that we wanted people to come to Airbnb without a destination in mind and that we could categorize properties not just by location but by what makes them unique, and that really influenced Airbnb categories and some of the stuff we’re doing now.
·theverge.com·
Funniest/Most Insightful Comments Of The Week At Techdirt
Twitter is not a public square controlled by a socialist government – it is a private company in a capitalist economy for the purpose of making money through advertising. Twitter has ZERO interest in promoting the public good.
Congressmembers would have better expertise on tech matters if the Office of Technology Assessment still existed. It was defunded in 1995 under Newt Gingrich’s “Contract with America” plan, because it was an unbiased organization that wouldn’t kowtow to political narratives. The Chew hearing is one of many instances that highlight both why Newt wanted to defund it, and why eliminating the agency was a detriment to politicians. (Ironically, Newt suggested shortly after the midterms that Republicans should come around to using TikTok to court young voters, despite the allegations of the app’s security risk.) Hopefully, someone in Congress will introduce legislation aimed at reviving the OTA somewhere down the line.
·techdirt.com·
Society's Technical Debt and Software's Gutenberg Moment
Past innovations have made costly things cheap enough to proliferate widely across society. He suggests LLMs will make software development vastly more accessible and productive, alleviating the "technical debt" caused by underproduction of software over decades.
Software is misunderstood. It can feel like a discrete thing, something with which we interact. But, really, it is the intrusion into our world of something very alien. It is the strange interaction of electricity, semiconductors, and instructions, all of which somehow magically control objects that range from screens to robots to phones, to medical devices, laptops, and a bewildering multitude of other things. It is almost infinitely malleable, able to slide and twist and contort itself such that, in its pliability, it pries open doorways as yet unseen.
the clearing price for software production will change. But not just because it becomes cheaper to produce software. In the limit, we think about this moment as being analogous to how previous waves of technological change took the price of underlying technologies—from CPUs, to storage and bandwidth—to a reasonable approximation of zero, unleashing a flood of speciation and innovation. In software evolutionary terms, we just went from human cycle times to that of the drosophila: everything evolves and mutates faster.
A software industry where anyone can write software, can do it for pennies, and can do it as easily as speaking or writing text, is a transformative moment. It is an exaggeration, but only a modest one, to say that it is a kind of Gutenberg moment, one where previous barriers to creation—scholarly, creative, economic, etc.—are going to fall away, as people are freed to do things only limited by their imagination, or, more practically, by the old costs of producing software.
We have almost certainly been producing far less software than we need. The size of this technical debt is not knowable, but it cannot be small, so subsequent growth may be geometric. This would mean that as the cost of software drops to an approximate zero, the creation of software predictably explodes in ways that have barely been previously imagined.
Entrepreneur and publisher Tim O’Reilly has a nice phrase that is applicable at this point. He argues investors and entrepreneurs should “create more value than you capture.” The technology industry started out that way, but in recent years it has too often gone for the quick win, usually by running gambits from the financial services playbook. We think that for the first time in decades, the technology industry could return to its roots, and, by unleashing a wave of software production, truly create more value than it captures.
Software production has been too complex and expensive for too long, which has caused us to underproduce software for decades, resulting in immense, society-wide technical debt.
technology has a habit of confounding economics. When it comes to technology, how do we know those supply and demand lines are right? The answer is that we don’t. And that’s where interesting things start happening. Sometimes, for example, an increased supply of something leads to more demand, shifting the curves around. This has happened many times in technology, as various core components of technology tumbled down curves of decreasing cost for increasing power (or storage, or bandwidth, etc.).
Suddenly AI has become cheap, to the point where people are “wasting” it via “do my essay” prompts to chatbots, getting help with microservice code, and so on. You could argue that the price/performance of intelligence itself is now tumbling down a curve, much as has happened with prior generations of technology.
it’s worth reminding oneself that waves of AI enthusiasm have hit the beach of awareness once every decade or two, only to recede again as the hyperbole outpaces what can actually be done.
·skventures.substack.com·
The Age of the App is Over
We still believe that "if our hope is to create software with feeling, it means inviting people in to craft it for themselves — to mold it to the contours of their unique lives and taste.” And we have a few thoughts on how to make that happen, but if you know us, you know that the prompt is almost always more interesting than the answer.
·browsercompany.substack.com·
tech interviewing is broken | basement community
i don't even really care if the answer is right, as long as the person i'm talking to can talk about complexity cogently. if i'm interviewing for an entry-level position, i don't even really care about that, we can teach it, it's not that hard.
Anecdotally I have noticed junior engineers being increasingly difficult to work with since many of them are leetcode drones who have issues working and figuring things out on their own. They got really good at passing 'the test' but did not develop many other skills relating to technology and many times do not really have an outside interest in it beyond being able to get a job.
·basementcommunity.com·
The 2021 14-Inch MacBook Pro
Rather than debate the merits of these “let’s bring back some ports from five years ago” decisions piecemeal, I think they’re best explained by Apple revisiting what the pro in “MacBook Pro” means. What it stands for. Apple uses the word pro in so many products. Sometimes they really do mean it as professional. Logic Pro and Final Cut Pro, for example, truly are tools for professionals. With something like AirPods Pro, though, the word pro really just means something more like nicer or deluxe. A couth euphemism for premium.
·daringfireball.net·
A Student's Guide to Startups
Most startups end up doing something different than they planned. The way the successful ones find something that works is by trying things that don't. So the worst thing you can do in a startup is to have a rigid, pre-ordained plan and then start spending a lot of money to implement it. Better to operate cheaply and give your ideas time to evolve.
Successful startups are almost never started by one person. Usually they begin with a conversation in which someone mentions that something would be a good idea for a company, and his friend says, "Yeah, that is a good idea, let's try it." If you're missing that second person who says "let's try it," the startup never happens. And that is another area where undergrads have an edge. They're surrounded by people willing to say that.
Look for the people who keep starting projects, and finish at least some of them. That's what we look for. Above all else, above academic credentials and even the idea you apply with, we look for people who build things.
You need a certain activation energy to start a startup. So an employer who's fairly pleasant to work for can lull you into staying indefinitely, even if it would be a net win for you to leave.
Most people look at a company like Apple and think, how could I ever make such a thing? Apple is an institution, and I'm just a person. But every institution was at one point just a handful of people in a room deciding to start something. Institutions are made up, and made up by people no different from you.
What goes wrong with young founders is that they build stuff that looks like class projects. It was only recently that we figured this out ourselves. We noticed a lot of similarities between the startups that seemed to be falling behind, but we couldn't figure out how to put it into words. Then finally we realized what it was: they were building class projects.
Class projects will inevitably solve fake problems. For one thing, real problems are rare and valuable. If a professor wanted to have students solve real problems, he'd face the same paradox as someone trying to give an example of whatever "paradigm" might succeed the Standard Model of physics. There may well be something that does, but if you could think of an example you'd be entitled to the Nobel Prize. Similarly, good new problems are not to be had for the asking.
real startups tend to discover the problem they're solving by a process of evolution. Someone has an idea for something; they build it; and in doing so (and probably only by doing so) they realize the problem they should be solving is another one.
Professors will tend to judge you by the distance between the starting point and where you are now. If someone has achieved a lot, they should get a good grade. But customers will judge you from the other direction: the distance remaining between where you are now and the features they need. The market doesn't give a shit how hard you worked. Users just want your software to do what they need, and you get a zero otherwise. That is one of the most distinctive differences between school and the real world: there is no reward for putting in a good effort. In fact, the whole concept of a "good effort" is a fake idea adults invented to encourage kids. It is not found in nature.
unfortunately when you graduate they don't give you a list of all the lies they told you during your education. You have to get them beaten out of you by contact with the real world.
really what work experience refers to is not some specific expertise, but the elimination of certain habits left over from childhood.
One of the defining qualities of kids is that they flake. When you're a kid and you face some hard test, you can cry and say "I can't" and they won't make you do it. Of course, no one can make you do anything in the grownup world either. What they do instead is fire you. And when motivated by that you find you can do a lot more than you realized. So one of the things employers expect from someone with "work experience" is the elimination of the flake reflex—the ability to get things done, with no excuses.
Fundamentally the equation is a brutal one: you have to spend most of your waking hours doing stuff someone else wants, or starve. There are a few places where the work is so interesting that this is concealed, because what other people want done happens to coincide with what you want to work on.
So the most important advantage 24 year old founders have over 20 year old founders is that they know what they're trying to avoid. To the average undergrad the idea of getting rich translates into buying Ferraris, or being admired. To someone who has learned from experience about the relationship between money and work, it translates to something way more important: it means you get to opt out of the brutal equation that governs the lives of 99.9% of people. Getting rich means you can stop treading water.
You don't get money just for working, but for doing things other people want. Someone who's figured that out will automatically focus more on the user. And that cures the other half of the class-project syndrome. After you've been working for a while, you yourself tend to measure what you've done the same way the market does.
the most important skill for a startup founder isn't a programming technique. It's a knack for understanding users and figuring out how to give them what they want. I know I repeat this, but that's because it's so important. And it's a skill you can learn, though perhaps habit might be a better word. Get into the habit of thinking of software as having users. What do those users want? What would make them say wow?
·paulgraham.com·
Interview with Kevin Kelly, editor, author, and futurist
To write about something hard to explain, write a detailed letter to a friend about why it is so hard to explain, and then remove the initial “Dear Friend” part and you’ll have a great first draft.
To be interesting just tell your story with uncommon honesty.
Most articles and stories are improved significantly if you delete the first page of the manuscript draft. Immediately start with the action.
Each technology can not stand alone. It takes a saw to make a hammer and it takes a hammer to make a saw. And it takes both tools to make a computer, and in today’s factory it takes a computer to make saws and hammers. This co-dependency creates an ecosystem of highly interdependent technologies that support each other
On the other hand, I see this technium as an extension of the same self-organizing system responsible for the evolution of life on this planet. The technium is evolution accelerated. A lot of the same dynamics that propel evolution are also at work in the technium
Our technologies are ultimately not contrary to life, but are in fact an extension of life, enabling it to develop yet more options and possibilities at a faster rate. Increasing options and possibilities is also known as progress, so in the end, what the technium brings us humans is progress.
Libraries, journals, communication networks, and the accumulation of other technologies help create the next idea, beyond the efforts of a single individual
We also see near-identical parallel inventions of tricky contraptions like slingshots and blowguns. However, because it was so ancient, we don’t have a lot of data for this behavior. What we would really like is to have a N=100 study of hundreds of other technological civilizations in our galaxy. From that analysis we’d be able to measure, outline, and predict the development of technologies. That is a key reason to seek extraterrestrial life.
When information is processed in a computer, it is being ceaselessly replicated and re-copied while it computes. Information wants to be copied. Therefore, when certain people get upset about the ubiquitous copying happening in the technium, their misguided impulse is to stop the copies. They want to stamp out rampant copying in the name of "copy protection,” whether it be music, science journals, or art for AI training. But the emergent behavior of the technium is to copy promiscuously. To ban, outlaw, or impede the superconductivity of copies is to work against the grain of the system.
the worry of some environmentalists is that technology can only contribute more to the problem and none to the solution. They believe that tech is incapable of being green because it is the source of relentless consumerism at the expense of diminishing nature, and that our technological civilization requires endless growth to keep the system going. I disagree.
Over time evolution arranges the same number of atoms in more complex patterns to yield more complex organisms, for instance producing an agile lemur the same size and weight as a jellyfish. We seek the same shift in the technium. Standard economic growth aims to get consumers to drink more wine. Type 2 growth aims to get them to not drink more wine, but better wine.
[[An optimistic view of capitalism]]
to measure (and thus increase) productivity we count up the number of refrigerators manufactured and sold each year. More is generally better. But this counting tends to overlook the fact that refrigerators have gotten better over time. In addition to making cold, they now dispense ice cubes, or self-defrost, and use less energy. And they may cost less in real dollars. This betterment is truly real value, but is not accounted for in the “more” column
it is imperative that we figure out how to shift more of our type 1 growth to type 2 growth, because we won’t be able to keep expanding the usual “more.”  We will have to perfect a system that can keep improving and getting better with fewer customers each year, smaller markets and audiences, and fewer workers. That is a huge shift from the past few centuries where every year there has been more of everything.
“degrowthers” are correct in that there are limits to bulk growth — and running out of humans may be one of them. But they don’t seem to understand that evolutionary growth, which includes the expansion of intangibles such as freedom, wisdom, and complexity, doesn’t have similar limits. We can always figure out a way to improve things, even without using more stuff — especially without using more stuff!
the technium is not inherently contrary to nature; it is inherently derived from evolution and thus inherently capable of being compatible with nature. We can choose to create versions of the technium that are aligned with the natural world.
Social media can transmit false information at great range at great speed. But compared to what? Social media's influence on elections from transmitting false information was far less than the influence of the existing medias of cable news and talk radio, where false information was rampant. Did anyone seriously suggest we should regulate what cable news hosts or call in radio listeners could say? Bullying middle schoolers on social media? Compared to what? Does it even register when compared to the bullying done in school hallways? Radicalization on YouTube? Compared to talk radio? To googling?
Kids are inherently obsessive about new things, and can become deeply infatuated with stuff that they outgrow and abandon a few years later. So the fact they may be infatuated with social media right now should not in itself be alarming. Yes, we should indeed understand how it affects children and how to enhance its benefits, but it is dangerous to construct national policies for a technology based on the behavior of children using it.
Since it is the same technology, inspecting how it is used in other parts of the world would help us isolate what is being caused by the technology and what is being caused by the peculiar culture of the US.
You don’t notice what difference you make because of the platform's humongous billions-scale. In aggregate your choices make a difference which direction it — or any technology — goes. People prefer to watch things on demand, so little by little, we have steered the technology to let us binge watch. Streaming happened without much regulation or even enthusiasm of the media companies. Street usage is the fastest and most direct way to steer tech.
Vibrators instead of the cacophony of ringing bells on cell phones is one example of a marketplace technological solution
The long-term effects of AI will affect our society to a greater degree than electricity and fire, but its full effects will take centuries to play out. That means that we’ll be arguing, discussing, and wrangling with the changes brought about by AI for the next 10 decades. Because AI operates so close to our own inner self and identity, we are headed into a century-long identity crisis.
What we tend to call AI, will not be considered AI years from now
What we are discovering is that many of the cognitive tasks we have been doing as humans are dumber than they seem. Playing chess was more mechanical than we thought. Playing the game Go is more mechanical than we thought. Painting a picture and being creative was more mechanical than we thought. And even writing a paragraph with words turns out to be more mechanical than we thought
out of the perhaps dozen of cognitive modes operating in our minds, we have managed to synthesize two of them: perception and pattern matching. Everything we’ve seen so far in AI is because we can produce those two modes. We have not made any real progress in synthesizing symbolic logic and deductive reasoning and other modes of thinking
we are slowly realizing we still have NO IDEA how our own intelligences really work, or even what intelligence is. A major byproduct of AI is that it will tell us more about our minds than centuries of psychology and neuroscience have
There is no monolithic AI. Instead there will be thousands of species of AIs, each engineered to optimize different ways of thinking, doing different jobs
Now from the get-go we assume there will be significant costs and harms of anything new, which was not the norm in my parents’ generation
The astronomical volume of money and greed flowing through this frontier overwhelmed and disguised whatever value it may have had
The sweet elegance of blockchain enables decentralization, which is a perpetually powerful force. This tech just has to be matched up to the tasks — currently not visible — where it is worth paying the huge cost that decentralization entails. That is a big ask, but taking the long-view, this moment may not be a failure
My generic career advice for young people is that if at all possible, you should aim to work on something that no one has a word for. Spend your energies where we don’t have a name for what you are doing, where it takes a while to explain to your mother what it is you do. When you are ahead of language, that means you are in a spot where it is more likely you are working on things that only you can do. It also means you won’t have much competition.
Your 20s are the perfect time to do a few things that are unusual, weird, bold, risky, unexplainable, crazy, unprofitable, and looks nothing like “success.” The less this time looks like success, the better it will be as a foundation
·noahpinion.substack.com·
Stacking the Optical Deck: Introducing Infinite Display + a Primer on Measuring Visual Quality in VR | Meta Store
Instead of looking at a large screen at a farther distance, VR users are looking at a smaller screen, much closer to their eyes and magnified by a set of lenses within an optical stack. It’s like looking at a TV through a camera lens—what you’ll see isn’t just determined by the resolution of the screen, but also by the optical properties of the lens, like magnification and sharpness.
instead, we should evaluate the full optical system’s resolution, which is measured in PPD—a combined metric that takes into account the display and optics working together. An angular measurement, PPD measures the number of pixels that are packed within 1° of the field of view (FOV). The higher the PPD, the better the system resolution of the VR headset.
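PPD is easy to approximate from spec-sheet numbers. Here is a back-of-envelope sketch in Python; the panel width and FOV below are invented for illustration, not any particular headset's spec:

```python
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    """Average pixels per degree (PPD) across the horizontal field of view.

    Assumes pixels are spread evenly across the FOV. Real optical stacks
    aren't uniform -- magnification varies across the lens -- so PPD at the
    center is typically higher than this average.
    """
    return horizontal_pixels / horizontal_fov_deg

# Hypothetical headset: a 2064-px-wide panel per eye behind lenses
# giving a 104-degree horizontal FOV.
print(f"{pixels_per_degree(2064, 104):.1f} PPD")  # -> 19.8 PPD
```

For reference, 20/20 visual acuity is often equated with roughly 60 PPD, which is why system PPD, not raw panel resolution, is the number that matters.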
·meta.com·
Thoughts on the software industry - linus.coffee
software gives you its own set of abstractions and basic vocabulary with which to understand every experience. It sort of smells like mathematics in some ways. But software’s way of looking at the world is more about abstractions modeling underlying complexities in systems; signal vs. noise; scale and orders of magnitude; and information — how much there is, what we can do with it, how we can learn from it and model it. Software’s interpretation of reality is particularly important because software drives the world now, and the people who write the software that runs it see the world through this kind of “software’s worldview” — scaling laws, information theory, abstractions and complexity. I think over time I’ve come to believe that understanding this worldview is more interesting than learning to wield programming tools.
·linus.coffee·
How DAOs Could Change the Way We Work
DAOs are effectively owned and governed by people who hold a sufficient number of a DAO’s native token, which functions like a type of cryptocurrency. For example, $FWB is the native token of a popular social DAO called Friends With Benefits, and people can buy, earn, or trade it.
Contributors will be able to use their DAO’s native tokens to vote on key decisions. You can get a glimpse into the kinds of decisions DAO members are already voting on at Snapshot, which is essentially a decentralized voting system. Having said this, existing voting mechanisms have been criticized by the likes of Vitalik Buterin, founder of Ethereum, the open-source blockchain that acts as a foundational layer for the majority of Web3 applications. So, this type of voting is likely to evolve over time.
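To make token-weighted voting concrete, here is a minimal off-chain tally sketch in Python, roughly in the spirit of how Snapshot weights votes by holdings; the addresses, balances, and proposal choices are all invented:

```python
from collections import defaultdict

def tally_votes(votes: dict[str, str], balances: dict[str, float]) -> dict[str, float]:
    """Tally a proposal where each address's vote is weighted by its token balance."""
    totals: defaultdict[str, float] = defaultdict(float)
    for address, choice in votes.items():
        totals[choice] += balances.get(address, 0.0)  # unknown addresses carry no weight
    return dict(totals)

# Invented addresses and balances, for illustration only.
balances = {"0xAlice": 120.0, "0xBob": 30.0, "0xCarol": 50.0}
votes = {"0xAlice": "yes", "0xBob": "no", "0xCarol": "no"}
print(tally_votes(votes, balances))  # -> {'yes': 120.0, 'no': 80.0}
```

The toy example also makes Buterin's criticism concrete: a single large holder outweighs the other two voters combined, which is one reason pure coin-weighted voting is expected to evolve.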
·hbr.org·
Electron Fiddle: The easiest way to get started with Electron | Hacker News
Essentially the reason Electron apps are so heavy is they run on a browser, not on an operating system.

In the beginning, programs ran directly on hardware, and things were good. (e.g. Pretty much every game would ship with a custom bootloader that knew how to address the hardware and exposed this to the program, you didn’t - couldn’t - have anything else running at the same time.)

Later on, programs ran on the operating system, and things were okay. Operating systems abstracted over all the possible hardware configurations in an acceptable way and users got the benefit of running multiple programs simultaneously without too much of a performance cost.

Now, programs run on the browser, itself a program running on the operating system. Because the browser wants to be a platform for any other program, it has to hook into almost every part of the operating system, so that it can support almost any program. This necessarily brings with it enormous bloat - in effect you’re running one general-purpose pseudo-OS framework per program.
·news.ycombinator.com·
The Limits of Computational Photography
How much of that is the actual photo and how much you might consider to be synthesized is a line I think each person draws for themselves. I think it depends on the context; Moon photography makes for a neat demo but it is rarely relevant. A better question is whether these kinds of software enhancements hallucinate errors along the same lines as what happened in Xerox copiers for years.
·pxlnv.com·
Privacy Fundamentalism
my critique of Manjoo’s article specifically and the ongoing privacy hysteria broadly is not simply about definitions or philosophy. It’s about fundamental assumptions. The default state of the Internet is the endless propagation and collection of data: you have to do work to not collect data on one hand, or leave a data trail on the other. This is the exact opposite of how things work in the physical world: there, data collection is an explicit positive action, and anonymity the default.
I believe the privacy debate needs to be reset around these three assumptions:
1. Accept that privacy online entails trade-offs; the corollary is that an absolutist approach to privacy is a surefire way to get policy wrong.
2. Keep in mind that the widespread creation and spread of data is inherent to computers and the Internet, and that these qualities have positive as well as negative implications; be wary of what good ideas and positive outcomes are extinguished in the pursuit to stomp out the negative ones.
3. Focus policy on the physical and digital divide. Our behavior online is one thing: we both benefit from the spread of data and should in turn be more wary of those implications. Making what is offline online is quite another.
·stratechery.com·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:
> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they can, somehow, and sort it into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·