Found 5 bookmarks
My Last Five Years of Work
Copywriting, tax preparation, customer service, and many other tasks are or will soon be heavily automated. I can see the beginnings in areas like software development and contract law. Generally, tasks that involve reading, analyzing, and synthesizing information, and then generating content based on it, seem ripe for replacement by language models.
Anyone who makes a living through delicate and varied movements guided by situation-specific know-how can expect to work for much longer than five more years. Thus, electricians, gardeners, plumbers, jewelry makers, hair stylists, as well as those who repair ironwork or make stained glass, might find their handiwork contributing to our society for many more years to come.
Finally, I expect there to be jobs where humans are preferred to AIs even if the AIs can do the job equally well, or perhaps even if they can do it better. This will apply to jobs where something is gained from the very fact that a human is doing it—likely because it involves the consumer feeling like they have a relationship with the human worker as a human. Jobs that might fall into this category include counselors, doulas, caretakers for the elderly, babysitters, preschool teachers, priests and religious leaders, even sex workers—much has been made of AI girlfriends, but I still expect that a large percentage of buyers of in-person sexual services will have a strong preference for humans. Some have called these jobs “nostalgic jobs.”
It does seem that, overall, unemployment makes people sadder, sicker, and more anxious. But it isn’t clear if this is an inherent fact of unemployment, or a contingent one. It is difficult to isolate the pure psychological effects of being unemployed, because at present these are confounded with the financial effects—if you lose your job, you have less money—which produce stress that would not exist in the context of, say, universal basic income. It is also confounded with the “shame” aspect of being fired or laid off—of not working when you really feel you should be working—as opposed to the context where essentially all workers have been displaced.
One study that gets around the “shame” confounder of unemployment is “A Forced Vacation? The Stress of Being Temporarily Laid Off During a Pandemic” by Scott Schieman, Quan Mai, and Ryu Won Kang. This study looked at Canadian workers who were temporarily laid off several months into the COVID-19 pandemic. The authors initially assumed that such a disruption would increase psychological distress, but instead found that self-reported well-being was more in line with the “forced vacation hypothesis”: temporarily laid-off workers might initially experience lower distress due to the unique circumstances of the pandemic.
By May 2020, the distress gap observed in April had vanished, indicating that being temporarily laid off was not associated with higher distress during these months. The interviews revealed that many workers viewed being left without work as a “forced vacation,” appreciating the break from work-related stress and valuing the time for self-care and family. The widespread nature of layoffs normalized the experience, reducing personal blame and fostering a sense of shared experience. Financial strain was mitigated by government support, personal savings, and reduced spending, which buffered against potential distress.
The study suggests that the context and available support systems can significantly alter the psychological outcomes of unemployment—which seems promising for AGI-induced unemployment.
From the studies on plant closures and pandemic layoffs, it seems that shame plays a role in making people unhappy after unemployment, which implies that they might be happier in full automation-induced unemployment, since it would be near-universal and not signify any personal failing.
A final piece of evidence that how much work is deemed necessary has a societal-psychological component is that the amount has changed over time! The number of hours people work has declined over the past 150 years, and work hours tend to decline as a country gets richer. It seems odd to assume that the currently accepted amount of roughly 40 hours a week is the optimal amount. The 8-hour work day, weekends, time off—hard-fought and won by the labor movement!—seem to have been triumphs for human health and well-being. Why should we assume that stopping here is right? Why should we assume that less work was better in the past, but that less work now would be worse?
Removing the shame that accompanies unemployment by removing the sense that one ought to be working seems one way to make people happier during unemployment. Another is shaping what they do with their free time: regardless of how one enters unemployment, one still confronts empty and often unstructured time.
One paper, “Having Too Little or Too Much Time Is Linked to Lower Subjective Well-Being” by Marissa A. Sharif, Cassie Mogilner, and Hal E. Hershfield, explored whether it is possible to have “too much” leisure time.
The paper concluded that it is possible to have too little discretionary time, but also possible to have too much; moderate amounts of discretionary time seemed best for subjective well-being. More time could be better, or at least not meaningfully worse, provided it was spent on “social” or “productive” leisure activities. This suggests that how people fare psychologically with post-AGI unemployment will depend heavily on how they use their time, not on how much of it there is.
Automation-induced unemployment could feel like retiring depending on how total it is. If essentially no one is working, and no one feels like they should be working, it might be more akin to retirement, in that it would lack the shameful element of feeling set apart from one’s peers.
Women provide another view on whether formal work is good for happiness. Women are, for the most part, relatively recent entrants to the formal labor market. In the U.S., 18% of women were in the formal labor force in 1890. In 2016, 57% were. Has labor force participation made them happier? By some accounts: no. A paper that looked at subjective well-being for U.S. women from the General Social Survey between the 1970s and 2000s—a time when labor force participation was climbing—found both relative and absolute declines in female happiness.
I think the story of women’s work and AI is a relatively optimistic one. Women have been able to automate unpleasant tasks via technological advances, while the more meaningful aspects of their work seem less likely to be automated away. When not participating in the formal labor market, women overwhelmingly fill their time with childcare and housework. The time needed for housework has declined over time thanks to tools like washing machines, dryers, and dishwashers. These tools might serve as early analogues of the future effects of AI: reducing unwanted and burdensome work to free up time for tasks deemed more necessary or enjoyable.
It seems less likely that AIs will so thoroughly automate childcare and child-rearing, because this “work” is much more about the relationship between the parties involved. Like therapy, childcare and teaching seem likely to be among the forms of work where a preference for a human worker persists the longest.
In the early modern era, landed gentry and similar were essentially unemployed. Perhaps they did some minor administration of their tenants, some dabbled in politics or were dragged into military projects, but compared to most formal workers they seem to have worked relatively few hours. They filled the remainder of their time with intricate social rituals like balls and parties, hobbies like hunting, studying literature, and philosophy, producing and consuming art, writing letters, and spending time with friends and family. We don’t have much real well-being survey data from this group, but, hedonically, they seem to have been fine. Perhaps they suffered from some ennui, but if we were informed that the great mass of humanity was going to enter their position, I don’t think people would be particularly worried.
I sometimes wonder if there is some implicit classism in people’s worries about unemployment: the rich will know how to use their time well, but the poor will need to be kept busy.
Although a trained therapist might be able to counsel my friends or family through their troubles better, I still do it, because there is value in me being the one to do so. We can think of this as the relational reason for doing something others can do better. I write because sometimes I enjoy it, and sometimes I think it betters me. I know others do it better, but I don’t care—at least not all the time. The reasons for this are partly hedonic and partly a matter of virtue or morality. A renowned AI researcher once told me that he is practicing for post-AGI life by taking up activities that he is not particularly good at—jiu-jitsu, surfing, and so on—and savoring the doing even without excellence. This is how we can prepare for a future where we will have to do things out of joy rather than need, where we will no longer be the best at them, but will still have to choose how to fill our days.
·palladiummag.com·
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
If you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
No matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
There’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
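To make the ‘wrapper’ pattern concrete, here is a minimal sketch of such a single-purpose app, assuming the OpenAI Python SDK; the task (flagging risky contract clauses), the model name, and the function are hypothetical illustrations, not anything from the article.

```python
# Minimal sketch of a single-purpose "LLM wrapper" app: one problem,
# a hand-built interface around it, and an LLM under the hood.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and the task and model name are purely illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You review commercial contracts. List any clauses that are unusual "
    "or one-sided, quote each one, and briefly explain the risk."
)

def flag_risky_clauses(contract_text: str) -> str:
    """Encode the problem once in the prompt, so users never have to."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            # The system prompt carries everything we know about the problem;
            # the user just pastes a contract instead of crafting a prompt.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": contract_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_risky_clauses("Either party may terminate without notice..."))
```

The design point is the one the article makes: the app encodes the problem once, in the prompt and the UI around it, rather than asking the user to re-derive it in a blank chat box every time.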
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
People are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
What I learned getting acquired by Google
While there were undoubtedly people who came in for the food, worked 3 hours a day, and enjoyed their early retirements, all the people I met were earnest, hard-working, and wanted to do great work. What beat them down were the gauntlet of reviews, the frequent re-orgs, the institutional scar tissue from past failures, and the complexity of doing even simple things on the world stage. Startups can afford to ignore many concerns; Googlers rarely can. What also got in the way were the people themselves: all the smart people who could argue against anything but not for something, all the leaders who lacked the courage to speak the uncomfortable truth, and all the people who were hired without a clear project to work on but still had to be retained through promotion-worthy made-up work.
Another blocker to progress that I saw up close was the imbalance of a top heavy team. A team with multiple successful co-founders and 10-20 year Google veterans might sound like a recipe for great things, but it’s also a recipe for gridlock. This structure might work if there are multiple areas to explore, clear goals, and strong autonomy to pursue those paths.
Good teams regularly pay down technical debt by cleaning things up on quieter days. Just as real is process debt: a review added because of a launch gone wrong, a new legal check to guard against possible litigation, a section added to a document template. Layers accumulate over the years until you end up unable to release a new feature for months after it’s ready, stuck between reviews with an unclear path out.
·shreyans.org·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in the classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as the authors of their images. We identify a legitimation crisis in the current usage of the term. This paper is an invitation to consider Synthography as the term for a new genre of image production based on AI, observing its current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
What Is AI Doing To Art? | NOEMA
The proliferation of AI-generated images in online environments won’t eradicate human art wholesale, but it does represent a reshuffling of the market incentives that help creative economies flourish. Like the college essay, another genre of human creativity threatened by AI usurpation, creative “products” might become more about process than about art as a commodity.
Are artists using computer software on iPads to make seemingly hand-painted images engaged in a less creative process than those who produce the image by hand? We can certainly judge one as more meritorious than the other, but claiming that one is more original is harder to defend.
An understanding of the technology as one that separates human from machine into distinct categories leaves little room for the messier ways we often fit together with our tools. AI-generated images will have a big impact on copyright law, but the cultural backlash against the “computers making art” overlooks the ways computation has already been incorporated into the arts.
The problem with debates around AI-generated images that demonize the tool is that the displacement of human-made art doesn’t have to be an inevitability. Markets can be adjusted to mitigate unemployment in changing economic landscapes. As legal scholar Ewan McGaughey points out, 42% of English workers were redundant after WWII — and yet the U.K. managed to maintain full employment.
Contemporary critics claim that prompt engineering and synthography aren’t emergent professions but euphemisms necessary to equate AI-generated artwork with the work of human artists. As with the development of photography as a medium, today’s debates about AI often overlook how conceptions of human creativity are themselves shaped by commercialization and labor.
Others looking to elevate AI art’s status alongside other forms of digital art are opting for an even loftier rebrand: “synthography.” This categorization suggests a process more complex than the mechanical operation of a picture-making tool, invoking the active synthesis of disparate aesthetic elements. Like Fox Talbot and his contemporaries in the nineteenth century, “synthographers” maintain that AI art simply automates the most time-consuming parts of drawing and painting, freeing up human cognition for higher-order creativity.
Separating human from camera was a necessary part of preserving the myth of the camera as an impartial form of vision. To incorporate photography into an economic landscape of creativity, however, human agency needed to be ascribed to all parts of the process.
Consciously or not, proponents of AI-generated images stamp the tool with rhetoric that mirrors the democratic aspirations of the twenty-first century.
Stability AI took a similar tack, billing itself as “AI by the people, for the people,” despite turning Stable Diffusion, their text-to-image model, into a profitable asset. That the program is easy to use is another selling point. Would-be digital artists no longer need to use expensive specialized software to produce visually interesting material.
Meanwhile, communities of digital artists and their supporters claim that the reason AI-generated images are compelling at all is because they were trained with data sets that contained copyrighted material. They reject the claim that AI-generated art produces anything original and suggest it instead be thought of as a form of “twenty-first century collage.”
Erasing human influence from the photographic process was good for underscoring arguments about objectivity, but it complicated commercial viability. Ownership would need to be determined if photographs were to circulate as a new form of property. Was the true author of a photograph the camera or its human operator?
By reframing photographs as les dessins photographiques — or photographic drawings, the plaintiffs successfully established that the development of photographs in a darkroom was part of an operator’s creative process. In addition to setting up a shot, the photographer needed to coax the image from the camera’s film in a process resembling the creative output of drawing. The camera was a pencil capable of drawing with light and photosensitive surfaces, but held and directed by a human author.
Establishing photography’s dual function as both artwork and document may not have been philosophically straightforward, but it staved off a surge of harder questions.
Human intervention in the photographic process still appeared to happen only on the ends — in setup and then development — instead of continuously throughout the image-making process.
·noemamag.com·