The Life and Death of Hollywood, by Daniel Bessner
now the streaming gold rush—the era that made Dickinson—is over. In the spring of 2022, the Federal Reserve began raising interest rates after years of nearly free credit, and at roughly the same time, Wall Street began calling in the streamers’ bets. The stock prices of nearly all the major companies with streaming platforms took precipitous falls, and none have rebounded to their prior valuation.
Thanks to decades of deregulation and a gush of speculative cash that first hit the industry in the late Aughts, while prestige TV was climbing the rungs of the culture, massive entertainment and media corporations had been swallowing what few smaller companies remained, and financial firms had been infiltrating the business, moving to reduce risk and maximize efficiency at all costs, exhausting writers in ever more unstable conditions.
The new effective bosses of the industry—colossal conglomerates, asset-management companies, and private-equity firms—had not been simply pushing workers too hard and grabbing more than their fair share of the profits. They had been stripping value from the production system like copper pipes from a house—threatening the sustainability of the studios themselves. Today’s business side does not have a necessary vested interest in “the business”—in the health of what we think of as Hollywood, a place and system in which creativity is exchanged for capital. The union wins did not begin to address this fundamental problem.
To the new bosses, the quantity of money that studios had been spending on developing screenplays—many of which would never be made—was obvious fat to be cut, and in the late Aughts, executives increasingly began offering one-step deals, guaranteeing only one round of pay for one round of work. Writers, hoping to make it past Go, began doing much more labor—multiple steps of development—for what was ostensibly one step of the process. In separate interviews, Dana Stevens, writer of The Woman King, and Robin Swicord described the change using exactly the same words: “Free work was encoded.” So was safe material. In an effort to anticipate what a studio would green-light, writers incorporated feedback from producers and junior executives, constructing what became known as producer’s drafts. As Rodman explained it: “Your producer says to you, ‘I love your script. It’s a great first draft. But I know what the studio wants. This isn’t it. So I need you to just make this protagonist more likable, and blah, blah, blah.’ And you do it.”
By 2019, the major Hollywood agencies had been consolidated into an oligopoly of four companies that controlled more than 75 percent of WGA writers’ earnings. And in the 2010s, high finance reached the agencies: by 2014, private equity had acquired Creative Artists Agency and William Morris Endeavor, and the latter had purchased IMG. Meeting benchmarks legible to the new bosses—deals actually made, projects off the ground—pushed agents to function more like producers, and writers began hearing that their asking prices were too high.
Executives, meanwhile, increasingly believed that they’d found their best bet in “IP”: preexisting intellectual property—familiar stories, characters, and products—that could be milled for scripts. As an associate producer of a successful Aughts IP-driven franchise told me, IP is “sort of a hedge.” There’s some knowledge of the consumer’s interest, he said. “There’s a sort of dry run for the story.” Screenwriter Zack Stentz, who co-wrote the 2011 movies Thor and X-Men: First Class, told me, “It’s a way to take risk out of the equation as much as possible.”
Multiple writers I spoke with said that selecting preexisting characters and cinematic worlds gave executives a type of psychic edge, allowing them to claim a degree of creative credit. And as IP took over, the perceived authority of writers diminished. Julie Bush, a writer-producer for the Apple TV+ limited series Manhunt, told me, “Executives get to feel like the author of the work, even though they have a screenwriter, like me, basically create a story out of whole cloth.” At the same time, the biggest IP success story, the Marvel Cinematic Universe, by far the highest-earning franchise of all time, pioneered a production apparatus in which writers were often separated from the conception and creation of a movie’s overall story.
Joanna Robinson, co-author of the book MCU: The Reign of Marvel Studios, told me that the writers for WandaVision, a Marvel show for Disney+, had to craft almost the entirety of the series’ single season without knowing where their work was ultimately supposed to arrive: the ending remained undetermined, because executives had not yet decided what other stories they might spin off from the show.
The streaming ecosystem was built on a wager: high subscriber numbers would translate to large market shares, and eventually, profit. Under this strategy, an enormous amount of money could be spent on shows that might or might not work: more shows meant more opportunities to catch new subscribers. Producers and writers for streamers were able to put ratings aside, which at first seemed to be a luxury. Netflix paid writers large fees up front, and guaranteed that an entire season of a show would be produced. By the mid-2010s, the sheer quantity of series across the new platforms—what’s known as “Peak TV”—opened opportunities for unusually offbeat projects (see BoJack Horseman, a cartoon for adults about an equine has-been sitcom star), and substantially more shows created by women and writers of color. In 2009, across cable, broadcast, and streaming, 189 original scripted shows aired or released new episodes; in 2016, that number was 496. In 2022, it was 849.
supply soon overshot demand. For those who beat out the competition, the work became much less steady than it had been in the pre-streaming era. According to insiders, in the past, writers for a series had usually been employed for around eight months, crafting long seasons and staying on board through a show’s production. Junior writers often went to the sets where their shows were made and learned how to take a story from the page to the screen—how to talk to actors, how to stay within budget, how to take a studio’s notes—setting them up to become showrunners. Now, in an innovation called mini-rooms, reportedly first ventured by cable channels such as AMC and Starz, fewer writers were employed for each series and for much shorter periods—usually eight to ten weeks but as little as four.
Writers in the new mini-room system were often dismissed before their series went to production, which meant that they rarely got the opportunity to go to set and weren’t getting the skills they needed to advance. Showrunners were left responsible for all writing-related tasks when these rooms shut down. “It broke a lot of showrunners,” the A-list film and TV writer told me. “Physically, mentally, financially. It also ruined a lot of shows.”
The price of entry for working in Hollywood had been high for a long time: unpaid internships, low-paid assistant jobs. But now the path beyond the entry level was increasingly unclear. Jason Grote, who was a staff writer on Mad Men and who came to TV from playwriting, told me, “It became like a hobby for people, or something more like theater—you had your other day jobs or you had a trust fund.” Brenden Gallagher, a TV writer a decade in, said, “There are periods of time where I work at the Apple Store. I’ve worked doing data entry, I’ve worked doing research, I’ve worked doing copywriting.” Since he’d started in the business in 2014, in his mid-twenties, he’d never had more than eight months at a time when he didn’t need a source of income from outside the industry.
“There was this feeling,” the head of the midsize studio told me that day at Soho House, “during the last ten years or so, of, ‘Oh, we need to get more people of color in writers’ rooms.’ ” But what you get now, he said, is the black or Latino person who went to Harvard. “They’re getting the shot, but you don’t actually see a widening of the aperture to include people who grew up poor, maybe went to a state school or not even, and are just really talented. That has not happened at all.”
“The Sopranos does not exist without David Chase having worked in television for almost thirty years,” Blake Masters, a writer-producer and creator of the Showtime series Brotherhood, told me. “Because The Sopranos really could not be written by somebody unless they understood everything about television, and hated all of it.” Grote said much the same thing: “Prestige TV wasn’t new blood coming into Hollywood as much as it was a lot of veterans that were never able to tell these types of stories, who were suddenly able to cut through.”
The threshold for receiving the viewership-based streaming residuals is also incredibly high: a show must be viewed by at least 20 percent of a platform’s domestic subscribers “in the first 90 days of release, or in the first 90 days in any subsequent exhibition year.” As Bloomberg reported in November, fewer than 5 percent of the original shows that streamed on Netflix in 2022 would have met this benchmark. “I am not impressed,” the A-list writer told me in January. Entry-level TV staffing, where more and more writers are getting stuck, “is still a subsistence-level job,” he said. “It’s a job for rich kids.”
Brenden Gallagher, who echoed Conover’s belief that the union was well-positioned to gain more in 2026, put it this way: “My view is that there was a lot of wishful thinking about achieving this new middle class, based around, to paraphrase 30 Rock, making it 1997 again through science or magic. Will there be as big a working television-writer cohort that is making six figures a year consistently living in Los Angeles as there was from 1992 to 2021? No. That’s never going to come back.”
As for what types of TV and movies can get made by those who stick around, Kelvin Yu, creator and showrunner of the Disney+ series American Born Chinese, told me: “I think that there will be an industry move to the middle in terms of safer, four-quadrant TV.” (In L.A., a “four-quadrant” project is one that aims to appeal to all demographics.) “I think a lot of people,” he said, “who were disenfranchised or marginalized—their drink tickets are up.” Indeed, multiple writers and executives told me that following the strike, studio choices have skewed even more conservative than before. “It seems like buyers are much less adventurous,” one writer said. “Buyers are looking for Friends.”
The film and TV industry is now controlled by only four major companies, and it is shot through with incentives to devalue the actual production of film and television.
The entertainment and finance industries spend enormous sums lobbying both parties to maintain deregulation and prioritize the private sector. Writers will have to fight the studios again, but for more sweeping reforms. One change in particular has the potential to flip the power structure of the industry on its head: writers could demand to own complete copyright for the stories they create. They currently have something called “separated rights,” which allow a writer to use a script and its characters for limited purposes. But if they were to retain complete copyright, they would have vastly more leverage. Nearly every writer I spoke with seemed to believe that this would present a conflict with the way the union functions. This point is complicated and debatable, but Shawna Kidman and the legal expert Catherine Fisk—both preeminent scholars of copyright and media—told me that the greater challenge is Hollywood’s structure. The business is currently built around studio ownership. While Kidman found the idea of writer ownership infeasible, Fisk said it was possible, though it would be extremely difficult. Pushing for copyright would essentially mean going to war with the studios. But if things continue on their current path, writers may have to weigh such hazards against the prospect of the end of their profession. Or, they could leave it all behind.
·harpers.org·
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
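To make the “wrapping” pattern concrete, here is a minimal sketch of the kind of single-purpose app described above: one narrow problem (say, turning messy meeting notes into action items) instantiated around a single LLM API call. The task, prompt, and function name are illustrative assumptions, not anything from the article; it assumes the OpenAI Python SDK and a generally available model name.

```python
# Minimal sketch of a single-purpose "LLM wrapper" app: the problem and the
# workflow are baked into the code, so the end user never writes a prompt.
# Assumptions: OpenAI Python SDK (openai>=1.0) with OPENAI_API_KEY set in the
# environment; the task and all names here are hypothetical, for illustration.
from openai import OpenAI

client = OpenAI()

def extract_action_items(meeting_notes: str) -> str:
    """Turn free-form meeting notes into a plain bulleted list of action items."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[
            {
                "role": "system",
                "content": (
                    "You extract action items from meeting notes. "
                    "Return one item per line as a plain bulleted list, "
                    "naming an owner when one is mentioned."
                ),
            },
            {"role": "user", "content": meeting_notes},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = (
        "Sam will send the Q3 deck by Friday. "
        "We still need someone to book the offsite venue."
    )
    print(extract_action_items(notes))
```

The point of the sketch is the shape, not the specifics: the prompt encodes everything already known about this one problem, and a hand-built UI and enterprise sales would sit on top of it, which is exactly the unbundling move described above.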
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
From Tech Critique to Ways of Living — The New Atlantis
Yuk Hui's concept of "cosmotechnics" combines technology with morality and cosmology. Inspired by Daoism, it envisions a world where advanced tech exists but cultures favor simpler, purposeful tools that guide people towards contentment by focusing on local, relational, and ironic elements. A Daoist cosmotechnics points to alternative practices and priorities - learning how to live from nature rather than treating it as a resource to be exploited, valuing embodied relation over abstract information
We might think of the shifting relationship of human beings to the natural world in the terms offered by German sociologist Gerd-Günter Voß, who has traced our movement through three different models of the “conduct of life.”
The first, and for much of human history the only conduct of life, is what he calls the traditional. Your actions within the traditional conduct of life proceed from social and familial circumstances, from what is thus handed down to you. In such a world it is reasonable for family names to be associated with trades, trades that will be passed down from father to son: Smith, Carpenter, Miller.
But the rise of the various forces that we call “modernity” led to the emergence of the strategic conduct of life: a life with a plan, with certain goals — to get into law school, to become a cosmetologist, to get a corner office.
thanks largely to totalizing technology’s formation of a world in which, to borrow a phrase from Marx and Engels, “all that is solid melts into air,” the strategic model of conduct is replaced by the situational. Instead of being systematic planners, we become agile improvisers: If the job market is bad for your college major, you turn a side hustle into a business. But because you know that your business may get disrupted by the tech industry, you don’t bother thinking long-term; your current gig might disappear at any time, but another will surely present itself, which you will assess upon its arrival.
The movement through these three forms of conduct, whatever benefits it might have, makes our relations with nature increasingly instrumental. We can see this shift more clearly when looking at our changing experience of time
Within the traditional conduct of life, it is necessary to take stewardly care of the resources required for the exercise of a craft or a profession, as these get passed on from generation to generation.
But in the progression from the traditional to the strategic to the situational conduct of life, continuity of preservation becomes less valuable than immediacy of appropriation: We need more lithium today, and merely hope to find greater reserves — or a suitable replacement — tomorrow. This revaluation has the effect of shifting the place of the natural order from something intrinsic to our practices to something extrinsic. The whole of nature becomes what economists tellingly call an externality.
The basic argument of the SCT goes like this. We live in a technopoly, a society in which powerful technologies come to dominate the people they are supposed to serve, and reshape us in their image. These technologies, therefore, might be called prescriptive (to use Franklin’s term) or manipulatory (to use Illich’s). For example, social networks promise to forge connections — but they also encourage mob rule.
all things increasingly present themselves to us as technological: we see them and treat them as what Heidegger calls a “standing reserve,” supplies in a storeroom, as it were, pieces of inventory to be ordered and conscripted, assembled and disassembled, set up and set aside
In his exceptionally ambitious book The Question Concerning Technology in China (2016) and in a series of related essays and interviews, Hui argues, as the title of his book suggests, that we go wrong when we assume that there is one question concerning technology, the question, that is universal in scope and uniform in shape. Perhaps the questions are different in Hong Kong than in the Black Forest. Similarly, the distinction Heidegger draws between ancient and modern technology — where with modern technology everything becomes a mere resource — may not universally hold.
Thesis: Technology is an anthropological universal, understood as an exteriorization of memory and the liberation of organs, as some anthropologists and philosophers of technology have formulated it; Antithesis: Technology is not anthropologically universal; it is enabled and constrained by particular cosmologies, which go beyond mere functionality or utility. Therefore, there is no one single technology, but rather multiple cosmotechnics.
Cosmotechnics is the integration of a culture’s worldview and ethical framework with its technological practices, illustrating that technology is not just about functionality but also embodies a way of life realized through making.
I think Hui’s cosmotechnics, generously leavened with the ironic humor intrinsic to Daoism, provides a genuine Way — pun intended — beyond the limitations of the Standard Critique of Technology. I say this even though I am not a Daoist; I am, rather, a Christian. But it should be noted that Daoism is both daojiao, an organized religion, and daojia, a philosophical tradition. It is daojia that Hui advocates, which makes the wisdom of Daoism accessible and attractive to a Christian like me. Indeed, I believe that elements of daojia are profoundly consonant with Christianity, and yet underdeveloped in the Christian tradition, except in certain modes of Franciscan spirituality, for reasons too complex to get into here.
this technological Daoism as an embodiment of daojia, is accessible to people of any religious tradition or none. It provides a comprehensive and positive account of the world and one’s place in it that makes a different approach to technology more plausible and compelling. The SCT tends only to gesture in the direction of a model of human flourishing, evokes it mainly by implication, whereas Yuk Hui’s Daoist model gives an explicit and quite beautiful account.
The application of Daoist principles is most obvious, as the above exposition suggests, for “users” who would like to graduate to the status of “non-users”: those who quietly turn their attention to more holistic and convivial technologies, or who simply sit or walk contemplatively. But in the interview I quoted from earlier, Hui says, “Some have quipped that what I am speaking about is Daoist robots or organic AI” — and this needs to be more than a quip. Peter Thiel’s longstanding attempt to make everyone a disciple of René Girard is a dead end. What we need is a Daoist culture of coders, and people devoted to “action without acting” making decisions about lithium mining.
Tools that do not contribute to the Way will neither be worshipped nor despised. They will simply be left to gather dust as the people choose the tools that will guide them in the path of contentment and joy: utensils to cook food, devices to make clothes. Of course, the food of one village will differ from that of another, as will the clothing. Those who follow the Way will dwell among the “ten thousand things” of this world — what we call nature — in a certain manner that cannot be specified legally: Verse 18 of the Tao says that when virtue arises only from rules, that is a sure sign that the Way is not present and active. A cosmotechnics is a living thing, always local in the specifics of its emergence in ways that cannot be specified in advance.
It is from the ten thousand things that we learn how to live among the ten thousand things; and our choice of tools will be guided by what we have learned from that prior and foundational set of relations. This is cosmotechnics.
Multiplicity avoids the universalizing, totalizing character of technopoly. The adherents of technopoly, Hui writes, “wishfully believ[e] that the world process will stamp out differences and diversities” and thereby achieve a kind of techno-secular “theodicy,” a justification of the ways of technopoly to its human subjects. But the idea of multiple cosmotechnics is also necessary, Hui believes, in order to avoid the simply delusional attempt to find “a way out of modernity” by focusing on the indigenous or biological “Other.” An aggressive hostility to modernity and a fetishizing of pre-modernity is not the Daoist way.
“I believe that to overcome modernity without falling back into war and fascism, it is necessary to reappropriate modern technology through the renewed framework of a cosmotechnics.” His project “doesn’t refuse modern technology, but rather looks into the possibility of different technological futures.”
“Thinking rooted in the earthy virtue of place is the motor of cosmotechnics. However, for me, this discourse on locality doesn’t mean a refusal of change and of progress, or any kind of homecoming or return to traditionalism; rather, it aims at a re-appropriation of technology from the perspective of the local and a new understanding of history.”
Always Coming Home illustrates cosmotechnics in a hundred ways. Consider, for instance, information storage and retrieval. At one point we meet the archivist of the Library of the Madrone Lodge in the village of Wakwaha-na. A visitor from our world is horrified to learn that while the library gives certain texts and recordings to the City of Mind, some of their documents they simply destroy. “But that’s the point of information storage and retrieval systems! The material is kept for anyone who wants or needs it. Information is passed on — the central act of human culture.” But that is not how the librarian thinks about it. “Tangible or intangible, either you keep a thing or you give it. We find it safer to give it” — to practice “unhoarding.”
It is not information, but relation. This too is cosmotechnics.
The modern technological view treats information as a resource to be stored and optimized. But the archivist in Le Guin's Daoist-inspired society takes a different approach, one where documents can be freely discarded because what matters is not the hoarding of information but the living of life in sustainable relation
a cosmotechnics is the point at which a way of life is realized through making. The point may be illustrated with reference to an ancient tale Hui offers, about an excellent butcher who explains to a duke what he calls the Dao, or “way,” of butchering. The reason he is a good butcher, he says, is not his mastery of a skill, or his reliance on superior tools. He is a good butcher because he understands the Dao: Through experience he has come to rely on his intuition to thrust the knife precisely where it does not cut through tendons or bones, and so his knife always stays sharp. The duke replies: “Now I know how to live.” Hui explains that “it is thus the question of ‘living,’ rather than that of technics, that is at the center of the story.”
·thenewatlantis.com·
Fandom's Great Divide
The 1970s sitcom "All in the Family" sparked debates with its bigoted-yet-lovable Archie Bunker character, leaving audiences divided over whether the show was satirizing prejudice or inadvertently promoting it, and reflecting TV's power to shape societal attitudes.
This sort of audience divide, not between those who love a show and those who hate it but between those who love it in very different ways, has become a familiar schism in the past fifteen years, during the rise of—oh, God, that phrase again—Golden Age television. This is particularly true of the much lauded stream of cable “dark dramas,” whose protagonists shimmer between the repulsive and the magnetic. As anyone who has ever read the comments on a recap can tell you, there has always been a less ambivalent way of regarding an antihero: as a hero
a subset of viewers cheered for Walter White on “Breaking Bad,” growling threats at anyone who nagged him to stop selling meth. In a blog post about that brilliant series, I labelled these viewers “bad fans,” and the responses I got made me feel as if I’d poured a bucket of oil onto a flame war from the parapets of my snobby critical castle. Truthfully, my haters had a point: who wants to hear that they’re watching something wrong?
·newyorker.com·
Strong and weak technologies - cdixon
Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist.
·cdixon.org·
Why corporate America broke up with design
Design thinking alone doesn't determine market success, nor does it always transform business as expected.
There are a multitude of viable culprits behind this revenue drop. Robson himself pointed to the pandemic and tightened global budgets while arguing that “the widespread adoption of design thinking . . . has reduced demand for our services.” (Ideo was, in part, its own competition here since for years, it sold courses on design thinking.) It’s perhaps worth noting that, while design thinking was a buzzword from the ’90s to the early 2010s, it’s commonly met with all sorts of criticism today.
“People were like, ‘We did the process, why doesn’t our business transform?'” says Cliff Kuang, a UX designer and coauthor of User Friendly (and a former Fast Company editor). He points to PepsiCo, which in 2012 hired its first chief design officer and opened an in-house design studio. The investment has not yielded a string of blockbusters (and certainly no iPhone for soda). One widely promoted product, Drinkfinity, attempted to respond to diminishing soft-drink sales with K-Cup-style pods and a reusable water bottle. The design process was meticulous, with extensive prototyping and testing. But Drinkfinity had a short shelf life, discontinued within two years of its 2018 release.
“Design is rarely the thing that determines whether something succeeds in the market,” Kuang says. Take Amazon’s Kindle e-reader. “Jeff Bezos henpecked the original Kindle design to death. Because he didn’t believe in capacitive touch, he put a keyboard on it, and all this other stuff,” Kuang says. “Then the designer of the original Kindle walked and gave [the model] to Barnes & Noble.” Barnes & Noble released a product with a superior physical design, the Nook. But design was no match for distribution. According to the most recent data, Amazon owns approximately 80% of the e-book market share.
The rise of mobile computing has forced companies to create effortless user experiences—or risk getting left behind. When you hail an Uber or order toilet paper in a single click, you are reaping the benefits of carefully considered design. A 2018 McKinsey study found that companies with the strongest commitment to design and the best execution of design principles had revenue that was 32 percentage points higher—and shareholder returns that were 56 percentage points higher—than other companies.
·fastcompany.com·
Fake It ’Til You Fake It
On the long history of photo manipulation dating back to the origins of photography. While new technologies have made manipulation much easier, the core questions around trust and authenticity remain the same and have been asked for over a century.
The criticisms I have been seeing about the features of the Pixel 8, however, feel like we are only repeating the kinds of fears expressed over nearly two hundred years. We have not been able to wholly trust photographs pretty much since they were invented. The only things which have changed in that time are the ease with which the manipulations can happen, and their availability.
We all live with a growing sense that everything around us is fraudulent. It is striking to me how these tools have been introduced as confidence in institutions has declined. It feels like a death spiral of trust — not only are we expected to separate facts from their potentially misleading context, we increasingly feel doubtful that any experts are able to help us, yet we keep inventing new ways to distort reality.
The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.
The questions we ask about generative technologies should acknowledge that we already have plenty of ways to lie, and that lots of the information we see is suspect. That does not mean we should not believe anything, but it does mean we ought to be asking questions about what is changed when tools like these become more widespread and easier to use.
·pxlnv.com·
Designing in Winter
As the construction industry matured, and best practices were commodified, the percentage of buildings requiring the direct involvement of architects plummeted. Builders can now choose from an array of standard layouts that cover most of their needs; materials and design questions, too, have been standardized, and reflect economies of scale more than local or unique contextual realities.
Cities have lots of rules and regulations about how things can be designed and built, reducing the need for and value of creativity.
The situation is similar in our field. In 2009, companies might ask a designer to “imagine the shoe-shopping experience on mobile,” and such a designer would need to marshal a considerable number of skills to do so: research into how such activity happens today and how it had been attempted online before and the psychology of people engaged in it; explorations of many kinds of interfaces, since no one really knew yet how to present these kinds of information on smartphones; market investigations to determine e.g. “what % of prospective shoppers have which kinds of devices, and what designs can accommodate them all”; testing for raw usability: can people even figure out what to do when they see these screens? And so on. In 2023, the scene is very different. Best practices in most forms of software and services are commodified; we know, from a decade plus of market activity, what works for most people in a very broad range of contexts. Standardization is everywhere, and resources for the easy development of UIs abound.
It’s also the case that if a designer adds 15% to a design’s quality but increases cycle time substantially, is another cook in the kitchen, demands space for ideation or research, and so on, the trade-off will surely start to seem debatable to many leaders, and that’s ignoring FTE costs! We can be as offended by this as we want, but the truth is that the ten millionth B2B SaaS startup can probably validate or falsify product-market-fit without hiring Jony Ive and an entire team of specialists.
We design apps downstream of how Apple designs iOS. There’s just not that much room for innovating in UI at the moment
Today, for a larger-than-ever percentage of projects, some good libraries and guidelines like Apple’s HIG can get non-designers where they need to go. Many companies could probably do very well with:
  • 1 designer to do native design + create and maintain a design system
  • PMs and executives for ideation
  • Front-end engineers working off of the design system / component library to implement ideas
So even where commodification doesn’t mean no designers, it still probably means fewer designers.
If, for example, they land AR / VR, we will once again face a world of businesses who need to figure out how their goods and services make sense in a new context: how should we display Substack posts in AR, for example? Which metaphors should persist into the new world? What’s the best way to shop for shoes in VR? What affordances empower the greatest number of people?
But there will at least be another period when engineers who “just ship” will produce such massively worse user interfaces that software designers will be important again.
“design process” and “design cycles” are under pressure and may face much more soon. Speed helps, and so too does a general orientation towards working with production however it’s happening. This basically sums to: “Be less precious, and try to fit in in whatever ways help your company ship.”
being capable of more of the work of making software can mean becoming better at strategy and ideation, such that you’re every executive’s favorite collaborative partner; you listen well, you mock fast (maybe with AI), and you help them communicate; or it can mean becoming better at execution, learning, for example, to code.
·suckstosuck.substack.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence into the creation and post production of images, it seems questionable if the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even the amateurs with only basic knowledge of the craft could understand themselves as author of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre for image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
What Is AI Doing To Art? | NOEMA
The proliferation of AI-generated images in online environments won’t eradicate human art wholesale, but it does represent a reshuffling of the market incentives that help creative economies flourish. Like the college essay, another genre of human creativity threatened by AI usurpation, creative “products” might become more about process than about art as a commodity.
Are artists using computer software on iPads to make seemingly hand-painted images engaged in a less creative process than those who produce the image by hand? We can certainly judge one as more meritorious than the other but claiming that one is more original is harder to defend.
An understanding of the technology as one that separates human from machine into distinct categories leaves little room for the messier ways we often fit together with our tools. AI-generated images will have a big impact on copyright law, but the cultural backlash against the “computers making art” overlooks the ways computation has already been incorporated into the arts.
The problem with debates around AI-generated images that demonize the tool is that the displacement of human-made art doesn’t have to be an inevitability. Markets can be adjusted to mitigate unemployment in changing economic landscapes. As legal scholar Ewan McGaughey points out, 42% of English workers were redundant after WWII — and yet the U.K. managed to maintain full employment.
Contemporary critics claim that prompt engineering and synthography aren’t emergent professions but euphemisms necessary to equate AI-generated artwork with the work of human artists. As with the development of photography as a medium, today’s debates about AI often overlook how conceptions of human creativity are themselves shaped by commercialization and labor.
Others looking to elevate AI art’s status alongside other forms of digital art are opting for an even loftier rebrand: “synthography.” This categorization suggests a process more complex than the mechanical operation of a picture-making tool, invoking the active synthesis of disparate aesthetic elements. Like Fox Talbot and his contemporaries in the nineteenth century, “synthographers” maintain that AI art simply automates the most time-consuming parts of drawing and painting, freeing up human cognition for higher-order creativity.
Separating human from camera was a necessary part of preserving the myth of the camera as an impartial form of vision. To incorporate photography into an economic landscape of creativity, however, human agency needed to ascribe to all parts of the process.
Consciously or not, proponents of AI-generated images stamp the tool with rhetoric that mirrors the democratic aspirations of the twenty-first century.
Stability AI took a similar tack, billing itself as “AI by the people, for the people,” despite turning Stable Diffusion, their text-to-image model, into a profitable asset. That the program is easy to use is another selling point. Would-be digital artists no longer need to use expensive specialized software to produce visually interesting material.
Meanwhile, communities of digital artists and their supporters claim that the reason AI-generated images are compelling at all is because they were trained with data sets that contained copyrighted material. They reject the claim that AI-generated art produces anything original and suggest it instead be thought of as a form of “twenty-first century collage.”
Erasing human influence from the photographic process was good for underscoring arguments about objectivity, but it complicated commercial viability. Ownership would need to be determined if photographs were to circulate as a new form of property. Was the true author of a photograph the camera or its human operator?
By reframing photographs as les dessins photographiques — or photographic drawings, the plaintiffs successfully established that the development of photographs in a darkroom was part of an operator’s creative process. In addition to setting up a shot, the photographer needed to coax the image from the camera’s film in a process resembling the creative output of drawing. The camera was a pencil capable of drawing with light and photosensitive surfaces, but held and directed by a human author.
Establishing photography’s dual function as both artwork and document may not have been philosophically straightforward, but it staved off a surge of harder questions.
Human intervention in the photographic process still appeared to happen only on the ends — in setup and then development — instead of continuously throughout the image-making process.
·noemamag.com·
Elegy for the Native Mac App
Tracing a trendline from the start of the Mac as an app platform to the future of visionOS.
In recent years Sketch’s Mac-ness has become a liability. Requiring every person in a large design organization to use a Mac is not an easy sell. Plus, a new generation of “internet native” users expect different things from their software than old-school Mac connoisseurs: Multiplayer editing, inline commenting, and cloud sync are now table-stakes for any successful creative app.
At the time of Sketch’s launch most UX designers were using Photoshop or Illustrator. Both were expensive and overwrought, and neither were actually created for UX design. Sketch’s innovation wasn’t any particular feature — if anything it was the lack of features. It did a few things really well, and those were exactly the things UX designers wanted. In that way it really embodied the Mac ethos: simple, single-purpose, and fun to use.
Apple pushed hard to attract artists, filmmakers, musicians, and other creative professionals. It started a virtuous cycle. More creatives using Macs meant more potential customers for creative Mac software, which meant more developers started building that software, which in turn attracted even more customers to the platform. And so the Mac ended up with an abundance of improbably-good creative tools. Usually these apps weren’t as feature-rich or powerful as their PC counterparts, but were faster and easier and cheaper and just overall more conducive to the creative process.
Apple is still very interested in selling Macs — precision-milled aluminum computers with custom-designed chips and “XDR” screens. But they no longer care much about The Mac: The operating system, the software platform, its design sensibilities, its unique features, its vibes.
The term-of-art for this style is “skeuomorphism”: modern designs inspired by their antecedents — calculator apps that look like calculators, password-entry fields that look like bank vaults, reminders that look like sticky notes, etc. This skeuomorphic playfulness made downloading a new Mac app delightful. The discomfort of opening a new unfamiliar piece of software was totally offset by the joy of seeing a glossy pixel-perfect rendition of a bookshelf or a bodega or a poker table, complete with surprising little animations.
There are literally dozens of ways to develop cross-platform apps, including Apple’s own Catalyst — but so far, none of these tools can create anything quite as polished as native implementations. So it comes down to user preference: Would you rather have the absolute best app experience, or do you want the ability to use an acceptably-functional app from any of your devices? It seems that users have shifted to prefer the latter.
Unfortunately the appeal of native Mac software was, at its core, driven by brand strategy. Mac users were sold on the idea that they were buying not just a device but an ecosystem, an experience. Apple extended this branding for third-party developers with its yearly Apple Design Awards.
for the first time since the introduction of the original Mac, they’re just computers. Yes, they were technically always “just computers”, but they used to feel like something bigger. Now Macs have become just another way, perhaps the best way, to use Slack or VSCode or Figma or Chrome or Excel.
visionOS’s story diverges from that of the Mac. Apple is no longer a scrappy upstart. Rather, they’re the largest company in the world by market cap. It’s not so much that Apple doesn’t care about indie developers anymore, it’s just that indie developers often end up as the ants crushed beneath Apple’s giant corporate feet.
I think we’ll see a lot of cool indie software for visionOS, but also I think most of it will be small utilities or toys. It takes a lot of effort to build and support apps that people rely on for their productivity or creativity. If even the wildly-popular Mac platform can’t support those kinds of projects anymore, what chance does a luxury headset have?
·medium.com·
Pessimists Archive

Pessimists Archive™ is a project to educate people on and archive the history of technophobia and moral panics. We believe the best antidote to fear of the new is looking back at fear of the old.

Only by looking back at fears of old things when they were new, can we have rational constructive debates about emerging technologies today that avoids the pitfalls of moral panic and incumbent protectionism.

·pessimistsarchive.org·
Studio Branding in the Streaming Wars
The race for the streamers to configure themselves as full-service production, distribution, and exhibition outlets has intensified the need for each to articulate a more specific brand identity.
What we are seeing with the streaming wars is not the emergence of a cluster of copy-cat services, with everyone trying to do everything, but the beginnings of a legible strategy to carve up the mediascape and compete for peoples’ waking hours.
Netflix’s penchant for character-centered stories with a three-act structure, as well as high production values (an average of $20–$50-plus million for award contenders), resonates with the “quality” features of the Classical era.
From early on, Netflix cultivated a liberal public image, which has propelled its investment in social documentary and also driven some of its inclusivity initiatives and collaborations with global auteurs and showrunners of color, such as Alfonso Cuarón, Ava DuVernay, Spike Lee, and Justin Simien.
Quibi as short for “Quick Bites.” In turn, the promos wouldn’t so much emphasize “the what” of the programming as the interest and convenience of being able to watch it while waiting, commuting, or just taking a break. However, this unit of prospective viewing time lies uncomfortably between the ultra-brief TikTok video and the half-hour sitcom.
Peacock’s central obstacle moving forward will be convincing would-be subscribers that the things they loved about linear broadcast and cable TV are worth the investment.
One of the most intriguing and revealing of metaphors, however, isn’t so much related to war as celestial coexistence of streamer-planets within the “universe.” Certainly, the term resonates with key franchises, such as the “Marvel Cinematic Universe,” and the bevvy of intricate stories that such an expansive environment makes possible. This language stakes a claim for the totality of media — that there are no other kinds of moving images beyond what exists on, or what can be imagined for, these select platforms.
·lareviewofbooks.org·
The End of the English Major
Perhaps you see the liberal-arts idyll, removed from the pressures of the broader world and filled with tweedy creatures reading on quadrangle lawns. This is the redoubt of the idealized figure of the English major, sensitive and sweatered, moving from “Pale Fire” to “The Fire Next Time” and scaling the heights of “Ulysses” for the view. The goal of such an education isn’t direct career training but cultivation of the mind.
Or perhaps you think of the university as the research colony, filled with laboratories and conferences and peer-reviewed papers written for audiences of specialists. This is a place that thumps with the energy of a thousand gophers turning over knowledge. It’s the small-bore university of campus comedy—of “Lucky Jim” and “Who’s Afraid of Virginia Woolf?”—but also the quarry of deconstruction, quantum electrodynamics, and value theory. It produces new knowledge and ways of understanding that wouldn’t have an opportunity to emerge anywhere else.
English professors find the turn particularly baffling now: a moment when, by most appearances, the appetite for public contemplation of language, identity, historiography, and other longtime concerns of the seminar table is at a peak.
“Young people are very, very concerned about the ethics of representation, of cultural interaction—all these kinds of things that, actually, we think about a lot!” Amanda Claybaugh, Harvard’s dean of undergraduate education and an English professor, told me last fall.
In a quantitative society for which optimization—getting the most output from your input—has become a self-evident good, universities prize actions that shift numbers, and pre-professionalism lends itself to traceable change
One literature professor and critic at Harvard—not old or white or male—noticed that it had become more publicly rewarding for students to critique something as “problematic” than to grapple with what the problems might be; they seemed to have found that merely naming concerns had more value, in today’s cultural marketplace, than curiosity about what underlay them
·newyorker.com·
Why education is so difficult and contentious
This article proposes to explain why education is so difficult and contentious by arguing that educational thinking draws on only three fundamental ideas—that of socializing the young, shaping the mind by a disciplined academic curriculum, and facilitating the development of students’ potential. All educational positions are made up of various mixes of these ideas. The problems we face in education are due to the fact that each of these ideas is significantly flawed and also that each is incompatible in basic ways with the other two. Until we recognize these basic incompatibilities we will be unable adequately to respond to the problems we face.
·sfu.ca·
Discuss HN: Software Careers Post ChatGPT+ | Hacker News
ChatGPT feels like the current aim assist debates in a lot of FPSes to me. It’ll make you better at the shooting part of the game, perfect even. But it won’t necessarily make you that much of a better player, because aiming is only one aspect of what makes someone good at FPSes. However, if someone is generally good enough or very good at the “not aiming” portion of the games, then having aim assist would drastically increase their overall skill.
·news.ycombinator.com·