Your "Per-Seat" Margin is My Opportunity
Your "Per-Seat" Margin is My Opportunity

Traditional software is sold on a per-seat subscription: more humans, more money. But we are headed to a future where AI agents do the work humans do today, and you can’t charge an agent for a seat. So software will be sold first on a consumption model (think tasks) and then on an outcome model (think job completed). Incumbents will be forced to adapt, but it’s the classic innovator’s dilemma: how do you suddenly give up all that subscription revenue? That opening gives startups a chance to win.

Per-seat pricing only works when your users are human. But when agents become the primary users of software, that model collapses.
Executives aren't evaluating software against software anymore. They're comparing the combined costs of software licenses plus labor against pure outcome-based solutions. Think customer support (per resolved ticket vs. per agent + seat), marketing (per campaign vs. headcount), sales (per qualified lead vs. rep). That's your pricing umbrella—the upper limit enterprises will pay before switching entirely to AI.
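A rough sketch of that "pricing umbrella" math in Python; the team size, seat price, salary, and ticket volume below are invented for illustration, not figures from the article:

```python
# Back-of-the-envelope comparison: per-seat software plus human labor vs. an
# outcome-priced alternative. All numbers are hypothetical.

def status_quo_cost(agents: int, seat_price_monthly: float, salary: float) -> float:
    """Annual cost of a human support team plus per-seat licenses."""
    return agents * (seat_price_monthly * 12 + salary)

def outcome_cost(tickets_per_year: int, price_per_resolution: float) -> float:
    """Annual cost if you pay only per resolved ticket."""
    return tickets_per_year * price_per_resolution

agents = 20
seat_price_monthly = 100.0   # license cost per agent per month
salary = 50_000.0            # fully loaded annual cost per agent
tickets = 240_000            # support tickets resolved per year

umbrella = status_quo_cost(agents, seat_price_monthly, salary)
print(f"Status quo (seats + labor):  ${umbrella:,.0f}/year")
print(f"Break-even outcome price:    ${umbrella / tickets:.2f} per resolved ticket")
print(f"Outcome vendor at $3/ticket: ${outcome_cost(tickets, 3.0):,.0f}/year")
```

Any outcome price below that break-even figure undercuts the seat-plus-labor status quo, which is the umbrella the highlight describes.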
enterprises are used to deterministic outcomes and fixed annual costs. Usage-based pricing makes budgeting harder. But individual leaders seeing 10x efficiency gains won't wait for procurement to catch up. Savvy managers will find ways around traditional buying processes.
This feels like a generational reset of how businesses operate. Zero upfront costs, pay only for outcomes—that's not just a pricing model. That's the future of business.
The winning strategy in my books? Give the platform away for free. Let your agents read and write to existing systems through unstructured data—emails, calls, documents. Once you handle enough workflows, you become the new system of record.
·writing.nikunjk.com·
Your "Per-Seat" Margin is My Opportunity
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up. “There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol. Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?” Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
How Elon Musk Got Tangled Up in Blue
$700bn delusion - Does using data to target specific audiences make advertising more effective?
Being broadly effective, but somewhat inefficient, is better than being narrowly efficient, but less effective.
Targeting can increase the scale of effects, but this study suggests that the cheaper approach of not targeting so specifically might actually deliver a greater financial outcome.
As Wiberg’s findings point out, the problem with targeting for conversion optimisation is that you are effectively advertising to many people who were already going to buy from you.
If I only sell to IT decision-makers, for example, I need some targeting, as I just can’t afford to talk to random consumers. I must pay for some targeting in my media buy in order to reach a relatively niche audience. Targeting is no longer a nice-to-do but a must-have. The interesting question then becomes not whether I should target, but how I can target effectively.
What they found was any form of second or third-party data led segmenting and targeting of advertising does not outperform a random sample when it comes to accuracy of reaching the actual target.
Contextual ads massively outperform even first party data
We can improve the quality of our targeting far more by simply buying ads that appear in the right context than by using a massive first-party database to drive the buy, and it’s far cheaper to do so. Putting ads in contextually relevant places beats any form of targeting to individual characteristics, even using your own data.
The secret to effective, immediate, action-based advertising is perhaps not so much finding the right people with the right personas and serving them a tailored, customised message. It’s being in the right places: the places where they are already engaging with your category, and then using advertising to make buying easier from that place.
Even hard, sales-driving advertising isn’t the tough guy we want it to be. Advertising mostly works when it makes things easier, much more often than when it tries to persuade or invoke a reluctant action.
Thinking about advertising as an ease-making mechanism is much more likely to set us on the right path.
If your ad is in the right place, you automatically get the right people, and you also get them at the right time, when they are actually more interested in what you have to sell. You also spend much less to be there than you would crunching all that data.
·archive.is·
$700bn delusion - Does using data to target specific audiences make advertising more effective?
AI Integration and Modularization
Summary: Examines integration versus modularization in the context of AI, drawing on the work of economists Ronald Coase and Clayton Christensen. Google is pursuing a fully integrated approach similar to Apple's, while AWS is betting on modularization, and Microsoft and Meta sit somewhere in between. Integration may provide an advantage in the consumer market and in the pursuit of AGI, but for enterprise AI, a more modular approach that leverages data gravity and treats models as commodities may prevail. Ultimately, the biggest beneficiary of this dynamic could be Nvidia.
The left side of figure 5-1 indicates that when there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
The issue I have with this analysis of vertical integration — and this is exactly what I was taught at business school — is that the only considered costs are financial. But there are other, more difficult to quantify costs. Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated.
Google trains and runs its Gemini family of models on its own TPU processors, which are only available on Google’s cloud infrastructure. Developers can access Gemini through Vertex AI, Google’s fully-managed AI development platform; and, to the extent Vertex AI is similar to Google’s internal development environment, that is the platform on which Google is building its own consumer-facing AI apps. It’s all Google, from top-to-bottom, and there is evidence that this integration is paying off: Gemini 1.5’s industry leading 2 million token context window almost certainly required joint innovation between Google’s infrastructure team and its model-building team.
In AI, Google is pursuing an integrated strategy, building everything from chips to models to applications, similar to Apple's approach in smartphones.
On the other extreme is AWS, which doesn’t have any of its own models; instead its focus has been on its Bedrock managed development platform, which lets you use any model. Amazon’s other focus has been on developing its own chips, although the vast majority of its AI business runs on Nvidia GPUs.
Microsoft is in the middle, thanks to its close ties to OpenAI and its models. The company added Azure Models-as-a-Service last year, but its primary focus for both external customers and its own internal apps has been building on top of OpenAI’s GPT family of models; Microsoft has also launched its own chip for inference, but the vast majority of its workloads run on Nvidia.
Google is certainly building products for the consumer market, but those products are not devices; they are Internet services. And, as you might have noticed, the historical discussion didn’t really mention the Internet. Both Google and Meta, the two biggest winners of the Internet epoch, built their services on commodity hardware. Granted, those services scaled thanks to the deep infrastructure work undertaken by both companies, but even there Google’s more customized approach has been at least rivaled by Meta’s more open approach. What is notable is that both companies are integrating their models and their apps, as is OpenAI with ChatGPT.
Google's integrated AI strategy is unique but may not provide a sustainable advantage for Internet services in the way Apple's integration does for devices
It may be the case that selling hardware, which has to be perfect every year to justify a significant outlay of money by consumers, provides a much better incentive structure for maintaining excellence and execution than does being an Aggregator that users access for free.
Google’s collection of moonshots — from Waymo to Google Fiber to Nest to Project Wing to Verily to Project Loon (and the list goes on) — have mostly been science projects that have, for the most part, served to divert profits from Google Search away from shareholders. Waymo is probably the most interesting, but even if it succeeds, it is ultimately a car service rather far afield from Google’s mission statement “to organize the world’s information and make it universally accessible and useful.”
The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie [Google’s rumored Pixel-only AI assistant] would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.
the fact that Google is being mocked mercilessly for messed-up AI answers gets at why consumer-facing AI may be disruptive for the company: the reason why incumbents find it hard to respond to disruptive technologies is because they are, at least at the beginning, not good enough for the incumbent’s core offering. Time will tell if this gives more fuel to a shift in smartphone strategies, or makes the company more reticent.
while I was very impressed with Google’s enterprise pitch, which benefits from its integration with Google’s infrastructure without all of the overhead of potentially disrupting the company’s existing products, it’s going to be a heavy lift to overcome data gravity, i.e. the fact that many enterprise customers will simply find it easier to use AI services on the same clouds where they already store their data (Google does, of course, also support non-Gemini models and Nvidia GPUs for enterprise customers). To the extent Google wins in enterprise it may be by capturing the next generation of startups that are AI first and, by definition, data light; a new company has the freedom to base its decision on infrastructure and integration.
Amazon is certainly hoping that argument is correct: the company is operating as if everything in the AI value chain is modular and ultimately a commodity, which insinuates that it believes that data gravity will matter most. What is difficult to separate is to what extent this is the correct interpretation of the strategic landscape versus a convenient interpretation of the facts that happens to perfectly align with Amazon’s strengths and weaknesses, including infrastructure that is heavily optimized for commodity workloads.
Unclear if Amazon's strategy is based on true insight or motivated reasoning based on their existing strengths
Meta’s open source approach to Llama: the company is focused on products, which do benefit from integration, but there are also benefits that come from widespread usage, particularly in terms of optimization and complementary software. Open source accrues those benefits without imposing any incentives that detract from Meta’s product efforts (and don’t forget that Meta is receiving some portion of revenue from hyperscalers serving Llama models).
The iPhone maker, like Amazon, appears to be betting that AI will be a feature or an app; like Amazon, it’s not clear to what extent this is strategic foresight versus motivated reasoning.
achieving something approaching AGI, whatever that means, will require maximizing every efficiency and optimization, which rewards the integrated approach.
the most value will be derived from building platforms that treat models like processors, delivering performance improvements to developers who never need to know what is going on under the hood.
·stratechery.com·
AI Integration and Modularization
The Life and Death of Hollywood, by Daniel Bessner
now the streaming gold rush—the era that made Dickinson—is over. In the spring of 2022, the Federal Reserve began raising interest rates after years of nearly free credit, and at roughly the same time, Wall Street began calling in the streamers’ bets. The stock prices of nearly all the major companies with streaming platforms took precipitous falls, and none have rebounded to their prior valuation.
Thanks to decades of deregulation and a gush of speculative cash that first hit the industry in the late Aughts, while prestige TV was climbing the rungs of the culture, massive entertainment and media corporations had been swallowing what few smaller companies remained, and financial firms had been infiltrating the business, moving to reduce risk and maximize efficiency at all costs, exhausting writers in evermore unstable conditions.
The new effective bosses of the industry—colossal conglomerates, asset-management companies, and private-equity firms—had not been simply pushing workers too hard and grabbing more than their fair share of the profits. They had been stripping value from the production system like copper pipes from a house—threatening the sustainability of the studios themselves. Today’s business side does not have a necessary vested interest in “the business”—in the health of what we think of as Hollywood, a place and system in which creativity is exchanged for capital. The union wins did not begin to address this fundamental problem.
To the new bosses, the quantity of money that studios had been spending on developing screenplays—many of which would never be made—was obvious fat to be cut, and in the late Aughts, executives increasingly began offering one-step deals, guaranteeing only one round of pay for one round of work. Writers, hoping to make it past Go, began doing much more labor—multiple steps of development—for what was ostensibly one step of the process. In separate interviews, Dana Stevens, writer of The Woman King, and Robin Swicord described the change using exactly the same words: “Free work was encoded.” So was safe material. In an effort to anticipate what a studio would green-light, writers incorporated feedback from producers and junior executives, constructing what became known as producer’s drafts. As Rodman explained it: “Your producer says to you, ‘I love your script. It’s a great first draft. But I know what the studio wants. This isn’t it. So I need you to just make this protagonist more likable, and blah, blah, blah.’ And you do it.”
By 2019, the major Hollywood agencies had been consolidated into an oligopoly of four companies that controlled more than 75 percent of WGA writers’ earnings. And in the 2010s, high finance reached the agencies: by 2014, private equity had acquired Creative Artists Agency and William Morris Endeavor, and the latter had purchased IMG. Meeting benchmarks legible to the new bosses—deals actually made, projects off the ground—pushed agents to function more like producers, and writers began hearing that their asking prices were too high.
Executives, meanwhile, increasingly believed that they’d found their best bet in “IP”: preexisting intellectual property—familiar stories, characters, and products—that could be milled for scripts. As an associate producer of a successful Aughts IP-driven franchise told me, IP is “sort of a hedge.” There’s some knowledge of the consumer’s interest, he said. “There’s a sort of dry run for the story.” Screenwriter Zack Stentz, who co-wrote the 2011 movies Thor and X-Men: First Class, told me, “It’s a way to take risk out of the equation as much as possible.”
Multiple writers I spoke with said that selecting preexisting characters and cinematic worlds gave executives a type of psychic edge, allowing them to claim a degree of creative credit. And as IP took over, the perceived authority of writers diminished. Julie Bush, a writer-producer for the Apple TV+ limited series Manhunt, told me, “Executives get to feel like the author of the work, even though they have a screenwriter, like me, basically create a story out of whole cloth.” At the same time, the biggest IP success story, the Marvel Cinematic Universe, by far the highest-earning franchise of all time, pioneered a production apparatus in which writers were often separated from the conception and creation of a movie’s overall story.
Joanna Robinson, co-author of the book MCU: The Reign of Marvel Studios, told me that the writers for WandaVision, a Marvel show for Disney+, had to craft almost the entirety of the series’ single season without knowing where their work was ultimately supposed to arrive: the ending remained undetermined, because executives had not yet decided what other stories they might spin off from the show.
The streaming ecosystem was built on a wager: high subscriber numbers would translate to large market shares, and eventually, profit. Under this strategy, an enormous amount of money could be spent on shows that might or might not work: more shows meant more opportunities to catch new subscribers. Producers and writers for streamers were able to put ratings aside, which at first seemed to be a luxury. Netflix paid writers large fees up front, and guaranteed that an entire season of a show would be produced. By the mid-2010s, the sheer quantity of series across the new platforms—what’s known as “Peak TV”—opened opportunities for unusually offbeat projects (see BoJack Horseman, a cartoon for adults about an equine has-been sitcom star), and substantially more shows created by women and writers of color. In 2009, across cable, broadcast, and streaming, 189 original scripted shows aired or released new episodes; in 2016, that number was 496. In 2022, it was 849.
supply soon overshot demand. For those who beat out the competition, the work became much less steady than it had been in the pre-streaming era. According to insiders, in the past, writers for a series had usually been employed for around eight months, crafting long seasons and staying on board through a show’s production. Junior writers often went to the sets where their shows were made and learned how to take a story from the page to the screen—how to talk to actors, how to stay within budget, how to take a studio’s notes—setting them up to become showrunners. Now, in an innovation called mini-rooms, reportedly first ventured by cable channels such as AMC and Starz, fewer writers were employed for each series and for much shorter periods—usually eight to ten weeks but as little as four.
Writers in the new mini-room system were often dismissed before their series went to production, which meant that they rarely got the opportunity to go to set and weren’t getting the skills they needed to advance. Showrunners were left responsible for all writing-related tasks when these rooms shut down. “It broke a lot of showrunners,” the A-list film and TV writer told me. “Physically, mentally, financially. It also ruined a lot of shows.”
The price of entry for working in Hollywood had been high for a long time: unpaid internships, low-paid assistant jobs. But now the path beyond the entry level was increasingly unclear. Jason Grote, who was a staff writer on Mad Men and who came to TV from playwriting, told me, “It became like a hobby for people, or something more like theater—you had your other day jobs or you had a trust fund.” Brenden Gallagher, a TV writer a decade in, said, “There are periods of time where I work at the Apple Store. I’ve worked doing data entry, I’ve worked doing research, I’ve worked doing copywriting.” Since he’d started in the business in 2014, in his mid-twenties, he’d never had more than eight months at a time when he didn’t need a source of income from outside the industry.
“There was this feeling,” the head of the midsize studio told me that day at Soho House, “during the last ten years or so, of, ‘Oh, we need to get more people of color in writers’ rooms.’ ” But what you get now, he said, is the black or Latino person who went to Harvard. “They’re getting the shot, but you don’t actually see a widening of the aperture to include people who grew up poor, maybe went to a state school or not even, and are just really talented. That has not happened at all.”
“The Sopranos does not exist without David Chase having worked in television for almost thirty years,” Blake Masters, a writer-producer and creator of the Showtime series Brotherhood, told me. “Because The Sopranos really could not be written by somebody unless they understood everything about television, and hated all of it.” Grote said much the same thing: “Prestige TV wasn’t new blood coming into Hollywood as much as it was a lot of veterans that were never able to tell these types of stories, who were suddenly able to cut through.”
The threshold for receiving the viewership-based streaming residuals is also incredibly high: a show must be viewed by at least 20 percent of a platform’s domestic subscribers “in the first 90 days of release, or in the first 90 days in any subsequent exhibition year.” As Bloomberg reported in November, fewer than 5 percent of the original shows that streamed on Netflix in 2022 would have met this benchmark. “I am not impressed,” the A-list writer told me in January. Entry-level TV staffing, where more and more writers are getting stuck, “is still a subsistence-level job,” he said. “It’s a job for rich kids.”
Brenden Gallagher, who echoed Conover’s belief that the union was well-positioned to gain more in 2026, put it this way: “My view is that there was a lot of wishful thinking about achieving this new middle class, based around, to paraphrase 30 Rock, making it 1997 again through science or magic. Will there be as big a working television-writer cohort that is making six figures a year consistently living in Los Angeles as there was from 1992 to 2021? No. That’s never going to come back.”
As for what types of TV and movies can get made by those who stick around, Kelvin Yu, creator and showrunner of the Disney+ series American Born Chinese, told me: “I think that there will be an industry move to the middle in terms of safer, four-quadrant TV.” (In L.A., a “four-quadrant” project is one that aims to appeal to all demographics.) “I think a lot of people,” he said, “who were disenfranchised or marginalized—their drink tickets are up.” Indeed, multiple writers and executives told me that following the strike, studio choices have skewed even more conservative than before. “It seems like buyers are much less adventurous,” one writer said. “Buyers are looking for Friends.”
The film and TV industry is now controlled by only four major companies, and it is shot through with incentives to devalue the actual production of film and television.
The entertainment and finance industries spend enormous sums lobbying both parties to maintain deregulation and prioritize the private sector. Writers will have to fight the studios again, but for more sweeping reforms. One change in particular has the potential to flip the power structure of the industry on its head: writers could demand to own complete copyright for the stories they create. They currently have something called “separated rights,” which allow a writer to use a script and its characters for limited purposes. But if they were to retain complete copyright, they would have vastly more leverage. Nearly every writer I spoke with seemed to believe that this would present a conflict with the way the union functions. This point is complicated and debatable, but Shawna Kidman and the legal expert Catherine Fisk—both preeminent scholars of copyright and media—told me that the greater challenge is Hollywood’s structure. The business is currently built around studio ownership. While Kidman found the idea of writer ownership infeasible, Fisk said it was possible, though it would be extremely difficult. Pushing for copyright would essentially mean going to war with the studios. But if things continue on their current path, writers may have to weigh such hazards against the prospect of the end of their profession. Or, they could leave it all behind.
·harpers.org·
The Life and Death of Hollywood, by Daniel Bessner
AI startups require new strategies

Comment from Habitue on Hacker News:

> These are some good points, but it doesn't seem to mention a big way in which startups disrupt incumbents, which is that they frame the problem a different way, and they don't need to protect existing revenue streams.

The “hard tech” in AI is the LLMs available for rent from OpenAI, Anthropic, Cohere, and others, or available as open source with Llama, Bloom, Mistral, and others. This hard tech is a level playing field; startups do not have an advantage over incumbents.
There can be differentiation in prompt engineering, problem break-down, use of vector databases, and more. However, this isn’t something where startups have an edge, such as being willing to take more risks or be more creative. At best, it is neutral; certainly not an advantage.
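For a sense of how thin that layer is, here is a minimal, deliberately generic retrieval-augmented prompting sketch in Python; the toy embedding, `VectorStore`, and prompt template are illustrative stand-ins, not any particular vendor's API:

```python
# Sketch of the "differentiation layer": retrieval + prompt assembly on top of a
# rented model. Everything here is a toy stand-in; a real system would call a
# vendor embedding API and LLM endpoint instead.

from dataclasses import dataclass

def embed(text: str) -> list[float]:
    # Toy embedding: letter-frequency vector. Real systems call a hosted model.
    t = text.lower()
    return [t.count(c) / max(len(t), 1) for c in "abcdefghijklmnopqrstuvwxyz"]

@dataclass
class Doc:
    text: str
    vector: list[float]

class VectorStore:
    def __init__(self) -> None:
        self.docs: list[Doc] = []

    def add(self, text: str) -> None:
        self.docs.append(Doc(text, embed(text)))

    def search(self, query: str, k: int = 2) -> list[str]:
        qv = embed(query)
        ranked = sorted(self.docs,
                        key=lambda d: sum(a * b for a, b in zip(qv, d.vector)),
                        reverse=True)
        return [d.text for d in ranked[:k]]

def build_prompt(question: str, store: VectorStore) -> str:
    context = "\n".join(store.search(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

store = VectorStore()
store.add("Refunds are processed within 5 business days.")
store.add("Support is available 24/7 via chat.")
print(build_prompt("How long do refunds take?", store))
```

Nothing in a sketch like this is proprietary, which is the point: the differentiation has to come from somewhere else.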
This doesn’t mean it’s impossible for a startup to succeed; surely many will. It means that you need a strategy that creates differentiation and distribution, even more quickly and dramatically than is normally required
Whether you’re training existing models, developing models from scratch, or simply testing theories, high-quality data is crucial. Incumbents have the data because they have the customers. They can immediately leverage customers’ data to train models and tune algorithms, so long as they maintain secrecy and privacy.
Intercom’s AI strategy is built on the foundation of hundreds of millions of customer interactions. This gives them an advantage over a newcomer developing a chatbot from scratch. Similarly, Google has an advantage in AI video because they own the entire YouTube library. GitHub has an advantage with Copilot because they trained their AI on their vast code repository (including changes, with human-written explanations of the changes).
While there will always be individuals preferring the startup environment, the allure of working on AI at an incumbent is equally strong for many, especially pure computer and data scientists who, more than anything else, want to work on interesting AI projects. They get to work in the code, with a large budget, with all the data, with above-market compensation, and a built-in large customer base that will enjoy the fruits of their labor, all without having to do sales, marketing, tech support, accounting, raising money, or anything else that isn't the pure joy of writing interesting code. This is heaven for many.
A chatbot is in the chatbot market, and an SEO tool is in the SEO market. Adding AI to those tools is obviously a good idea; indeed companies who fail to add AI will likely become irrelevant in the long run. Thus we see that “AI” is a new tool for developing within existing markets, not itself a new market (except for actual hard-tech AI companies).
AI is in the solution-space, not the problem-space, as we say in product management. The customer problem you’re solving is still the same as ever. The problem a chatbot is solving is the same as ever: Talk to customers 24/7 in any language. AI enables completely new solutions that none of us were imagining a few years ago; that’s what’s so exciting and truly transformative. However, the customer problems remain the same, even though the solutions are different
Companies will pay more for chatbots where the AI is excellent, more support contacts are deferred from reaching a human, more languages are supported, and more kinds of questions can be answered, so existing chatbot customers might pay more, which grows the market. Furthermore, some companies who previously (rightly) saw chatbots as a terrible customer experience, will change their mind with sufficiently good AI, and will enter the chatbot market, which again grows that market.
the right way to analyze this is not to say “the AI market is big and growing” but rather: “Here is how AI will transform this existing market.” And then: “Here’s how we fit into that growth.”
·longform.asmartbear.com·
AI startups require new strategies
Muse retrospective by Adam Wiggins
  • Wiggins focused on storytelling and brand-building for Muse, achieving early success with an email newsletter, which helped engage potential users and refine the product's value proposition.
  • Muse aspired to a "small giants" business model, emphasizing quality, autonomy, and a healthy work environment over rapid growth. They sought to avoid additional funding rounds by charging a prosumer price early on.
  • Short demo videos on Twitter showcasing the app in action proved to be the most effective method for attracting new users.
Muse as a brand and a product represented something aspirational. People want to be deeper thinkers, to be more strategic, and to use cool, status-quo challenging software made by small passionate teams. These kinds of aspirations are easier to indulge in times of plenty. But once you're getting laid off from your high-paying tech job, or struggling to raise your next financing round, or scrambling to protect your kids' college fund from runaway inflation and uncertain markets... I guess you don't have time to be excited about cool demos on Twitter and thoughtful podcasts on product design.
I’d speculate that another factor is the half-life of cool new productivity software. Evernote, Slack, Notion, Roam, Craft, and many others seem to get pretty far on community excitement for their first few years. After that, I think you have to be left with software that serves a deep and hard-to-replace purpose in people’s lives. Muse got there for a few thousand people, but the economics of prosumer software means that just isn’t enough. You need tens of thousands, hundreds of thousands, to make the cost of development sustainable.
We envisioned Muse as the perfect combination of the freeform elements of a whiteboard, the structured text-heavy style of Notion or Google Docs, and the sense of place you get from a “virtual office” ala group chat. As a way to asynchronously trade ideas and inspiration, sketch out project ideas, and explore possibilities, the multiplayer Muse experience is, in my honest opinion, unparalleled for small creative teams working remotely.
But friction began almost immediately. The team lead or organizer was usually the one bringing Muse to the team, and they were already a fan of its approach. But the other team members are generally a little annoyed to have to learn any new tool, and Muse’s steeper learning curve only made that worse. Those team members would push the problem back to the team lead, treating them as customer support (rather than contacting us directly for help). The team lead often felt like too much of the burden of pushing Muse adoption was on their shoulders. This was in addition to the obvious product gaps, like: no support for the web or Windows; minimal or no integration with other key tools like Notion and Google Docs; and no permissions or support for multiple workspaces. Had we raised $10M back during the cash party of 2020–2021, we could have hired the 15+ person team that would have been necessary to build all of that. But with only seven people (we had added two more people to the team in 2021–2022), it just wasn’t feasible.
We focused neither on a particular vertical (academics, designers, authors...) nor on a narrow use case (PDF reading/annotation, collaborative whiteboarding, design sketching...). That meant we were always spread pretty thin in terms of feature development, and marketing was difficult even over and above the problem of explaining canvas software and digital thinking tools.
being general-purpose was in its blood from birth. Part of it was maker's hubris: don't we always dream of general-purpose tools that will be everything to everyone? And part of it was that it's truly the case that Muse excels at the ability to combine together so many different related knowledge tasks and media types into a single, minimal, powerful canvas. Not sure what I would do differently here, even with the benefit of hindsight.
Muse built a lot of its reputation on being principled, but we were maybe too cautious to do the mercenary things that help you succeed. A good example here is asking users for ratings; I felt like this was not to user benefit and distracting when the user is trying to use your app. Our App Store rating was on the low side (~3.9 stars) for most of our existence. When we finally added the standard prompt-for-rating dialog, it instantly shot up to ~4.7 stars. This was a small example of being too principled about doing good for the user, and not thinking about what would benefit our business.
Growing the team slowly was a delight. At several previous ventures, I've onboarded people in the hiring-is-job-one environment of a growth startup. At Muse, we started with three founders and then hired roughly one person per year. This was absolutely fantastic for being able to really take our time to find the perfect person for the role, and then for that person to have tons of time to onboard and find their footing on the team before anyone new showed up. The resulting team was the best I've ever worked on, with minimal deadweight or emotional baggage.
ultimately your product does have to have some web presence. My biggest regret is not building a simple share-to-web function early on, which could have created some virality and a great deal of utility for users as well.
In terms of development speed, quality of the resulting product, hardware integration, and a million other things: native app development wins.
After decades working in product development, being on the marketing/brand/growth/storytelling side was a huge personal challenge for me. But I feel like I managed to grow into the role and find my own approach (podcasting, demo videos, etc) to create a beacon to attract potential customers to our product.
when it comes time for an individual or a team to sit down and sketch out the beginnings of a new business, a new book, a new piece of art—this almost never happens at a computer. Or if it does, it’s a cobbled-together collection of tools like Google Docs and Zoom which aren’t really made for this critical part of the creative lifecycle.
any given business will find a small number of highly-effective channels, and the rest don't matter. For Heroku, that was attending developer conferences and getting blog posts on Hacker News. For another business it might be YouTube influencer sponsorships and print ads in a niche magazine. So I set about systematically testing many channels.
·adamwiggins.com·
Muse retrospective by Adam Wiggins
Great Products Have Great Premises
A great premise gives users context and permission to take actions they might not otherwise take.
The most powerful thing a product can do is give its user a premise. A premise is the foundational belief that shapes a user’s behavior. A premise can normalize actions that people otherwise might not take, held back by some existing norm.
AirBnb. The premise: It’s ok to stay in strangers’ homes.
the idea of staying in strangers’ homes for short stays was doubted even by the founders. Crashing in someone’s spare room wasn’t unheard of, but it might be seen as weird, taboo, or even dangerous.
Bumble. The premise: It’s ok for women to ask men out.
The best way to follow through on a premise is to make it the core feature of the app. Bumble did, requiring that women make the first move on the app. A woman would be presented with a list of her matches and would have to make the first "move" before men could reply. This of course became a powerful differentiating feature and marketing hook.
Substack. The premise: It’s ok to charge for your writing.
Substack's premise aimed to normalize the hardest part of internet writing: getting paid. They aimed to show that independent authors could succeed at making a living (and subscription models aligned with this ethos). In doing so, Substack also made the less-hard parts of internet writing even easier. You could start a newsletter and keep it free until you felt confident about going paid. This not only normalized the end goal but also lowered the barrier to getting started.
A premise is valuable not only for “products,” but also for experiences. As I recently shouted, people still underestimate the power of giving a social event a premise. Hackathons, housewarmings, happy hours and the like are hangouts with a narrative. They have a good premise — a specific context that makes it more comfortable to do something that can be hard: socialize. (Side note: some of the best tv series and films are built on great premises.)
Premises work best on end consumers, prosumers, small business freelancers, and the like. Many two-sided marketplaces serving two of these stakeholder groups tend to have a good premise. For example, Kickstarter's premise for the creator might be: It’s ok to ask for money before you've built a product.
·workingtheorys.com·
Great Products Have Great Premises
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
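To make "memory into a choice" concrete, here is a minimal spaced-repetition scheduler in Python, loosely in the spirit of the SM-2 family of algorithms that such memory systems build on; the constants and update rule are simplified illustrations, not Quantum Country's actual implementation:

```python
# Minimal spaced-repetition scheduler (loosely SM-2-flavored, simplified).
# Intervals grow after successful recall and reset after a lapse, which is what
# turns remembering into something you schedule rather than leave to chance.

from dataclasses import dataclass

@dataclass
class Card:
    prompt: str
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # multiplier applied after successful recall

def review(card: Card, recalled: bool, quality: int = 4) -> Card:
    """Update scheduling state after one review. quality: 0 (blank) .. 5 (perfect)."""
    if not recalled:
        card.interval_days = 1.0                       # lapse: start the ladder over
    else:
        card.interval_days *= card.ease                # spread reviews further apart
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    return card

card = Card("State the no-cloning theorem.")
for recalled in [True, True, False, True]:
    card = review(card, recalled)
    print(f"next review in {card.interval_days:.1f} days (ease {card.ease:.2f})")
```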
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How can we develop transformative tools for thought?
Divine Discontent, Disruption’s Antidote
in their efforts to provide better products than their competitors and earn higher prices and margins, suppliers often “overshoot” their market: They give customers more than they need or ultimately are willing to pay for. And more importantly, it means that disruptive technologies that may underperform today, relative to what users in the market demand, may be fully performance-competitive in that same market tomorrow. This was the basis for insisting that the iPhone must have a low-price model: surely Apple would soon run out of new technology to justify the prices it charged for high-end iPhones, and consumers would start buying much cheaper Android phones instead! In fact, as I discussed in after January’s earnings results, the company has gone in the other direction: more devices per customer, higher prices per device, and an increased focus on ongoing revenue from those same customers.
Apple seems to have mostly saturated the high end, slowly adding switchers even as existing iPhone users hold on to their phones longer; what is not happening, though, is what disruption predicts: Apple isn’t losing customers to low-cost competitors for having “overshot” and overpriced its phones. It seems my thesis was right: a superior experience can never be too good — or perhaps I didn’t go far enough.
Jeff Bezos has been writing an annual letter to shareholders since 1997, and he attaches that original letter to one he pens every year. It included this section entitled Obsess Over Customers: From the beginning, our focus has been on offering our customers compelling value. We realized that the Web was, and still is, the World Wide Wait. Therefore, we set out to offer customers something they simply could not get any other way, and began serving them with books. We brought them much more selection than was possible in a physical store (our store would now occupy 6 football fields), and presented it in a useful, easy-to-search, and easy-to-browse format in a store open 365 days a year, 24 hours a day. We maintained a dogged focus on improving the shopping experience, and in 1997 substantially enhanced our store. We now offer customers gift certificates, 1-Click shopping, and vastly more reviews, content, browsing options, and recommendation features. We dramatically lowered prices, further increasing customer value. Word of mouth remains the most powerful customer acquisition tool we have, and we are grateful for the trust our customers have placed in us. Repeat purchases and word of mouth have combined to make Amazon.com the market leader in online bookselling.
This year, after highlighting just how much customers love Amazon (answer: a lot), Bezos wrote: One thing I love about customers is that they are divinely discontent. Their expectations are never static — they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before — in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.
when it comes to Internet-based services, this customer focus does not come at the expense of a focus on infrastructure or distribution or suppliers: while those were the means to customers in the analog world, in the online world controlling the customer relationship gives a company power over its suppliers, the capital to build out infrastructure, and control over distribution. Bezos is not so much choosing to prioritize customers insomuch as he has unlocked the key to controlling value chains in an era of aggregation.
consumer expectations are not static: they are, as Bezos’ memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.
In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine
Owning the customer relationship by means of delivering a superior experience is how these companies became dominant, and, when they fall, it will be because consumers deserted them, either because the companies lost control of the user experience (a danger for Facebook and Google), or because a paradigm shift made new experiences matter more (a danger for Google and Apple).
·stratechery.com·
Divine Discontent, Disruption’s Antidote
Generative AI’s Act Two
This page also has many infographics providing an overview of different aspects of the AI industry at time of writing.
We still believe that there will be a separation between the “application layer” companies and foundation model providers, with model companies specializing in scale and research and application layer companies specializing in product and UI. In reality, that separation hasn’t cleanly happened yet. In fact, the most successful user-facing applications out of the gate have been vertically integrated.
We predicted that the best generative AI companies could generate a sustainable competitive advantage through a data flywheel: more usage → more data → better model → more usage. While this is still somewhat true, especially in domains with very specialized and hard-to-get data, the “data moats” are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundation models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage.
Some of the best consumer companies have 60-65% DAU/MAU; WhatsApp’s is 85%. By contrast, generative AI apps have a median of 14% (with the notable exception of Character and the “AI companionship” category). This means that users are not finding enough value in Generative AI products to use them every day yet.
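For reference, DAU/MAU ("stickiness") is just the average share of a month's active users who show up on a given day; a tiny Python sketch with an invented event log:

```python
# DAU/MAU stickiness from a toy event log of (user_id, day_of_month) pairs.
# The log is invented; real pipelines aggregate per-day active-user sets.

from collections import defaultdict

events = [("a", 1), ("a", 2), ("a", 3), ("b", 1), ("b", 15), ("c", 7)]

users_by_day: dict[int, set[str]] = defaultdict(set)
for user, day in events:
    users_by_day[day].add(user)

monthly_active_users = {user for user, _ in events}
avg_dau = sum(len(users) for users in users_by_day.values()) / 30  # 30-day month
print(f"DAU/MAU = {avg_dau / len(monthly_active_users):.0%}")  # ~7% for this toy log
```

By that measure, the 14% median cited above means the typical generative AI app sees its average monthly user roughly four days a month, versus most days for WhatsApp.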
generative AI’s biggest problem is not finding use cases or demand or distribution; it is proving value. As our colleague David Cahn writes, “the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?”
·sequoiacap.com·
Generative AI’s Act Two
Seven Rules For Internet CEOs To Avoid Enshittification
Seven Rules For Internet CEOs To Avoid Enshittification
People forget that when Bezos introduced Amazon Prime, Wall St. flipped out because they insisted that it would cost way too much for too little benefit. But, through it all, Amazon survived (and thrived) because Bezos just kept telling investors exactly what his plan was, and never backed down, no matter what Wall St. kept saying to him.
This is too easily forgotten, but your users are everything if you run an internet business. They’re not “the product.” They’re what makes your site useful and valuable, and they often provide the best marketing you could never buy by convincing others to join, as well as providing you with all of the best ideas on how to improve things and make your service even better for the users.
As you’re developing a business model, the best way to ensure you’re serving your users well, and not enshittifying everything, is to constantly make sure you’re only capturing some of the value you’re creating, and are instead putting much more out into the world, especially for your community.
Push the power to make your service better out from the service to the users themselves and watch what they do. Let them build. Let them improve your service. Let them make it work better for you. But you have to have some trust here. If you’re focused on “Rule 3,” you have to recognize that sometimes your users will create value that you don’t capture. Or even that someone else captures. But in the long run, it still flows back to you, as it makes your service that much more valuable.
If you’re charging for something that was once free, you’re taking away value from your community. You’re changing the nature of the bargain, and ripping away the trust that your community put in you. Instead, always look for something new that is worth paying for above and beyond what you already offered.
There are ways to monetize that don’t need to overwhelm, that don’t need to suck up every bit of data, that don’t need to rely on taking away features users relied on. Focus on adding more scarce value, and figuring out ways to charge for those new things which can’t be easily replicated.
You start learning acronyms like “ARPU” (average revenue per user) and such. And then you’re being measured on how much you’re increasing those metrics, which means you need to squeeze more out of each individual user, and you’re now deep within the enshittification stage, in which you’re trying to squeeze your users for more money each quarter (because now everything is judged on how well you did in the last 3 months to improve that number).
·techdirt.com·
Seven Rules For Internet CEOs To Avoid Enshittification
Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
it is a seemingly inevitable consequence arising from the ease of changing how a platform allocates value, combined with the nature of a “two-sided market,” where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.
Today, Marketplace sellers are handing 45%+ of the sale price to Amazon in junk fees. The company's $31b "advertising" program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search.
Search Amazon for "cat beds" and the entire first screen is ads, including ads for products Amazon cloned from its own sellers, putting them out of business (third parties have to pay 45% in junk fees to Amazon, but Amazon doesn't charge itself these fees).
This is enshittification: surpluses are first directed to users; then, once they're locked in, surpluses go to suppliers; then once they're locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit.
This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds.
Today, Facebook is terminally enshittified, a terrible place to be whether you're a user, a media company, or an advertiser. It's a company that deliberately demolished a huge fraction of the publishers it relied on, defrauding them into a "pivot to video" based on false claims of the popularity of video among Facebook users. Companies threw billions into the pivot, but the viewers never materialized, and media outlets folded in droves.
These videos go into Tiktok users’ ForYou feeds, which Tiktok misleadingly describes as being populated by videos “ranked by an algorithm that predicts your interests based on your behavior in the app.” In reality, For You is only sometimes composed of videos that Tiktok thinks will add value to your experience – the rest of the time, it's full of videos that Tiktok has inserted in order to make creators think that Tiktok is a great place to reach an audience.
“Sources told Forbes that TikTok has often used heating to court influencers and brands, enticing them into partnerships by inflating their videos’ view count.”
"Monetize" is a terrible word that tacitly admits that there is no such thing as an "Attention Economy." You can't use attention as a medium of exchange. You can't use it as a store of value. You can't use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with "fiat" currency in exchange for it.
The algorithm creates the conditions that make ads necessary.
For Tiktok, handing out free teddy-bears by "heating" the videos posted by skeptical performers and media companies is a way to convert them to true believers, getting them to push all their chips into the middle of the table, abandoning their efforts to build audiences on other platforms (it helps that Tiktok's format is distinctive, making it hard to repurpose videos for Tiktok to circulate on rival platforms).
every time Tiktok shows you a video you asked to see, it loses a chance to show you a video it wants you to see.
I just handed Twitter $8 for Twitter Blue, because the company has strongly implied that it will only show the things I post to the people who asked to see them if I pay ransom money.
Compuserve could have "monetized" its own version of Caller ID by making you pay $2.99 extra to see the "From:" line on email before you opened the message – charging you to know who was speaking before you started listening – but they didn't.
Useful idiots on the right were tricked into thinking that the risk of Twitter mismanagement was "woke shadowbanning," whereby the things you said wouldn't reach the people who asked to hear them because Twitter's deep state didn't like your opinions. The real risk, of course, is that the things you say won't reach the people who asked to hear them because Twitter can make more money by enshittifying their feeds and charging you ransom for the privilege to be included in them.
Individual product managers, executives, and activist shareholders all give preference to quick returns at the cost of sustainability, and are in a race to see who can eat their seed-corn first. Enshittification has only lasted for as long as it has because the internet has devolved into “five giant websites, each filled with screenshots of the other four.”
policymakers should focus on freedom of exit – the right to leave a sinking platform while continuing to stay connected to the communities that you left behind, enjoying the media and apps you bought, and preserving the data you created.
technological self-determination is at odds with the natural imperatives of tech businesses. They make more money when they take away our freedom – our freedom to speak, to leave, to connect.
even Tiktok's critics grudgingly admitted that no matter how surveillant and creepy it was, it was really good at guessing what you wanted to see. But Tiktok couldn't resist the temptation to show you the things it wants you to see, rather than what you want to see.
·pluralistic.net·
Tiktok’s enshittification (21 Jan 2023) – Pluralistic: Daily links from Cory Doctorow
Writers On Set | Not a Blog
Writers On Set | Not a Blog
I wrote five scripts during my season and a half on TZ, and I was deeply involved in every aspect of every one of them.   I did not just write my script, turn it in, and go away.   I sat in on the casting sessions.   I worked with the directors.   I was present at the table reads.   “The Last Defender of Camelot” was the first of my scripts to go into production, and I was on set every day.   I watched the stuntmen rehearse the climactic sword fight (in the lobby of the ST ELSEWHERE set, as it turned out), and I was present when they shot that scene and someone zigged when he should have zagged and a stuntman’s nose was cut off… a visceral lesson as to the kind of thing that can go wrong.   With Phil and Jim and Harvey Frand (our line producer, another great guy who taught me a lot), I watched dailies every day.    After the episode was in the can, I sat in on some post-production, and watched the editors work their magic.   I learned from them too.
Streamers and shortened seasons have blown the ladder to splinters. The way it works now, a show gets put in development and the showrunner assembles a “mini-room,” made up of a couple of senior writers and a couple of newcomers. They meet for a month or two, beat out the season, break down the episodes, go off and write scripts, reassemble, get notes, give notes, rewrite, rinse and repeat… and finally turn in the scripts. And the show is greenlit (or not, some shows never get past the room) and sent into production. The showrunner and his second, maybe his second and his third, take it from there. The writer-producers. The ones who already know all the things that I learned on TWILIGHT ZONE. The junior writers? They’re not there. Once they delivered their scripts and did a revision or two, they were paid, sent home, their salary ended. They are off looking for another gig.
In many cases they won’t be asked to set even when the episodes they wrote are being filmed.   (They may be ALLOWED on set, if the showrunner and execs are cool with that, but only as a visitor, with no authority, no role.   And no pay, of course.   They may even be told they are not allowed to speak to the actors).
One of the things the AMPTP put forward in their last offer to the WGA is that some writers might be brought onto sets as unpaid interns, to “shadow” and “observe.”   Even that will not be an absolute right.   Maybe they will be let in, maybe not.   These are the people who wrote the stories being filmed, who created the characters, who wrote the words the actors are saying.   I was WAY more than that in 1985, and so was every other staff writer in television at the time.
Mini-rooms are abominations, and the refusal of the AMPTP to pay writers to stay with their shows through production — as part of the JOB, for which they need to be paid, not as a tourist —  is not only wrong, it is incredibly short sighted.   If the Story Editors of 2023 are not allowed to get any production experience, where do the studios think the Showrunners of 2033 are going to come from?
·georgerrmartin.com·
Writers On Set | Not a Blog
Inside Amazon Studios: Big Swings Hampered by Confusion and Frustration
Inside Amazon Studios: Big Swings Hampered by Confusion and Frustration
numerous sources say they cannot discern what kind of material Salke and head of television Vernon Sanders want to make. A showrunner with ample experience at the studio says, “There’s no vision for what an Amazon Prime show is. You can’t say, ‘They stand for this kind of storytelling.’ It’s completely random what they make and how they make it.” Another showrunner with multiple series at Amazon finds it baffling that the streamer hasn’t had more success: Amazon has “more money than God,” this person says. “If they wanted to produce unbelievable television, they certainly have the resources to do it.”
·hollywoodreporter.com·
Inside Amazon Studios: Big Swings Hampered by Confusion and Frustration
Netflix’s New Chapter
Netflix’s New Chapter
Blockbuster responded by pricing Blockbuster Online 50 cents cheaper, accelerating Netflix’s stock slide. Netflix, though, knew that Blockbuster was carrying $1 billion in debt from its spin-off from Viacom, and decided to wait it out; Blockbuster cut the price again, taking an increasing share of new subscribers, and still Netflix waited.
·stratechery.com·
Netflix’s New Chapter
What China, Marvel, and Avatar Tell Us About the Future of Blockbuster Franchises — MatthewBall.vc
What China, Marvel, and Avatar Tell Us About the Future of Blockbuster Franchises — MatthewBall.vc
Swelling trade tensions and the rise of “direct-to-consumer” platforms were bound to heighten the scrutiny on the import of mass media cultural products. But it’s also notable that the Marvel movies that did gain admittance in China were led by six heroes (The Avengers), five of whom were employed by the American military (with the sole outlier being an extraterrestrial) and all of whom were white. The current, rejected leads are more diverse in vocation, American allegiance, and ethnicity (among other attributes).
In 2017, Disney began a marketing integration with aerospace and defense giant Northrop Grumman, encouraging those who used Google to research the fictional American defense contractor Stark Industries to join something like the real thing.
Avatar’s unprecedented achievements require us to examine not just its technological innovations, but also its narrative. The film’s “protagonist humans” are classic Western archetypes such as the taciturn soldier and the driven scientist. The villains are archetypes as well, but they are also particularly close to foreign caricatures of evil Americans: the tough-as-nails, violence-prone colonel and pillage-the-earth corporate executive. Furthermore, Avatar’s overarching message is one of collectivism, spiritualism, and alignment with nature. At the end of the movie, each of the Western heroes literally shed their individual identities (and white bodies) to become part of the cooperative aboriginal mind and save the day.
·matthewball.vc·
What China, Marvel, and Avatar Tell Us About the Future of Blockbuster Franchises — MatthewBall.vc