Found 48 bookmarks
Ask HN: Can I really create a company around my open-source software? | Hacker News
I get that you've worked on this for months, that you're burned out generally, and now unemployed. So this comment is not meant as "mean" but rather offered in the spirit of encouragement.

Firstly, building a business (especially in a crowded space) is stressful. It's not a place to recover from burnout. It's not a place that reduces anxiety. So my first recommendation is to relax a bit, put this on the back burner, and when you're ready go look for your next job.

Secondly, treat this project as an education. You had an idea and spent months implementing it. That's the easy part. The hard part is finding a market willing to pay money for something. So for your next project do the hard part first. First find a market, find out what they will spend, ideally collect a small deposit (to prove they're serious), and then go from there.

In my business we have 3 main product lines. The first 2 happened because the market paid us to build a solution. We iterated on those for 30 years, and we now are big players (in very niche spaces.) The 3rd happened as a take-over of a project by another retiring developer. He had a few customers, and a good product, but in a crowded space where there's lots of reasons not to change. It's taken many years to build it out, despite being clearly better than the competition, and it's still barely profitable (if you ignore a bunch of expenses paid by the whole business.)

The lesson being to follow the money, not the idea. (Aside: early on we followed some ideas; all those projects died, most without generating any revenue.)

So congratulations on seeing something through to release. But turning a product into a business is really hard. Turning a commodity like this into a business is almost impossible. I wish you well in your future endeavors.
For a major commercial product I visited similar markets to ours, knocked on the doors of distributors, tried to find people who wanted to integrate our product into their market. I failed a lot but succeeded twice, and those 2 have been paying us lots of money every year for 20 years as they make sales.

Your approach may vary. Start locally. Talk to shopkeepers, restaurants, businesses, charities, schools and so on. Look for markets that are not serviced (which is different from where the person is just too cheap, or averse to tech for other reasons.) Of course it's a LOT harder now to find unserviced markets. There's a lot more software out there now than there was when I started out.

Ultimately though it's about connecting with people - real people, not just sending out spam emails. And so meeting the right person at the right time is "lucky". But if you're not out there, luck can't work with you. You need to give luck a chance.
·news.ycombinator.com·
DeepSeek isn't a victory for the AI sceptics
we now know that as the price of computing equipment fell, new use cases emerged to fill the gap – which is why today my lightbulbs have semiconductors inside them, and I occasionally have to install firmware updates for my doorbell.
surely the compute freed up by more efficient models will be used to train models even harder, and apply even more “brain power” to coming up with responses? Even if DeepSeek is dramatically more efficient, the logical thing to do will be to use the excess capacity to ensure the answers are even smarter.
Sure, if DeepSeek heralds a new era of much leaner LLMs, it's not great news in the short term if you're a shareholder in Nvidia, Microsoft, Meta or Google. But if DeepSeek is the enormous breakthrough it appears, it just became even cheaper to train and use the most sophisticated models humans have so far built, by one or more orders of magnitude. Which is amazing news for big tech, because it means that AI usage is going to be even more ubiquitous.
·takes.jamesomalley.co.uk·
Your "Per-Seat" Margin is My Opportunity
Your "Per-Seat" Margin is My Opportunity

Traditional software is sold on a per-seat subscription: more humans, more money. We are headed to a future where AI agents will replace the work humans do. But you can't charge agents a per-seat cost. So we're headed to a world where software will be sold on a consumption model (think tasks) and then on an outcome model (think job completed). Incumbents will be forced to adapt, but it's the classic innovator's dilemma: how do you suddenly give up all that subscription revenue? This gives startups an opportunity to win.

Per-seat pricing only works when your users are human. But when agents become the primary users of software, that model collapses.
Executives aren't evaluating software against software anymore. They're comparing the combined costs of software licenses plus labor against pure outcome-based solutions. Think customer support (per resolved ticket vs. per agent + seat), marketing (per campaign vs. headcount), sales (per qualified lead vs. rep). That's your pricing umbrella—the upper limit enterprises will pay before switching entirely to AI.
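The "pricing umbrella" reduces to simple arithmetic. The sketch below is illustrative only: the seat costs, labor costs, and ticket volumes are invented assumptions, not figures from the piece; only the licenses-plus-labor framing comes from the text.

```python
# The pricing umbrella: the most an enterprise will pay an outcome-based
# AI vendor is bounded by its current spend on software seats plus the
# labor those seats support. All numbers below are hypothetical.

def pricing_umbrella(seats: int, seat_cost: float, labor_cost: float) -> float:
    """Annual ceiling for an outcome-priced alternative: licenses + labor."""
    return seats * seat_cost + seats * labor_cost

def outcome_price_ceiling(umbrella: float, outcomes_per_year: int) -> float:
    """Max per-outcome price before switching to AI stops making sense."""
    return umbrella / outcomes_per_year

# Example: a 50-agent support team, $1,200/seat/year software, $60,000
# loaded labor cost per agent, resolving 100,000 tickets a year.
umbrella = pricing_umbrella(seats=50, seat_cost=1_200, labor_cost=60_000)
ceiling = outcome_price_ceiling(umbrella, outcomes_per_year=100_000)
print(f"Umbrella: ${umbrella:,.0f}/yr; per-ticket ceiling: ${ceiling:.2f}")
# → Umbrella: $3,060,000/yr; per-ticket ceiling: $30.60
```

An outcome-priced vendor charging anything under roughly $30 per resolved ticket undercuts the combined software-plus-labor cost in this hypothetical, which is the comparison the excerpt says executives are now making.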
enterprises are used to deterministic outcomes and fixed annual costs. Usage-based pricing makes budgeting harder. But individual leaders seeing 10x efficiency gains won't wait for procurement to catch up. Savvy managers will find ways around traditional buying processes.
This feels like a generational reset of how businesses operate. Zero upfront costs, pay only for outcomes—that's not just a pricing model. That's the future of business.
The winning strategy in my books? Give the platform away for free. Let your agents read and write to existing systems through unstructured data—emails, calls, documents. Once you handle enough workflows, you become the new system of record.
·writing.nikunjk.com·
Your "Per-Seat" Margin is My Opportunity
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up. “There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol. Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?” Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
$700bn delusion - Does using data to target specific audiences make advertising more effective?
Being broadly effective, but somewhat inefficient, is better than being narrowly efficient, but less effective.
Targeting can increase the scale of effects, but this study suggests that the cheaper approach of not targeting so specifically might actually deliver a greater financial outcome.
As Wiberg’s findings point out, the problem with targeting towards conversion optimisation is that you are effectively advertising to many people who were already going to buy from you.
If I only sell to IT decision-makers, for example, I need some targeting, as I just can’t afford to talk to random consumers. I must pay for some targeting in my media buy in order to reach a relatively niche audience. Targeting is no longer a nice-to-have but a must-have. The interesting question then becomes not “should I target?” but “how can I target effectively?”
What they found was that any form of second- or third-party-data-led segmenting and targeting of advertising does not outperform a random sample when it comes to accuracy of reaching the actual target.
Contextual ads massively outperform even first party data
We can improve the quality of our targeting far more by just buying ads that appear in the right context than by using a massive first-party database to drive the buy, and it’s way cheaper to do that. Putting ads in contextually relevant places beats any form of targeting to individual characteristics. Even using your own data.
The secret to effective, immediate action-based advertising is perhaps not so much about finding the right people with the right personas and serving them a tailored, customised message. It’s to be in the right places: the places where they are already engaging with your category, and then use advertising to make buying easier from that place.
Even hard, sales-driving advertising isn’t the tough guy we want it to be. Advertising mostly works when it makes things easier, much more often than when it tries to persuade or invoke a reluctant action.
Thinking about advertising as an ease-making mechanism is much more likely to set us on the right path.
If your ad is in the right place, you automatically get the right people, and you also get them at the right time: when they are actually more interested in what you have to sell. You also spend much less to be there than by crunching all that data.
·archive.is·
What Apple's AI Tells Us: Experimental Models⁴
Companies are exploring various approaches, from large, less constrained frontier models to smaller, more focused models that run on devices. Apple's AI focuses on narrow, practical use cases and strong privacy measures, while companies like OpenAI and Anthropic pursue the goal of AGI.
the most advanced generalist AI models often outperform specialized models, even in the specific domains those specialized models were designed for. That means that if you want a model that can do a lot - reason over massive amounts of text, help you generate ideas, write in a non-robotic way — you want to use one of the three frontier models: GPT-4o, Gemini 1.5, or Claude 3 Opus.
Working with advanced models is more like working with a human being, a smart one that makes mistakes and has weird moods sometimes. Frontier models are more likely to do extraordinary things but are also more frustrating and often unnerving to use. Contrast this with Apple’s narrow focus on making AI get stuff done for you.
Every major AI company argues the technology will evolve further and has teased mysterious future additions to their systems. In contrast, what we are seeing from Apple is a clear and practical vision of how AI can help most users, without a lot of effort, today. In doing so, they are hiding much of the power, and quirks, of LLMs from their users. Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
·oneusefulthing.org·
AI Integration and Modularization
Summary: The question of integration versus modularization in the context of AI, drawing on the work of economists Ronald Coase and Clayton Christensen. Google is pursuing a fully integrated approach similar to Apple, while AWS is betting on modularization, and Microsoft and Meta are somewhere in between. Integration may provide an advantage in the consumer market and for achieving AGI, but for enterprise AI a more modular approach, leveraging data gravity and treating models as commodities, may prevail. Ultimately, the biggest beneficiary of this dynamic could be Nvidia.
The left side of figure 5-1 indicates that when there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
The issue I have with this analysis of vertical integration — and this is exactly what I was taught at business school — is that the only considered costs are financial. But there are other, more difficult to quantify costs. Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated.
Google trains and runs its Gemini family of models on its own TPU processors, which are only available on Google’s cloud infrastructure. Developers can access Gemini through Vertex AI, Google’s fully-managed AI development platform; and, to the extent Vertex AI is similar to Google’s internal development environment, that is the platform on which Google is building its own consumer-facing AI apps. It’s all Google, from top-to-bottom, and there is evidence that this integration is paying off: Gemini 1.5’s industry leading 2 million token context window almost certainly required joint innovation between Google’s infrastructure team and its model-building team.
In AI, Google is pursuing an integrated strategy, building everything from chips to models to applications, similar to Apple's approach in smartphones.
On the other extreme is AWS, which doesn’t have any of its own models; instead its focus has been on its Bedrock managed development platform, which lets you use any model. Amazon’s other focus has been on developing its own chips, although the vast majority of its AI business runs on Nvidia GPUs.
Microsoft is in the middle, thanks to its close ties to OpenAI and its models. The company added Azure Models-as-a-Service last year, but its primary focus for both external customers and its own internal apps has been building on top of OpenAI’s GPT family of models; Microsoft has also launched its own chip for inference, but the vast majority of its workloads run on Nvidia.
Google is certainly building products for the consumer market, but those products are not devices; they are Internet services. And, as you might have noticed, the historical discussion didn’t really mention the Internet. Both Google and Meta, the two biggest winners of the Internet epoch, built their services on commodity hardware. Granted, those services scaled thanks to the deep infrastructure work undertaken by both companies, but even there Google’s more customized approach has been at least rivaled by Meta’s more open approach. What is notable is that both companies are integrating their models and their apps, as is OpenAI with ChatGPT.
Google's integrated AI strategy is unique but may not provide a sustainable advantage for Internet services in the way Apple's integration does for devices
It may be the case that selling hardware, which has to be perfect every year to justify a significant outlay of money by consumers, provides a much better incentive structure for maintaining excellence and execution than does being an Aggregator that users access for free.
Google’s collection of moonshots — from Waymo to Google Fiber to Nest to Project Wing to Verily to Project Loon (and the list goes on) — have mostly been science projects that have, for the most part, served to divert profits from Google Search away from shareholders. Waymo is probably the most interesting, but even if it succeeds, it is ultimately a car service rather far afield from Google’s mission statement “to organize the world’s information and make it universally accessible and useful.”
The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie [Google’s rumored Pixel-only AI assistant] would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.
the fact that Google is being mocked mercilessly for messed-up AI answers gets at why consumer-facing AI may be disruptive for the company: the reason why incumbents find it hard to respond to disruptive technologies is because they are, at least at the beginning, not good enough for the incumbent’s core offering. Time will tell if this gives more fuel to a shift in smartphone strategies, or makes the company more reticent.
while I was very impressed with Google’s enterprise pitch, which benefits from its integration with Google’s infrastructure without all of the overhead of potentially disrupting the company’s existing products, it’s going to be a heavy lift to overcome data gravity, i.e. the fact that many enterprise customers will simply find it easier to use AI services on the same clouds where they already store their data (Google does, of course, also support non-Gemini models and Nvidia GPUs for enterprise customers). To the extent Google wins in enterprise it may be by capturing the next generation of startups that are AI first and, by definition, data light; a new company has the freedom to base its decision on infrastructure and integration.
Amazon is certainly hoping that argument is correct: the company is operating as if everything in the AI value chain is modular and ultimately a commodity, which insinuates that it believes that data gravity will matter most. What is difficult to separate is to what extent this is the correct interpretation of the strategic landscape versus a convenient interpretation of the facts that happens to perfectly align with Amazon’s strengths and weaknesses, including infrastructure that is heavily optimized for commodity workloads.
Unclear if Amazon's strategy is based on true insight or motivated reasoning based on their existing strengths
Meta’s open source approach to Llama: the company is focused on products, which do benefit from integration, but there are also benefits that come from widespread usage, particularly in terms of optimization and complementary software. Open source accrues those benefits without imposing any incentives that detract from Meta’s product efforts (and don’t forget that Meta is receiving some portion of revenue from hyperscalers serving Llama models).
The iPhone maker, like Amazon, appears to be betting that AI will be a feature or an app; like Amazon, it’s not clear to what extent this is strategic foresight versus motivated reasoning.
achieving something approaching AGI, whatever that means, will require maximizing every efficiency and optimization, which rewards the integrated approach.
the most value will be derived from building platforms that treat models like processors, delivering performance improvements to developers who never need to know what is going on under the hood.
·stratechery.com·
The Life and Death of Hollywood, by Daniel Bessner
now the streaming gold rush—the era that made Dickinson—is over. In the spring of 2022, the Federal Reserve began raising interest rates after years of nearly free credit, and at roughly the same time, Wall Street began calling in the streamers’ bets. The stock prices of nearly all the major companies with streaming platforms took precipitous falls, and none have rebounded to their prior valuation.
Thanks to decades of deregulation and a gush of speculative cash that first hit the industry in the late Aughts, while prestige TV was climbing the rungs of the culture, massive entertainment and media corporations had been swallowing what few smaller companies remained, and financial firms had been infiltrating the business, moving to reduce risk and maximize efficiency at all costs, exhausting writers in evermore unstable conditions.
The new effective bosses of the industry—colossal conglomerates, asset-management companies, and private-equity firms—had not been simply pushing workers too hard and grabbing more than their fair share of the profits. They had been stripping value from the production system like copper pipes from a house—threatening the sustainability of the studios themselves. Today’s business side does not have a necessary vested interest in “the business”—in the health of what we think of as Hollywood, a place and system in which creativity is exchanged for capital. The union wins did not begin to address this fundamental problem.
To the new bosses, the quantity of money that studios had been spending on developing screenplays—many of which would never be made—was obvious fat to be cut, and in the late Aughts, executives increasingly began offering one-step deals, guaranteeing only one round of pay for one round of work. Writers, hoping to make it past Go, began doing much more labor—multiple steps of development—for what was ostensibly one step of the process. In separate interviews, Dana Stevens, writer of The Woman King, and Robin Swicord described the change using exactly the same words: “Free work was encoded.” So was safe material. In an effort to anticipate what a studio would green-light, writers incorporated feedback from producers and junior executives, constructing what became known as producer’s drafts. As Rodman explained it: “Your producer says to you, ‘I love your script. It’s a great first draft. But I know what the studio wants. This isn’t it. So I need you to just make this protagonist more likable, and blah, blah, blah.’ And you do it.”
By 2019, the major Hollywood agencies had been consolidated into an oligopoly of four companies that controlled more than 75 percent of WGA writers’ earnings. And in the 2010s, high finance reached the agencies: by 2014, private equity had acquired Creative Artists Agency and William Morris Endeavor, and the latter had purchased IMG. Meeting benchmarks legible to the new bosses—deals actually made, projects off the ground—pushed agents to function more like producers, and writers began hearing that their asking prices were too high.
Executives, meanwhile, increasingly believed that they’d found their best bet in “IP”: preexisting intellectual property—familiar stories, characters, and products—that could be milled for scripts. As an associate producer of a successful Aughts IP-driven franchise told me, IP is “sort of a hedge.” There’s some knowledge of the consumer’s interest, he said. “There’s a sort of dry run for the story.” Screenwriter Zack Stentz, who co-wrote the 2011 movies Thor and X-Men: First Class, told me, “It’s a way to take risk out of the equation as much as possible.”
Multiple writers I spoke with said that selecting preexisting characters and cinematic worlds gave executives a type of psychic edge, allowing them to claim a degree of creative credit. And as IP took over, the perceived authority of writers diminished. Julie Bush, a writer-producer for the Apple TV+ limited series Manhunt, told me, “Executives get to feel like the author of the work, even though they have a screenwriter, like me, basically create a story out of whole cloth.” At the same time, the biggest IP success story, the Marvel Cinematic Universe, by far the highest-earning franchise of all time, pioneered a production apparatus in which writers were often separated from the conception and creation of a movie’s overall story.
Joanna Robinson, co-author of the book MCU: The Reign of Marvel Studios, told me that the writers for WandaVision, a Marvel show for Disney+, had to craft almost the entirety of the series’ single season without knowing where their work was ultimately supposed to arrive: the ending remained undetermined, because executives had not yet decided what other stories they might spin off from the show.
The streaming ecosystem was built on a wager: high subscriber numbers would translate to large market shares, and eventually, profit. Under this strategy, an enormous amount of money could be spent on shows that might or might not work: more shows meant more opportunities to catch new subscribers. Producers and writers for streamers were able to put ratings aside, which at first seemed to be a luxury. Netflix paid writers large fees up front, and guaranteed that an entire season of a show would be produced. By the mid-2010s, the sheer quantity of series across the new platforms—what’s known as “Peak TV”—opened opportunities for unusually offbeat projects (see BoJack Horseman, a cartoon for adults about an equine has-been sitcom star), and substantially more shows created by women and writers of color. In 2009, across cable, broadcast, and streaming, 189 original scripted shows aired or released new episodes; in 2016, that number was 496. In 2022, it was 849.
supply soon overshot demand. For those who beat out the competition, the work became much less steady than it had been in the pre-streaming era. According to insiders, in the past, writers for a series had usually been employed for around eight months, crafting long seasons and staying on board through a show’s production. Junior writers often went to the sets where their shows were made and learned how to take a story from the page to the screen—how to talk to actors, how to stay within budget, how to take a studio’s notes—setting them up to become showrunners. Now, in an innovation called mini-rooms, reportedly first ventured by cable channels such as AMC and Starz, fewer writers were employed for each series and for much shorter periods—usually eight to ten weeks but as little as four.
Writers in the new mini-room system were often dismissed before their series went to production, which meant that they rarely got the opportunity to go to set and weren’t getting the skills they needed to advance. Showrunners were left responsible for all writing-related tasks when these rooms shut down. “It broke a lot of showrunners,” the A-list film and TV writer told me. “Physically, mentally, financially. It also ruined a lot of shows.”
The price of entry for working in Hollywood had been high for a long time: unpaid internships, low-paid assistant jobs. But now the path beyond the entry level was increasingly unclear. Jason Grote, who was a staff writer on Mad Men and who came to TV from playwriting, told me, “It became like a hobby for people, or something more like theater—you had your other day jobs or you had a trust fund.” Brenden Gallagher, a TV writer a decade in, said, “There are periods of time where I work at the Apple Store. I’ve worked doing data entry, I’ve worked doing research, I’ve worked doing copywriting.” Since he’d started in the business in 2014, in his mid-twenties, he’d never had more than eight months at a time when he didn’t need a source of income from outside the industry.
“There was this feeling,” the head of the midsize studio told me that day at Soho House, “during the last ten years or so, of, ‘Oh, we need to get more people of color in writers’ rooms.’ ” But what you get now, he said, is the black or Latino person who went to Harvard. “They’re getting the shot, but you don’t actually see a widening of the aperture to include people who grew up poor, maybe went to a state school or not even, and are just really talented. That has not happened at all.”
“The Sopranos does not exist without David Chase having worked in television for almost thirty years,” Blake Masters, a writer-producer and creator of the Showtime series Brotherhood, told me. “Because The Sopranos really could not be written by somebody unless they understood everything about television, and hated all of it.” Grote said much the same thing: “Prestige TV wasn’t new blood coming into Hollywood as much as it was a lot of veterans that were never able to tell these types of stories, who were suddenly able to cut through.”
The threshold for receiving the viewership-based streaming residuals is also incredibly high: a show must be viewed by at least 20 percent of a platform’s domestic subscribers “in the first 90 days of release, or in the first 90 days in any subsequent exhibition year.” As Bloomberg reported in November, fewer than 5 percent of the original shows that streamed on Netflix in 2022 would have met this benchmark. “I am not impressed,” the A-list writer told me in January. Entry-level TV staffing, where more and more writers are getting stuck, “is still a subsistence-level job,” he said. “It’s a job for rich kids.”
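The residual threshold quoted above reduces to a one-line comparison. In the sketch below, the 20 percent share is the only detail taken from the text; the subscriber and viewership figures are hypothetical.

```python
# The viewership-based streaming residual rule described in the article:
# a show qualifies only if viewed by at least 20% of a platform's domestic
# subscribers within a 90-day window. Specific numbers are hypothetical.

RESIDUAL_SHARE = 0.20  # the 20% threshold from the article

def qualifies_for_residual(viewers: int, domestic_subscribers: int) -> bool:
    """True if the show clears the 20%-of-subscribers bar."""
    return viewers >= RESIDUAL_SHARE * domestic_subscribers

# A platform with 80M domestic subscribers sets the bar at 16M viewers.
print(qualifies_for_residual(12_000_000, 80_000_000))  # False: 12M < 16M
print(qualifies_for_residual(16_000_000, 80_000_000))  # True: exactly at the bar
```

At that bar, Bloomberg's finding that fewer than 5 percent of Netflix's 2022 originals would have qualified is unsurprising.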
Brenden Gallagher, who echoed Conover’s belief that the union was well-positioned to gain more in 2026, put it this way: “My view is that there was a lot of wishful thinking about achieving this new middle class, based around, to paraphrase 30 Rock, making it 1997 again through science or magic. Will there be as big a working television-writer cohort that is making six figures a year consistently living in Los Angeles as there was from 1992 to 2021? No. That’s never going to come back.”
As for what types of TV and movies can get made by those who stick around, Kelvin Yu, creator and showrunner of the Disney+ series American Born Chinese, told me: “I think that there will be an industry move to the middle in terms of safer, four-quadrant TV.” (In L.A., a “four-quadrant” project is one that aims to appeal to all demographics.) “I think a lot of people,” he said, “who were disenfranchised or marginalized—their drink tickets are up.” Indeed, multiple writers and executives told me that following the strike, studio choices have skewed even more conservative than before. “It seems like buyers are much less adventurous,” one writer said. “Buyers are looking for Friends.”
The film and TV industry is now controlled by only four major companies, and it is shot through with incentives to devalue the actual production of film and television.
The entertainment and finance industries spend enormous sums lobbying both parties to maintain deregulation and prioritize the private sector. Writers will have to fight the studios again, but for more sweeping reforms.

One change in particular has the potential to flip the power structure of the industry on its head: writers could demand to own complete copyright for the stories they create. They currently have something called “separated rights,” which allow a writer to use a script and its characters for limited purposes. But if they were to retain complete copyright, they would have vastly more leverage.

Nearly every writer I spoke with seemed to believe that this would present a conflict with the way the union functions. This point is complicated and debatable, but Shawna Kidman and the legal expert Catherine Fisk—both preeminent scholars of copyright and media—told me that the greater challenge is Hollywood’s structure. The business is currently built around studio ownership. While Kidman found the idea of writer ownership infeasible, Fisk said it was possible, though it would be extremely difficult. Pushing for copyright would essentially mean going to war with the studios. But if things continue on their current path, writers may have to weigh such hazards against the prospect of the end of their profession. Or, they could leave it all behind.
·harpers.org·
LinkedIn is not a social or professional network, it's a learning network
Maybe one frame is through taking control of your own personal development and learning: after all “learning is the one thing your employer can’t take away from you”
Over the years we’ve seen the rise of bro-etry and cringe “thought leadership” and crying CEOs. When I scroll my feed I have to sidestep the clearly threadboi and #personalbrand engagement-farming posts and try and focus on the real content.
Networking is useful, but distasteful to many. Instead, participating in self-directed learning communities is networking
“Don’t become a marketing manager, become someone who knows how to run user research”
·tomcritchlow.com·
LinkedIn is not a social or professional network, it's a learning network
AI startups require new strategies
AI startups require new strategies

comment from Habitue on Hacker News: > These are some good points, but it doesn't seem to mention a big way in which startups disrupt incumbents, which is that they frame the problem a different way, and they don't need to protect existing revenue streams.

The “hard tech” in AI are the LLMs available for rent from OpenAI, Anthropic, Cohere, and others, or available as open source with Llama, Bloom, Mistral and others. The hard-tech is a level playing field; startups do not have an advantage over incumbents.
There can be differentiation in prompt engineering, problem break-down, use of vector databases, and more. However, this isn’t something where startups have an edge, such as being willing to take more risks or be more creative. At best, it is neutral; certainly not an advantage.
This doesn’t mean it’s impossible for a startup to succeed; surely many will. It means that you need a strategy that creates differentiation and distribution, even more quickly and dramatically than is normally required
Whether you’re training existing models, developing models from scratch, or simply testing theories, high-quality data is crucial. Incumbents have the data because they have the customers. They can immediately leverage customers’ data to train models and tune algorithms, so long as they maintain secrecy and privacy.
Intercom’s AI strategy is built on the foundation of hundreds of millions of customer interactions. This gives them an advantage over a newcomer developing a chatbot from scratch. Similarly, Google has an advantage in AI video because they own the entire YouTube library. GitHub has an advantage with Copilot because they trained their AI on their vast code repository (including changes, with human-written explanations of the changes).
While there will always be individuals preferring the startup environment, the allure of working on AI at an incumbent is equally strong for many, especially pure computer and data scientists who, more than anything else, want to work on interesting AI projects. They get to work in the code, with a large budget, with all the data, with above-market compensation, and a built-in large customer base that will enjoy the fruits of their labor, all without having to do sales, marketing, tech support, accounting, raising money, or anything else that isn’t the pure joy of writing interesting code. This is heaven for many.
A chatbot is in the chatbot market, and an SEO tool is in the SEO market. Adding AI to those tools is obviously a good idea; indeed companies who fail to add AI will likely become irrelevant in the long run. Thus we see that “AI” is a new tool for developing within existing markets, not itself a new market (except for actual hard-tech AI companies).
AI is in the solution-space, not the problem-space, as we say in product management. The customer problem you’re solving is still the same as ever. The problem a chatbot is solving is the same as ever: Talk to customers 24/7 in any language. AI enables completely new solutions that none of us were imagining a few years ago; that’s what’s so exciting and truly transformative. However, the customer problems remain the same, even though the solutions are different
Companies will pay more for chatbots where the AI is excellent, more support contacts are deferred from reaching a human, more languages are supported, and more kinds of questions can be answered, so existing chatbot customers might pay more, which grows the market. Furthermore, some companies who previously (rightly) saw chatbots as a terrible customer experience, will change their mind with sufficiently good AI, and will enter the chatbot market, which again grows that market.
the right way to analyze this is not to say “the AI market is big and growing” but rather: “Here is how AI will transform this existing market.” And then: “Here’s how we fit into that growth.”
·longform.asmartbear.com·
AI startups require new strategies
Muse retrospective by Adam Wiggins
Muse retrospective by Adam Wiggins
  • Wiggins focused on storytelling and brand-building for Muse, achieving early success with an email newsletter, which helped engage potential users and refine the product's value proposition.
  • Muse aspired to a "small giants" business model, emphasizing quality, autonomy, and a healthy work environment over rapid growth. They sought to avoid additional funding rounds by charging a prosumer price early on.
  • Short demo videos on Twitter showcasing the app in action proved to be the most effective method for attracting new users.
Muse as a brand and a product represented something aspirational. People want to be deeper thinkers, to be more strategic, and to use cool, status-quo challenging software made by small passionate teams. These kinds of aspirations are easier to indulge in times of plenty. But once you're getting laid off from your high-paying tech job, or struggling to raise your next financing round, or scrambling to protect your kids' college fund from runaway inflation and uncertain markets... I guess you don't have time to be excited about cool demos on Twitter and thoughtful podcasts on product design.
I’d speculate that another factor is the half-life of cool new productivity software. Evernote, Slack, Notion, Roam, Craft, and many others seem to get pretty far on community excitement for their first few years. After that, I think you have to be left with software that serves a deep and hard-to-replace purpose in people’s lives. Muse got there for a few thousand people, but the economics of prosumer software means that just isn’t enough. You need tens of thousands, hundreds of thousands, to make the cost of development sustainable.
We envisioned Muse as the perfect combination of the freeform elements of a whiteboard, the structured text-heavy style of Notion or Google Docs, and the sense of place you get from a “virtual office” à la group chat. As a way to asynchronously trade ideas and inspiration, sketch out project ideas, and explore possibilities, the multiplayer Muse experience is, in my honest opinion, unparalleled for small creative teams working remotely.
But friction began almost immediately. The team lead or organizer was usually the one bringing Muse to the team, and they were already a fan of its approach. But the other team members are generally a little annoyed to have to learn any new tool, and Muse’s steeper learning curve only made that worse. Those team members would push the problem back to the team lead, treating them as customer support (rather than contacting us directly for help). The team lead often felt like too much of the burden of pushing Muse adoption was on their shoulders. This was in addition to the obvious product gaps, like: no support for the web or Windows; minimal or no integration with other key tools like Notion and Google Docs; and no permissions or support for multiple workspaces. Had we raised $10M back during the cash party of 2020–2021, we could have hired the 15+ person team that would have been necessary to build all of that. But with only seven people (we had added two more people to the team in 2021–2022), it just wasn’t feasible.
We focused neither on a particular vertical (academics, designers, authors...) nor on a narrow use case (PDF reading/annotation, collaborative whiteboarding, design sketching...). That meant we were always spread pretty thin in terms of feature development, and marketing was difficult even over and above the problem of explaining canvas software and digital thinking tools.
being general-purpose was in its blood from birth. Part of it was maker's hubris: don't we always dream of general-purpose tools that will be everything to everyone? And part of it was that it's truly the case that Muse excels at the ability to combine together so many different related knowledge tasks and media types into a single, minimal, powerful canvas. Not sure what I would do differently here, even with the benefit of hindsight.
Muse built a lot of its reputation on being principled, but we were maybe too cautious to do the mercenary things that help you succeed. A good example here is asking users for ratings; I felt like this was not to user benefit and distracting when the user is trying to use your app. Our App Store rating was on the low side (~3.9 stars) for most of our existence. When we finally added the standard prompt-for-rating dialog, it instantly shot up to ~4.7 stars. This was a small example of being too principled about doing good for the user, and not thinking about what would benefit our business.
Growing the team slowly was a delight. At several previous ventures, I've onboarded people in the hiring-is-job-one environment of a growth startup. At Muse, we started with three founders and then hired roughly one person per year. This was absolutely fantastic for being able to really take our time to find the perfect person for the role, and then for that person to have tons of time to onboard and find their footing on the team before anyone new showed up. The resulting team was the best I've ever worked on, with minimal deadweight or emotional baggage.
ultimately your product does have to have some web presence. My biggest regret is not building a simple share-to-web function early on, which could have created some virality and a great deal of utility for users as well.
In terms of development speed, quality of the resulting product, hardware integration, and a million other things: native app development wins.
After decades working in product development, being on the marketing/brand/growth/storytelling side was a huge personal challenge for me. But I feel like I managed to grow into the role and find my own approach (podcasting, demo videos, etc) to create a beacon to attract potential customers to our product.
when it comes time for an individual or a team to sit down and sketch out the beginnings of a new business, a new book, a new piece of art—this almost never happens at a computer. Or if it does, it’s a cobbled-together collection of tools like Google Docs and Zoom which aren’t really made for this critical part of the creative lifecycle.
any given business will find a small number of highly-effective channels, and the rest don't matter. For Heroku, that was attending developer conferences and getting blog posts on Hacker News. For another business it might be YouTube influencer sponsorships and print ads in a niche magazine. So I set about systematically testing many channels.
·adamwiggins.com·
Muse retrospective by Adam Wiggins
Great Products Have Great Premises
Great Products Have Great Premises
A great premise gives users context and permission to take actions they might not otherwise take.
The most powerful thing a product can do is give its user a premise.1 A premise is the foundational belief that shapes a user’s behavior. A premise can normalize actions that people otherwise might not take, held back by some existing norm
AirBnb. The premise: It’s ok to stay in strangers’ homes.
the idea of staying in strangers’ homes for short stays was doubted even by the founders. Crashing in someone’s spare room wasn’t unheard of, but it might be seen as weird, taboo, or even dangerous.
Bumble. The premise: It’s ok for women to ask men out.
The best way to follow through on a premise is to make it the core feature of the app. Bumble did, requiring that women make the first move on the app. A woman would be presented with a list of her matches and would have to make the first "move" before men could reply. This of course became a powerful differentiating feature and marketing hook.
Substack. The premise: It’s ok to charge for your writing.
Substack's premise aimed to normalize the hardest part of internet writing: getting paid. They aimed to show that independent authors could succeed at making a living (and subscription models aligned with this ethos). In doing so, Substack also made the less-hard parts of internet writing even easier. You could start a newsletter and keep it free until you felt confident about going paid. This not only normalized the end goal but also lowered the barrier to getting started.
A premise is valuable not only for “products,” but also for experiences. As I recently shouted, people still underestimate the power of giving a social event a premise. Hackathons, housewarmings, happy hours and the like are hangouts with a narrative. They have a good premise — a specific context that makes it more comfortable to do something that can be hard: socialize. (Side note: some of the best tv series and films are built on great premises.)
Premises work best on end consumers, prosumers, small business freelancers, and the like. Many two-sided marketplaces serving two of these stakeholder groups tend to have a good premise. For example, Kickstarter's premise for the creator might be: It’s ok to ask for money before you've built a product.
·workingtheorys.com·
Great Products Have Great Premises
How can we develop transformative tools for thought?
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and a result the field is still in a pre-disciplinary stage. The result, ultimately, is that it means the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How can we develop transformative tools for thought?
Scarlet Witch - Wikipedia
Scarlet Witch - Wikipedia
Marvel licensed the filming rights of the X-Men and related concepts, such as mutants, to 20th Century Fox, who created a film series based on the franchise. Years later, Marvel started their own film franchise, known as the Marvel Cinematic Universe (MCU), which focused on characters that they had not licensed to other studios (see below). At the time, the rights to Quicksilver and Scarlet Witch were disputed by both studios. As they both held the rights to the characters, with Fox citing the characters' mutant status and being children of Magneto and Marvel citing the twins' editorial history being more closely tied to the Avengers rather than the X-Men, the studios made an agreement wherein both of them could use the characters on the condition that the plots did not make reference to the other studio's properties (i.e. the Fox films could not mention the twins as members of the Avengers while the MCU could not mention them as mutants or children of Magneto).[215] The arrangement became moot following the acquisition of 21st Century Fox by Disney – the parent company of Marvel Studios, and the confirmation that future X-Men films will take place within the MCU.
·en.wikipedia.org·
Scarlet Witch - Wikipedia
Nina
Nina

Blockchain network for music distribution and publishing

Nina v2 provides: - a permanent archive of your music - 100% of sales go to artists - profit splits can be programmed in - paid writers - interlinked discovery Everything people have been calling for in the wake of the bandcamp debacle

·ninaprotocol.com·
Nina
Divine Discontent, Disruption’s Antidote
Divine Discontent, Disruption’s Antidote
in their efforts to provide better products than their competitors and earn higher prices and margins, suppliers often “overshoot” their market: They give customers more than they need or ultimately are willing to pay for. And more importantly, it means that disruptive technologies that may underperform today, relative to what users in the market demand, may be fully performance-competitive in that same market tomorrow. This was the basis for insisting that the iPhone must have a low-price model: surely Apple would soon run out of new technology to justify the prices it charged for high-end iPhones, and consumers would start buying much cheaper Android phones instead! In fact, as I discussed in after January’s earnings results, the company has gone in the other direction: more devices per customer, higher prices per device, and an increased focus on ongoing revenue from those same customers.
Apple seems to have mostly saturated the high end, slowly adding switchers even as existing iPhone users hold on to their phones longer; what is not happening, though, is what disruption predicts: Apple isn’t losing customers to low-cost competitors for having “overshot” and overpriced its phones. It seems my thesis was right: a superior experience can never be too good — or perhaps I didn’t go far enough.
Jeff Bezos has been writing an annual letter to shareholders since 1997, and he attaches that original letter to one he pens every year. It included this section entitled Obsess Over Customers: From the beginning, our focus has been on offering our customers compelling value. We realized that the Web was, and still is, the World Wide Wait. Therefore, we set out to offer customers something they simply could not get any other way, and began serving them with books. We brought them much more selection than was possible in a physical store (our store would now occupy 6 football fields), and presented it in a useful, easy-to-search, and easy-to-browse format in a store open 365 days a year, 24 hours a day. We maintained a dogged focus on improving the shopping experience, and in 1997 substantially enhanced our store. We now offer customers gift certificates, 1-Click shopping, and vastly more reviews, content, browsing options, and recommendation features. We dramatically lowered prices, further increasing customer value. Word of mouth remains the most powerful customer acquisition tool we have, and we are grateful for the trust our customers have placed in us. Repeat purchases and word of mouth have combined to make Amazon.com the market leader in online bookselling.
This year, after highlighting just how much customers love Amazon (answer: a lot), Bezos wrote: One thing I love about customers is that they are divinely discontent. Their expectations are never static — they go up. It’s human nature. We didn’t ascend from our hunter-gatherer days by being satisfied. People have a voracious appetite for a better way, and yesterday’s ‘wow’ quickly becomes today’s ‘ordinary’. I see that cycle of improvement happening at a faster rate than ever before. It may be because customers have such easy access to more information than ever before — in only a few seconds and with a couple taps on their phones, customers can read reviews, compare prices from multiple retailers, see whether something’s in stock, find out how fast it will ship or be available for pick-up, and more. These examples are from retail, but I sense that the same customer empowerment phenomenon is happening broadly across everything we do at Amazon and most other industries as well. You cannot rest on your laurels in this world. Customers won’t have it.
when it comes to Internet-based services, this customer focus does not come at the expense of a focus on infrastructure or distribution or suppliers: while those were the means to customers in the analog world, in the online world controlling the customer relationship gives a company power over its suppliers, the capital to build out infrastructure, and control over distribution. Bezos is not so much choosing to prioritize customers insomuch as he has unlocked the key to controlling value chains in an era of aggregation.
consumer expectations are not static: they are, as Bezos’ memorably states, “divinely discontent”. What is amazing today is table stakes tomorrow, and, perhaps surprisingly, that makes for a tremendous business opportunity: if your company is predicated on delivering the best possible experience for consumers, then your company will never achieve its goal.
In the case of Amazon, that this unattainable and ever-changing objective is embedded in the company’s culture is, in conjunction with the company’s demonstrated ability to spin up new businesses on the profits of established ones, a sort of perpetual motion machine
Owning the customer relationship by means of delivering a superior experience is how these companies became dominant, and, when they fall, it will be because consumers deserted them, either because the companies lost control of the user experience (a danger for Facebook and Google), or because a paradigm shift made new experiences matter more (a danger for Google and Apple).
·stratechery.com·
Divine Discontent, Disruption’s Antidote
Generative AI’s Act Two
Generative AI’s Act Two
This page also has many infographics providing an overview of different aspects of the AI industry at time of writing.
We still believe that there will be a separation between the “application layer” companies and foundation model providers, with model companies specializing in scale and research and application layer companies specializing in product and UI. In reality, that separation hasn’t cleanly happened yet. In fact, the most successful user-facing applications out of the gate have been vertically integrated.
We predicted that the best generative AI companies could generate a sustainable competitive advantage through a data flywheel: more usage → more data → better model → more usage. While this is still somewhat true, especially in domains with very specialized and hard-to-get data, the “data moats” are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundation models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage.
Some of the best consumer companies have 60-65% DAU/MAU; WhatsApp’s is 85%. By contrast, generative AI apps have a median of 14% (with the notable exception of Character and the “AI companionship” category). This means that users are not finding enough value in Generative AI products to use them every day yet.
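The DAU/MAU benchmark quoted above is a simple ratio; a minimal sketch, with illustrative round numbers standing in for real user counts:

```python
def stickiness(dau: float, mau: float) -> float:
    """DAU/MAU: the share of a product's monthly users who show up on an average day."""
    return dau / mau

# Quoted benchmarks: best consumer apps ~60-65%, WhatsApp 85%,
# generative AI apps a median of 14%.
whatsapp = stickiness(850, 1_000)   # illustrative counts giving the quoted 85%
gen_ai_median = 0.14

# Another way to read the 14% figure: the median monthly user of a
# generative AI app is active roughly 0.14 * 30, about 4 days per month.
active_days = gen_ai_median * 30
print(f"{active_days:.1f} active days per month")
```

Read this way, the gap between 14% and 85% is the difference between a tool opened a few times a month and one woven into daily life.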
generative AI’s biggest problem is not finding use cases or demand or distribution, it is proving value. As our colleague David Cahn writes, “the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?”
·sequoiacap.com·
Generative AI’s Act Two
Seven Rules For Internet CEOs To Avoid Enshittification
People forget that when Bezos introduced Amazon Prime, Wall St. flipped out, insisting that it would cost way too much for too little benefit. But, through it all, Amazon survived (and thrived) because Bezos just kept telling investors exactly what his plan was, and never backed down, no matter what Wall St. kept saying to him.
This is too easily forgotten, but your users are everything if you run an internet business. They’re not “the product.” They’re what makes your site useful and valuable, and often provide the best marketing you could never buy by convincing others to join and providing you with all of the best ideas on how to improve things and make your service even better for the users. The moment you’re undermining your own community, you’re beginning to spiral downward.
As you’re developing a business model, the best way to make sure that you’re serving your users best, and not enshittifying everything, is to constantly make sure that you’re only capturing some of the value you’re creating, and are instead putting much more out into the world, especially for your community.
Push the power to make your service better out from the service to the users themselves and watch what they do. Let them build. Let them improve your service. Let them make it work better for you. But, you have to have some trust here. If you’re focused on “Rule 3” you have to recognize that sometimes your users will create value that you don’t capture. Or even that someone else captures. But in the long run, it still flows back to you, as it makes your service that much more valuable.
If you’re charging for something that was once free, you’re taking away value from your community. You’re changing the nature of the bargain, and ripping away the trust that your community put in you. Instead, always look for something new that is worth paying for above and beyond what you already offered.
There are ways to monetize that don’t need to overwhelm, that don’t need to suck up every bit of data, that don’t need to rely on taking away features users relied on. Focus on adding more scarce value, and figuring out ways to charge for those new things which can’t be easily replicated.
You start learning acronyms like “ARPU” (average revenue per user) and such. And then you’re being measured on how much you’re increasing those metrics, which means you need to squeeze more out of each individual user, and you’re now deep within the enshittification stage, in which you’re trying to squeeze your users for more money each quarter (because now everything is judged in how well you did in the last 3 months to improve that number).
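ARPU is a simple ratio, but the squeeze dynamic described above falls out of the arithmetic; a sketch with invented numbers:

```python
def arpu(total_revenue: float, users: int) -> float:
    """Average revenue per user over a reporting period."""
    return total_revenue / users

# Invented figures for illustration.
q1 = arpu(50_000_000, 10_000_000)    # $5.00 per user this quarter

# If the user base is flat and the target is 10% ARPU growth,
# the only lever left is extracting more from the same users.
q2_target = q1 * 1.10
extra_per_user = q2_target - q1      # roughly $0.50 more per user next quarter
```

The point of the sketch: once the denominator (users) stops growing, any ARPU target becomes a per-user extraction target, measured every three months.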
·techdirt.com·
Seven Rules For Internet CEOs To Avoid Enshittification
Spotify
Spotify dominates the music streaming industry with over 500 million monthly active users and 210 million paid subscribers, and is expanding into new areas like podcasts and audiobooks. The company aims to generate $100 billion in annual revenue by 2030 through expanding margins, increasing prices, and growing its user base to 1 billion monthly active users. According to the author's analysis, Spotify represents a significant investment opportunity, with the potential for the stock to rise roughly sevenfold by 2030.
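A back-of-envelope check on the targets quoted above. The ~$13B starting revenue and the seven-year window are assumptions of mine, not figures from the article:

```python
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by going from start to end in `years` years."""
    return (end / start) ** (1 / years) - 1

# Assumed: ~$13B current annual revenue, growing to the targeted $100B by 2030.
revenue_cagr = implied_cagr(13, 100, 7)   # roughly 34% a year, a demanding pace
# The claimed ~7x stock move over the same window:
stock_cagr = implied_cagr(1, 7, 7)        # roughly 32% a year
```

Under those assumptions, both targets require sustained growth north of 30% a year, which is the bar the author's thesis has to clear.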
·purvil.bearblog.dev·
Spotify
Netflix, Shein and MrBeast — Benedict Evans
both Netflix and Shein realised that you can make far more SKUs if you’re not constrained by physical inventory - the time slots on linear TV and the store rooms of physical retail.
If you don’t need thousands of physical stores, then you can turn over the product range much faster and reach new customers much more quickly - and so Shein is now bigger than H&M and on track to pass Inditex.
Of course, the fundamental TV question is ‘what’s your budget?’ There’s a circular relationship: a given budget means a given quality and quantity of content, which, combined with your CAC, means a given audience, which means a given level of revenue and a given budget. There is no network effect in TV, and going to Hollywood with the world’s best software and $5 will get you a latte.
While it is true that a popular TV show can attract more viewers and potentially drive subscriptions, there is no guarantee of this happening
YouTube doesn’t buy LA stuff from LA people - it runs a network, and the questions are Silicon Valley questions. YouTube, in both the network and the kinds of content, is a much bigger change to ‘TV’ than Netflix. It’s ‘video’, but it’s also ‘time spent’ and it competes with Netflix and TV but also with Instagram and TikTok (it does puzzle me that people focus on competition between Instagram and TikTok when the form overlaps at least as much with YouTube). And YouTube doesn’t really buy shows or buy users - it pays a revenue share.
Business model comparison between Netflix and YouTube
Netflix can indeed make TV shows as well as any legacy TV company, but did Disney make software that’s as good as Netflix? It didn’t have to. It just had to make software that’s good enough, because ‘software’ questions are not the point of leverage. But I don’t see any media companies competing with YouTube or TikTok, where software is the point of leverage - at least, not recently.
·ben-evans.com·
Netflix, Shein and MrBeast — Benedict Evans
A. G. Sulzberger on the Battles Within and Against the New York Times
One of the things that’s misunderstood about independence is that it doesn’t require you not to have a theory of the case, right? My great-grandfather had a line that he often quoted: “I believe in an open mind, but not so open that your brains fall out.”
If you are a Democrat and you believe that Donald Trump represents a threat to democracy, is it then anti-democracy for an organization like yours, David, to produce reporting that raises questions about the actions, conduct, or fitness of President Biden?
Members of the Hasidic community criticized our reporting, and very loudly. They sent a letter to the Pulitzer committee raising all sorts of concerns. But it is also true that we heard from countless members of the community saying, “We needed this.” The implicit request of the critics is to suppress such reporting: “It may be true, but, because it can be misused, we don’t want it out there.” But, if we had suppressed the reporting, more kids would be deprived of education. That is the posture of independence.
The posture of independence is not about being a blank slate. It’s not about having no life experience, no personal perspectives. That is an impossible ask. That’s a parody of the long debate over objectivity. The idea of objectivity, as it was originally formulated, wasn’t about the person’s innate characteristics. It was about the process that helped address the inherent biases that all of us carry in our lives. So the question isn’t “Do you have any view?” The question is “Are you animated by an open mind, a skeptical mind, and a commitment to following the facts wherever they lead?”
The key isn’t being a blank slate. It’s not that you don’t have a theory going into any story. It’s a willingness to put the facts above any individual agenda. Think about this moment and how polarized this country is. How many institutions in American life do you believe are truly putting the facts above any agenda?
Let’s be absolutely clear: the former President of the United States, the current leader of one of America’s two political parties, has now spent the better part of seven years telling the public not just to distrust us but that we are the enemies of the American people, that our work is fake, manufactured. The term “enemies of the people” has roots in Stalin’s Soviet Union and in Hitler’s Germany.
Another dynamic inside our industry is that journalism, to some extent, has become an echo chamber. What do I mean by that? It’s been a while since I looked at your bio, but, if you are like many journalists of your generation and in my generation, you probably started at a local paper. That was the traditional path. And what was the day like for a journalist at that point? If you were a cub reporter, you were probably writing—As you were at the Providence Journal. You were probably writing one story a day to three stories a week somewhere. What were your days like? Every day, you were out in the communities you were covering. You were being confronted with the full diversity of this country and of the human experience. On the same day, you would talk to rich and poor, you’d talk to a mother who had just lost a son to murder, and a mother whose son was just arrested for murder, right?
Are you saying that’s changed? That reporters are just sitting in rooms in front of a screen? I don’t think that’s the case. Of course it’s the case! It’s the least talked-about and most insidious result of the collapse of the business model that historically supported quality journalism. The work of reporting is expensive. As traditional media faded, and particularly local media faded, and as digital media filled that vacuum, we saw a full inversion of how reporters’ days were spent. The new model is you have to write three to five stories a day. And, if you have to write three to five stories a day, there is no time to get out into the world. You’re spending your time writing, you’re typing, typing, which means that you are drawing on your own experience and the experience of the people immediately around you. So, literally, many journalists in this country have gone from spending their days out in the field, surrounded by life, to spending their days in an office with people who are in the same profession, working for the same institution, living in the same city, graduating from the same type of university.
The concern is that there’s also a widening gulf in the realm of information. Just as there’s an income-inequality problem in this country that gets worse and worse, there’s an informational divide. I’m not saying that A. G. Sulzberger can be responsible for it and make it all better with a stroke, but there is that problem. I disagree with the hypothesis. I think there is an information problem, but I think it’s about the collapse of local news. I think that that is an American tragedy, a dangerous and insidious force in American life.
A broader thought about Opinion: I would just say, look, three years after that episode, do you feel that, on the Times Opinion pages, are you regularly seeing pieces from every side of the political spectrum on the abortion debate? On business and economic questions? Social and political questions? I think you do. I’d argue that, under Katie [Kingsbury, who replaced Bennet], you’re seeing more of them than ever. I think you see that she’s just hired another conservative columnist, our first evangelical columnist, also a military veteran.
Would you hire a Trumpist on the Op-Ed page? This is a question I’ve been getting now for six years, and it’s a really tricky one. It’s trickier than it sounds, and I bet you have a suspicion on why. It is harder than you’d think to find the Trumpist who hasn’t, at some point, said, “The 2020 election was rigged, and Donald Trump won the election.” I get it. But a huge number, tens of millions of people, either tolerate that point of view or believe it. Yeah. But independence is not about “both sides.” So you would not have a Trumpist who said that at some point writing on your Op-Ed page? We would not have anyone who—But you’d have guest columnists like Tom Cotton—We certainly would not have a columnist who has a track record of saying things that are demonstrably untrue.
In this hyper-politicized, hyper-polarized moment, is society benefitting from every single player getting louder and louder about declaring their personal allegiances and loyalties and preferences? Or do you think there’s space for some actors who are really committed just to serving the public with the full story, let the facts fall where they may?
·newyorker.com·
A. G. Sulzberger on the Battles Within and Against the New York Times
Leviathan Wakes: the case for Apple's Vision Pro - Redeem Tomorrow
The existing VR hardware has not received sufficient investment to fully demonstrate the potential of this technology. It is unclear whether the issues lie with augmented reality (AR) itself or the technology used to deliver it. However, Apple has taken a different approach by investing significantly in creating a serious computer with an optical overlay as its primary interface. Unlike other expensive headsets, Apple has integrated the ecosystem to make it appealing right out of the box, allowing users to watch movies, view photos, and run various apps. This comprehensive solution aims to address the uncertainties surrounding AR. The display quality is top-notch, finger-based interaction replaces clunky joysticks, and performance is optimized to minimize motion sickness. Furthermore, a large and experienced developer community stands ready to create apps, supported by mature tools and extensive documentation. With these factors in place, there is anticipation for a new paradigm enabled by a virtually limitless monitor. The author expresses eagerness to witness how this technology unfolds.
What can you do with this thing? There’s a good chance that, whatever killer apps may emerge, they don’t need the entire complement of sensors and widgets to deliver a great experience. As that’s discovered, Apple will be able to open a second tier in this category and sell you a simplified model at a lower cost. Meanwhile, the more they manufacture the essentials—high density displays, for example—the higher their yields will become, the more their margins will increase. It takes time to perfect manufacturing processes and build up capacity. Vision Pro isn’t just about 2024’s model. It’s setting up the conditions for Apple to build the next five years of augmented reality wearable technology.
VR/AR doesn’t have to suck ass. It doesn’t have to give you motion sickness. It doesn’t have to use these awkward, stupid controllers you accidentally swap into the wrong hand. It doesn’t have to be fundamentally isolating. If this paradigm shift could have been kicked off by cheap shit, we’d be there already. May as well pursue the other end of the market.
what starts as clunky needn’t remain so. As the technology for augmented reality becomes more affordable, more lightweight, more energy efficient, more stylish, it will be more feasible for more people to use. In the bargain, we’ll get a display technology entirely unshackled from the constraints of a monitor stand. We’ll have much broader canvases subject to the flexibility of digital creativity, collaboration and expression. What this unlocks, we can’t say.
·redeem-tomorrow.com·
Leviathan Wakes: the case for Apple's Vision Pro - Redeem Tomorrow