American Disruption
Manufacturing in Asia is fundamentally different from the manufacturing we remember in the United States decades ago: instead of firms with product-specific factories, China has flexible factories that accommodate all kinds of orders, delivering on that vector of speed, convenience, and customization that Christensen talked about.
Every decrease in node size comes at increasingly astronomical costs; the best way to afford those costs is to have one entity making chips for everyone, and that has turned out to be TSMC. Indeed, one way to understand Intel’s struggles is that it was actually one of the last massive integrated manufacturers: Intel made chips almost entirely for itself. Once the company missed mobile, however, it had no choice but to switch to a foundry model; it is trying now, but really should have started fifteen years ago. Now Intel is stuck, and I think it will need government help.
Companies that go up-market find it impossible to go back down, and I think this too applies to countries. Start with the theory: Christensen had a chapter in The Innovator’s Dilemma entitled “What Goes Up, Can’t Go Down”: “Three factors — the promise of upmarket margins, the simultaneous upmarket movement of many of a company’s customers, and the difficulty of cutting costs to move downmarket profitably — together create powerful barriers to downward mobility. In the internal debates about resource allocation for new product development, therefore, proposals to pursue disruptive technologies generally lose out to proposals to move upmarket. In fact, cultivating a systematic approach to weeding out new product development initiatives that would likely lower profits is one of the most important achievements of any well-managed company.”
So could Apple pay more to get U.S. workers? I suppose — leaving aside the questions of skills and whatnot — but there is also the question of desirability; the iPhone assembly work that is not automated is sheer drudgery: sitting in a factory for hours a day, delicately assembling the same components over and over again. It’s a good job if the alternative is working in the fields or in a much more dangerous and uncomfortable factory, but it’s much worse than basically any sort of job that is available in the U.S. market.
First, blanket tariffs are a mistake. I understand the motivation: a big reason why Chinese imports to the U.S. have actually shrunk over the last few years is because a lot of final assembly moved to countries like Vietnam, Thailand, Mexico, etc. Blanket tariffs stop this from happening, at least in theory. The problem, however, is that those final assembly jobs are the least desirable jobs in the value chain, at least for the American worker; assuming the Trump administration doesn’t want to import millions of workers — that seems rather counter to the foundation of his candidacy! — the United States needs to find alternative trustworthy countries for final assembly. This can be accomplished through selective tariffs (which is exactly what happened in the first Trump administration).
Second, using trade flows to measure the health of the economic relationship with these countries — any country, really, but particularly final assembly countries — is legitimately stupid. Go back to the iPhone: the value-add of final assembly is in the single-digit dollar range; the value-add of Apple’s software, marketing, distribution, etc. is in the hundreds of dollars. Simply looking at trade flows — where an imported iPhone is counted as a trade deficit of several hundred dollars — completely obscures this reality. Moreover, the defining criterion of a final assembly country is low wages, which by definition cannot pay for an amount of U.S. goods equivalent to said iPhone.
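A back-of-the-envelope sketch makes the accounting gap concrete. The dollar figures below are purely illustrative (hypothetical numbers chosen to be consistent with the "single-digit" and "hundreds of dollars" ranges above), not actual iPhone economics:

```python
# Illustrative sketch: how trade statistics book the full import price as a
# deficit, even though the assembly country's value-add is a small slice.
# All numbers are hypothetical, not real iPhone cost data.

IMPORT_PRICE = 450.0         # hypothetical customs value of one imported phone
FINAL_ASSEMBLY_VALUE = 8.0   # hypothetical single-digit assembly value-add
US_VALUE_ADD = 300.0         # hypothetical software/marketing/distribution share

measured_deficit = IMPORT_PRICE           # what trade flows record per unit
economic_exposure = FINAL_ASSEMBLY_VALUE  # what the assembly country actually earns

print(f"Recorded trade deficit per unit: ${measured_deficit:.0f}")
print(f"Assembly-country value-add:      ${economic_exposure:.0f}")
print(f"U.S.-captured value (uncounted): ${US_VALUE_ADD:.0f}")
print(f"Overstatement factor: {measured_deficit / economic_exposure:.0f}x")
```

Under these assumptions the headline deficit overstates the assembly country's actual economic take by more than fifty-fold, which is the point of the paragraph above: the statistic measures where the box shipped from, not where the value accrued.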
At the same time, the overall value of final assembly does exceed its economic value, for the reasons noted above: final assembly is gravity for higher value components, and it’s those components that are the biggest national security problem. This is where component tariffs might be a useful tool: the U.S. could use a scalpel instead of a sledgehammer to incentivize buying components from trusted allies, or from the U.S. itself, or to build new capacity in trusted locations. This does, admittedly, start to sound a lot like central planning, but that is why the gravity argument is an important one: simply moving final assembly somewhere other than China is a win — but not if there are blanket tariffs, at which point you might as well leave the supply chain where it is.
You can certainly make the case that things like castings and other machine components are of sufficient importance to the U.S. that they ought to be manufactured here, but you have to ramp up to that. What is much more problematic is that raw materials and components are now much cheaper for Haas’ foreign competitors; even if those competitors face tariffs in the United States, their cost of goods sold will be meaningfully lower than Haas, completely defeating the goal of encouraging the purchase of U.S. machine tools.
Fourth, there remains the problem of chips. Trump just declared economic war on China, which definitionally increases the possibility of kinetic war. A kinetic war, however, will mean the destruction of TSMC, leaving the U.S. bereft of chips at the very moment that AI is poised to create tremendous opportunities for growth and automation. And, even if AI didn’t exist, it’s enough to note that modern life would grind to a halt without chips. That’s why this is the area that most needs direct intervention from the federal government, particularly in terms of incentivizing demand for both leading and trailing edge U.S. chips.
My prevailing emotion over the past week — one I didn’t fully come to grips with until interrogating why Monday’s Article failed to live up to my standards — is sadness over the end of an era in technology, and frustration-bordering-on-disillusionment over the demise of what I thought was a uniquely American spirit.
Internet 1.0 was about technology. This was the early web, when technology was made for technology’s sake. This was when we got standards like TCP/IP, DNS, HTTP, etc. This was obviously the best era, but one that was impossible to maintain once there was big money to be made on the Internet.

Internet 2.0 was about economics. This was the era of Aggregators — the era of Stratechery, in other words — when the Internet developed, for better or worse, in ways that made maximum economic sense. This was a massive boon for the U.S., which sits astride the world of technology; unfortunately none of the value that comes from that position is counted in the trade statistics, so the administration doesn’t seem to care.

Internet 3.0 is about politics. This is the era when countries make economically sub-optimal choices for reasons that can’t be measured in dollars and cents. In that Article I thought that Big Tech exercising its power against the President might be a spur for other countries to seek to wean themselves away from American companies; instead it is the U.S. that may be leaving other countries little choice but to retaliate against U.S. tech.
There is, admittedly, a hint of that old school American can-do attitude embedded in these tariffs: the Trump administration seems to believe the U.S. can overcome all of the naysayers and skeptics through sheer force of will. That force of will, however, would be much better spent pursuing a vision of a new world order in 2050, not trying to return to 1950. That is possible to do, by the way, but only if you accept 1950’s living standards, which weren’t nearly as attractive as nostalgia-colored glasses paint them, and, if we’re not careful, 1950’s technology as well. I think we can do better than that; I know we can do better than this.
·stratechery.com·
AI Integration and Modularization
Summary: Examines the question of integration versus modularization in the context of AI, drawing on the work of Ronald Coase and Clayton Christensen. Google is pursuing a fully integrated approach similar to Apple’s, while AWS is betting on modularization, and Microsoft and Meta are somewhere in between. Integration may provide an advantage in the consumer market and for achieving AGI, but for enterprise AI, a more modular approach leveraging data gravity and treating models as commodities may prevail. Ultimately, the biggest beneficiary of this dynamic could be Nvidia.
The left side of figure 5-1 indicates that when there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.
The issue I have with this analysis of vertical integration — and this is exactly what I was taught at business school — is that the only considered costs are financial. But there are other, more difficult to quantify costs. Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated.
Google trains and runs its Gemini family of models on its own TPU processors, which are only available on Google’s cloud infrastructure. Developers can access Gemini through Vertex AI, Google’s fully-managed AI development platform; and, to the extent Vertex AI is similar to Google’s internal development environment, that is the platform on which Google is building its own consumer-facing AI apps. It’s all Google, from top-to-bottom, and there is evidence that this integration is paying off: Gemini 1.5’s industry-leading 2 million token context window almost certainly required joint innovation between Google’s infrastructure team and its model-building team.
In AI, Google is pursuing an integrated strategy, building everything from chips to models to applications, similar to Apple's approach in smartphones.
On the other extreme is AWS, which doesn’t have any of its own models; instead its focus has been on its Bedrock managed development platform, which lets you use any model. Amazon’s other focus has been on developing its own chips, although the vast majority of its AI business runs on Nvidia GPUs.
Microsoft is in the middle, thanks to its close ties to OpenAI and its models. The company added Azure Models-as-a-Service last year, but its primary focus for both external customers and its own internal apps has been building on top of OpenAI’s GPT family of models; Microsoft has also launched its own chip for inference, but the vast majority of its workloads run on Nvidia.
Google is certainly building products for the consumer market, but those products are not devices; they are Internet services. And, as you might have noticed, the historical discussion didn’t really mention the Internet. Both Google and Meta, the two biggest winners of the Internet epoch, built their services on commodity hardware. Granted, those services scaled thanks to the deep infrastructure work undertaken by both companies, but even there Google’s more customized approach has been at least rivaled by Meta’s more open approach. What is notable is that both companies are integrating their models and their apps, as is OpenAI with ChatGPT.
Google's integrated AI strategy is unique but may not provide a sustainable advantage for Internet services in the way Apple's integration does for devices.
It may be the case that selling hardware, which has to be perfect every year to justify a significant outlay of money by consumers, provides a much better incentive structure for maintaining excellence and execution than does being an Aggregator that users access for free.
Google’s collection of moonshots — from Waymo to Google Fiber to Nest to Project Wing to Verily to Project Loon (and the list goes on) — have mostly been science projects that have, for the most part, served to divert profits from Google Search away from shareholders. Waymo is probably the most interesting, but even if it succeeds, it is ultimately a car service rather far afield from Google’s mission statement “to organize the world’s information and make it universally accessible and useful.”
The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie [Google’s rumored Pixel-only AI assistant] would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.
The fact that Google is being mocked mercilessly for messed-up AI answers gets at why consumer-facing AI may be disruptive for the company: the reason why incumbents find it hard to respond to disruptive technologies is because they are, at least at the beginning, not good enough for the incumbent’s core offering. Time will tell if this gives more fuel to a shift in smartphone strategies, or makes the company more reticent.
While I was very impressed with Google’s enterprise pitch, which benefits from its integration with Google’s infrastructure without all of the overhead of potentially disrupting the company’s existing products, it’s going to be a heavy lift to overcome data gravity, i.e. the fact that many enterprise customers will simply find it easier to use AI services on the same clouds where they already store their data (Google does, of course, also support non-Gemini models and Nvidia GPUs for enterprise customers). To the extent Google wins in enterprise it may be by capturing the next generation of startups that are AI-first and, by definition, data-light; a new company has the freedom to base its decision on infrastructure and integration.
Amazon is certainly hoping that argument is correct: the company is operating as if everything in the AI value chain is modular and ultimately a commodity, which implies that it believes data gravity will matter most. What is difficult to separate is to what extent this is the correct interpretation of the strategic landscape versus a convenient interpretation of the facts that happens to perfectly align with Amazon’s strengths and weaknesses, including infrastructure that is heavily optimized for commodity workloads.
Unclear if Amazon's strategy reflects true insight or motivated reasoning rooted in its existing strengths.
Meta’s open source approach to Llama: the company is focused on products, which do benefit from integration, but there are also benefits that come from widespread usage, particularly in terms of optimization and complementary software. Open source accrues those benefits without imposing any incentives that detract from Meta’s product efforts (and don’t forget that Meta is receiving some portion of revenue from hyperscalers serving Llama models).
The iPhone maker, like Amazon, appears to be betting that AI will be a feature or an app; like Amazon, it’s not clear to what extent this is strategic foresight versus motivated reasoning.
Achieving something approaching AGI, whatever that means, will require maximizing every efficiency and optimization, which rewards the integrated approach.
The most value will be derived from building platforms that treat models like processors, delivering performance improvements to developers who never need to know what is going on under the hood.
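The "models as processors" idea can be sketched as a thin abstraction layer: developers code against a stable interface, and the platform swaps or upgrades the model behind it without callers changing anything. The names below (`ModelRouter`, `complete`, the `[model-v1]` backend) are purely illustrative, not any vendor's actual API:

```python
from typing import Callable, Dict

class ModelRouter:
    """Hypothetical platform layer that hides which model serves a request."""

    def __init__(self) -> None:
        # Maps a stable public name to whatever backend currently serves it.
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, backend: Callable[[str], str]) -> None:
        # Re-registering a name replaces the backend: the platform can ship a
        # better model and callers see only improved output, like a faster CPU.
        self._backends[name] = backend

    def complete(self, prompt: str, model: str = "default") -> str:
        return self._backends[model](prompt)

# Stand-in backends; a real platform would route to hosted models.
router = ModelRouter()
router.register("default", lambda p: f"[model-v1] {p}")
print(router.complete("Summarize Q3 results"))  # served by model-v1

# Platform upgrades the backend; calling code is unchanged.
router.register("default", lambda p: f"[model-v2] {p}")
print(router.complete("Summarize Q3 results"))  # now served by model-v2
```

The design point is that the commodity pressure described above lives entirely behind `register`: competition between interchangeable models benefits the platform and its developers, which is exactly the dynamic the modular bet depends on.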
·stratechery.com·