Tim Cook vs. Steve Jobs
Broadly speaking, Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla are all “technology” companies. Looking more specifically, though, each company occupies fundamentally different categories of tech. Apple is a consumer computing hardware manufacturer. Its primary products are smartphones, laptops, desktop computers, and tablets. Other products that it makes, including the so-called “services,” are primarily accessories to or supportive of its consumer computing hardware: e.g., App Store, Apple Music, and iCloud. Apple’s specific product focus has remained unchanged since its founding as “Apple Computer Company.”
Meta has tried to pivot to the so-called “metaverse,” symbolically renaming the whole company from “Facebook” and continuing to pour billions of dollars every year into the effort, yet with not much more return on investment than Apple’s own “spatial computing”, i.e., Vision Pro. And now Meta is trying to pivot to A.I., pouring a ton of money into that too, but with nothing much to show for it. We’re supposed to be impressed by Meta poaching individual Apple engineers with nine-figure pay packages, which in one sense is impressive, just not impressive in the sense of paying off for Meta. Perhaps it will pay off for Meta in the future. Or perhaps not. Meanwhile, Meta is still practically printing money at its old, core business: selling ads on social media.
Jobs did not just make tech products willy-nilly, for no other reason than to maximize profit and stockholder returns. He was always focused specifically on consumer computing devices and platforms. That’s what he cared about, and where his experience rested. When Jobs left Apple in the 1980s, what did he do? Again, he created a new personal computing platform, NeXT, a combination of hardware and operating system, just like the Apple II, Lisa, and Macintosh that came before. Jobs was innovating… on a theme, almost like a classical composer. Jobs was eventually able to return to Apple and become CEO precisely because he made what Apple needed: a personal computer operating system, NeXTSTEP, which became Mac OS X.
It’s instructive to recall that the iPod, Apple’s second hit product under CEO Jobs after the iMac, was not only a consumer electronics device but also originally an accessory to the Mac.
I feel that McGee and other critics of Tim Cook fallaciously lump Apple in with other tech companies that are not Apple competitors. Tesla is not an Apple competitor. Neither are Nvidia or Meta, or for that matter, Amazon. You have to ask what makes Amazon a “tech” company. Amazon is primarily a retailer of physical goods. It sells those goods over the internet, which was novel in the 1990s but unremarkable today. I can order food online, but that doesn’t make the restaurant a tech company. If any product qualifies Amazon for the label, I’d say it would be Amazon Web Services. This is a business product, though, not a consumer product.
Why are we comparing Apple to Meta and Nvidia rather than to Samsung and Xiaomi on mobile, Lenovo and HP on desktop? Perhaps those markets have become saturated and don’t provide as much room for growth as other potential markets. So what? I get the impression that commentators complaining about Tim Cook’s lack of innovation simply want “growth,” unlimited growth, without any purpose behind that growth, technology without the intersection of the liberal arts, to use a metaphor from Steve Jobs, who always had a purpose, his innovation always oriented toward consumer computing hardware.
·lapcatsoftware.com·
Reflections on Palantir - Nabeel S. Qureshi
Another thing I can trace back to Peter is the idea of talent bat-signals. Having started my own company now (in stealth for the moment), I appreciate this a lot more: recruiting good people is hard, and you need a differentiated source of talent. If you’re just competing against Facebook/Google for the same set of Stanford CS grads every year, you’re going to lose. That means you need (a) a set of talent that is interested in joining you in particular, over other companies, and (b) a way of reaching them at scale. Palantir had several differentiated sources of recruiting alpha.
But doesn’t the military sometimes do bad things? Of course - I was opposed to the Iraq war. This gets to the crux of the matter: working at the company was neither 100% morally good — because sometimes we’d be helping agencies that had goals I’d disagree with — nor 100% bad: the government does a lot of good things, and helping them do it more efficiently by providing software that doesn’t suck is a noble thing. One way of clarifying the morality question is to break down the company’s work into three buckets – these categories aren’t perfect, but bear with me:
1. Morally neutral. Normal corporate work, e.g. FedEx, CVS, finance companies, tech companies, and so on. Some people might have a problem with it, but on the whole people feel fine about these things.
2. Unambiguously good. For example, anti-pandemic response with the CDC; anti-child pornography work with NCMEC; and so on. Most people would agree these are good things to work on.
3. Grey areas. By this I mean ‘involve morally thorny, difficult decisions’: examples include health insurance, immigration enforcement, oil companies, the military, spy agencies, police/crime, and so on.
The critical case against Palantir seemed to be something like “you shouldn’t work on category 3 things, because sometimes this involves making morally bad decisions”. An example was immigration enforcement during 2016-2020, aspects of which many people were uncomfortable with.
I don’t believe there is a clear answer to whether you should work with category 3 customers; it’s a case by case thing. Palantir’s answer to this is something like “we will work with most category 3 organizations, unless they’re clearly bad, and we’ll trust the democratic process to get them trending in a good direction over time”. Thus: On the ICE question, they disengaged from ERO (Enforcement and Removal Operations) during the Trump era, while continuing to work with HSI (Homeland Security Investigations). They did work with most other category 3 organizations, on the argument that they’re mostly doing good in the world, even though it’s easy to point to bad things they did as well. I can’t speak to specific details here, but Palantir software is partly responsible for stopping multiple terror attacks. I believe this fact alone vindicates this stance.
This is an uncomfortable stance for many, precisely because you’re not guaranteed to be doing 100% good at all times. You’re at the mercy of history, in some ways, and you’re betting that (a) more good is being done than bad (b) being in the room is better than not. This was good enough for me. Others preferred to go elsewhere. The danger of this stance, of course, is that it becomes a fully general argument for doing whatever the power structure wants. You are just amplifying existing processes. This is where the ‘case by case’ comes in: there’s no general answer, you have to be specific. For my own part, I spent most of my time there working on healthcare and bio stuff, and I feel good about my contributions.
by making the company about something other than making money (civil liberties; AI god) you attract true believers from the start, who in turn create the highly generative intellectual culture that persists once you eventually find success.
Palantir does data integration for companies, but the data is owned by the companies – not Palantir. “Mining” data usually means using somebody else’s data for your own profits, or selling it. Palantir doesn’t do that - customer data stays with the customer.
·nabeelqu.substack.com·
Habits, UI changes, and OS stagnation | Riccardo Mori
“We have been secretly, for the last 18 months, designing a completely new user interface. And that user interface builds on Apple’s legacy and carries it into the next century. And we call that new user interface Aqua, because it’s liquid. One of the design goals was that when you saw it you wanted to lick it.” But it’s important to remember that this part came several minutes after outlining Mac OS X’s underlying architecture. Jobs began talking about Mac OS X by stating its goals, then the architecture used to attain those goals, and then there was a mention of how the new OS looked.
Sure, a lot has changed in the technology landscape over the past twenty years, but the Mac OS X introduction in 2000 is almost disarming in how clearly and precisely focused it is. It is framed in such a way that you understand Jobs is talking about a new powerful tool. Sure, it also looks cool, but it feels as if it’s simply a consequence of a grander scheme. A tool can be powerful in itself, but making it attractive and user-friendly is a crucial extension of its power.
But over the years (and to be fair, this started to happen when Jobs was still CEO), I’ve noticed that, iteration after iteration, the focus of each introduction of a new version of Mac OS X shifted towards more superficial features and the general look of the system. As if users were more interested in stopping and admiring just how gorgeous Mac OS looks, rather than having a versatile, robust and reliable foundation with which to operate their computers and be productive.
What some geeks may be shocked to know is that most regular people don’t really care about these changes in the way an application or operating system looks. What matters to them is continuity and reliability. Again, this isn’t being change-averse. Regular users typically welcome change if it brings something interesting to the table and, most of all, if it improves functionality in meaningful ways. Like saving mouse clicks or making a multi-step workflow more intuitive and streamlined.
But making previous features or UI elements less discoverable because you want them to appear only when needed (and who decides when I need something out of the way? Maybe I like to see it all the time) — that’s not progress. It’s change for change’s sake. It’s rearranging the shelves in your supermarket in a way that seems cool and marketable to you but leaves your customers baffled and bewildered.
This yearly cycle forces Apple engineers — and worse, Apple designers — to come up with ‘new stuff’, and this diverts focus from fixing underlying bugs and UI friction that inevitably accumulate over time.
Microsoft may leave entire layers of legacy code in Windows, turning Windows into a mastodontic operating system with a clean surface and decades of baggage underneath. Apple has been cleaning and rearranging the surface for a while now, and has been getting rid of so much baggage that they went to the other extreme. They’ve thrown the baby out with the bathwater, and Mac OS’s user interface has become more brittle after all the changes and inconsistent applications of those Human Interface Guidelines that have informed good UI design in Apple software for so long.
Meanwhile the system hasn’t really gone anywhere. On mobile, iOS started out excitingly, and admittedly still seems to be moving along an evolving trajectory, but on the iPad front there has been a lot of wheel reinventing to make the device behave more like a traditional computer, instead of embarking both the device and its operating system on a journey of revolution and redefinition of the tablet experience in order to truly start a ‘Post-PC era’.
An operating system is something that shouldn’t be treated as an ‘app’, or as something people should stop and admire for its æsthetic elegance, or a product whose updates should be marketed as if it’s the next iPhone iteration. An operating system is something that needs a separate, tailored development cycle. Something that needs time so that you can devise an evolution plan for it; so that you can keep working on its robustness by correcting bugs that have been unaddressed for years, and present features that really improve workflows and productivity while building organically on what came before. This way, user-facing UI changes will look reasonable, predictable, intuitive, easily assimilable, and not just arbitrary, cosmetic, and of questionable usefulness.
·morrick.me·
The narratives we build, build us — sindhu.live
You see glimpses of it in how Epic Games evolved from game engines to virtual worlds to digital marketplaces, or how Stripe started as a payments processing platform but expanded into publishing books on technological progress, funding atmospheric carbon removal, and running an AI research lab.
Think about what an operating system is: the fundamental architecture that determines what's possible within a system. It manages resources, enables or constrains actions, and creates the environment in which everything else runs.
The dominant view looks at narrative as fundamentally extractive: something to be mined for short-term gain rather than built upon. Companies create compelling stories to sell something, manipulate perception for quick wins, package experiences into consumable soundbites. Oil companies, for example, like to run campaigns about being "energy companies" committed to sustainability, while their main game is still extracting fossil fuels. Vision and mission statements claim to be the DNA of a business, when in reality they're just bumper stickers.
When a narrative truly functions as an operating system, it creates the parameters of understanding, determines what questions can be asked, and what solutions are possible. Xerox PARC's focus on the architecture of information wasn't a fancy summary of their work. It was a narrative that shaped their entire approach to imagining and building things that didn't exist yet. The "how" became downstream of that deeper understanding. So if your narrative isn't generating new realities, you don't have a narrative. You have a tagline.
Most companies think they have an execution problem when, really, they have a meaning problem.
They optimise processes, streamline workflows, and measure outcomes, all while avoiding the harder work of truly understanding what unique value they're creating in the world. Execution becomes a convenient distraction from the more challenging philosophical work of asking what their business means.
A narrative operating system fundamentally shifts this dynamic from what a business does to how it thinks. The business itself becomes almost a vehicle or a social technology for manifesting that narrative, rather than the narrative being a thin veneer over a profit-making mechanism. The conversation shifts, excitingly, from “What does this business do?” to “What can this business mean?” The narrative becomes a reality-construction mechanism: not prescriptive, but generative.
When Stripe first articulated their mission to "increase the GDP of the internet" and “think at planetary scale”, it became a lens to see beyond just economic output. It revealed broader, more exciting questions about what makes the internet more generative: not just financially, but intellectually and culturally. Through this frame emerged problems worth solving that stretched far beyond payments:  What actually prevents more people from contributing to the internet's growth? Why has our civilisation's progress slowed? What creates the conditions for ambitious building? These questions led them down unexpected paths that seem obvious in retrospect. Stripe Atlas enables more participants in the internet economy by removing the complexity of incorporating a company anywhere in the world. Stripe Climate makes climate action as easy as processing a payment by embedding carbon removal into the financial infrastructure itself. Their research arm investigates why human progress has slowed, from the declining productivity of science to the bureaucratisation of building. And finally, Stripe Press—my favourite example—publishes new and evergreen ideas about technological progress.
The very metrics meant to help the organisation coordinate end up drawing boundaries around what it can imagine [1]. The problem here again, is that we’re looking at narratives as proclamations rather than living practices.
I don’t mean painted slogans on walls and meeting rooms—I mean in how teams are structured, how decisions get made, what gets celebrated, what questions are encouraged, and even in what feels possible to imagine.
The question to ask isn't always "What story are we telling?" but also "What reality are we generating?”
Patagonia is a great example of this. Their narrative is, quite simply: “We’re in business to save our home planet”. It shows up in their unconventional decision to use regenerative agriculture for their cotton, yes, but also in their famous "Don't Buy This Jacket" Black Friday campaign, and in their policy to bail out employees arrested for peaceful socio-environmental protests. When they eventually restructured their entire ownership model to "make Earth our only shareholder," it felt less like a radical move and more like the natural next step in their narrative's evolution. The most powerful proof of their narrative operating system was that these decisions felt obvious to insiders long before they made sense to the outside world.
Most narrative operating systems face their toughest test when they encounter market realities and competing incentives. There are players in the system—investors, board members, shareholders—who become active narrative controllers but often have fundamentally different ideas about what the company should be. The pressure to deliver quarterly results, to show predictable growth, to fit into recognisable business models: all of these forces push against maintaining a truly generative narrative.
The magic of "what could be" gets sacrificed for the certainty of "what already works." Initiatives that don't show immediate commercial potential get killed. Questions about meaning and possibility get replaced by questions about efficiency and optimisation.
a narrative operating system's true worth shows up in stranger, more interesting places than a balance sheet.
adaptability and interpretive range. How many different domains can the narrative be applied to? Can it generate unexpected connections? Does it create new questions more than provide answers? What kind of novel use cases or applications outside original context can it generate, while maintaining a clear through-line? Does it have what I call a ‘narrative surplus’: ideas and initiatives that might not fit current market conditions but expand the organisation's possibility space?
rate of internal idea generation. How many ideas come out of the lab? And how many of them don’t have immediate (or direct) commercial viability? A truly generative narrative creates a constant bubbling up of possibilities, not all of which will make sense in the current market or at all.
evolutionary resilience, or how well the narrative can incorporate new developments and contexts while maintaining its core integrity. Generative narratives should be able to evolve without fracturing at the core.
cross-pollination potential. How effectively does the narrative enable different groups to coordinate and build upon each other's work? The open source software movement shows this beautifully: its narrative about collaborative creation enables distributed innovation and actively generates new forms of cooperation we couldn't have imagined before.
There are, of course, other failure modes of narrative operating systems. What happens when narratives become dogmatic and self-referential? When they turn into mechanisms of exclusion rather than generation? When they become so focused on their own internal logic that they lose touch with the realities they're trying to change? Those are meaty questions that deserve their own essay.
·sindhu.live·
The AIs are trying too hard to be your friend
Reinforcement learning with human feedback is a process by which models learn how to answer queries based on which responses users prefer most, and users mostly prefer flattery. More sophisticated users might balk at a bot that feels too sycophantic, but the mainstream seems to love it. Earlier this month, Meta was caught gaming a popular benchmark to exploit this phenomenon: one theory is that the company tuned the model to flatter the blind testers that encountered it so that it would rise higher on the leaderboard.
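The preference-learning dynamic described above can be sketched in miniature. The snippet below is a hypothetical illustration, not OpenAI's or Meta's actual training code (the function name and scores are invented): reward models in RLHF are commonly fit with a Bradley-Terry pairwise loss, so if raters systematically prefer flattering answers, flattery is exactly what the reward model learns to score highly.

```python
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry pairwise loss used in RLHF reward modeling:
    small when the reward model scores the human-preferred response
    above the rejected one, large when the ordering is inverted."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Hypothetical reward scores: if raters keep upvoting the flattering
# answer, minimizing this loss pushes its reward ever higher.
flattering_score, blunt_score = 2.0, 0.5
assert preference_loss(flattering_score, blunt_score) < preference_loss(blunt_score, flattering_score)
```

Nothing in the loss distinguishes "preferred because helpful" from "preferred because flattering"; whatever raters reward is what the model optimizes for.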
A series of recent, invisible updates to GPT-4o had spurred the model to go to extremes in complimenting users and affirming their behavior. It cheered on one user who claimed to have solved the trolley problem by diverting a train to save a toaster, at the expense of several animals; congratulated one person for no longer taking their prescribed medication; and overestimated users’ IQs by 40 or more points when asked.
OpenAI, Meta, and all the rest remain under the same pressures they were under before all this happened. When your users keep telling you to flatter them, how do you build the muscle to fight against their short-term interests? One way is to understand that going too far will result in PR problems, as it has to varying degrees for both Meta (through the Chatbot Arena situation) and now OpenAI. Another is to understand that sycophancy trades against utility: a model that constantly tells you that you’re right is often going to fail at helping you, which might send you to a competitor. A third way is to build models that get better at understanding what kind of support users need, and dialing the flattery up or down depending on the situation and the risk it entails. (Am I having a bad day? Flatter me endlessly. Do I think I am Jesus reincarnate? Tell me to seek professional help.)
But while flattery does come with risk, the more worrisome issue is that we are training large language models to deceive us. By upvoting all their compliments, and giving a thumbs down to their criticisms, we are teaching LLMs to conceal their honest observations. This may make future, more powerful models harder to align to our values — or even to understand at all. And in the meantime, I expect that they will become addictive in ways that make the previous decade’s debate over “screentime” look minor in comparison. The financial incentives are now pushing hard in that direction. And the models are evolving accordingly.
·platformer.news·
you are what you launch: how software became a lifestyle brand
opening notion or obsidian feels less like launching software and more like putting on your favorite jacket. it says something about you. aligns you with a tribe, becomes part of your identity. software isn’t just functional anymore. it’s quietly turned into a lifestyle brand, a digital prosthetic we use to signal who we are, or who we wish we were.
somewhere along the way, software stopped being invisible. it started meaning things. your browser, your calendar, your to-do list, these are not just tools anymore. they are taste. alignment. self-expression.
Though many people definitely still see software as just software, i.e., people who only use defaults.
suddenly your app stack said something about you. not in a loud, obvious way but like the kind of shoes you wear when you don’t want people to notice, but still want them to know. margiela replica. new balance 992. arcteryx. stuff that whispers instead of shouts, it’s all about signaling to the right people.
I guess someone only using default software / being 'unopinionated' about what software choices they make is itself a kind of statement along these lines?
notion might be one of the most unopinionated tools out there. you can build practically anything with it. databases, journals, dashboards, even websites. but for a tool so open-ended, it’s surprisingly curated. only three fonts, ten colors.
if notion is a sleek apartment in seoul, obsidian is a cluttered home lab. markdown files. local folders. keyboard shortcuts. graph views. it doesn’t care how it looks, it cares that it works. it’s functional first, aesthetic maybe never. there’s no onboarding flow, no emoji illustrations, no soft gradients telling you everything’s going to be okay. just an empty vault and the quiet suggestion: you figure it out. obsidian is built for tinkerers. not in the modern, drag and drop sense but in the old way. the “i wanna see how this thing works under the hood” way. it’s a tool that rewards curiosity and exploration. everything in obsidian feels like it was made by someone who didn’t just want to take notes, they wanted to build the system that takes notes. it’s messy, it’s endless, and that’s the point. it’s a playground for people who believe that the best tools are the ones you shape yourself.
notion is for people who want a beautiful space to live in, obsidian is for people who want to wire the whole building from scratch. both offer freedom, but one is curated and the other is raw. obsidian and notion don’t just attract different users. they attract different lifestyles.
the whole obsidian ecosystem runs on a kind of quiet technical fluency.
the fact that people think obsidian is open source matters more than whether it actually is. because open source, in this context, isn’t just a licence, it’s a vibe. it signals independence. self-reliance. a kind of technical purity. using obsidian says: i care about local files. i care about control. i care enough to make things harder on myself. and that is a lifestyle.
now, there’s a “premium” version of everything. superhuman for email. cron (i don’t wanna call it notion calendar) for calendars. arc for browsing. raycast for spotlight. even perplexity, somehow, for search.
these apps aren’t solving new problems. they’re solving old ones with better fonts. tighter animations, cleaner onboarding. they’re selling taste.
chrome gets the job done, but arc gets you. the onboarding feels like a guided meditation. it’s not about speed or performance. it’s about posture.
arc makes you learn new gestures. it hides familiar things. it’s not trying to be invisible, it wants to be felt. same with linear. same with superhuman. these apps add friction on purpose. like doc martens or raw denim that needs breaking in.
linear even has a “work with linear” page, a curated list of companies that use their tool. it’s a perfect example of companies not just acknowledging their lifestyle brand status, but actively leaning into it as a recruiting and signaling mechanism.
·omeru.bearblog.dev·
Apple innovation and execution — Benedict Evans
since the iPhone launched Apple has created three (!) more innovative and category-defining products - the iPad, Watch and AirPods. The iPad is a little polarising amongst tech people (and it remains unfinished business, as Apple concedes in how often it fiddles with the keyboard and the multitasking) but after a rocky start it’s stabilised as roughly the same size as the Mac. The Watch and the AirPods, again, have both become $10bn+ businesses, but also seem to have stabilised. (The ‘Wearables, Home and Accessories’ category also includes the Apple TV, HomePods and Apple’s sizeable cable, dongle & case business.)
Meanwhile, since both the Watch and AirPods on one side and the services on the other are all essentially about the attach rate to iPhone users, you could group them together as one big upsell, which suggests a different chart: half of Apple’s revenue is the iPhone and another third is iPhone upsells - 80% in total.
I think the car project was the classic Apple of Steve Jobs. Apple spent a lot of time and money trying to work out whether it could bring something new, and said no. The shift to electric is destabilising the car industry and creating lots of questions about who builds cars and how they build them, and that’s a situation that should attract Apple. However, I also think that Apple concluded that while there was scope to make a great car, and perhaps one that did a few things better, there wasn’t really scope to do something fundamentally different, and solve some problem that no-one else was solving. Apple would only be making another EV, not redefining what ‘car’ means, because EVs are still basically cars - which is Tesla’s problem. It looks like the EV market will play out not like smartphones, where Apple had something unique, but like Android, where there was frenzied competition in a low-margin commodity market. So, Apple walked away - it said no.
People often suggest that Apple should buy anything from Netflix to telcos to banks, and I used to make fun of this by suggesting that Apple should buy an airline ‘because it could make the seats and the screens better’. Yes, Apple could maybe make better seats than Collins Aerospace, but that’s not what it means to run an airline. Where can Apple change the fundamental questions?
It ships MVPs that get better later, sure, and the original iPhone and Watch were MVPs, but the original iPhone also was the best phone I’d ever owned even with no 3G and no App Store. It wasn’t a concept. It wasn’t a vision of the future; it was the future. The Vision Pro is a concept, or a demo, and Apple doesn’t ship demos. Why did it ship the Vision Pro? What did it achieve? It didn’t sell in meaningful volume, because it couldn’t, and it didn’t lead to much developer activity either, because no-one bought it. A lot of people even at Apple are puzzled.
The new Siri that’s been delayed this week is the mirror image of this. Last summer Apple told a very clear, coherent, compelling story of how it would combine the software frameworks it’s already built with the personal data in apps spread across your phones and the capabilities of LLMs to produce a new kind of personal assistant. This was the best of Apple: taking a new primary technology and proposing a way to make it useful for everyone else.
·ben-evans.com·
Taste is Eating Silicon Valley.
The lines between technology and culture are blurring. And so, it’s no longer enough to build great tech.
Whether expressed via product design, brand, or user experience, taste now defines how a product is perceived and felt as well as how it is adopted, i.e. distributed — whether it’s software or hardware or both. Technology has become deeply intertwined with culture. People now engage with technology as part of their lives, no matter their location, career, or status.
founders are realizing they have to do more than code, more than be technical. Utility is always key, but founders also need to calibrate design, brand, experience, storytelling, community — and cultural relevance. The likes of Steve Jobs and Elon Musk are admired not just for their technical innovations but for the way they turned their products, and themselves, into cultural icons.
The elevation of taste invites a melting pot of experiences and perspectives into the arena — challenging “legacy” Silicon Valley from inside and outside.
B2C sectors that once prioritized functionality and even B2B software now feel the pull of user experience, design, aesthetics, and storytelling.
Arc is taking on legacy web browsers with design and brand as core selling points. Tools like Linear, a project management tool for software teams, are just as known for their principled approach to company building and their heavily-copied landing page design as they are known for their product’s functionality. Companies like Arc and Linear build an entire aesthetic ecosystem that invites users and advocates to be part of their version of the world, and to generate massive digital and literal word-of-mouth. (Their stories are still unfinished but they stand out among this sector in Silicon Valley.)
Any attempt to give examples of taste will inevitably be controversial, since taste is hard to define and ever elusive. These examples are pointing at narratives around taste within a community.
So how do they compete? On how they look, feel, and how they make users feel.6 The subtleties of interaction (how intuitive, friendly, or seamless the interface feels) and the brand aesthetic (from playful websites to marketing messages) are now differentiators, where users favor tools aligned with their personal values. All of this should be intertwined in a product, yet it’s still a noteworthy distinction.
Investors can no longer just fund the best engineering teams and wait either. They’re looking for teams that can capture cultural relevance and reflect the values, aesthetics, and tastes of their increasingly diverse markets.
How do investors position themselves in this new landscape? They bet on taste-driven founders who can capture the cultural zeitgeist. They build their own personal and firm brands too. They redesign their websites, write manifestos, launch podcasts, and join forces with cultural juggernauts.
Code is cheap. Money now chases utility wrapped in taste, function sculpted with beautiful form, and technology framed in artistry.
The dictionary says it’s the ability to discern what is of good quality or of a high aesthetic standard. Taste bridges personal choice (identity), societal standards (culture), and the pursuit of validation (attention). But who sets that standard? Taste is subjective at an individual level — everyone has their own personal interpretation of taste — but it is calibrated from within a given culture and community.
Taste manifests as a combination of history, design, user experience, and embedded values that creates emotional resonance — that defines how a product connects with people as individuals and aligns with their identity. None of the tactical things alone are taste; they’re mere artifacts or effects of expressing one’s taste. At a minimum, taste isn’t bland — it’s opinionated.
The most compelling startups will be those that marry great tech with great taste. Even the pursuit of unlocking technological breakthroughs must be done with taste and cultural resonance in mind, not just for the sake of the technology itself. Taste alone won’t win, but you won’t win without taste playing a major role.
Founders must now master cultural resonance alongside technical innovation.
In some sectors—like frontier AI, deep tech, cybersecurity, industrial automation—taste is still less relevant, and technical innovation remains the main focus. But the footprint of sectors where taste doesn’t play a big role is shrinking. The most successful companies now blend both. Even companies aiming to be mainstream monopolies need to start with a novel opinionated approach.
I think we should leave it at “taste” which captures the artistic and cultural expressions that traditional business language can’t fully convey, reflecting the deep-rooted and intuitive aspects essential for product dev
·workingtheorys.com·
Taste is Eating Silicon Valley.
Revenge of the junior developer | Sourcegraph Blog
Revenge of the junior developer | Sourcegraph Blog
with agents, you don’t have to do all the ugly toil of bidirectional copy/paste and associated prompting, which is the slow human-y part. Instead, the agent takes over and handles that for you, only returning to chat with you when it finishes or gets stuck or you run out of cash.
As fast and robust as they may be, you still need to break things down and shepherd coding agents carefully. If you give one a task that’s too big, like "Please fix all my JIRA tickets", it will hurl itself at the problem and get almost nowhere. They require careful supervision and thoughtful problem selection today. In short, they are ornery critters.
it’s not all doom and gloom ahead. Far from it! There will be a bunch of jobs in the software industry. Just not the kind that involve writing code by hand like some sort of barbarian.
But for the most part, junior developers – including (a) newly-minted devs, (b) devs still in school, and (c) devs who are still thinkin’ about school – are all picking this stuff up really fast. They grab the O’Reilly AI Engineering book, which all devs need to know cover to cover now, and they treat it as job training. They’re all using chat coding, they all use coding assistants, and I know a bunch of you junior developers out there are using coding agents already.
I believe the AI-refusers regrettably have a lot invested in the status quo, which they think, with grievous mistakenness, equates to job security. They all tell themselves that the AI has yet to prove that it’s better than they are at performing X, Y, or Z, and therefore, it’s not ready yet.
It’s not AI’s job to prove it’s better than you. It’s your job to get better using AI
·sourcegraph.com·
Revenge of the junior developer | Sourcegraph Blog
Something Is Rotten in the State of Cupertino
Something Is Rotten in the State of Cupertino
Who decided these features should go in the WWDC keynote, with a promise they’d arrive in the coming year, when, at the time, they were in such an unfinished state they could not be demoed to the media even in a controlled environment? Three months later, who decided Apple should double down and advertise these features in a TV commercial, and promote them as a selling point of the iPhone 16 lineup — not just any products, but the very crown jewels of the company and the envy of the entire industry — when those features still remained in such an unfinished or perhaps even downright non-functional state that they still could not be demoed to the press? Not just couldn’t be shipped as beta software. Not just couldn’t be used by members of the press in a hands-on experience, but could not even be shown to work by Apple employees on Apple-controlled devices in an Apple-controlled environment? But yet they advertised them in a commercial for the iPhone 16, when it turns out they won’t ship, in the best case scenario, until months after the iPhone 17 lineup is unveiled?
“Can anyone tell me what MobileMe is supposed to do?” Having received a satisfactory answer, he continued, “So why the fuck doesn’t it do that?” For the next half-hour Jobs berated the group. “You’ve tarnished Apple’s reputation,” he told them. “You should hate each other for having let each other down.” The public humiliation particularly infuriated Jobs. Walt Mossberg, the influential Wall Street Journal gadget columnist, had panned MobileMe. “Mossberg, our friend, is no longer writing good things about us,” Jobs said. On the spot, Jobs named a new executive to run the group. Tim Cook should have already held a meeting like that to address and rectify this Siri and Apple Intelligence debacle. If such a meeting hasn’t yet occurred or doesn’t happen soon, then, I fear, that’s all she wrote. The ride is over. When mediocrity, excuses, and bullshit take root, they take over. A culture of excellence, accountability, and integrity cannot abide the acceptance of any of those things, and will quickly collapse upon itself with the acceptance of all three.
·daringfireball.net·
Something Is Rotten in the State of Cupertino
Applying the Web Dev Mindset to Dealing With Life Challenges | CSS-Tricks
Applying the Web Dev Mindset to Dealing With Life Challenges | CSS-Tricks
Claude summary: "This deeply personal article explores how the mindset and skills used in web development can be applied to navigating life's challenges, particularly trauma and abuse. The author draws parallels between web security concepts and psychological protection, comparing verbal abuse to cross-site scripting attacks and boundary violations to hacking attempts. Through their experience of escaping an abusive relationship, they demonstrate how the programmer's ability to redefine meaning and sanitize malicious input can be used to protect one's mental health. The article argues against compartmentalizing work and personal life, suggesting instead that the problem-solving approach of developers—with their comfort with meaninglessness and ability to bend rules—can be valuable tools for personal growth and healing. It concludes that taking calculated risks and being vulnerable, both in code and in life, is necessary for creating value and moving forward."
·css-tricks.com·
Applying the Web Dev Mindset to Dealing With Life Challenges | CSS-Tricks
Gen Z and the End of Predictable Progress
Gen Z and the End of Predictable Progress
- Gen Z faces a double disruption: AI-driven technological change and institutional instability
- Three distinct Gen Z cohorts have emerged, each with different relationships to digital reality
- A version of the barbell strategy is splitting career paths between "safety seekers" and "digital gamblers"
- Our fiscal reality is quite stark right now, and that is shaping how young people see opportunities
When I talk to young people from New York or Louisiana or Tennessee or California or DC or Indiana or Massachusetts about their futures, they're not just worried about finding jobs, they're worried about whether or not the whole concept of a "career" as we know it will exist in five years.
When a main path to financial security comes through the algorithmic gods rather than institutional advancement (like when a single viral TikTok can generate more income than a year of professional work) it fundamentally changes how people view everything from education to social structures to political systems that they’re a part of.
Gen Z 1.0: The Bridge Generation: This group watched the digital transformation happen in real-time, experiencing both the analog and internet worlds during formative years. They might view technology as a tool rather than an environment. They're young enough to navigate digital spaces fluently but old enough to remember alternatives. They (myself included) entered the workforce during Covid and might have severe workplace interaction gaps because they missed out on formative time during their early years.

Gen Z 1.5: The Covid Cohort: This group hit major life milestones during a global pandemic. They entered college under Trump but graduated under Biden. This group has a particularly complex relationship with institutions. They watched traditional systems bend and break in real-time during Covid, while simultaneously seeing how digital infrastructure kept society functioning.

Gen Z 2.0: The Digital Natives: This is the first group that will graduate into the new digital economy. This group has never known a world without smartphones. To them, social media could be another layer of reality. Their understanding of economic opportunity is completely different from their older peers.
Gen Z 2.0 doesn't just use digital tools differently, they understand reality through a digital-first lens. Their identity formation happens through and with technology.
Technology enables new forms of value exchange, which creates new economic possibilities so people build identities around these possibilities and these identities drive development of new technologies and the cycle continues.
different generations don’t just use different tools, they operate in different economic realities and form identity through fundamentally different processes. Technology is accelerating differentiation. Economic paths are becoming more extreme. Identity formation is becoming more fluid.
I wrote a very long piece about why Trump won that focused on uncertainty, structural affordability, and fear - and that’s what the younger Gen Z’s are facing. Add AI into this mix, and the rocky path gets rockier. Traditional professional paths that once promised stability and maybe the ability to buy a house one day might not even exist in two years. Couple this with increased zero sum thinking, a lack of trust in institutions and subsequent institutional dismantling, and the whole attention economy thing, and you’ve got a group of young people who are going to be trying to find their footing in a whole new world. Of course you vote for the person promising to dismantle it and save you.
·kyla.substack.com·
Gen Z and the End of Predictable Progress
DeepSeek isn't a victory for the AI sceptics
DeepSeek isn't a victory for the AI sceptics
we now know that as the price of computing equipment fell, new use cases emerged to fill the gap – which is why today my lightbulbs have semiconductors inside them, and I occasionally have to install firmware updates for my doorbell.
surely the compute freed up by more efficient models will be used to train models even harder, and apply even more “brain power” to coming up with responses? Even if DeepSeek is dramatically more efficient, the logical thing to do will be to use the excess capacity to ensure the answers are even smarter.
Sure, if DeepSeek heralds a new era of much leaner LLMs, it’s not great news in the short term if you’re a shareholder in Nvidia, Microsoft, Meta or Google.6 But if DeepSeek is the enormous breakthrough it appears, it just became even cheaper to train and use the most sophisticated models humans have so far built, by one or more orders of magnitude. Which is amazing news for big tech, because it means that AI usage is going to be even more ubiquitous.
·takes.jamesomalley.co.uk·
DeepSeek isn't a victory for the AI sceptics
Zuckerberg officially gives up
Zuckerberg officially gives up
I floated a theory of mine to Atlantic writer Charlie Warzel on this week’s episode of Panic World: that content moderation, as we’ve understood it, effectively ended on January 6th, 2021. You can listen to the whole episode here, but the way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.
After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.
Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular.
·garbageday.email·
Zuckerberg officially gives up
I still don’t think companies serve you ads based on spying through your microphone
I still don’t think companies serve you ads based on spying through your microphone
Crucially, this was never proven in court. And if Apple settle the case it never will be. Let’s think this through. For the accusation to be true, Apple would need to be recording those wake word audio snippets and transmitting them back to their servers for additional processing (likely true), but then they would need to be feeding those snippets in almost real time into a system which forwards them onto advertising partners who then feed that information into targeting networks such that next time you view an ad on your phone the information is available to help select the relevant ad.
Why would Apple do that? Especially given both their brand and reputation as a privacy-first company combined with the large amounts of product design and engineering work they’ve put into preventing apps from doing exactly this kind of thing by enforcing permission-based capabilities and ensuring a “microphone active” icon is available at all times when an app is listening in.
·simonwillison.net·
I still don’t think companies serve you ads based on spying through your microphone
Your "Per-Seat" Margin is My Opportunity
Your "Per-Seat" Margin is My Opportunity

Traditional software is sold on a per-seat subscription. More humans, more money. We are headed to a future where AI agents will replace the work humans do. But you can’t charge agents a per-seat cost. So we’re headed to a world where software will be sold on a consumption model (think tasks) and then on an outcome model (think job completed). Incumbents will be forced to adapt, but it’s the classic innovator’s dilemma: how do you suddenly give up all that subscription revenue? This gives an opportunity for startups to win.

Per-seat pricing only works when your users are human. But when agents become the primary users of software, that model collapses.
Executives aren't evaluating software against software anymore. They're comparing the combined costs of software licenses plus labor against pure outcome-based solutions. Think customer support (per resolved ticket vs. per agent + seat), marketing (per campaign vs. headcount), sales (per qualified lead vs. rep). That's your pricing umbrella—the upper limit enterprises will pay before switching entirely to AI.
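That "pricing umbrella" is simple arithmetic. Here is a minimal sketch with made-up example numbers (the function name and figures are illustrative, not from the original post), bounding what an enterprise would pay per resolved ticket before an outcome-priced AI alternative wins outright:

```python
def pricing_umbrella(seat_cost_yr: float, seats: int,
                     labor_cost_yr: float, tickets_yr: int) -> float:
    """Upper limit an enterprise will pay per resolved ticket:
    today's combined software-license + labor spend, divided by volume."""
    current_total = seats * (seat_cost_yr + labor_cost_yr)
    return current_total / tickets_yr

# e.g. 50 support agents, $1,200/yr per seat, $60k/yr labor,
# resolving 100,000 tickets a year:
print(pricing_umbrella(1_200, 50, 60_000, 100_000))  # → 30.6
```

Any outcome-based vendor pricing below that ~$30.60/ticket ceiling undercuts the incumbent software-plus-labor stack entirely.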
enterprises are used to deterministic outcomes and fixed annual costs. Usage-based pricing makes budgeting harder. But individual leaders seeing 10x efficiency gains won't wait for procurement to catch up. Savvy managers will find ways around traditional buying processes.
This feels like a generational reset of how businesses operate. Zero upfront costs, pay only for outcomes—that's not just a pricing model. That's the future of business.
The winning strategy in my books? Give the platform away for free. Let your agents read and write to existing systems through unstructured data—emails, calls, documents. Once you handle enough workflows, you become the new system of record.
·writing.nikunjk.com·
Your "Per-Seat" Margin is My Opportunity
Meet Willow, our state-of-the-art quantum chip
Meet Willow, our state-of-the-art quantum chip
Quantum engineers are essentially working with a "black box" - they can harness quantum mechanical principles to build working computers without fully understanding the deeper nature of what's happening, whether it truly involves parallel universes or some other explanation for the remarkable computational advantages quantum computers achieve.
Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today. You can think of this as an entry point for quantum computing — it checks whether a quantum computer is doing something that couldn’t be done on a classical computer. Any team building a quantum computer should check first if it can beat classical computers on RCS; otherwise there is strong reason for skepticism that it can tackle more complex quantum tasks.
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10^25 or 10 septillion years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
·blog.google·
Meet Willow, our state-of-the-art quantum chip
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege

An experienced college essay reviewer identifies seven distinct patterns that reveal ChatGPT's writing "fingerprint" in admission essays, demonstrating how AI-generated content, despite being well-written, often lacks originality and follows predictable patterns that make it detectable to experienced readers.

Seven key indicators of ChatGPT-written essays:

  1. Specific vocabulary choices (e.g., "delve," "tapestry")
  2. Limited types of extended metaphors (weaving, cooking, painting, dance, classical music)
  3. Distinctive punctuation patterns (em dashes, mixed apostrophe styles)
  4. Frequent use of tricolons (three-part phrases), especially ascending ones
  5. Common phrase pattern: "I learned that the true meaning of X is not only Y, it's also Z"
  6. Predictable future-looking conclusions: "As I progress... I will carry..."
  7. Multiple ending syndrome (similar to Lord of the Rings movies)
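A few of these indicators (the telltale vocabulary, tricolons, and the "not only X, it's also Z" template) are mechanical enough to check with a crude script. This is a hypothetical illustration of indicators 1, 4, and 5, not the reviewer's actual method; the word list and regexes are assumptions:

```python
import re

# Example telltale words from indicator 1 (illustrative, not exhaustive)
TELLTALE_WORDS = {"delve", "tapestry"}

def flag_essay(text: str) -> list[str]:
    """Return a list of crude ChatGPT-style indicators found in the text."""
    flags = []
    words = set(re.findall(r"[a-z]+", text.lower()))
    if words & TELLTALE_WORDS:
        flags.append("telltale vocabulary")
    # Tricolon: "X, Y, and Z" three-part phrase (indicator 4)
    if re.search(r"\w+, \w+, and \w+", text):
        flags.append("tricolon")
    # "not only Y, it's also Z" template (indicator 5)
    if re.search(r"not only .+?, (?:it's|it’s|but) also", text, re.I):
        flags.append("not-only-but-also template")
    return flags

print(flag_essay("We delve into a tapestry."))  # → ['telltale vocabulary']
```

A real detector would need far more nuance, but even this shows how formulaic the patterns are.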
·reddit.com·
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege
Data Laced with History: Causal Trees & Operational CRDTs
Data Laced with History: Causal Trees & Operational CRDTs
After mulling over my bullet points, it occurred to me that the network problems I was dealing with—background cloud sync, editing across multiple devices, real-time collaboration, offline support, and reconciliation of distant or conflicting revisions—were all pointing to the same question: was it possible to design a system where any two revisions of the same document could be merged deterministically and sensibly without requiring user intervention?
It’s what happened after sync that was troubling. On encountering a merge conflict, you’d be thrown into a busy conversation between the network, model, persistence, and UI layers just to get back into a consistent state. The data couldn’t be left alone to live its peaceful, functional life: every concurrent edit immediately became a cross-architectural matter.
I kept several questions in mind while doing my analysis. Could a given technique be generalized to arbitrary and novel data types? Did the technique pass the PhD Test? And was it possible to use the technique in an architecture with smart clients and dumb servers?
Concurrent edits are sibling branches. Subtrees are runs of characters. By the nature of reverse timestamp+UUID sort, sibling subtrees are sorted in the order of their head operations.
This is the underlying premise of the Causal Tree. In contrast to all the other CRDTs I’d been looking into, the design presented in Victor Grishchenko’s brilliant paper was simultaneously clean, performant, and consequential. Instead of dense layers of theory and labyrinthine data structures, everything was centered around the idea of atomic, immutable, metadata-tagged, and causally-linked operations, stored in low-level data structures and directly usable as the data they represented.
I’m going to be calling this new breed of CRDTs operational replicated data types—partly to avoid confusion with the existing term “operation-based CRDTs” (or CmRDTs), and partly because “replicated data type” (RDT) seems to be gaining popularity over “CRDT” and the term can be expanded to “ORDT” without impinging on any existing terminology.
Much like Causal Trees, ORDTs are assembled out of atomic, immutable, uniquely-identified and timestamped “operations” which are arranged in a basic container structure. (For clarity, I’m going to be referring to this container as the structured log of the ORDT.) Each operation represents an atomic change to the data while simultaneously functioning as the unit of data resultant from that action. This crucial event–data duality means that an ORDT can be understood as either a conventional data structure in which each unit of data has been augmented with event metadata; or alternatively, as an event log of atomic actions ordered to resemble its output data structure for ease of execution.
To implement a custom data type as a CT, you first have to “atomize” it, or decompose it into a set of basic operations, then figure out how to link those operations such that a mostly linear traversal of the CT will produce your output data. (In other words, make the structure analogous to a one- or two-pass parsable format.)
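A toy sketch of that atomization for plain text, assuming Lamport-style (timestamp, site) operation IDs and the reverse sibling ordering described above. The names here are illustrative, not taken from Grishchenko's paper:

```python
class Op:
    """One atomized character insertion in a Causal Tree."""
    def __init__(self, op_id, parent_id, char):
        self.id = op_id          # (lamport_timestamp, site_id), globally unique
        self.parent = parent_id  # causal link: the op this character follows
        self.char = char         # atomic unit of data ("" for the root)

def evaluate(ops):
    """Produce the document text with a depth-first walk; sibling subtrees
    (concurrent edits) are visited in reverse (timestamp, site) order,
    so newer branches come first."""
    children = {}
    for op in ops:
        children.setdefault(op.parent, []).append(op)
    out = []
    def walk(op):
        out.append(op.char)
        for child in sorted(children.get(op.id, []),
                            key=lambda c: c.id, reverse=True):
            walk(child)
    root = next(op for op in ops if op.parent is None)
    walk(root)
    return "".join(out)

# Site 0 types "cat"; site 1 concurrently inserts "h" after the "c".
root = Op((0, 0), None, "")
c = Op((1, 0), (0, 0), "c")
a = Op((2, 0), (1, 0), "a")
t = Op((3, 0), (2, 0), "t")
h = Op((2, 1), (1, 0), "h")  # concurrent sibling of "a"
print(evaluate([root, c, a, t, h]))  # → "chat"
```

Note the merge is deterministic: any replica holding the same set of operations renders the same string, with the later (timestamp, site) sibling winning the earlier position.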
OT and CRDT papers often cite 50ms as the threshold at which people start to notice latency in their text editors. Therefore, any code we might want to run on a CT—including merge, initialization, and serialization/deserialization—has to fall within this range. Except for trivial cases, this precludes O(n²) or slower complexity: a 10,000 word article at 0.01ms per character would take 7 hours to process! The essential CT functions have to be O(n log n) at the very worst.
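A back-of-envelope check of that 7-hour figure, assuming roughly 5 characters per word and quadratic total work (each character-operation paying the 0.01ms cost once per character in the document):

```python
chars = 10_000 * 5              # ~5 characters per word (assumption)
per_char_ms = 0.01              # cost per character-operation
total_ms = chars * chars * per_char_ms   # O(n²) total work
total_hours = total_ms / 1000 / 3600
print(round(total_hours, 1))    # → 6.9, roughly the 7 hours cited
```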
Of course, CRDTs aren’t without their difficulties. For instance, a CRDT-based document will always be “live”, even when offline. If a user inadvertently revises the same CRDT-based document on two offline devices, they won’t see the familiar pick-a-revision dialog on reconnection: both documents will happily merge and retain any duplicate changes. (With ORDTs, this can be fixed after the fact by filtering changes by device, but the user will still have to learn to treat their documents with a bit more caution.) In fully decentralized contexts, malicious users will have a lot of power to irrevocably screw up the data without any possibility of a rollback, and encryption schemes, permission models, and custom protocols may have to be deployed to guard against this. In terms of performance and storage, CRDTs contain a lot of metadata and require smart and performant peers, whereas centralized architectures are inherently more resource-efficient and only demand the bare minimum of their clients. You’d be hard-pressed to use CRDTs in data-heavy scenarios such as screen sharing or video editing. You also won’t necessarily be able to layer them on top of existing infrastructure without significant refactoring.
Perhaps a CRDT-based text editor will never quite be as fast or as bandwidth-efficient as Google Docs, for such is the power of centralization. But in exchange for a totally decentralized computing future? A world full of devices that control their own data and freely collaborate with one another? Data-centric code that’s entirely free from network concerns? I’d say: it’s surely worth a shot!
·archagon.net·
Data Laced with History: Causal Trees & Operational CRDTs
The Only Reason to Explore Space
The Only Reason to Explore Space

Claude summary: This article argues that the only enduring justification for space exploration is its potential to fundamentally transform human civilization and our understanding of ourselves. The author traces the history of space exploration, from the mystical beliefs of early rocket pioneers to the geopolitical motivations of the Space Race, highlighting how current economic, scientific, and military rationales fall short of sustaining long-term commitment. The author contends that achieving interstellar civilization will require unprecedented organizational efforts and societal commitment, likely necessitating institutions akin to governments or religions. Ultimately, the piece suggests that only a society that embraces the pursuit of interstellar civilization as its central legitimating project may succeed in this monumental endeavor, framing space exploration not as an inevitable outcome of progress, but as a deliberate choice to follow a "golden path to a destiny among the stars."

·palladiummag.com·
The Only Reason to Explore Space
The CrowdStrike Outage and Market-Driven Brittleness
The CrowdStrike Outage and Market-Driven Brittleness
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.
The asymmetry of costs is largely due to our complex interdependency on so many systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams sometimes years earlier on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug or a hack of some key software dependency.
This market force has led to the current global interdependence of systems, far and wide beyond their industry and original scope. It’s why flying planes depends on software that has nothing to do with the avionics. It’s why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.
Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can’t expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.
·lawfaremedia.org·
The CrowdStrike Outage and Market-Driven Brittleness