Fandom's Great Divide
The 1970s sitcom "All in the Family" sparked debates with its bigoted-yet-lovable Archie Bunker character, leaving audiences divided over whether the show was satirizing prejudice or inadvertently promoting it, and reflecting TV's power to shape societal attitudes.
This sort of audience divide, not between those who love a show and those who hate it but between those who love it in very different ways, has become a familiar schism in the past fifteen years, during the rise of—oh, God, that phrase again—Golden Age television. This is particularly true of the much lauded stream of cable “dark dramas,” whose protagonists shimmer between the repulsive and the magnetic. As anyone who has ever read the comments on a recap can tell you, there has always been a less ambivalent way of regarding an antihero: as a hero
a subset of viewers cheered for Walter White on “Breaking Bad,” growling threats at anyone who nagged him to stop selling meth. In a blog post about that brilliant series, I labelled these viewers “bad fans,” and the responses I got made me feel as if I’d poured a bucket of oil onto a flame war from the parapets of my snobby critical castle. Truthfully, my haters had a point: who wants to hear that they’re watching something wrong?
·newyorker.com·
The Other Two Captures the Strangeness of Social Media Stardom
Social media is the lens for a lot of the show’s biggest bits and even plotlines. It is, just as in life, omnipresent, and so, even as the show spotlights the inherent ridiculousness of the extremely online, it also understands the way social media is a deranging accelerant of everyday problems, and thus a medium of everyday life.
These are all just a bunch of funny jokes about people who are too online, celebrities whose shallow fame exists only by way of the apps, and a contemporary American culture hypnotized by the blue light of screens.
In her book The Drama of Celebrity, the scholar Sharon Marcus argues that celebrity, as we know it, is a cultural phenomenon with three distinct authors. There’s the celebrity, who expresses themself through whatever art or product they make; there are the journalists who write about and photograph and criticize and otherwise construct the celebrity’s public image; and then there’s the public, who contribute devotion and imagination, and money, and love and hate.
There was a time when Marilyn Monroe emerged as an illusion, a trick of the light produced between herself, her studio’s massive press apparatus, and an adoring and vampiric public. Today, anyone can be an illusion like this, if at smaller scale.
The show by no means wants to redeem the industry, but, this season especially, it’s become invested in exposing the lazy nihilism that can come along with seeing the worst in people. If you run into a craven, soulless industry hack in the morning, you ran into a craven, soulless industry hack; if you run into them all day, you are the craven, soulless industry hack.
The Other Two is about identity. It’s a flimsy, fungible thing, and it’s a trap. It’s a point of pride and a point of embarrassment. There’s the real you that we all struggle to find and to express truthfully; there’s the version of yourself that you perform for the public; there’s the version of you that others create in and against their own image.
·newrepublic.com·
Euphoria's Cinematography Explained — Light, Camera Movement, and Long Takes
to Levinson, emotional realism meant making the internal external. In other words, he wanted to show the extreme highs and lows of adolescence visually, even if those visuals didn’t adhere to a physical realism.
why not give a show that’s not like a realistic portrait of the youth but more like how they portray themselves
“most of the time, we’re using primary colors, and I’m relying a lot on the orange-blue color contrast, which is a really basic one… We use that in night scenes, as well as in day scenes.” As the Euphoria cinematographer notes, the orange-blue contrast is a classic use of a complementary color scheme. And it is used in countless films and TV shows. But Rév cranks up the orange-ness and blue-ness of the lights, creating a contrast that goes beyond the reality of a setting.
the lighting is not completely divorced from the physical reality of the situation. The blue is motivated by the moon, the orange by streetlights. But the degree to which he leans into this contrast is what goes beyond reality and into emotional realism.
“Of course, you have party scenes and stuff, [with] basic colors. Sometimes, it’s red; sometimes, it’s blue,” explains the Euphoria cinematographer. “But we try to stick to one defined color, and not be all over the place.”
I would say the camera movement is the glue in the show, that glues it together.
With a few exceptions, the camera seems to float, giving it an ethereal quality matching the show’s mood. “When the camera is moving, it’s always on tracks or on a dolly,” said Levinson. “We do very little handheld camerawork. And probably 70 percent of the show is shot on sets.” These sets are key to the camera movement. Because the sets are built from the ground up, they are often constructed with specific camera maneuvers in mind.
Of course, this level of complexity requires a massive amount of planning, including storyboarding the camera movements. “Marcell and I sat down with Peter Beck, our storyboard artist, and we basically storyboarded the entire episode,” says Levinson. “There were roughly 700 or 800 boards, and then, in conversation with [production director] Michael [Grasley], we built all the sets from those boards.” The shot took a whopping six days to finish, a rarity in television. “Part of the nature of television is that it doesn’t usually allow for a lot of indulgence,” explains Levinson. “On this show, we made the decision in advance not to do a lot of coverage, which is unusual for television. But in deciding to shoot that way, we accepted the fact that we had to really plan the thing out to get it right.” This type of auteur-esque control is what allows Euphoria cinematography to look so striking. It’s a show which has a visual style that few other series have ever matched.
·studiobinder.com·
Why Does Everything On Netflix Look Like That?
Although it’s hard to pinpoint what exactly makes all Netflix shows look the same, a few things stand out:
  • The image in general is dark, and the colors are extremely saturated.
  • Especially in scenes at night, there tends to be a lot of colored lighting, making everything look like it’s washed in neon even if the characters are inside.
  • Actors look like the makeup is caked on their faces, and details in their costumes, like puckering seams, are unusually visible.
Much like you can instantly recognize a Syfy channel production by its heavy reliance on greenscreen and not-as-expensive computer-generated special effects, or a Hallmark movie by its bright, fluffy, pastel look, Netflix productions also have recognizable aesthetics. Even if you don’t know what to look for, it’s so distinct that you’ll probably be able to guess whether or not something was created for Netflix just based on a few frames.
Netflix requests some basic technical specifications from all its productions, which include things like what cameras to use, Netflix’s minimum requirements for the resolution of the image, and what percentage of the production can use a non-approved camera.
Connor described the budgets on Netflix projects as being high, but in an illusory way. This is because in the age of streaming, “above the line” talent like big-name actors or directors get more of the budget that’s allotted to Netflix projects because they won’t get any backend compensation from the profits of the film or television show. “They’re overcompensated at the beginning,” Connor said. “That means that all of your above the line talent now costs, on day one that the series drops, 130 percent of what it costs somewhere else. So your overall budget looks much higher, but in fact, what’s happened is to try to save all that money, you pull it out of things like design and location.”
·vice.com·
Art of the Cut: Dune 2
the early television speaker technology was closer in design to a telephone: built to maximize vocal range over other things. But in Cinema we’re a lot more free. This was mixed in Dolby Atmos, native. So sound was always a very key strategy.
I think TV is so dialogue-driven because in the early days, you couldn’t really have very cinematic images. You’re just looking at a small screen. What are you gonna do? You gotta tell me the story with talking.
Our aim in Dune, which is a vast ensemble piece with a complex story and complex backgrounds and Frank Herbert’s almost fractal approach to storytelling, was utter clarity in the delivery of ideas.
There’s been some recent discussion about burdensome amounts of dialogue in film because of the influence of Television. From my background in Britain, it’s probably something I recognize more as the heritage of Radio and Theater rather than Television.
What’s the pace, the overall pace of a film? When I say pace, I don’t just mean how fast the cuts are. I mean what is moving you, underneath? What is the big drive in the story and how do we cross-cut those? If you cut off the flow too soon, it’s just an age-old editing conundrum. In TV often, Mad Men, for example, is constantly doing the Chinese plate trick of going between different story strands, keeping each plate spinning, and that works in TV because of the medium.
in a feature film where you want a strong feeling of drive, it’s sometimes a better idea to kind of combine stories or to let them flow. I’m basically playing with Paul’s story, the Harkonnen story, and Jessica laying “the Way.” Irulan’s diaries always gave us an opportunity to clarify their progress. And to that end, Denis shot a beautiful amount of material of the diary room.
There were so many more angles than we needed because he knew that we might need to improvise one [a diary scene] and we did.
·borisfx.com·
Vision Pro is an over-engineered “devkit” // Hardware bleeds genius & audacity but software story is disheartening // What we got wrong at Oculus that Apple got right // Why Meta could finally have its Android moment
Some of the topics I touch on:
  • Why I believe Vision Pro may be an over-engineered “devkit”
  • The genius & audacity behind some of Apple’s hardware decisions
  • Gaze & pinch is an incredible UI superpower and major industry ah-ha moment
  • Why the Vision Pro software/content story is so dull and unimaginative
  • Why most people won’t use Vision Pro for watching TV/movies
  • Apple’s bet in immersive video is a total game-changer for live sports
  • Why I returned my Vision Pro… and my Top 10 wishlist to reconsider
  • Apple’s VR debut is the best thing that ever happened to Oculus/Meta
  • My unsolicited product advice to Meta for Quest Pro 2 and beyond
Apple really played it safe in the design of this first VR product by over-engineering it. For starters, Vision Pro ships with more sensors than what’s likely necessary to deliver Apple’s intended experience. This is typical in a first-generation product that’s been under development for so many years. It makes Vision Pro start to feel like a devkit.
A sensor party: 6 tracking cameras, 2 passthrough cameras, 2 depth sensors (plus 4 eye-tracking cameras not shown)
it’s easy to understand two particularly important decisions Apple made for the Vision Pro launch:
  • Designing an incredible in-store Vision Pro demo experience, with the primary goal of getting as many people as possible to experience the magic of VR through Apple’s lenses — most of whom have no intention to even consider a $4,000 purchase. The demo is only secondarily focused on actually selling Vision Pro headsets.
  • Launching an iconic woven strap that photographs beautifully even though this strap simply isn’t comfortable enough for the vast majority of head shapes. It’s easy to conclude that this decision paid off because nearly every bit of media coverage (including and especially third-party reviews on YouTube) uses the woven strap despite the fact that it’s less comfortable than the dual loop strap that’s “hidden in the box”.
Apple’s relentless and uncompromising hardware insanity is largely what made it possible for such a high-res display to exist in a VR headset, and it’s clear that this product couldn’t possibly have launched much sooner than 2024 for one simple limiting factor — the maturity of micro-OLED displays plus the existence of power-efficient chipsets that can deliver the heavy compute required to drive this kind of display (i.e. the M2).
·hugo.blog·
Saving the Liberal Arts - David Perell
  • The decline of Liberal Arts is driven by both financial and ideological factors.
    • Universities prioritize career-focused majors due to funding cuts and student demand for practical skills.
    • The high cost of college tuition pushes students to focus on getting a high-paying job after graduation.
    • Professional skills become obsolete quickly, while a Liberal Arts education provides a foundation of knowledge that is timeless.
    • Liberal Arts education is considered "fundamental knowledge" that is slow to change and provides a broader perspective.
  • People are sacrificing their well-being for work without questioning why.
  • Obsession with productivity discourages people from pursuing non-income producing knowledge.
  • The decline of leisure time reduces opportunities for contemplation and reflection.
  • The Liberal Arts provide timeless and fundamental knowledge that is applicable across situations.
    • Universities prioritize filtering students for the labor market over nurturing their potential.
    • The tenure system can incentivize research over teaching and responsiveness to student needs.
  • A Liberal Arts education helps people question societal structures and appreciate life beyond materialism.
  • Critique of universities for prioritizing job placement over a well-rounded education.
There’s a trade-off between practicality and timelessness. Knowledge at the top of the chart, such as fashion and commerce, is immediately actionable, but decays the fastest. They’re relevant in everyday life for socializing and earning an income, but their specifics have a short half-life. Meanwhile, culture and nature are deeper on the chart. They offer fundamental knowledge. Their lessons apply everywhere, even if they’re beyond the scope of conscious thought in most people’s day-to-day lives. The further down the layers you travel, the longer it will take for the knowledge to pay off, but the longer that information will stay relevant and the more widely applicable it will be.
by pushing students to pursue what is immediately profitable instead of what’s ultimately meaningful, they will devalue fundamental knowledge. That’s because the business models for income share agreements and student debt insurance only work if the students make a lot of money after college.
In a study conducted during her time there, three quarters of freshmen said college was essential to developing a meaningful philosophy of life. By contrast, only a third said that it was essential to financial well-being. Today, those fractions have flipped.
You shouldn’t need to attend four years of college to earn a living. Instead, we should make it cheap and expedient for young people to receive a professional education and develop practical skills. After a year of training in the classroom, they can do an apprenticeship where they can get paid to learn instead of paying to learn.
These changes will help young adults achieve financial stability, build economically rewarded skills, and break free from parental dependence. They should study the Liberal Arts when they’re older. Rather than forcing students to slog through Dostoevsky when they are 18 — when they’re all wondering, rightly, how this is going to help them find a job — we should create schools for amateurs of all ages so they can read Crime and Punishment and The Brothers Karamazov later, when they have the life experience to appreciate it.
Today, most students are only able to formally study the Liberal Arts between the ages of 18 and 30. They only have four years during their undergraduate degree, and only the most academically ambitious of them continue their studies into graduate school. Of those who pursue a master’s degree, most stay in academia.
But if students could take Liberal Arts classes later in life, a much greater percentage would learn for the joy of it. Once again, the religious metaphor holds. No Church expects its congregants to only study the Bible for four years, with an option to keep studying as long as you plan to become a priest. But that’s what we do with the Liberal Arts.
Plato would have criticized today’s Westerners who compromise an erudite life and salivate over wealth instead, even when they’re swimming in riches. In a criticism of his contemporaries, he observed that their love of wealth “leaves them no respite to concern themselves with anything other than their private property. The soul of the citizen today is entirely taken up with getting rich and with making sure that every day brings its share of profit. The citizen is ready to learn any technique, to engage in any kind of activity, so long as it is profitable. He thumbs his nose at the rest.”
To prevent these failure modes, there are three guidelines Liberal Arts schools should follow: Don’t focus on practical skills, prize free thinking over ideology, and target an older audience of professionally established people.
A market-driven curriculum will create McDegree programs where students study the kinds of self-help books you find in the philosophy section of an airport bookstore. Think of already-popular books like Aristotle’s Way: How Ancient Wisdom Can Change Your Life, which reduces the great Greek philosopher into a self-help guru.
People who are employed struggle to pursue a Liberal Arts education not just because they have busy schedules, but because the material can feel so disconnected from daily reality.
With the extended focus on Professional Education at the early stage of a career, we should create opportunities for students to receive a Civilized Education throughout their life where they can appreciate life beyond the almighty Dollar.
Wesleyan President Michael Roth, the author of the best book I’ve read on the Liberal Arts, once wrote: “Education is for human development, human freedom, not the molding of an individual into a being who can perform a particular task. That would be slavery.” Up until now, our colleges have followed a philosophy of giving young people freedom early and waiting until the postgraduate years to focus on a profession. Only toward the end of the 20th century did a bachelor’s degree become a prerequisite for most jobs and professional academic study such as a master’s or a PhD. We should return to a world where Civilized Education is not mandatory. Where students can postpone the Liberal Arts to acquire technical skills that are rewarded in the economic marketplace. Students are already asking for the changes, as shown by the changing composition of majors.
College was once a place to explore the True, the Good, and the Beautiful without regard for utility. But today, it’s seen as a means toward the end of finding a job. Ideas that aren’t economically valuable are belittled as useless knowledge. Materialism has become our North Star. As a society, we measure progress in changes to the material world, where we prioritize what we can see and measure. We evaluate ourselves by productivity, our economy by the availability of cheap goods, and our civilization by the rate of technological progress. We’ve forgotten about our human need for wonder, beauty, and contemplation. Today, we worship the Factual, the Useful, and the Monetizable.
Don’t get me wrong. The fruits of clean water and modern medicine are miraculous. But what good is a materialistic utopia if it comes at the cost of a spiritual one? We are more indebted, depressed, and suicidal than ever before. And yet, we continue to worship technological progress and material abundance as if they will elevate the soul. And so, we run and run and run — hoping that all that effort will save us. But if people feel too constrained to pursue wonder and beauty as ends in themselves, are we really making progress?
Mistaking money for cultural well-being is like mistaking a roof for a home. ROI-brain only speaks in the language of materialism, and we should be skeptical of it. Otherwise, our lives will follow the leash of cheap pleasures and distracting dopamine hits. Corporations, too, will continue to sell instant improvements without regard for their long-term effects.
Taken all together, the Liberal Arts is the meta-recognition of our world.
If we cannot question the systems that guide our lives, we will be enslaved to them. Nor will we be saved by comfort, pleasure, or a respectable job that impresses our family at the Thanksgiving table. Only with a Liberal Arts education will we develop the capacity to thrive as conscious adults. By studying the foundations of how we think, who we are, and how we got here, we’ll gain control over our minds and create a more flourishing civilization.
·perell.com·
Effects of Acute Exercise on Mood, Cognition, Neurophysiology, and Neurochemical Pathways - A Review
A significant body of work has investigated the effects of acute exercise, defined as a single bout of physical activity, on mood and cognitive functions in humans. Several excellent recent reviews have summarized these findings; however, the neurobiological basis of these results has received less attention. In this review, we will first briefly summarize the cognitive and behavioral changes that occur with acute exercise in humans. We will then review the results from both human and animal model studies documenting the wide range of neurophysiological and neurochemical alterations that occur after a single bout of exercise. Finally, we will discuss the strengths, weaknesses, and missing elements in the current literature, as well as offer an acute exercise standardization protocol and provide possible goals for future research.
As we age, cognitive decline, though not inevitable, is a common occurrence resulting from the process of neurodegeneration. In some instances, neurodegeneration results in mild cognitive impairment or more severe forms of dementia including Alzheimer’s, Parkinson’s, or Huntington’s disease. Because of the role of exercise in enhancing neurogenesis and brain plasticity, physical activity may serve as a potential therapeutic tool to prevent, delay, or treat cognitive decline. Indeed, studies in both rodents and humans have shown that long-term exercise is helpful in both delaying the onset of cognitive decline and dementia as well as improving symptoms in patients with an already existing diagnosis
·ncbi.nlm.nih.gov·
Memetics - Wikipedia
The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene,[1] to illustrate the principle that he later called "Universal Darwinism".
He gave as examples, tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on.
Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes.
Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random.
·en.m.wikipedia.org·
AI startups require new strategies

Comment from Habitue on Hacker News:

> These are some good points, but it doesn't seem to mention a big way in which startups disrupt incumbents, which is that they frame the problem a different way, and they don't need to protect existing revenue streams.

The “hard tech” in AI are the LLMs available for rent from OpenAI, Anthropic, Cohere, and others, or available as open source with Llama, Bloom, Mistral and others. The hard-tech is a level playing field; startups do not have an advantage over incumbents.
There can be differentiation in prompt engineering, problem break-down, use of vector databases, and more. However, this isn’t something where startups have an edge, such as being willing to take more risks or be more creative. At best, it is neutral; certainly not an advantage.
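To make the “prompt engineering, problem break-down, use of vector databases” point concrete, here is a minimal retrieval-augmented prompting sketch. It is purely illustrative: the bag-of-words “embedding,” the in-memory store, and the printed prompt standing in for a model call are assumptions, not anything the article prescribes; in practice you would swap in a real embedding model, a real vector database, and whichever rented LLM you use.

```python
# Minimal sketch of the "prompt engineering + vector database" layer that sits
# on top of a rented LLM. Everything here is a toy stand-in: the embedding is
# bag-of-words, the "vector database" is a list in memory, and no model is
# actually called.

import math
from collections import Counter

def embed(text: str) -> dict[str, float]:
    """Toy bag-of-words 'embedding' (stand-in for a real embedding model)."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values())) or 1.0
    return {word: c / norm for word, c in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(weight * b.get(word, 0.0) for word, weight in a.items())

class TinyVectorStore:
    """In-memory stand-in for a vector database (Pinecone, pgvector, ...)."""
    def __init__(self) -> None:
        self.items: list[tuple[dict[str, float], str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def search(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(q, item[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_prompt(question: str, context: list[str]) -> str:
    """The 'prompt engineering' part: ground the model in retrieved context."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the customer question using only the context below.\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    store = TinyVectorStore()
    store.add("Refund policy: refunds are available within 30 days of purchase.")
    store.add("Support is offered in English, Spanish, and German.")
    prompt = build_prompt("Can I get my money back after two weeks?",
                          store.search("refund policy"))
    print(prompt)  # in a real system, this prompt would be sent to the rented LLM
```

The point of the sketch is the article's point: none of this layer is defensible “hard tech,” and an incumbent can assemble the same pieces just as easily.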
This doesn’t mean it’s impossible for a startup to succeed; surely many will. It means that you need a strategy that creates differentiation and distribution, even more quickly and dramatically than is normally required
Whether you’re training existing models, developing models from scratch, or simply testing theories, high-quality data is crucial. Incumbents have the data because they have the customers. They can immediately leverage customers’ data to train models and tune algorithms, so long as they maintain secrecy and privacy.
Intercom’s AI strategy is built on the foundation of hundreds of millions of customer interactions. This gives them an advantage over a newcomer developing a chatbot from scratch. Similarly, Google has an advantage in AI video because they own the entire YouTube library. GitHub has an advantage with Copilot because they trained their AI on their vast code repository (including changes, with human-written explanations of the changes).
While there will always be individuals preferring the startup environment, the allure of working on AI at an incumbent is equally strong for many, especially pure computer and data scientists who, more than anything else, want to work on interesting AI projects. They get to work in the code, with a large budget, with all the data, with above-market compensation, and a built-in large customer base that will enjoy the fruits of their labor, all without having to do sales, marketing, tech support, accounting, raising money, or anything else that isn’t the pure joy of writing interesting code. This is heaven for many.
A chatbot is in the chatbot market, and an SEO tool is in the SEO market. Adding AI to those tools is obviously a good idea; indeed companies who fail to add AI will likely become irrelevant in the long run. Thus we see that “AI” is a new tool for developing within existing markets, not itself a new market (except for actual hard-tech AI companies).
AI is in the solution-space, not the problem-space, as we say in product management. The customer problem you’re solving is still the same as ever. The problem a chatbot is solving is the same as ever: Talk to customers 24/7 in any language. AI enables completely new solutions that none of us were imagining a few years ago; that’s what’s so exciting and truly transformative. However, the customer problems remain the same, even though the solutions are different
Companies will pay more for chatbots where the AI is excellent, more support contacts are deferred from reaching a human, more languages are supported, and more kinds of questions can be answered, so existing chatbot customers might pay more, which grows the market. Furthermore, some companies who previously (rightly) saw chatbots as a terrible customer experience, will change their mind with sufficiently good AI, and will enter the chatbot market, which again grows that market.
the right way to analyze this is not to say “the AI market is big and growing” but rather: “Here is how AI will transform this existing market.” And then: “Here’s how we fit into that growth.”
·longform.asmartbear.com·
Rethinking the startup MVP - Building a competitive product - Linear
Building something valuable is no longer about validating a novel idea as fast as possible. Instead, the modern MVP exercise is about building a version of an idea that is different from and better than what exists today. Most of us aren’t building for a net-new market. Rather, we’re finding opportunities to improve existing categories. We need an MVP concept that helps founders and product leaders iterate on their early ideas to compete in an existing market.
It’s not good enough to be first with an idea. You have to out-execute from day 1.
The MVP as a practice of building a hacky product as quickly and cheaply as possible to validate the product no longer works. Many product categories are already saturated with a variety of alternatives, and to truly test the viability of any new idea you need to build something that is substantially better.
Airbnb wanted to build a service that relied on people being comfortable spending the night at a stranger’s house. When they started in 2009, it wasn’t obvious if people were ready for this. Today, it’s obvious that it works, so they wouldn’t need to validate the idea. A similar analogy works for Lyft when they started exploring ridesharing as a concept.
Today, the MVP is no longer about validating a novel idea as quickly as possible. Rather, its aim is to create a compelling product that draws in the early users in order to gather feedback that you then use to sharpen the product into the best version of many.
If you look at successful companies that have IPO’d in recent years – Zoom, Slack, TikTok, Snowflake, Robinhood – you see examples not of novel ideas, but of these highly-refined ideas. Since many of us are building in a crowded market, the bar for a competitive, public-ready MVP is much higher than the MVP for a novel idea, since users have options. To get to this high bar, we have to spend more time refining the initial version.
The original MVP idea can still work if you’re in the fortunate position of creating a wholly new category of product or work with new technology platforms, but that becomes rarer and rarer as time goes on.
Let’s go over the regular startup journey that you might take today when building a new product:
  • You start with an idea of how you want to improve on existing products in a category.
  • You build your first prototype.
  • You iterate with your vision and based on feedback from early users.
  • You get an inkling of product-market fit and traction.
  • Optional: You start fundraising (with demonstrable traction).
  • Optional: You scale your team, improve the product, and go to market.
In today’s landscape, you’re likely competing against many other products. To win, you have to build a product that provides more value to your users than your competition does. To be able to do this with limited resources, you must scope down your audience (and thus your ambitions) as much as possible to make competing easier, and aim to solve the problems of specific people.
When we started Linear, our vision was to become the standard of how software is built. This is not really something you can expect to do during your early startup journey, let alone in an MVP. But you should demonstrate you have the ability to achieve your bigger vision via your early bets. We chose to do this by focusing on ICs at small startups. We started with the smallest atomic unit of work they actually needed help with: issue tracking.
We knew we wanted our product to demonstrate three values:
  • It should be as fast as possible (local data storage, no page reloads, available offline).
  • It should be modern (keyboard shortcuts, command menu, contextual menus).
  • It should be multiplayer (real-time sync and teammate presence).
Remember, you’re likely not building a revolutionary or novel product. You’re unlikely to go viral with your announcement, so you need a network of people who understand the “why” behind your product to help spread the word to drive people to sign up. Any product category has many people who are frustrated with the existing tools or ways of working. Ideally you find and are able to reach out to those people.
Once you have a bunch of people on your waitlist, you need to invite the right users at each stage of your iteration. You want to invite people who are likely to be happy with the limited set of features you’ve built so far. Otherwise, they’ll churn straight away and you’ll learn nothing.
To recap:
  • Narrow down your initial audience and build for them: Figure out who you’re building the product for and make the target audience as small as possible before expanding.
  • Build and leverage your waitlist: The waitlist is the grinding stone with which you can sharpen your idea into something truly valuable that will succeed at market, so use it effectively.
  • Trust your gut and validate demand with your users: Talk, talk, talk to your users and find out how invested in the product they are (and if they’d be willing to pay).
·linear.app·
Competition is overrated - cdixon
That other people tried your idea without success could imply it’s a bad idea or simply that the timing or execution was wrong. Distinguishing between these cases is hard and where you should apply serious thought. If you think your competitors executed poorly, you should develop a theory of what they did wrong and how you’ll do better.
If you think your competitor’s timing was off, you should have a thesis about what’s changed to make now the right time. These changes could come in a variety of forms: for example, it could be that users have become more sophisticated, the prices of key inputs have dropped, or that prerequisite technologies have become widely adopted.
Startups are primarily competing against indifference, lack of awareness, and lack of understanding — not other startups.
There were probably 50 companies that tried to do viral video sharing before YouTube. Before 2005, when YouTube was founded, relatively few users had broadband and video cameras. YouTube also took advantage of the latest version of Flash that could play videos seamlessly.
Google and Facebook launched long after their competitors, but executed incredibly well and focused on the right things. When Google launched, other search engines like Yahoo, Excite, and Lycos were focused on becoming multipurpose “portals” and had de-prioritized search (Yahoo even outsourced their search technology).
·cdixon.org·
The idea maze - cdixon
Imagine, for example, that you were thinking of starting Netflix back when it was founded in 1997. How would content providers, distribution channels, and competitors respond? How soon would technology develop to open a hidden door and let you distribute online instead of by mail? Or consider Dropbox in 2007. Dozens of cloud storage companies had been started before. What mistakes had they made? How would incumbents like Amazon and Google respond? How would new platforms like mobile affect you?
When you’re starting out, it’s impossible to completely map out the idea maze. But there are some places you can look for help:
  • History. If your idea has been tried before (and almost all good ideas have), you should figure out what the previous attempts did right and wrong. A lot of this knowledge exists only in the brains of practitioners, which is one of many reasons why “stealth mode” is a bad idea. The benefits of learning about the maze generally far outweigh the risks of having your idea stolen.
  • Analogy. You can also build the maze by analogy to similar businesses. If you are building a “peer economy” company it can be useful to look at what Airbnb did right. If you are building a marketplace you should understand eBay’s beginnings. Etc.
·cdixon.org·
Muse retrospective by Adam Wiggins
  • Wiggins focused on storytelling and brand-building for Muse, achieving early success with an email newsletter, which helped engage potential users and refine the product's value proposition.
  • Muse aspired to a "small giants" business model, emphasizing quality, autonomy, and a healthy work environment over rapid growth. They sought to avoid additional funding rounds by charging a prosumer price early on.
  • Short demo videos on Twitter showcasing the app in action proved to be the most effective method for attracting new users.
Muse as a brand and a product represented something aspirational. People want to be deeper thinkers, to be more strategic, and to use cool, status-quo challenging software made by small passionate teams. These kinds of aspirations are easier to indulge in times of plenty. But once you're getting laid off from your high-paying tech job, or struggling to raise your next financing round, or scrambling to protect your kids' college fund from runaway inflation and uncertain markets... I guess you don't have time to be excited about cool demos on Twitter and thoughtful podcasts on product design.
I’d speculate that another factor is the half-life of cool new productivity software. Evernote, Slack, Notion, Roam, Craft, and many others seem to get pretty far on community excitement for their first few years. After that, I think you have to be left with software that serves a deep and hard-to-replace purpose in people’s lives. Muse got there for a few thousand people, but the economics of prosumer software means that just isn’t enough. You need tens of thousands, hundreds of thousands, to make the cost of development sustainable.
We envisioned Muse as the perfect combination of the freeform elements of a whiteboard, the structured text-heavy style of Notion or Google Docs, and the sense of place you get from a “virtual office” ala group chat. As a way to asynchronously trade ideas and inspiration, sketch out project ideas, and explore possibilities, the multiplayer Muse experience is, in my honest opinion, unparalleled for small creative teams working remotely.
But friction began almost immediately. The team lead or organizer was usually the one bringing Muse to the team, and they were already a fan of its approach. But the other team members are generally a little annoyed to have to learn any new tool, and Muse’s steeper learning curve only made that worse. Those team members would push the problem back to the team lead, treating them as customer support (rather than contacting us directly for help). The team lead often felt like too much of the burden of pushing Muse adoption was on their shoulders. This was in addition to the obvious product gaps, like: no support for the web or Windows; minimal or no integration with other key tools like Notion and Google Docs; and no permissions or support for multiple workspaces. Had we raised $10M back during the cash party of 2020–2021, we could have hired the 15+ person team that would have been necessary to build all of that. But with only seven people (we had added two more people to the team in 2021–2022), it just wasn’t feasible.
We neither focused on a particular vertical (academics, designers, authors...) or a narrow use case (PDF reading/annotation, collaborative whiteboarding, design sketching...). That meant we were always spread pretty thin in terms of feature development, and marketing was difficult even over and above the problem of explaining canvas software and digital thinking tools.
being general-purpose was in its blood from birth. Part of it was maker's hubris: don't we always dream of general-purpose tools that will be everything to everyone? And part of it was that it's truly the case that Muse excels at the ability to combine together so many different related knowledge tasks and media types into a single, minimal, powerful canvas. Not sure what I would do differently here, even with the benefit of hindsight.
Muse built a lot of its reputation on being principled, but we were maybe too cautious to do the mercenary things that help you succeed. A good example here is asking users for ratings; I felt like this was not to user benefit and distracting when the user is trying to use your app. Our App Store rating was on the low side (~3.9 stars) for most of our existence. When we finally added the standard prompt-for-rating dialog, it instantly shot up to ~4.7 stars. This was a small example of being too principled about doing good for the user, and not thinking about what would benefit our business.
Growing the team slowly was a delight. At several previous ventures, I’ve onboarded people in the hiring-is-job-one environment of a growth startup. At Muse, we started with three founders and then hired roughly one person per year. This was absolutely fantastic for being able to really take our time to find the perfect person for the role, and then for that person to have tons of time to onboard and find their footing on the team before anyone new showed up. The resulting team was the best I’ve ever worked on, with minimal deadweight or emotional baggage.
ultimately your product does have to have some web presence. My biggest regret is not building a simple share-to-web function early on, which could have created some virality and a great deal of utility for users as well.
In terms of development speed, quality of the resulting product, hardware integration, and a million other things: native app development wins.
After decades working in product development, being on the marketing/brand/growth/storytelling side was a huge personal challenge for me. But I feel like I managed to grow into the role and find my own approach (podcasting, demo videos, etc) to create a beacon to attract potential customers to our product.
when it comes time for an individual or a team to sit down and sketch out the beginnings of a new business, a new book, a new piece of art—this almost never happens at a computer. Or if it does, it’s a cobbled-together collection of tools like Google Docs and Zoom which aren’t really made for this critical part of the creative lifecycle.
any given business will find a small number of highly-effective channels, and the rest don't matter. For Heroku, that was attending developer conferences and getting blog posts on Hacker News. For another business it might be YouTube influencer sponsorships and print ads in a niche magazine. So I set about systematically testing many channels.
·adamwiggins.com·
Great Products Have Great Premises
A great premise gives users context and permission to take actions they might not otherwise take.
The most powerful thing a product can do is give its user a premise. A premise is the foundational belief that shapes a user’s behavior. A premise can normalize actions that people otherwise might not take, held back by some existing norm.
AirBnb. The premise: It’s ok to stay in strangers’ homes.
the idea of staying in strangers’ homes for short stays was doubted even by the founders. Crashing in someone’s spare room wasn’t unheard of, but it might be seen as weird, taboo, or even dangerous.
Bumble. The premise: It’s ok for women to ask men out.
The best way to follow through on a premise is to make it the core feature of the app. Bumble did, requiring that women make the first move on the app. A woman would be presented with a list of her matches and would have to make the first "move" before men could reply. This of course became a powerful differentiating feature and marketing hook.
Substack. The premise: It’s ok to charge for your writing.
Substack's premise aimed to normalize the hardest part of internet writing: getting paid. They aimed to show that independent authors could succeed at making a living (and subscription models aligned with this ethos). In doing so, Substack also made the less-hard parts of internet writing even easier. You could start a newsletter and keep it free until you felt confident about going paid. This not only normalized the end goal but also lowered the barrier to getting started.
A premise is valuable not only for “products,” but also for experiences.As I recently shouted, people still underestimate the power of giving a social event a premise. Hackathons, housewarmings, happy hours and the like are hangouts with a narrative. They have a good premise — a specific context that makes it more comfortable to do something that can be hard: socialize. (Side note: some of the best tv series and films are built on great premises.)
Premises work best on end consumers, prosumers, small business freelancers, and the like. Many two-sided marketplaces serving two of these stakeholder groups tend to have a good premise. For example, Kickstarter's premise for the creator might be: It’s ok to ask for money before you've built a product.
·workingtheorys.com·
Strong and weak technologies - cdixon
Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist.
·cdixon.org·
Writing with AI
iA Writer's vision for using AI in the writing process
Thinking in dialogue is easier and more entertaining than struggling with feelings, letters, grammar and style all by ourselves. Used as a writing dialogue partner, ChatGPT can become a catalyst for clarifying what we want to say. Even if it is wrong. Sometimes we need to hear what’s wrong to understand what’s right.
Seeing in clear text what is wrong or, at least, what we don’t mean can help us set our minds straight about what we really mean. If you get stuck, you can also simply let it ask you questions. If you don’t know how to improve, you can tell it to be evil in its critique of your writing
Just compare usage with AI to how we dealt with similar issues before AI:
  • Discussing our writing with others is a general practice and regarded as universally helpful; honest writers honor and credit their discussion partners.
  • We already use spell checkers and grammar tools.
  • It’s common practice to use human editors for substantial or minor copy editing of our public writing.
  • Clearly, using dictionaries and thesauri to find the right expression is not a crime.
Using AI in the editor replaces thinking. Using AI in dialogue increases thinking. Now, how can we connect the editor and the chat window without making a mess? Is there a way to keep human and artificial text apart?
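One way to picture that separation, offered here only as an illustration and not as iA's proposal: keep the draft in the editor, and hand it to a chat model solely as material for questions and critique. In the sketch below, `ask_model` is a hypothetical placeholder for whatever chat backend you actually use; nothing about it is specific to iA Writer or ChatGPT.

```python
# Sketch of "AI in dialogue, not in the editor": the human text stays in the
# editor untouched, and the model only asks questions about and critiques it.
# `ask_model` is a placeholder; wire it to the chat backend of your choice.

def ask_model(messages: list[dict[str, str]]) -> str:
    raise NotImplementedError("plug in your chat model of choice here")

def critique_request(draft: str, be_evil: bool = False) -> list[dict[str, str]]:
    """Build a dialogue that critiques the draft without rewriting it."""
    tone = ("Be ruthless: attack every weak argument and vague sentence."
            if be_evil else "Be honest but constructive.")
    return [
        {"role": "system",
         "content": ("You are a writing dialogue partner. Never rewrite the draft; "
                     "only ask clarifying questions and point out what is unclear. "
                     + tone)},
        {"role": "user", "content": draft},
    ]

# Usage: the draft remains the only human-authored text; the model's replies
# live in the chat window, clearly apart from it.
messages = critique_request("First rough draft of the essay goes here...", be_evil=True)
# reply = ask_model(messages)
```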
·ia.net·
Tools for Thought as Cultural Practices, not Computational Objects
Summary: Throughout human history, innovations like written language, drawing, maps, the scientific method, and data visualization have profoundly expanded the kinds of thoughts humans can think. Most of these "tools for thought" significantly predate digital computers. The modern usage of the phrase is heavily influenced by the work of computer scientists and technologists in the 20th century who envisioned how computers could become tools to extend human reasoning and help solve complex problems. While computers are powerful "meta-mediums", the current focus on building note-taking apps is quite narrow. To truly expand human cognition, we should explore a wider range of tools and practices, both digital and non-digital.
Taken at face value, the phrase tool for thought doesn't have the word 'computer' or 'digital' anywhere in it. It suggests nothing about software systems or interfaces. It's simply meant to refer to tools that help humans think thoughts; potentially new, different, and better kinds of thoughts than we currently think.
Most of the examples I listed above are cultural practices and techniques. They are primary ways of doing; specific ways of thinking and acting that result in greater cognitive abilities. Ones that people pass down from generation to generation through culture. Every one of these also pre-dates digital computers by at least a few hundred years, if not thousands or tens of thousands. Given that framing, it's time to return to the question of how computation, software objects, and note-taking apps fit into this narrative.
If you look around at the commonly cited “major thinkers” in this space, you get a list of computer programmers: Kenneth Iverson, J.C.R. Licklider, Vannevar Bush, Alan Kay, Bob Taylor, Douglas Engelbart, Seymour Papert, Bret Victor, and Howard Rheingold, among others.
This is relevant because it means these men share a lot of the same beliefs, values, and context. They know the same sorts of people, learned the same historical stories in school and were taught to see the world in particular kinds of ways. Most of them worked together, or are at most one personal connection away from the next. Tools for thought is a community scene as much as it's a concept. This gives tools for thought a distinctly computer-oriented, male, American, middle-class flavour. The term has always been used in relation to a dream that is deeply intertwined with digital machines, white-collar knowledge work, and bold American optimism.
Engelbart was specifically concerned with our ability to deal with complex problems, rather than simply “amplifying intelligence.” Being able to win a chess match is perceived as intelligent, but it isn't helping us tackle systemic racism or inequality. Engelbart argued we should instead focus on “augmenting human intellect” in ways that help us find solutions to wicked problems. While he painted visions of how computers could facilitate this, he also pointed to organisational structures, system dynamics, and effective training as part of this puzzle.
There is a rich literature of research and insight into how we might expand human thought that sometimes feels entirely detached from the history we just covered. Cognitive scientists and philosophers have been tackling questions about the relationship between cognition, our tools, and our physical environments for centuries. Well before microprocessors and hypertext showed up. Oddly, they're rarely cited by the computer scientists. This alternate intellectual lineage is still asking the question “how can we develop better tools for thinking?” But they don't presume the answer revolves around computers.
Proponents of embodied cognition argue that our perceptions, concepts, and cognitive processes are shaped by the physical structures of our body and the sensory experiences it provides, and that cognition cannot be fully understood without considering the bodily basis of our experiences.
Philosopher Andy Clark has spent his career exploring how external tools transform and expand human cognition. His 2003 book Natural-born Cyborgs argues humans have “always been cyborgs.” Not in the sense of embedding wires into our flesh, but in the sense we enter “into deep and complex relationships with nonbiological constructs, props, and aids”. Our ability to think with external objects is precisely what makes us intelligent. Clark argues “the mind” isn't simply a set of functions within the brain, but a process that happens between our bodies and the physical environment. Intelligence emerges at the intersection of humans and tools. He expanded on this idea in a follow-on book called Supersizing the Mind. It became known as the extended mind hypothesis. It's the strong version of theories like embodied cognition, situated cognition, and enacted cognition that are all the rage in cognitive science departments.
There's a scramble to make sense of all these new releases and the differences between them. YouTube and Medium explode with DIY guides, walkthrough tours, and comparison videos. The productivity and knowledge management influencer is born. [Image: giant wall of productivity YouTube nonsense] The strange thing is, many of these guides are only superficially about the application they're presented in. Most are teaching specific cultural techniques:
Zettelkasten, spaced repetition, critical thinking. These techniques are only focused on a narrow band of human activity. Specifically, activity that white-collar knowledge workers engage in. I previously suggested we should rename TFT to CMFT (computational mediums for thought), but that doesn't go far enough. If we're being honest about our current interpretation of TFT's, we should actually rename it to CMFWCKW – computational mediums for white-collar knowledge work.
By now it should be clear that this question of developing better tools for thought can and should cover a much wider scope than developing novel note-taking software.
I do think there's a meaningful distinction between tools and mediums: Mediums are a means of communicating a thought or expressing an idea. Tools are a means of working in a medium. Tools enable specific tasks and workflows within a medium. Cameras are a tool that lets people express ideas through photography. Blogs are a tool that lets people express ideas through written language. JavaScript is a tool that lets people express ideas through programming. Tools and mediums require each other. This makes lines between them fuzzy.
·maggieappleton.com·
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
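For readers who have not used such systems, here is a minimal sketch of the scheduling idea behind spaced repetition: the review interval roughly doubles after each successful recall and resets after a lapse. This is a deliberate simplification for illustration; it is not the scheduler that Quantum Country or any particular app actually uses.

```python
# Minimal spaced-repetition scheduler sketch: intervals grow when a prompt is
# remembered and shrink back when it is forgotten. A simplification of what
# real memory systems do.

from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: int = 1                        # current gap between reviews
    due: date = field(default_factory=date.today)

    def review(self, remembered: bool) -> None:
        if remembered:
            self.interval_days *= 2               # space the next review further out
        else:
            self.interval_days = 1                # a lapse restarts the schedule
        self.due = date.today() + timedelta(days=self.interval_days)

card = Card("What is the capital of France?")
card.review(remembered=True)    # next review in 2 days
card.review(remembered=True)    # next review in 4 days
card.review(remembered=False)   # lapse: back to reviewing tomorrow
print(card.due, card.interval_days)
```

The scheduling itself is trivial; as the excerpts above argue, the leverage comes from what it does to attention and to the learner's relationship with the material.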
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, immediately illustrating its most striking ideas. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry.
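To make that concrete, here is a minimal sketch (not taken from any actual executable book) of the kind of importable interface such a first page might expose: a hypothetical `teleport` function, simulated with plain NumPy, that a reader could call and poke at before the underlying linear algebra is ever explained.

```python
# Toy state-vector simulation of the quantum teleportation protocol.
# A hypothetical "executable book" could import something like this so readers
# can experiment with the protocol before understanding why it works.
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """CNOT acting on an n-qubit register (qubit 0 is the leftmost factor)."""
    P0 = np.array([[1.0, 0.0], [0.0, 0.0]])
    P1 = np.array([[0.0, 0.0], [0.0, 1.0]])
    on0 = [I] * n; on0[control] = P0
    on1 = [I] * n; on1[control] = P1; on1[target] = X
    return kron(*on0) + kron(*on1)

def teleport(psi):
    """Teleport the one-qubit state `psi` from qubit 0 to qubit 2."""
    bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
    state = np.kron(psi, bell)              # register order: [message, Alice, Bob]
    state = cnot(0, 1) @ state              # entangle the message with Alice's half
    state = kron(H, I, I) @ state           # rotate the message into the X basis
    probs = (np.abs(state.reshape(2, 2, 2)) ** 2).sum(axis=2)   # P(m0, m1)
    outcome = np.random.choice(4, p=probs.ravel() / probs.sum())
    m0, m1 = np.unravel_index(outcome, (2, 2))
    bob = state.reshape(2, 2, 2)[m0, m1]    # Bob's qubit, conditioned on the result
    bob = bob / np.linalg.norm(bob)
    if m1: bob = X @ bob                    # classical corrections sent by Alice
    if m0: bob = Z @ bob
    return bob

psi = np.array([0.6, 0.8])                  # an arbitrary (normalized) qubit state
print(teleport(psi))                        # ~[0.6, 0.8]: the state arrives intact
```

A reader could call `teleport` with different states, or delete one of the correction steps to see the protocol break, long before meeting the algebra that justifies it.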
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
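As a toy illustration of what "a live climate model for people to explore" could mean at its very simplest (nothing like the interrelated models the IPCC actually uses), here is a hedged sketch of a zero-dimensional energy-balance model a reader could adjust and re-run inside such a report:

```python
# Zero-dimensional energy-balance model: absorbed sunlight = emitted infrared.
# A deliberately tiny stand-in for "a live climate model", not anything from the
# IPCC reports themselves; parameter values are standard textbook estimates.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temperature(solar=1361.0, albedo=0.30, emissivity=0.612):
    """Mean surface temperature (K) at which outgoing radiation balances incoming."""
    absorbed = solar * (1 - albedo) / 4            # W/m^2, averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(equilibrium_temperature())                   # roughly 288 K (~15 C)
print(equilibrium_temperature(albedo=0.32))        # a slightly brighter planet runs cooler
```

Dragging a parameter and watching the temperature respond is the executable-report version of reading an assertion and a reference.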
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problem.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good and then move on to something new.
Like many other software companies, Adobe does much of its patenting defensively: it patents ideas so patent trolls cannot sue it over similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables when you have a very specific end goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur-tool for thought – is perhaps the most important occurrence in humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine, or to convince others of, in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically, open-ended exploration can be just as successful, or more so.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have never done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How can we develop transformative tools for thought?
In praise of the particular, and other lessons from 2023 - Andy Matuschak
In praise of the particular, and other lessons from 2023 - Andy Matuschak
in 2023, I switched gears to emphasize intimacy. Instead of statistical analysis and summative interviews, I sat next to individuals for hours, as they used one-off prototypes which I’d made just for them. And I got more insight in the first few weeks of this than I had in all of 2022.
I’d been building systems and running big experiments, and I could tell you plenty about forgetting curves and usage patterns—but very little about how those things connected to anything anyone cared about.
I could see, in great detail, the texture of the interaction between my designs and the broader learning context—my real purpose, not some proxy.
Single-user experiments like this emphasize problem-finding and discovery, not precise evaluation.
a good heuristic for evaluating my work seems to be: try designs 1-on-1 until they seem to be working well, and only then run more quantitative experiments to understand how well the effect generalizes.
My aim is to invent augmented reading environments that apply to any kind of informational text—spanning subjects, formats, and audiences. The temptation, then, is to consider every design element in the most systematic, general form. But this again confuses aims with methods. So many of my best insights have come from hoarding and fermenting vivid observations about the particular—a specific design, in a specific situation. That one student’s frustration with that one specific exercise.
It’s often hard to find “misfits” when I’m thinking about general forms. My connection to the problem becomes too diffuse. The object of my attention becomes the system itself, rather than its interactions with a specific context of use. This leads to a common failure mode among system designers: getting lost in towers of purity and abstraction, more and more disconnected from the system’s ostensible purpose in the world.
I experience an enormous difference between “trying to design an augmented reading environment” and “trying to design an augmented version of this specific linear algebra book”. When I think about the former, I mostly focus on primitives, abstractions, and processes. When I think about the latter, I focus on the needs of specific ideas, on specific pages. And then, once it’s in use, I think about specific problems, that specific students had, in specific places. These are the “misfits” I need to remove as a designer.
Of course, I do want my designs to generalize. That’s not just a practical consideration. It’s also spiritual: when I design a system well, it feels like I’ve limned hidden seams of reality; I’ve touched a kind of personal God. On most days, I actually care about this more than my designs’ utilitarian impact. The systems I want to build really do require abstraction and generalization. Transformative systems really do often depend on powerful new primitives. But more and more, my experience has been that the best creative fuel for these systematic solutions often comes from a process which focuses on particulars, at least for long periods at a time.
Also? The particular is often a lot more emotionally engaging, day-to-day. That makes the work easier and more fun.
Throughout my career, I’ve struggled with a paradox in the feeling of my work. When I’ve found my work quite gratifying in the moment, day-to-day, I’ve found it hollow and unsatisfying retrospectively, over the long term. For example, when I was working at Apple, there was so much energy; I was surrounded by brilliant people; I felt very competent, it was clear what to do next; it was easy to see my progress each day. That all felt great. But then, looking back on my work at the end of each year, I felt deeply dissatisfied: I wasn’t making a personal creative contribution. If someone else had done the projects I’d done, the results would have been different, but not in a way that mattered. The work wasn’t reflective of ideas or values that mattered to me. I felt numbed, creatively and intellectually.
Progress often doesn’t look like progress. It often feels like I’m not making any progress at all in my work. I’ll feel awfully frustrated. And then, suddenly, a tremendous insight will drive months of work. This last happened in the fall. Looking back at those journals now, I’m amused to read page after page of me getting so close to that central insight in the weeks leading up to it. I approach it again and again from different directions, getting nearer and nearer, but still one leap away – so it looks to me, at the time, like I’ve got nothing. Then, finally, when I had the idea, it felt like a bolt from the blue.
·andymatuschak.org·
In praise of the particular, and other lessons from 2023 - Andy Matuschak
A camera for ideas
A camera for ideas
Instead of turning light into pictures, it turns ideas into pictures.
This new kind of camera replicates what your imagination does. It receives words and then synthesizes a picture from its experience seeing millions of other pictures. The output doesn’t have a name yet, but I’ll call it a synthograph (meaning synthetic drawing).
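For concreteness, here is a hedged sketch of what "taking a syntho" looks like in practice today, using the open-source Hugging Face diffusers library; the checkpoint name is just one common public example, and nothing here is tied to the essay's own tooling.

```python
# A toy "camera for ideas": prompt in, synthograph out.
# Assumes the diffusers, transformers, and torch packages are installed and a
# GPU is available; the model checkpoint below is one widely used public example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a cliff at dusk, painted in watercolor"
image = pipe(prompt).images[0]    # the shutter click: words in, picture out
image.save("synthograph.png")
```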
Photography can capture moments that happened, but synthography is not bound by the limitations of reality. Synthography can capture moments that did not happen and moments that could never happen.
Taking a great syntho is about stimulating the imagination of the camera. Synthography doesn’t require you to be anywhere or anywhen in particular.
Photography is an important medium of expression because it is so accessible and instantaneous. Synthography will even further reduce barriers to entry, and give everyone the power to convert ideas into pictures.
·stephango.com·
A camera for ideas
Choose optimism
Choose optimism
The life of an optimist is hard but exciting. Pessimism is easy because it costs nothing. Optimism is hard because it must be constantly reaffirmed.
Dreams are delicate and easy to destroy. When an idea presents itself, try to imagine the best version of it — what would make this idea great?
Pessimism and optimism share a trait: both are self-fulfilling. Your intention influences the outcome. Call it karma or, simply, effort.
·stephango.com·
Choose optimism