Fish eye lens for text
Each level gives you completely different information, depending on what Google thinks you might be interested in. Maps are a true masterclass for visualizing the same information in a variety of ways.
Viewing the same text at different levels of abstraction is powerful, but what if, instead of switching between them, we could see multiple levels at the same time? How might that work?
A portrait lens brings a single subject into focus, isolating it from the background to draw all attention to its details. A wide-angle lens captures more of the scene, showing how the subject relates to its surroundings. And then there’s the fish eye lens—a tool that does both, pulling the center close while curving the edges to reveal the full context.
A fish eye lens doesn’t ask us to choose between focus and context—it lets us experience both simultaneously. It’s good inspiration for how to offer detailed answers while revealing the surrounding connections and structures.
Imagine you’re reading The Elves and the Shoemaker by The Brothers Grimm. You come across a single paragraph describing the shoemaker discovering the tiny, perfectly crafted shoes left by the elves. Without context, the paragraph is just an intriguing moment. Now, what if instead of reading the whole book, you could hover over this paragraph and instantly access a layered view of the story? The immediate layer might summarize the events leading up to this moment: the shoemaker, struggling in poverty, left his last bit of leather out overnight. Another layer could give you a broader view of the story so far: the shoemaker’s business is mysteriously revitalized thanks to these tiny benefactors. Beyond that, an even higher-level summary might preview how the tale concludes, with the shoemaker and his wife crafting clothes for the elves to thank them.
This approach allows you to orient yourself without having to piece everything together by reading linearly. You get the detail of the paragraph itself, but with the added richness of understanding how it fits into the larger story.
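As a sketch: if each passage carried precomputed summaries at several zoom levels, the hover interaction reduces to picking how many layers to reveal. All names here are illustrative, not from the article.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str            # the paragraph itself, in full detail
    scene_summary: str   # events immediately leading up to it
    story_so_far: str    # the broader arc up to this point
    whole_tale: str      # highest-level synopsis, including the ending

def fisheye_view(passage: Passage, levels: int = 3) -> list[str]:
    """Return the focused text plus progressively wider context layers."""
    layers = [passage.text, passage.scene_summary,
              passage.story_so_far, passage.whole_tale]
    return layers[: levels + 1]

# Hovering the elves paragraph might call fisheye_view(elves_paragraph, levels=2)
# to show the detail plus two rings of context.
```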
Chapters give structure, connecting each idea to the ones that came before and after. A good author sets the stage, immersing you with anecdotes, historical background, or thematic threads that help you make sense of the details. Even the act of flipping through a book—a glance at the cover, the table of contents, a few highlighted sections—anchors you in a broader narrative.
The context of who is telling you the information—their expertise, interests, or personal connection—colors how you understand it.
The exhibit places the fish in an ecosystem of knowledge, helping you understand it in a way that goes beyond just a name.
Let's reimagine Wikipedia a bit. In the center of the page, you see a detailed article about fancy goldfish—their habitat, types, and role in the food chain. Surrounding this are broader topics like ornamental fish, similar topics like Koi fish, more specific topics like the Oranda goldfish, and related people like the designer who popularized them. Clicking on another topic shifts it to the center, expanding into full detail while its context adjusts around it. It’s dynamic, engaging, and most importantly, it keeps you connected to the web of knowledge.
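A minimal sketch of that recentering interaction, assuming the topic relations are already known; the toy graph and function names are invented for illustration.

```python
# Toy knowledge graph: topic -> (relation, neighbor) pairs.
GRAPH = {
    "Fancy goldfish": [
        ("broader", "Ornamental fish"),
        ("similar", "Koi"),
        ("narrower", "Oranda goldfish"),
    ],
    "Koi": [("broader", "Ornamental fish"), ("similar", "Fancy goldfish")],
}

def recenter(topic: str) -> dict:
    """Bring one topic into full detail; group its neighbors by relation."""
    context: dict[str, list[str]] = {}
    for relation, neighbor in GRAPH.get(topic, []):
        context.setdefault(relation, []).append(neighbor)
    return {"focus": topic, "context": context}

print(recenter("Fancy goldfish"))
# Clicking "Koi" re-runs recenter("Koi"), shifting the lens without
# losing the surrounding web of knowledge.
```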
The beauty of a fish eye lens for text is how naturally it fits with the way we process the world. We’re wired to see the details of a single flower while still noticing the meadow it grows in, to focus on a conversation while staying aware of the room around us. Facts and ideas are never meaningful in isolation; they only gain depth and relevance when connected to the broader context.
A single number on its own might tell you something, but it’s the trends, comparisons, and relationships that truly reveal its story. Is 42 a high number? A low one? Without context, it’s impossible to say. Context is what turns raw data into understanding, and it’s what makes any fact—or paragraph, or answer—gain meaning.
The fish eye lens takes this same principle and applies it to how we explore knowledge. It’s not just about seeing the big picture or the fine print—it’s about navigating between them effortlessly. By mirroring the way we naturally process detail and context, it creates tools that help us think not only more clearly but also more humanly.
·wattenberger.com·
Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because new sounds didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we can share them with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
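A rough sketch of how the data behind such a spectrogram might be computed. A real system would score features with a language model (for example, sparse-autoencoder feature activations); the cue-word counter below is only a stand-in for that model.

```python
# Score each paragraph against a few interpretable features, yielding a
# matrix you could render as a heatmap or spectrogram. score_feature() is
# a toy stand-in for a real model readout.

CUES = {
    "sentiment": {"joy", "warm", "bright", "grief", "cold"},
    "tension":   {"suddenly", "danger", "afraid", "ran"},
}

def score_feature(paragraph: str, feature: str) -> float:
    words = paragraph.lower().split()
    return sum(w.strip(".,!?") in CUES[feature] for w in words) / max(len(words), 1)

def spectrogram(paragraphs: list[str]) -> list[list[float]]:
    """Rows = positions in the text; columns = feature 'frequencies'."""
    return [[score_feature(p, f) for f in CUES] for p in paragraphs]

story = ["The morning was warm and bright.",
         "Suddenly she was afraid, and ran."]
for row in spectrogram(story):
    print(row)  # each row becomes one horizontal slice of the heatmap
```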
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
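The word-level highlighting could be sketched the same way: score each word along a style axis and mark its sign. The toy lexicon below stands in for real model features; in practice the scores would come from the model itself.

```python
# Sketch of the highlight-based semantic diff: estimate whether each word
# pushes the sentence toward (+) or away from (-) an axis like "figurative".
# The weights are invented for illustration.

FIGURATIVE = {"like": +1.0, "as": +0.5, "gleamed": +0.8, "river": +0.6,
              "data": -0.7, "process": -0.8}

def direction_of_edit(sentence: str, axis: dict[str, float]) -> list[tuple[str, str]]:
    marks = []
    for word in sentence.lower().split():
        w = word.strip(".,")
        score = axis.get(w, 0.0)
        marks.append((w, "+" if score > 0 else "-" if score < 0 else " "))
    return marks

print(direction_of_edit("The data gleamed like a river.", FIGURATIVE))
# marks "data" as -, and "gleamed", "like", "river" as +
```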
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of an interaction becomes much more powerful in the space of interpretable features, because features can be combined together much more cleanly than textual instructions in prompts.
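A sketch of why features might compose more cleanly than prompt text: if a "writing filter" is just a bundle of feature weights, combining two filters is simple addition. Feature names here are invented, not real model features.

```python
# "Writing filters" as shareable bundles of interpretable feature weights,
# the way photographers share LUTs. Combining filters is just summing weights.

def combine(*filters: dict[str, float]) -> dict[str, float]:
    """Layer filters by summing their feature weights."""
    out: dict[str, float] = {}
    for f in filters:
        for feature, weight in f.items():
            out[feature] = out.get(feature, 0.0) + weight
    return out

feynman = {"plain_speech": 0.9, "concrete_examples": 0.8}
noir    = {"short_sentences": 0.7, "figurative_language": 0.4}

print(combine(feynman, noir))
# A downloadable filter could be exactly this: a small, inspectable file of
# weights, applied by steering the model's features at generation time.
```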
·thesephist.com·
Timeful Texts
Consider texts like the Bible and the Analects of Confucius. People integrate ideas from those books into their lives over time—but not because authors designed them that way. Those books work because they’re surrounded by rich cultural activity. Weekly sermons and communities of practice keep ideas fresh in readers’ minds and facilitate ongoing connections to lived experiences. This is a powerful approach for powerful texts, requiring extensive investment from readers and organizers. We can’t build cathedrals for every book. Sophisticated readers adopt similar methods to study less exalted texts, but most people lack the necessary skills, drive, and cultural contexts. How might we design texts to more widely enable such practices?
spaced repetition is just one approach for writing timeful texts. What other powerful tools might it be possible to create, making future books into artifacts that transcend their pages, as they slowly help readers shape their lives?
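For concreteness, a minimal sketch of the spaced-repetition scheduling idea, in the spirit of SM-2 but not the exact algorithm any one system uses: each successful encounter stretches the next interval, spreading a text's ideas across weeks and months.

```python
from datetime import date, timedelta

def next_review(interval_days: float, ease: float, remembered: bool) -> tuple[float, float]:
    """One review: stretch the interval on success, reset it on failure."""
    if not remembered:
        return 1.0, max(1.3, ease - 0.2)   # start over, mark the item harder
    return interval_days * ease, ease       # widen the gap between encounters

interval, ease = 1.0, 2.5
today = date.today()
for review in range(5):
    interval, ease = next_review(interval, ease, remembered=True)
    today += timedelta(days=round(interval))
    print(f"review {review + 1}: come back around {today}")
```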
·numinous.productions·
Future-proofing Your Knowledge in the Age of Information Overload
Reason for using Obsidian: In the age of information overload and increasing censorship, it is crucial to future-proof your knowledge by creating a personal memex or knowledge management system. A memex, as envisioned by Vannevar Bush, is a device that stores and retrieves vast amounts of information, supplementing human memory. By building a digital memex, you can own your data, access it offline, quickly capture information, sync across devices, and easily search and interconnect knowledge. This system enhances working memory, reduces cognitive overload, and allows you to monetize what you know in the knowledge economy. Obsidian, an open-source application, is an ideal tool for creating a personal knowledge management system due to its flexibility, bi-directional linking, and integrations with other productivity apps.
A memex is a hypothetical device described by Vannevar Bush in his 1945 article “As We May Think”. It stands for "memory extension" and is considered a precursor to the concept of hypertext and the World Wide Web. The memex was envisioned as a mechanical device that could store and retrieve vast amounts of information by interconnecting documents, books, communications, records, annotations, and personal notes. It aimed to supplement human memory and facilitate information organization and retrieval.
All information should be easily searchable and interconnected to optimize resurfacing of knowledge with “exceeding speed and flexibility”.
Because of the information overload we experience every day in the digital world, we tend to forget where to find information we encountered even within the same day of seeing it. This is a problem everyone experiences to varying degrees.
Building a PKM system allows you to outsource valuable information into a centralized location, reducing cognitive overload.
Generating the notes in Markdown makes them future-proof. Even if Obsidian dies and for some reason you can no longer download the application, you’ll still be able to read, write and edit your notes with literally any computer.
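A small illustration of that durability: the "interconnected" part of a vault can be rebuilt by any tool that reads text files. A sketch that indexes backlinks from Obsidian-style [[wiki links]]; the vault path and example output are hypothetical.

```python
import re
from pathlib import Path

LINK = re.compile(r"\[\[([^\]]+)\]\]")  # matches [[Some Note]]

def backlinks(vault: Path) -> dict[str, list[str]]:
    """Map each linked-to note title to the notes that mention it."""
    index: dict[str, list[str]] = {}
    for note in vault.glob("*.md"):
        for target in LINK.findall(note.read_text(encoding="utf-8")):
            index.setdefault(target, []).append(note.stem)
    return index

# e.g. backlinks(Path("my-vault")) might return {"Memex": ["as-we-may-think"]}
```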
·thereversion.co·
How tweet threads cured my writer's block: Twitter as a medium for sketching
Twitter’s main constraint is encouraging concision. It’s hard to dwell on word choice when you have so little space to work with. Twitter’s conversational tone also helps here—I can just write like I talk, and any fancy words would seem out of place. And of course, I can’t tweak fonts and margins, which cuts off a distraction vector.
each idea has to be wrapped in a little atomic package. I find this helpful for figuring out the boundaries between my thoughts and clarifying the discrete units of an argument.
a thread is linear! No indenting allowed. This forces a brisk straight line through the argument, instead of getting mired in the fine points of the sub-sub-sub-arguments of the first idea.
I think Twitter is useless for persuading a skeptical reader; there’s simply not space for providing enough detail and context.
I prefer to use Twitter as a way to workshop ideas with sympathetic parties who already have enough context to share my excitement about the ideas.
Overall, it seems that we want constraints that help keep us on track with fluid thought, but don’t rule out too many interesting possibilities. Considering both of these criteria together is a subtle balancing act, and I don’t see easy answers.
low barrier to finishing. On Twitter, a single sentence is a completely acceptable unit of publication. Anything beyond that is sort of a bonus. In contrast, most of my blog posts go unpublished because I fear they’re not complete, or not good enough in some dimension. These unpublished drafts are obviously far more complete than a single tweet, but because they’re on a blog, they don’t feel “done,” and it’s hard to overcome the fear of sharing.
This seems like a crucial part of sketching tools: when you make a sketch, it should be understood that your idea is immature, and feel safe to share it in that state. There’s a time and a place for polished, deeply thorough artifacts… and it’s not Twitter! Everyone knows you just did a quick sketch.
I believe that quantity leads to quality. The students who make more pots in ceramics class improve faster than the students who obsess over making a single perfect pot. A tool with a built-in low barrier to finishing makes it easier to overcome the fear, do more work, and share it at an earlier stage.
For me, Twitter does an oddly good job at simulating the thrilling creative energy of a whiteboarding session. People pop in and out of the conversation offering insights; trees and sub-trees form riffing off of earlier points.
I’m curious to think more about the constraints/freedoms afforded by different kinds of creative tools, and whether we could get more clever with those constraints to enable new kinds of sketching. I’m especially curious about kinds of sketching which are only possible thanks to computers, and couldn’t have been done with paper and pen.
·geoffreylitt.com·
Build tools around workflows, not workflows around tools | thesephist.com
Building your own productivity tools that conform to your unique workflows and mental models is more effective than using mass-market tools and bending your workflows to fit them
My biggest benefit from writing my own tool set is that I can build the tools that exactly conform to my workflows, rather than constructing my workflows around the tools available to me. This means the tools can truly be an extension of the way my brain thinks and organizes information about the world around me.
I think it’s easy to underestimate the extent to which our tools can constrain our thinking, if the way they work goes against the way we work. Conversely, great tools that parallel our minds can multiply our creativity and productivity, by removing the invisible friction of translating between our mental models and the models around which the tools are built.
I don’t think everyone needs to go out and build their own productivity tools from the ground-up. But I do think that it’s important to think of the tools you use to organize your life as extensions of your mind and yourself, rather than trivial utilities to fill the gaps in your life.
·thesephist.com·
Muse retrospective by Adam Wiggins
  • Wiggins focused on storytelling and brand-building for Muse, achieving early success with an email newsletter, which helped engage potential users and refine the product's value proposition.
  • Muse aspired to a "small giants" business model, emphasizing quality, autonomy, and a healthy work environment over rapid growth. They sought to avoid additional funding rounds by charging a prosumer price early on.
  • Short demo videos on Twitter showcasing the app in action proved to be the most effective method for attracting new users.
Muse as a brand and a product represented something aspirational. People want to be deeper thinkers, to be more strategic, and to use cool, status-quo challenging software made by small passionate teams. These kinds of aspirations are easier to indulge in times of plenty. But once you're getting laid off from your high-paying tech job, or struggling to raise your next financing round, or scrambling to protect your kids' college fund from runaway inflation and uncertain markets... I guess you don't have time to be excited about cool demos on Twitter and thoughtful podcasts on product design.
I’d speculate that another factor is the half-life of cool new productivity software. Evernote, Slack, Notion, Roam, Craft, and many others seem to get pretty far on community excitement for their first few years. After that, I think you have to be left with software that serves a deep and hard-to-replace purpose in people’s lives. Muse got there for a few thousand people, but the economics of prosumer software means that just isn’t enough. You need tens of thousands, hundreds of thousands, to make the cost of development sustainable.
We envisioned Muse as the perfect combination of the freeform elements of a whiteboard, the structured text-heavy style of Notion or Google Docs, and the sense of place you get from a “virtual office” ala group chat. As a way to asynchronously trade ideas and inspiration, sketch out project ideas, and explore possibilities, the multiplayer Muse experience is, in my honest opinion, unparalleled for small creative teams working remotely.
But friction began almost immediately. The team lead or organizer was usually the one bringing Muse to the team, and they were already a fan of its approach. But the other team members are generally a little annoyed to have to learn any new tool, and Muse’s steeper learning curve only made that worse. Those team members would push the problem back to the team lead, treating them as customer support (rather than contacting us directly for help). The team lead often felt like too much of the burden of pushing Muse adoption was on their shoulders. This was in addition to the obvious product gaps, like: no support for the web or Windows; minimal or no integration with other key tools like Notion and Google Docs; and no permissions or support for multiple workspaces. Had we raised $10M back during the cash party of 2020–2021, we could have hired the 15+ person team that would have been necessary to build all of that. But with only seven people (we had added two more people to the team in 2021–2022), it just wasn’t feasible.
We focused neither on a particular vertical (academics, designers, authors...) nor on a narrow use case (PDF reading/annotation, collaborative whiteboarding, design sketching...). That meant we were always spread pretty thin in terms of feature development, and marketing was difficult even over and above the problem of explaining canvas software and digital thinking tools.
being general-purpose was in its blood from birth. Part of it was maker's hubris: don't we always dream of general-purpose tools that will be everything to everyone? And part of it was that it's truly the case that Muse excels at the ability to combine together so many different related knowledge tasks and media types into a single, minimal, powerful canvas. Not sure what I would do differently here, even with the benefit of hindsight.
Muse built a lot of its reputation on being principled, but we were maybe too cautious to do the mercenary things that help you succeed. A good example here is asking users for ratings; I felt like this was not to user benefit and distracting when the user is trying to use your app. Our App Store rating was on the low side (~3.9 stars) for most of our existence. When we finally added the standard prompt-for-rating dialog, it instantly shot up to ~4.7 stars. This was a small example of being too principled about doing good for the user, and not thinking about what would benefit our business.
Growing the team slowly was a delight. At several previous ventures, I've onboarded people in the hiring-is-job-one environment of a growth startup. At Muse, we started with three founders and then hired roughly one person per year. This was absolutely fantastic for being able to really take our time to find the perfect person for the role, and then for that person to have tons of time to onboard and find their footing on the team before anyone new showed up. The resulting team was the best I've ever worked on, with minimal deadweight or emotional baggage.
ultimately your product does have to have some web presence. My biggest regret is not building a simple share-to-web function early on, which could have created some virality and a great deal of utility for users as well.
In terms of development speed, quality of the resulting product, hardware integration, and a million other things: native app development wins.
After decades working in product development, being on the marketing/brand/growth/storytelling side was a huge personal challenge for me. But I feel like I managed to grow into the role and find my own approach (podcasting, demo videos, etc) to create a beacon to attract potential customers to our product.
when it comes time for an individual or a team to sit down and sketch out the beginnings of a new business, a new book, a new piece of art—this almost never happens at a computer. Or if it does, it’s a cobbled-together collection of tools like Google Docs and Zoom which aren’t really made for this critical part of the creative lifecycle.
any given business will find a small number of highly-effective channels, and the rest don't matter. For Heroku, that was attending developer conferences and getting blog posts on Hacker News. For another business it might be YouTube influencer sponsorships and print ads in a niche magazine. So I set about systematically testing many channels.
·adamwiggins.com·
Writing with AI
iA Writer's vision for using AI in the writing process
Thinking in dialogue is easier and more entertaining than struggling with feelings, letters, grammar and style all by ourselves. Using AI as a writing dialogue partner, ChatGPT can become a catalyst for clarifying what we want to say. Even if it is wrong. Sometimes we need to hear what’s wrong to understand what’s right.
Seeing in clear text what is wrong or, at least, what we don’t mean can help us set our minds straight about what we really mean. If you get stuck, you can also simply let it ask you questions. If you don’t know how to improve, you can tell it to be evil in its critique of your writing
Just compare usage with AI to how we dealt with similar issues before AI. Discussing our writing with others is a general practice and regarded as universally helpful; honest writers honor and credit their discussion partners. We already use spell checkers and grammar tools. It’s common practice to use human editors for substantial or minor copy editing of our public writing. Clearly, using dictionaries and thesauri to find the right expression is not a crime.
Using AI in the editor replaces thinking. Using AI in dialogue increases thinking. Now, how can we connect the editor and the chat window without making a mess? Is there a way to keep human and artificial text apart?
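One hedged sketch of an answer to that open question: make provenance part of the document model itself, so editor and chat can share a file without mixing voices. The field names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Span:
    text: str
    author: str  # "human" or "ai"; provenance travels with the text

document = [
    Span("The deadline moved to Friday.", author="human"),
    Span("Consider stating the reason for the change up front.", author="ai"),
]

def human_only(doc: list[Span]) -> str:
    """Render the piece with the machine's contributions stripped back out."""
    return " ".join(s.text for s in doc if s.author == "human")

print(human_only(document))
```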
·ia.net·
Tools for Thought as Cultural Practices, not Computational Objects
Summary: Throughout human history, innovations like written language, drawing, maps, the scientific method, and data visualization have profoundly expanded the kinds of thoughts humans can think. Most of these "tools for thought" significantly predate digital computers. The modern usage of the phrase is heavily influenced by the work of computer scientists and technologists in the 20th century who envisioned how computers could become tools to extend human reasoning and help solve complex problems. While computers are powerful "meta-mediums", the current focus on building note-taking apps is quite narrow. To truly expand human cognition, we should explore a wider range of tools and practices, both digital and non-digital.
Taken at face value, the phrase tool for thought doesn't have the word 'computer' or 'digital' anywhere in it. It suggests nothing about software systems or interfaces. It's simply meant to refer to tools that help humans think thoughts; potentially new, different, and better kinds of thoughts than we currently think.
Most of the examples I listed above are cultural practices and techniques. They are primarily ways of doing; specific ways of thinking and acting that result in greater cognitive abilities. Ones that people pass down from generation to generation through culture. Every one of these also pre-dates digital computers by at least a few hundred years, if not thousands or tens of thousands. Given that framing, it's time to return to the question of how computation, software objects, and note-taking apps fit into this narrative.
If you look around at the commonly cited “major thinkers” in this space, you get a list of computer programmers: Kenneth Iverson, J.C.R. Licklider, Vannevar Bush, Alan Kay, Bob Taylor, Douglas Engelbart, Seymour Papert, Bret Victor, and Howard Rheingold, among others.
This is relevant because it means these men share a lot of the same beliefs, values, and context. They know the same sorts of people, learned the same historical stories in school and were taught to see the world in particular kinds of ways. Most of them worked together, or are at most one personal connection away from the next. Tools for thought is a community scene as much as it's a concept. This gives tools for thought a distinctly computer-oriented, male, American, middle-class flavour. The term has always been used in relation to a dream that is deeply intertwined with digital machines, white-collar knowledge work, and bold American optimism.
Engelbart was specifically concerned with our ability to deal with complex problems, rather than simply “amplifying intelligence.” Being able to win a chess match is perceived as intelligent, but it isn't helping us tackle systemic racism or inequality. Engelbart argued we should instead focus on “augmenting human intellect” in ways that help us find solutions to wicked problems. While he painted visions of how computers could facilitate this, he also pointed to organisational structures, system dynamics, and effective training as part of this puzzle.
There is a rich literature of research and insight into how we might expand human thought that sometimes feels entirely detached from the history we just covered. Cognitive scientists and philosophers have been tackling questions about the relationship between cognition, our tools, and our physical environments for centuries. Well before microprocessors and hypertext showed up. Oddly, they're rarely cited by the computer scientists. This alternate intellectual lineage is still asking the question “how can we develop better tools for thinking?” But they don't presume the answer revolves around computers.
Proponents of embodied cognition argue that our perceptions, concepts, and cognitive processes are shaped by the physical structures of our body and the sensory experiences it provides, and that cognition cannot be fully understood without considering the bodily basis of our experiences.
Philosopher Andy Clark has spent his career exploring how external tools transform and expand human cognition. His 2003 book Natural-born Cyborgs argues humans have “always been cyborgs.” Not in the sense of embedding wires into our flesh, but in the sense we enter “into deep and complex relationships with nonbiological constructs, props, and aids”. Our ability to think with external objects is precisely what makes us intelligent. Clark argues “the mind” isn't simply a set of functions within the brain, but a process that happens between our bodies and the physical environment. Intelligence emerges at the intersection of humans and tools. He expanded on this idea in a follow-on book called Supersizing the Mind. It became known as the extended mind hypothesis. It's the strong version of theories like embodied cognition, situated cognition, and enacted cognition that are all the rage in cognitive science departments.
There's a scramble to make sense of all these new releases and the differences between them. YouTube and Medium explode with DIY guides, walkthrough tours, and comparison videos. The productivity and knowledge management influencer is born. The strange thing is, many of these guides are only superficially about the application they're presented in. Most are teaching specific cultural techniques:
Zettelkasten, spaced repetition, critical thinking. These techniques are only focused on a narrow band of human activity. Specifically, activity that white-collar knowledge workers engage in. I previously suggested we should rename TFT to CMFT (computational mediums for thought), but that doesn't go far enough. If we're being honest about our current interpretation of TFTs, we should actually rename it to CMFWCKW – computational mediums for white-collar knowledge work.
By now it should be clear that this question of developing better tools for thought can and should cover a much wider scope than developing novel note-taking software.
I do think there's a meaningful distinction between tools and mediums: Mediums are a means of communicating a thought or expressing an idea. Tools are a means of working in a medium. Tools enable specific tasks and workflows within a medium. Cameras are a tool that let people express ideas through photography. Blogs are a tool that lets people express ideas through written language. JavaScript is a tool that let people express ideas through programming. Tools and mediums require each other. This makes lines between them fuzzy.
·maggieappleton.com·
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
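As a sketch of that interaction style (emphatically not the IPCC's coupled models): even a zero-dimensional energy-balance model gives readers a parameter to poke at, which is the essential move of an executable book.

```python
# Toy "executable page": surface temperature where absorbed sunlight
# balances emitted heat. All parameter values are standard textbook figures.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp(solar: float = 1361.0, albedo: float = 0.30,
                     emissivity: float = 0.61) -> float:
    """Equilibrium surface temperature in kelvin for a one-box Earth."""
    absorbed = solar * (1 - albedo) / 4          # average over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temp(), 1))             # about 288 K, roughly today
print(round(equilibrium_temp(albedo=0.32), 1))  # reader experiment: brighter clouds
```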
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and a result the field is still in a pre-disciplinary stage. The result, ultimately, is that it means the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
In praise of the particular, and other lessons from 2023 - Andy Matuschak
in 2023, I switched gears to emphasize intimacy. Instead of statistical analysis and summative interviews, I sat next to individuals for hours, as they used one-off prototypes which I’d made just for them. And I got more insight in the first few weeks of this than I had in all of 2022
I’d been building systems and running big experiments, and I could tell you plenty about forgetting curves and usage patterns—but very little about how those things connected to anything anyone cared about.
I could see, in great detail, the texture of the interaction between my designs and the broader learning context—my real purpose, not some proxy.
Single-user experiments like this emphasize problem-finding and discovery, not precise evaluation.
a good heuristic for evaluating my work seems to be: try designs 1-on-1 until they seem to be working well, and only then run more quantitative experiments to understand how well the effect generalizes.
My aim is to invent augmented reading environments that apply to any kind of informational text—spanning subjects, formats, and audiences. The temptation, then, is to consider every design element in the most systematic, general form. But this again confuses aims with methods. So many of my best insights have come from hoarding and fermenting vivid observations about the particular—a specific design, in a specific situation. That one student’s frustration with that one specific exercise.
It’s often hard to find “misfits” when I’m thinking about general forms. My connection to the problem becomes too diffuse. The object of my attention becomes the system itself, rather than its interactions with a specific context of use. This leads to a common failure mode among system designers: getting lost in towers of purity and abstraction, more and more disconnected from the system’s ostensible purpose in the world.
I experience an enormous difference between “trying to design an augmented reading environment” and “trying to design an augmented version of this specific linear algebra book”. When I think about the former, I mostly focus on primitives, abstractions, and processes. When I think about the latter, I focus on the needs of specific ideas, on specific pages. And then, once it’s in use, I think about specific problems, that specific students had, in specific places. These are the “misfits” I need to remove as a designer.
Of course, I do want my designs to generalize. That’s not just a practical consideration. It’s also spiritual: when I design a system well, it feels like I’ve limned hidden seams of reality; I’ve touched a kind of personal God. On most days, I actually care about this more than my designs’ utilitarian impact. The systems I want to build really do require abstraction and generalization. Transformative systems really do often depend on powerful new primitives. But more and more, my experience has been that the best creative fuel for these systematic solutions often comes from a process which focuses on particulars, at least for long periods at a time.
Also? The particular is often a lot more emotionally engaging, day-to-day. That makes the work easier and more fun.
Throughout my career, I’ve struggled with a paradox in the feeling of my work. When I’ve found my work quite gratifying in the moment, day-to-day, I’ve found it hollow and unsatisfying retrospectively, over the long term. For example, when I was working at Apple, there was so much energy; I was surrounded by brilliant people; I felt very competent, it was clear what to do next; it was easy to see my progress each day. That all felt great. But then, looking back on my work at the end of each year, I felt deeply dissatisfied: I wasn’t making a personal creative contribution. If someone else had done the projects I’d done, the results would have been different, but not in a way that mattered. The work wasn’t reflective of ideas or values that mattered to me. I felt numbed, creatively and intellectually.
Progress often doesn’t look like progress. It often feels like I’m not making any progress at all in my work. I’ll feel awfully frustrated. And then, suddenly, a tremendous insight will drive months of work. This last happened in the fall. Looking back at those journals now, I’m amused to read page after page of me getting so close to that central insight in the weeks leading up to it. I approach it again and again from different directions, getting nearer and nearer, but still one leap away—so it looks to me, at the time, like I’ve got nothing. Then, finally, when I had the idea, it felt like a bolt from the blue.
·andymatuschak.org·
Evergreen notes turn ideas into objects that you can manipulate
Evergreen notes turn ideas into objects. By turning ideas into objects you can manipulate them, combine them, stack them. You don’t need to hold them all in your head at the same time.
Evergreen notes allow you to think about complex ideas by building them up from smaller composable ideas.
·stephango.com·
The cult of Obsidian: Why people are obsessed with the note-taking app
Even Obsidian’s most dedicated users don’t expect it to take on Notion and other note-taking juggernauts. They see Obsidian as having a different audience with different values.
Obsidian is in some ways the opposite of a quintessential MacStories app—the site often spotlights apps that are tailored exclusively for Apple platforms, whereas Obsidian is built on a web-based technology called Electron—but Voorhees says it’s his favorite writing tool regardless.
·fastcompany.com·
Inboxes only work if you trust how they’re drained
Inboxes only let us close open loops if they’re reliable—that is, if you can add something to one with total confidence that it’ll get “handled” in some reasonable timeframe. “Handled” is fuzzy: you just need to feel that the fate of those items roughly reflects your true preferences. You’ll trust an inbox system which ends up dropping 90% of items if the other 10% were the only ones you really cared about. You won’t trust an inbox system in which 90% of tasks get done, but the 10% which don’t get done are the ones you really cared about. In efficient inboxes, it may be easy to maintain this kind of confidence: the departure rate naturally exceeds the arrival rate. But most knowledge worker inboxes don’t look like this. The rates are highly variable, which creates bottlenecks. Not every item actually needs to get handled, but people are over-optimistic, so items accrue in a backlog.
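The queueing claim is easy to see in a toy simulation: with bursty arrivals and a steady drain slightly below the average inflow, the backlog accrues. All the numbers here are illustrative.

```python
import random

random.seed(0)
backlog = 0
for day in range(1, 31):
    arrivals = random.choice([0, 1, 2, 7])  # bursty inflow, mean 2.5 items/day
    handled = min(backlog + arrivals, 2)    # steady drain of 2 items/day
    backlog = backlog + arrivals - handled
    if day % 10 == 0:
        print(f"day {day}: backlog = {backlog} items")
# The backlog grows on average, and trust in the inbox erodes with it.
```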
·notes.andymatuschak.org·
Learn from others’ experiences with more perspectives on Search
In the coming weeks, when you search for something that might benefit from the experiences of others, you may see a Perspectives filter appear at the top of search results. Tap the filter, and you’ll exclusively see long- and short-form videos, images and written posts that people have shared on discussion boards, Q&A sites and social media platforms. We’ll also show more details about the creators of this content, such as their name, profile photo or information about the popularity of their content.
Helpful information can often live in unexpected or hard-to-find places: a comment in a forum thread, a post on a little-known blog, or an article with unique expertise on a topic. Our helpful content ranking system will soon show more of these “hidden gems” on Search, particularly when we think they’ll improve the results. We’ve also worked to improve how we rank review content on Search – for example, web pages that review businesses or destinations – to place greater emphasis on the quality and originality of the information. You’ll now see more pages that are based on first-hand experience, or are created by someone with deep knowledge in a given subject. And as we underscore the importance of “experience” as an element of helpful content, we continue our focus on information quality and critical attributes like authoritativeness, expertise and trustworthiness, so you can rely on the information you find.
·blog.google·
Welcome in my mind 🧠 - My second-brain
I consider myself an internet offspring. I had the chance to access computers very early in my life and I think it had a big influence on who I am right now. Like a lot of us internet citizens, what I value the most is learning. Whatever the subject, whatever it takes, whatever it costs in money or time, what I like most is learning. That's, I think, the biggest reason why I'm starting this "Limitless Exploration" project.
·anthonyamar.fr·
File over app
That’s why I feel like Obsidian is a truly great company: it has a true mission that’s rooted in human values and human experience. This is well written. Having apps that cater to the files and artifacts they produce, rather than files catered to the tools and accessible only within their apps.
File over app is an appeal to tool makers: accept that all software is ephemeral, and give people ownership over their data.
The world is filled with ideas from generations past, transmitted through many mediums, from clay tablets to manuscripts, paintings, sculptures, and tapestries. These artifacts are objects that you can touch, hold, own, store, preserve, and look at. To read something written on paper all you need is eyeballs. Today, we are creating innumerable digital artifacts, but most of these artifacts are out of our control. They are stored on servers, in databases, gated behind an internet connection, and login to a cloud service. Even the files on your hard drive use proprietary formats that make them incompatible with older systems and other tools. Paraphrasing something I wrote recently: If you want your writing to still be readable on a computer from the 2060s or 2160s, it’s important that your notes can be read on a computer from the 1960s.
You should want the files you create to be durable, not only for posterity, but also for your future self. You never know when you might want to go back to something you created years or decades ago. Don’t lock your data into a format you can’t retrieve.
·stephanango.com·
Method
“Ping Practice is a method I'm developing for translating everyday experiences into insights and actions that align with what you need and value.”
·ping-practice.gitbook.io·
Cultivating depth and stillness in research | Andy Matuschak
The same applies to writing. For example, when one topic doesn’t seem to fit a narrative structure, it often feels like a problem I need to “get out of the way”. It’s much better to wonder: “Hm, why do I have this strong instinct that this point’s related? Is there some more powerful unifying theme waiting to be identified here?”
Often I need to improve the framing, to find one which better expresses what I’m deeply excited about. If I can’t find a problem statement which captures my curiosity, it’s best to drop the project for now.
I’m much less likely to flinch away when I’m feeling intensely curious, when I truly want to understand something, when it’s a landscape to explore rather than a destination to reach. Happily, curiosity can be cultivated. And curiosity is much more likely than task-orientation to lead me to interesting ideas.
Savor the subtle insights which really do occur regularly in research. Think of it like cultivating a much more sensitive palate.
“Why is this so hard? Because you’re utterly habituated to steady progress—to completing things, to producing, to solving. When progress is subtle or slow, when there’s no clear way to proceed, you flinch away. You redirect your attention to something safer, to something you can do. You jump to implementation prematurely; you feel a compulsion to do more background reading; you obsess over tractable but peripheral details. These are all displacement behaviors, ways of not sitting with the problem. Though each instance seems insignificant, the cumulative effect is that your stare rarely rests on the fog long enough to penetrate it. Weeks pass, with apparent motion, yet you’re just spinning in place. You return to the surface with each glance away. You must learn to remain in the depths.”
Depth of concentration is cumulative, and precious. An extra hour or two of depth is enormously valuable. I reliably get more done—and with more depth—in that 6-7 hour morning block than I’d previously done in 9-10 hours throughout the day. This feels wonderful. By 2PM, I’ve done my important work for the day. I know that no more depth-y work is likely, and that I’ll only frustrate myself if I try—so I free myself from that pressure.
I notice that some part of me feels ashamed to say that I’m “done” working at 2PM. This is probably because in my previous roles, I really could solve problems and get more done by simply throwing more hours at the work. That’s just obviously not true in my present work, as I’ve learned through much frustration. Reading memoirs of writers, artists, and scientists, I see that 2-4 hours per day seems to be the norm for a primary creative working block. Separately, and I don’t want to harp on this because I want this essay to be about quality, not quantity, but: I think most people are laughably misled about how much time they truly work. In a median morning block, I complete the equivalent of 12 25-minute pomodoros. When I worked at large companies, getting 8 done before 6PM was a rarity—even though I’d assiduously arrange my calendar to maximize deep work!
I take meetings; I exercise; I meditate; I go on long walks. I’ll often do shallower initial reads of papers and books in the afternoon, or handle administrative tasks. Sometimes I’ll do easy programming work. It’s all “bonus time”, nothing obligatory. My life got several hours more slack when I adopted this schedule, and yet my output improved. Wonderful!
no internet on my phone before I sit down at my desk. I don’t want anyone else’s thoughts in my head before I start thinking my own.
If I spend a working interval flailing, never sinking below the surface, the temptation is to double-down, to “make up for it”. But the right move for me is usually to go sit in a different room with only my notebook, and to spend the next working interval writing or sketching by hand about the problem.
Administrative tasks are a constant temptation for me: aha, a task I can complete! How tantalizing! But these tasks are rarely important. So I explicitly prohibit myself from doing any kind of administrative work for most of the morning. In the last hour or two, if I notice myself getting weary and unfocused, I’ll sometimes switch gears into administrative work as a way to “rescue” that time.
I’ve noticed that unhealthy afternoon/evening activities can easily harm the next morning’s focus, by habituating me to immediate gratification.
most of the benefit just seems to come from regularly reflecting on what I’m trying and what’s happening as a result. It’s really about developing a rich mental model of what focus and perseverance feel like, and what factors seem to support or harm those states of mind.
Sometimes I just need to execute; and then traditional productivity advice helps enormously. But deep insight is generally the bottleneck to my work, and producing it usually involves the sort of practices I’ve described here.
·andymatuschak.org·