Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently from an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
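A minimal sketch of that idea in Python (NumPy only; the function name and sample rate are illustrative choices, not anything from the post): a note is just a sampled mathematical function.

```python
import numpy as np

SAMPLE_RATE = 44_100  # CD-quality samples per second (an illustrative choice)

def sine_note(freq_hz: float, duration_s: float, amplitude: float = 0.5) -> np.ndarray:
    """Synthesize a pure tone: sound as a sampled mathematical function."""
    t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
    return amplitude * np.sin(2.0 * np.pi * freq_hz * t)

# One second of concert A (440 Hz): music at the lowest level of abstraction,
# an array of numbers describing air pressure over time.
note = sine_note(440.0, 1.0)
```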
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because they didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we can share them with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
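As a sketch only: assuming a hypothetical score_feature(sentence, feature) routine standing in for an interpretable model's per-feature activations (nothing the post specifies), the spectrogram itself is just a sentences-by-features matrix.

```python
import numpy as np

FEATURES = ["sentiment", "narrative tension", "figurative language"]

def score_feature(sentence: str, feature: str) -> float:
    # Placeholder: a real system would read this off an interpretable
    # model's feature activations; here it's an arbitrary stub.
    return (hash((sentence, feature)) % 100) / 100.0

def spectrogram(sentences: list[str]) -> np.ndarray:
    """One row per position in the text, one column per concept 'frequency'."""
    return np.array([[score_feature(s, f) for f in FEATURES] for s in sentences])

# A reading UI would render this matrix as a heatmap beside the scrollbar;
# scrubbing a column would name the feature and its strongest rows.
```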
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can see software developers and their package repositories, like NPM, as something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library of linguistic features anyone can download and apply to their thinking and writing: a catalogue of narrative voice, speaking tone, or flavor of figurative language, sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of interaction becomes much more powerful in the space of interpretable features, because features can be combined much more cleanly than textual instructions in prompts.
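A hedged sketch of that composability, with everything hypothetical: the filters library, the 4096-dimensional feature directions, and the assumption that a model exposes an activation-steering hook. The point is only that features combine by vector addition, which stacked prompt instructions cannot do cleanly.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4096  # hypothetical width of the model's activation space

# Hypothetical shared library of extracted style features ("writing filters"),
# each a direction in activation space; random stand-ins here.
filters = {
    "feynman_voice": rng.standard_normal(DIM),
    "figurative_language": rng.standard_normal(DIM),
    "narrative_tension": rng.standard_normal(DIM),
}

def combine(weights: dict[str, float]) -> np.ndarray:
    """Compose filters by weighted vector addition; unlike stacked prompt
    instructions, directions add without rewording each other."""
    v = np.zeros(DIM)
    for name, w in weights.items():
        v += w * filters[name]
    return v

# "Mostly Feynman, a little more figurative":
steering = combine({"feynman_voice": 1.0, "figurative_language": 0.5})
# A steering-capable model would add `steering` into its hidden activations
# during generation; that hook is assumed, not shown.
```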
·thesephist.com·
How tweet threads cured my writer's block: Twitter as a medium for sketching
Twitter’s main constraint encourages concision. It’s hard to dwell on word choice when you have so little space to work with. Twitter’s conversational tone also helps here—I can just write like I talk, and any fancy words would seem out of place. And of course, I can’t tweak fonts and margins, which cuts off a distraction vector.
each idea has to be wrapped in a little atomic package. I find this helpful for figuring out the boundaries between my thoughts and clarifying the discrete units of an argument.
a thread is linear! No indenting allowed. This forces a brisk straight line through the argument, instead of getting mired in the fine points of the sub-sub-sub-arguments of the first idea.
I think Twitter is useless for persuading a skeptical reader; there’s simply not space for providing enough detail and context.
I prefer to use Twitter as a way to workshop ideas with sympathetic parties who already have enough context to share my excitement about the ideas.
Overall, it seems that we want constraints that help keep us on track with fluid thought, but don’t rule out too many interesting possibilities. Considering both of these criteria together is a subtle balancing act, and I don’t see easy answers.
low barrier to finishing. On Twitter, a single sentence is a completely acceptable unit of publication. Anything beyond that is sort of a bonus. In contrast, most of my blog posts go unpublished because I fear they’re not complete, or not good enough in some dimension. These unpublished drafts are obviously far more complete than a single tweet, but because they’re on a blog, they don’t feel “done,” and it’s hard to overcome the fear of sharing.
This seems like a crucial part of sketching tools: when you make a sketch, it should be understood that your idea is immature, and feel safe to share it in that state. There’s a time and a place for polished, deeply thorough artifacts… and it’s not Twitter! Everyone knows you just did a quick sketch.
I believe that quantity leads to quality. The students who make more pots in ceramics class improve faster than the students who obsess over making a single perfect pot. A tool with a built-in low barrier to finishing makes it easier to overcome the fear, do more work, and share it at an earlier stage.
For me, Twitter does an oddly good job at simulating the thrilling creative energy of a whiteboarding session. People pop in and out of the conversation offering insights; trees and sub-trees form riffing off of earlier points.
I’m curious to think more about the constraints/freedoms afforded by different kinds of creative tools, and whether we could get more clever with those constraints to enable new kinds of sketching. I’m especially curious about kinds of sketching which are only possible thanks to computers, and couldn’t have been done with paper and pen.
·geoffreylitt.com·
Writing with AI
iA Writer's vision for using AI in the writing process
Thinking in dialogue is easier and more entertaining than struggling with feelings, letters, grammar and style all by ourselves. Used as a writing dialogue partner, ChatGPT can become a catalyst for clarifying what we want to say. Even if it is wrong. Sometimes we need to hear what’s wrong to understand what’s right.
Seeing in clear text what is wrong or, at least, what we don’t mean can help us set our minds straight about what we really mean. If you get stuck, you can also simply let it ask you questions. If you don’t know how to improve, you can tell it to be evil in its critique of your writing.
Just compare usage with AI to how we dealt with similar issues before AI. Discussing our writing with others is a general practice and regarded as universally helpful; honest writers honor and credit their discussion partners. We already use spell checkers and grammar tools. It’s common practice to use human editors for substantial or minor copy editing of our public writing. Clearly, using dictionaries and thesauri to find the right expression is not a crime.
Using AI in the editor replaces thinking. Using AI in dialogue increases thinking. Now, how can we connect the editor and the chat window without making a mess? Is there a way to keep human and artificial text apart?
·ia.net·
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
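Behind these claims is a simple scheduling loop. Here is a minimal sketch of spaced repetition in Python; the interval growth rule and multiplier are illustrative assumptions, not Quantum Country's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next scheduled review

def review(card: Card, remembered: bool, multiplier: float = 2.0) -> Card:
    """Expand the interval on success, reset on failure: review becomes
    a schedule rather than an event left up to chance."""
    card.interval_days = card.interval_days * multiplier if remembered else 1.0
    return card

# Five successful reviews push a card out to 2 -> 4 -> 8 -> 16 -> 32 days.
card = Card()
for _ in range(5):
    card = review(card, remembered=True)
print(card.interval_days)  # 32.0
```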
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry.
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
Inboxes only work if you trust how they’re drained
Inboxes only let us close open loops if they’re reliable—that is, if you can add something to one with total confidence that it’ll get “handled” in some reasonable timeframe. “Handled” is fuzzy: you just need to feel that the fate of those items roughly reflects your true preferences. You’ll trust an inbox system which ends up dropping 90% of items if the other 10% were the only ones you really cared about. You won’t trust an inbox system in which 90% of tasks get done, but the 10% which don’t get done are the ones you really cared about. In efficient inboxes, it may be easy to maintain this kind of confidence: the departure rate naturally exceeds the arrival rate. But most knowledge worker inboxes don’t look like this. The rates are highly variable, which creates bottlenecks. Not every item actually needs to get handled, but people are over-optimistic, so items accrue in a backlog.
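That arrival-rate/departure-rate observation is queueing arithmetic at heart; a tiny simulation with made-up rates (illustrative numbers, not from the note) shows how a small average imbalance plus variability compounds into a backlog.

```python
import random

random.seed(0)
backlog = 0
for day in range(250):  # roughly one working year
    arrivals = random.randint(0, 10)   # highly variable inflow, mean 5.0/day
    departures = random.randint(0, 9)  # variable handling capacity, mean 4.5/day
    backlog = max(0, backlog + arrivals - departures)

print(f"backlog after a year: {backlog} items")
# With mean inflow only 0.5/day above outflow, the backlog still grows by
# roughly 125 items; variability alone also creates transient pile-ups.
```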
·notes.andymatuschak.org·
Cultivating depth and stillness in research | Andy Matuschak
The same applies to writing. For example, when one topic doesn’t seem to fit a narrative structure, it often feels like a problem I need to “get out of the way”. It’s much better to wonder: “Hm, why do I have this strong instinct that this point’s related? Is there some more powerful unifying theme waiting to be identified here?”
Often I need to improve the framing, to find one which better expresses what I’m deeply excited about. If I can’t find a problem statement which captures my curiosity, it’s best to drop the project for now.
I’m much less likely to flinch away when I’m feeling intensely curious, when I truly want to understand something, when it’s a landscape to explore rather than a destination to reach. Happily, curiosity can be cultivated. And curiosity is much more likely than task-orientation to lead me to interesting ideas.
Savor the subtle insights which really do occur regularly in research. Think of it like cultivating a much more sensitive palate.
“Why is this so hard? Because you’re utterly habituated to steady progress—to completing things, to producing, to solving. When progress is subtle or slow, when there’s no clear way to proceed, you flinch away. You redirect your attention to something safer, to something you can do. You jump to implementation prematurely; you feel a compulsion to do more background reading; you obsess over tractable but peripheral details. These are all displacement behaviors, ways of not sitting with the problem. Though each instance seems insignificant, the cumulative effect is that your stare rarely rests on the fog long enough to penetrate it. Weeks pass, with apparent motion, yet you’re just spinning in place. You return to the surface with each glance away. You must learn to remain in the depths.”
Depth of concentration is cumulative, and precious. An extra hour or two of depth is enormously valuable. I reliably get more done—and with more depth—in that 6-7 hour morning block than I’d previously done in 9-10 hours throughout the day. This feels wonderful. By 2PM, I’ve done my important work for the day. I know that no more depth-y work is likely, and that I’ll only frustrate myself if I try—so I free myself from that pressure. I notice that some part of me feels ashamed to say that I’m “done” working at 2PM. This is probably because in my previous roles, I really could solve problems and get more done by simply throwing more hours at the work. That’s just obviously not true in my present work, as I’ve learned through much frustration. Reading memoirs of writers, artists, and scientists, I see that 2-4 hours per day seems to be the norm for a primary creative working block. Separately, and I don’t want to harp on this because I want this essay to be about quality, not quantity, but: I think most people are laughably misled about how much time they truly work. In a median morning block, I complete the equivalent of twelve 25-minute pomodoros. When I worked at large companies, getting 8 done before 6PM was a rarity—even though I’d assiduously arrange my calendar to maximize deep work! I take meetings; I exercise; I meditate; I go on long walks. I’ll often do shallower initial reads of papers and books in the afternoon, or handle administrative tasks. Sometimes I’ll do easy programming work. It’s all “bonus time”, nothing obligatory. My life got several hours more slack when I adopted this schedule, and yet my output improved. Wonderful!
no internet on my phone before I sit down at my desk. I don’t want anyone else’s thoughts in my head before I start thinking my own.
If I spend a working interval flailing, never sinking below the surface, the temptation is to double-down, to “make up for it”. But the right move for me is usually to go sit in a different room with only my notebook, and to spend the next working interval writing or sketching by hand about the problem.
Administrative tasks are a constant temptation for me: aha, a task I can complete! How tantalizing! But these tasks are rarely important. So I explicitly prohibit myself from doing any kind of administrative work for most of the morning. In the last hour or two, if I notice myself getting weary and unfocused, I’ll sometimes switch gears into administrative work as a way to “rescue” that time.
I’ve noticed that unhealthy afternoon/evening activities can easily harm the next morning’s focus, by habituating me to immediate gratification.
most of the benefit just seems to come from regularly reflecting on what I’m trying and what’s happening as a result. It’s really about developing a rich mental model of what focus and perseverance feel like, and what factors seem to support or harm those states of mind.
Sometimes I just need to execute; and then traditional productivity advice helps enormously. But deep insight is generally the bottleneck to my work, and producing it usually involves the sort of practices I’ve described here.
·andymatuschak.org·