Fish eye lens for text
Each level gives you completely different information, depending on what Google thinks you might be interested in. Maps are a true masterclass in visualizing the same information in a variety of ways.
Viewing the same text at different levels of abstraction is powerful, but what if, instead of switching between them, we could see multiple levels at the same time? How might that work?
A portrait lens brings a single subject into focus, isolating it from the background to draw all attention to its details. A wide-angle lens captures more of the scene, showing how the subject relates to its surroundings. And then there’s the fish eye lens—a tool that does both, pulling the center close while curving the edges to reveal the full context.
A fish eye lens doesn’t ask us to choose between focus and context—it lets us experience both simultaneously. It’s good inspiration for how to offer detailed answers while revealing the surrounding connections and structures.
Imagine you’re reading The Elves and the Shoemaker by the Brothers Grimm. You come across a single paragraph describing the shoemaker discovering the tiny, perfectly crafted shoes left by the elves. Without context, the paragraph is just an intriguing moment. Now, what if, instead of reading the whole book, you could hover over this paragraph and instantly access a layered view of the story? The immediate layer might summarize the events leading up to this moment: the shoemaker, struggling in poverty, left his last bit of leather out overnight. Another layer could give you a broader view of the story so far: the shoemaker’s business is mysteriously revitalized thanks to these tiny benefactors. Beyond that, an even higher-level summary might preview how the tale concludes, with the shoemaker and his wife crafting clothes for the elves to thank them.
This approach allows you to orient yourself without having to piece everything together by reading linearly. You get the detail of the paragraph itself, but with the added richness of understanding how it fits into the larger story.
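To make the idea concrete, here is a minimal sketch in Python of how a layered passage might be modeled. The `Passage` class, its field names, and the `zoom` parameter are hypothetical illustrations, not anything from the essay.

```python
# A passage carries progressively wider rings of summary context,
# so a reader can "zoom out" in place instead of reading linearly.
from dataclasses import dataclass, field

@dataclass
class Passage:
    text: str  # the paragraph itself, in full detail
    layers: list[str] = field(default_factory=list)  # summaries, nearest context first

    def view(self, zoom: int) -> str:
        """The paragraph plus `zoom` rings of surrounding context, widest first."""
        rings = list(reversed(self.layers[:zoom]))
        return "\n\n".join(rings + [self.text])

shoes = Passage(
    text="The shoemaker discovers tiny, perfectly crafted shoes...",
    layers=[
        "Struggling in poverty, he left his last scrap of leather out overnight.",
        "His business is mysteriously revitalized by tiny benefactors.",
        "The tale ends with the couple sewing clothes to thank the elves.",
    ],
)

print(shoes.view(zoom=2))  # the paragraph plus two rings of context
```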
Chapters give structure, connecting each idea to the ones that came before and after. A good author sets the stage, immersing you with anecdotes, historical background, or thematic threads that help you make sense of the details. Even the act of flipping through a book—a glance at the cover, the table of contents, a few highlighted sections—anchors you in a broader narrative.
The context of who is telling you the information—their expertise, interests, or personal connection—colors how you understand it.
The exhibit places the fish in an ecosystem of knowledge, helping you understand it in a way that goes beyond just a name.
Let's reimagine Wikipedia a bit. In the center of the page, you see a detailed article about fancy goldfish—their habitat, types, and role in the food chain. Surrounding this are broader topics like ornamental fish, similar topics like Koi fish, more specific topics like the Oranda goldfish, and related people like the designer who popularized them. Clicking on another topic shifts it to the center, expanding into full detail while its context adjusts around it. It’s dynamic, engaging, and most importantly, it keeps you connected to the web of knowledge.
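A hedged sketch of that recentering interaction: treat articles as nodes in a graph keyed by relation type, and always render one focused node with its neighbors arranged around it. The graph contents and relation labels below are invented for illustration.

```python
# A tiny focus-plus-context graph. Clicking a neighbor is just a
# re-render with a different focus; the structure never changes.
GRAPH = {
    "Fancy goldfish": {
        "broader": ["Ornamental fish"],
        "similar": ["Koi"],
        "narrower": ["Oranda goldfish"],
        "related": ["Goldfish breeders"],
    },
    "Oranda goldfish": {"broader": ["Fancy goldfish"]},
}

def render(focus: str) -> None:
    print(f"== {focus} ==  (full article shown here)")
    for relation, topics in GRAPH.get(focus, {}).items():
        print(f"  {relation}: {', '.join(topics)}")

render("Fancy goldfish")
render("Oranda goldfish")  # "clicking" a neighbor recenters the view
```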
The beauty of a fish eye lens for text is how naturally it fits with the way we process the world. We’re wired to see the details of a single flower while still noticing the meadow it grows in, to focus on a conversation while staying aware of the room around us. Facts and ideas are never meaningful in isolation; they only gain depth and relevance when connected to the broader context.
A single number on its own might tell you something, but it’s the trends, comparisons, and relationships that truly reveal its story. Is 42 a high number? A low one? Without context, it’s impossible to say. Context is what turns raw data into understanding, and it’s what makes any fact—or paragraph, or answer—gain meaning.
The fish eye lens takes this same principle and applies it to how we explore knowledge. It’s not just about seeing the big picture or the fine print—it’s about navigating between them effortlessly. By mirroring the way we naturally process detail and context, it creates tools that help us think not only more clearly but also more humanly.
·wattenberger.com·
Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
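That lowest level really is just math. As a minimal illustration (not how any particular synthesizer is implemented), a few lines of Python can act as a sine oscillator and write the result to a WAV file using only the standard library:

```python
# Sound as a mathematical object: sample a sine wave and save it.
import math
import struct
import wave

RATE = 44100  # samples per second

def sine(freq_hz: float, seconds: float, amp: float = 0.5) -> list[float]:
    n = int(RATE * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * t / RATE) for t in range(n)]

samples = sine(440.0, 1.0)  # one second of A4
with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```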
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
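The classic toy example of this: directions in the vector space behave like relations between concepts. The three-dimensional vectors below are made up for illustration; real embeddings come from trained models and have hundreds or thousands of dimensions.

```python
# Relations as directions: king - man + woman lands nearest to queen.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.8, 0.0]),
    "woman": np.array([0.5, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.1]),
}

target = emb["king"] - emb["man"] + emb["woman"]

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(max(emb, key=lambda w: cosine(emb[w], target)))  # "queen"
```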
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because they didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we can share them with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
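One crude way to approximate such a concept spectrogram today is to score every paragraph against a few named features by embedding similarity. The feature names and model choice below are assumptions for illustration; the essay imagines these signals coming from interpretable model features rather than off-the-shelf sentence embeddings.

```python
# Score paragraphs against named "features" with cosine similarity.
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
features = ["narrative tension", "positive sentiment", "figurative language"]
paragraphs = [
    "It was a dark and stormy night, and the shutters would not stay shut.",
    "She laughed, and the whole room seemed to brighten.",
    "Time is a river that carries us whether or not we choose to swim.",
]

F = model.encode(features, normalize_embeddings=True)
P = model.encode(paragraphs, normalize_embeddings=True)
heat = P @ F.T  # rows are paragraphs, columns are features

for para, row in zip(paragraphs, heat):
    cells = "  ".join(f"{name}: {score:+.2f}" for name, score in zip(features, row))
    print(para[:40].ljust(42), cells)
```

Each row of `heat` is one slice of the spectrogram; rendering the matrix as color bands along the scrollbar would give the swipe-in view described above.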
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
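A rough way to prototype the word-level highlight with existing tools: mask each word in turn and compare its probability under a "figurative" framing versus a "plain" one. The framing prompts here are assumptions standing in for real model-internal features, and masked-word probability is only a loose proxy for a semantic edit direction.

```python
# Per-word probability under two style framings, via masked LM scoring.
# pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="roberta-base")

def word_scores(sentence: str, framing: str) -> list[tuple[str, float]]:
    """Probability of each word in its own slot, given a style framing."""
    words = sentence.split()
    scores = []
    for i, word in enumerate(words):
        masked = " ".join(words[:i] + [fill.tokenizer.mask_token] + words[i + 1:])
        # RoBERTa's vocabulary marks word starts with a leading space.
        preds = fill(f"{framing}: {masked}", targets=[" " + word])
        scores.append((word, preds[0]["score"]))
    return scores

sentence = "The night swallowed the road whole"
plain = word_scores(sentence, "Plain description")
figurative = word_scores(sentence, "Figurative language")
for (word, p_plain), (_, p_fig) in zip(plain, figurative):
    direction = "more figurative" if p_fig > p_plain else "more plain"
    print(f"{word:>10}: {direction} ({p_fig - p_plain:+.4f})")
```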
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of interaction becomes much more powerful in the space of interpretable features, because features can be combined together much more cleanly than textual instructions in prompts.
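A sketch of why features compose more cleanly than prompt text: if each named style is a vector, a "writing filter" is just a weighted sum, and blending two filters is ordinary arithmetic. The library contents and the steering step below are hypothetical, loosely mirroring activation-steering experiments on language models.

```python
# A shared "catalogue" of style features as vectors, blended by weight.
import numpy as np

rng = np.random.default_rng(0)
library = {  # stand-ins for directions extracted from a real model
    "feynman-explainer": rng.normal(size=512),
    "narrative-voice": rng.normal(size=512),
    "dry-humor": rng.normal(size=512),
}

def compose(weights: dict[str, float]) -> np.ndarray:
    """Blend named features into a single steering direction."""
    v = sum(w * library[name] for name, w in weights.items())
    return v / np.linalg.norm(v)

filter_vec = compose({"feynman-explainer": 1.0, "dry-humor": 0.3})

# A model's hidden state could then be nudged along this direction.
hidden = rng.normal(size=512)        # stand-in for one layer's activation
steered = hidden + 4.0 * filter_vec  # strength 4.0 is an arbitrary knob
```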
·thesephist.com·