How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
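As a concrete sketch of the machinery behind such memory systems: the scheduler below is a simplified variant in the spirit of the SM-2 spaced-repetition algorithm. The intervals, ease bounds, and update constants are illustrative assumptions, not the parameters Quantum Country or any particular system actually uses.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Card:
    prompt: str
    interval_days: float = 1.0
    ease: float = 2.5                      # growth factor; 2.5 is SM-2's starting value
    due: date = field(default_factory=date.today)

def review(card: Card, remembered: bool, today: date) -> None:
    """Reschedule a card after a review; the constants here are illustrative."""
    if remembered:
        card.interval_days *= card.ease            # each success pushes the card
        card.ease = min(card.ease + 0.1, 3.0)      # exponentially further out
    else:
        card.interval_days = 1.0                   # forgotten: see it again tomorrow
        card.ease = max(card.ease - 0.2, 1.3)
    card.due = today + timedelta(days=round(card.interval_days))

# Remembering becomes a choice: keep answering reviews and the card recedes.
card = Card("What does a controlled-NOT gate do?")
start = date.today()
for offset, ok in [(0, True), (2, True), (9, True)]:
    review(card, ok, today=start + timedelta(days=offset))
    print(card.due, f"(next interval ≈ {card.interval_days:.1f} days)")
```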
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways do modern design practice and tech-industry product practice fall short? To be successful, you need an insight-through-making loop operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers do. Mnemonic video is a promising vehicle for such explorations, possibly combining deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It’s striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You’d provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn’t necessarily understand all that was going on. But they’d begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won’t be so dry.
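To make that concrete: below is a rough sketch of what such a first-page interface might sit on top of. The `teleport` function and its shape are hypothetical, invented for illustration, but the simulation underneath is the standard teleportation protocol run on plain NumPy state vectors.

```python
import numpy as np

# Hypothetical "page one" library for an executable book; the API is invented,
# the protocol is standard quantum teleportation.
rng = np.random.default_rng(seed=1)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def lift(u, target, n=3):
    """Lift a single-qubit gate to an n-qubit operator (qubit 0 is leftmost)."""
    ops = [u if q == target else I2 for q in range(n)]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=3):
    """Permutation matrix for CNOT on an n-qubit register."""
    dim = 2 ** n
    m = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        m[j, i] = 1
    return m

def teleport(psi):
    """Send the state `psi` from qubit 0 to qubit 2 via a shared Bell pair."""
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # qubits 1 and 2
    state = np.kron(psi.astype(complex), bell)                  # full 3-qubit state
    state = cnot(0, 1) @ state                                  # Alice entangles her qubits
    state = lift(H, 0) @ state
    probs = np.abs(state) ** 2                                  # measure qubits 0 and 1
    outcome = rng.choice(8, p=probs / probs.sum())
    m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
    keep = [i for i in range(8)
            if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
    bob = state[keep]                                           # Bob's (unnormalised) qubit
    bob = bob / np.linalg.norm(bob)
    if m1:                                                      # classical corrections
        bob = X @ bob
    if m0:
        bob = Z @ bob
    return bob

psi = np.array([0.6, 0.8j])       # an arbitrary qubit to "send"
print(teleport(psi))              # recovers psi, whatever the measurement outcome
```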
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
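As a toy illustration of what “executable” might mean in this setting, here is a zero-dimensional energy-balance model, about the smallest live component such a report could expose. The constants are textbook-style placeholders, not values drawn from the IPCC assessment.

```python
# Zero-dimensional energy-balance model: absorbed sunlight balances emitted
# infrared. All constants below are illustrative, not taken from the report.
SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0            # solar constant, W m^-2
ALBEDO = 0.30          # planetary albedo
EMISSIVITY = 0.61      # effective emissivity; a crude stand-in for the greenhouse effect

def equilibrium_temperature(albedo=ALBEDO, emissivity=EMISSIVITY):
    """Mean surface temperature at which energy in equals energy out."""
    absorbed = S0 * (1 - albedo) / 4            # incoming flux averaged over the sphere
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# The "live" part: a reader could bind a slider to emissivity and watch
# warming emerge (a stronger greenhouse effect means a lower epsilon).
for eps in (0.63, 0.61, 0.59):
    print(f"epsilon = {eps:.2f}  ->  T = {equilibrium_temperature(emissivity=eps):.1f} K")
```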
In serious mediums, there’s a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later filmmakers. It’s also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they’re doing. In many of his lectures it’s obvious that Feynman isn’t just educating: he’s reporting the results of a lifelong personal obsession with understanding how the world works. It’s thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make; much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near-clone, a game called 2048, sparked a mini-craze and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas and more on improved artwork (for re-releases), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it’s still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game’s primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though those ideas then become public goods. In this way the video game industry has largely solved the public goods problem.
It’s encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn’t work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good and then moving on to something new.
Adobe, in common with many other software companies, does much of its patenting defensively: it patents ideas so patent trolls cannot sue it over similar ideas. The situation is almost exactly the reverse of what you’d like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they’ve worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven’t developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It’s much easier to set KPIs, evaluate OKRs, and manage deliverables when you have a very specific end goal in mind. And so it’s perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it’s not the case that humanity’s biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important event in humanity’s existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It’s amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It’s inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine, or to convince others of, in Silicon Valley’s goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically, open-ended exploration can be just as successful, or more so.
There’s a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren’t used by actual writers. Tools for mathematics that aren’t used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it’s difficult not to be suspicious of this pattern. It’s very easy to slip into a cargo-cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have never done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How Video Games Inspire Great UX
Games felt like they were about sparkles and tension. Great app UX is about minimalism and simplicity. Fortunately, I found Raph Koster, the author of A Theory of Fun. Raph is known as a “Game Grammarian” and deeply deconstructs how games are made.
Another more modern example is this landing page for PayPal. Notice how the page clearly invites you to choose. Are you a “Personal” user or a “Business” user? As you mouse over each section, the story unfolds, expanding your choices, offering you things you can easily understand and identify with. Each branch has a clear call to action. This is a beautiful storytelling sequence that pulls you in and gets you to become an active part of the onboarding process.
There are clearly three distinct versions of jumping going on here: the initial jump (a simple button press), the long jump (a long button press), and the landing jump (a timed jump). What’s so interesting here is that there is only one ‘thing’ you’re learning: jumping. But by stressing subtle aspects of how to jump, the game builds up variations of it. A basic jump gets you over things, a long jump can “open”, and landing a jump can “attack”. A boring app designer like me would assume you’d need three different verbs/buttons for this, but Super Mario does this with a single “Jump” action.
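Here is a minimal sketch of that idea in code. The thresholds and the function name are invented for illustration; the point is that press duration and timing, rather than extra buttons, distinguish the three variants.

```python
LONG_PRESS = 0.25       # seconds of holding that upgrades a basic jump (made up)
LANDING_WINDOW = 0.10   # press within this window after landing to chain (made up)

def classify_jump(hold_seconds: float, seconds_since_landing: float) -> str:
    """One input, three verbs: duration and timing do the work of extra buttons."""
    if seconds_since_landing <= LANDING_WINDOW:
        return "landing jump"    # timed: chained off the previous landing ("attack")
    if hold_seconds >= LONG_PRESS:
        return "long jump"       # held: clears wider gaps ("open")
    return "jump"                # tapped: the basic verb (gets you over things)

print(classify_jump(0.05, 2.00))   # jump
print(classify_jump(0.40, 2.00))   # long jump
print(classify_jump(0.05, 0.05))   # landing jump
```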
APPS: Each feature exists in isolation; how it is done usually has little relation to other features (other than using a style guide). GAMES: Build the game through a single mechanic that grows in expressive power by adding modifiers like time, special keys, or timing.
APPS: Just throw a bunch of features into a pot. GAMES: Understand that everything is a journey. Work hard to make everything a closely connected arc of events that helps the user create a narrative that matches the overall story.
APPS: Assume users are at a constant skill level. GAMES: Use hints constantly and patiently to move users to the next level.
APPS: Tend to offer users a large toolbox and let them figure out how to get started. GAMES: Have a clear understanding of the journey and say “Start here first”.
The Mac took a very hardware-driven concept, turning on your computer, and turned it into theater. Yes, it had the boot sound, but it then showed a promise, a compromise of the final desktop, and as it booted, ‘inflated’ that promise with the final working model. Why people loved the Mac is often misunderstood. I’d claim that it’s this dedication to taking people on a carefully crafted story, one which allowed users to craft a compatible narrative, that is at the heart of this devotion.
To win the level you must first cross the street. To cross the street requires that you move the frog. To move the frog requires that you understand joystick timing. Each of these sub-levels has its own feedback considerations: the street (the cars’ movement), the frog (how it moves, how far it jumps each time), and the joystick (direction and speed of movement; it’s quite slow, actually). Games understand that each of these levels has its own set of feedback, motivation, and learning that must take place. This level of deconstruction, in a 30-year-old game no less, blew my mind. Games were complex! They really paid attention to detail. There was a lot here to understand.
The computer example here is desktop menus. “Selecting a menu item” is actually a fractal cascade of skills: you start by browsing the menu bar horizontally; with a click, you shift into a vertical mode but keep the same basic highlight approach. For hierarchical menus, you need to understand the graphic hint that there is something deeper, and then navigate over to reveal and select that submenu. Anyone who has taught beginning computer users the menu system knows how hard it is to master hierarchical menus. It takes practice to find, reveal, and track over to that menu. There is a fractal cascade of skills required.
Raph has a great quote in his book for this: “Fun is just another word for learning”. In order to have fun, you must learn. I find this inspiring: app designers want their users to learn, but we’ve rarely appreciated that this could be fun. Games understand that in order to learn you must start thinking in layers. Begin with a basic skill and slowly add more, getting better one layer at a time.
·jenson.org·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence to the creation and post-production of images, it seems questionable whether the resulting visualisations can still be considered ‘photographs’ in the classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as the authors of their images. We identify a legitimation crisis in the current usage of the term. This paper is an invitation to consider Synthography as the term for a new genre of image production based on AI, observing its current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
Design Thinking Is Fundamentally Conservative and Preserves the Status Quo
Design thinking, in a slight divergence from the original model, suggests instead that the designer herself should generate information about the problem, by drawing on her experience of the people who will be affected by the design through the empathetic connection that she forges with them.
In fact, problem-solving is always messy and most solutions are shaped by political agendas and resource constraints. The solutions that win out are not necessarily the best — they are generally those that are favored by the powerful or at least by the majority.
Design thinking has allowed us to celebrate conventional solutions as breakthrough innovations and to continue with business as usual.
In much the same way that the project shelters the young, it protects nascent ideas by providing a protected space for the ongoing and collaborative engagement with the ambiguity and uncertainty.
the Living Breakwaters project offers an alternative to the closure built into design thinking. It illustrates a design process where the designer is dethroned and where design is less a step-by-step march through a set of stages and more of a space where people can come together and interpret the ways that changing conditions challenge the meanings, patterns, and relationships that they had long taken for granted.
It represents a commitment to a process with no clear beginning and end, with a goal that is often no more explicitly defined than imagining and articulating new ways to meet changes that are still murky and immeasurable.
·hbr.org·
Magic Ink - Information Software and the Graphical Interface
A good industrial designer understands the capabilities and limitations of the human body in manipulating physical objects, and of the human mind in comprehending mechanical models. A camera designer, for example, shapes her product to fit the human hand. She places buttons such that they can be manipulated with index fingers while the camera rests on the thumbs, and weights the buttons so they can be easily pressed in this position, but won’t trigger by accident. Just as importantly, she designs an understandable mapping from physical features to functions—pressing a button snaps a picture, pulling a lever advances the film, opening a door reveals the film, opening another door reveals the battery.
When the software designer defines the interactive aspects of her program, when she places these pseudo-mechanical affordances and describes their behavior, she is doing a virtual form of industrial design. Whether she realizes it or not.

The software designer can thus approach her art as a fusion of graphic design and industrial design. Now, let’s consider how a user approaches software, and more importantly, why.
·worrydream.com·
Interface Aesthetics - An Introduction - Rhizome
Nevertheless, the interface pushes back with its prescribed methodologies, workflows, and limitations. Interface and artist are an antagonistic pair. Perhaps the best description of the polemic between the two is one of productive cannibalism. Just as the interface evolves under the pressure of innovation to accommodate new pragmatic uses, the artists will continue to deconstruct and push its aesthetic and behavioral properties to their limits.
·rhizome.org·
What comes after smartphones? — Benedict Evans
Mainframes were followed by PCs, and then the web, and then smartphones. Each of these new models started out looking limited and insignificant, but each of them unlocked a new market that was so much bigger that it pulled in all of the investment, innovation and company creation and so grew to overtake the old one. Meanwhile, the old models didn’t go away, and neither, mostly, did the companies that had been created by them. Mainframes are still a big business and so is IBM; PCs are still a big business and so is Microsoft. But they don’t set the agenda anymore - no-one is afraid of them.
We’ve spent the last few decades getting to the point that we can now give everyone on earth a cheap, reliable, easy-to-use pocket computer with access to a global information network. But so far, though over 4bn people have one of these things, we’ve only just scratched the surface of what we can do with them.
There’s an old saying that the first fifty years of the car industry were about creating car companies and working out what cars should look like, and the second fifty years were about what happened once everyone had a car - they were about McDonalds and Walmart, suburbs and the remaking of the world around the car, for good and of course bad. The innovation in cars became everything around the car. One could suggest the same today about smartphones - now the innovation comes from everything else that happens around them.
·ben-evans.com·
Yale Law Journal - Amazon’s Antitrust Paradox
Although Amazon has clocked staggering growth, it generates meager profits, choosing to price below cost and expand widely instead. Through this strategy, the company has positioned itself at the center of e-commerce and now serves as essential infrastructure for a host of other businesses that depend upon it. Elements of the firm’s structure and conduct pose anticompetitive concerns—yet it has escaped antitrust scrutiny.
This Note argues that the current framework in antitrust—specifically its pegging competition to “consumer welfare,” defined as short-term price effects—is unequipped to capture the architecture of market power in the modern economy. We cannot cognize the potential harms to competition posed by Amazon’s dominance if we measure competition primarily through price and output. Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive.
These concerns are heightened in the context of online platforms for two reasons. First, the economics of platform markets create incentives for a company to pursue growth over profits, a strategy that investors have rewarded. Under these conditions, predatory pricing becomes highly rational—even as existing doctrine treats it as irrational and therefore implausible.
Second, because online platforms serve as critical intermediaries, integrating across business lines positions these platforms to control the essential infrastructure on which their rivals depend. This dual role also enables a platform to exploit information collected on companies using its services to undermine them as competitors.
·yalelawjournal.org·