LN 038: Semantic zoom
This “undulant interface” was made by John Underkoffler. The heresy implicit within [1] is the premise that the user, not the system, gets to define what is most important at any given moment; where to place the jeweler’s loupes for more detail, and where to show only a simple overview, within one consistent interface. Notice how when a component is expanded for more detail, the surrounding elements adjust their position, so the increased detail remains in the broader context. This contrasts sharply with how we get more detail in mainstream interfaces of the day, where modal popups obscure surrounding context, or separate screens replace it entirely. Being able to adjust the detail of different components within the singular context allows users to shape the interfaces they need in each moment of their work.
Pushing towards this style of interaction could show up in many parts of an itemized personal computing environment: when moving in and out of sets, single items, or attributes and references within items.
everyone has unique needs and context, yet that which makes our lives more unique makes today’s rigid software interfaces more frustrating to use. How might Colin use the gestural, itemized interface, combined with semantic zoom on this plethora of data, to elicit the interfaces and answers he’s looking for with his data?
since workout items each have data with associated timestamps and locations, the system knows it can offer both a timeline and map view. And since the items are of one kind, it knows it can offer a table view. Instead of selecting one view to switch to, as we first explored in LN 006, we could drag them into the space to have multiple open at once.
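A minimal sketch of how that view inference might work, assuming a hypothetical item model where each item carries typed attributes; all names here are illustrative, not from the original system:

```typescript
type Item = {
  kind: string; // e.g. "workout", "email"
  attributes: Record<string, unknown>;
};

type ViewKind = "timeline" | "map" | "table";

// Offer views based on what the data itself supports: timestamps
// enable a timeline, locations enable a map, and a set of items
// that are all one kind enables a table.
function availableViews(items: Item[]): ViewKind[] {
  const views: ViewKind[] = [];
  if (items.every((i) => "timestamp" in i.attributes)) views.push("timeline");
  if (items.every((i) => "location" in i.attributes)) views.push("map");
  if (new Set(items.map((i) => i.kind)).size === 1) views.push("table");
  return views;
}
```

Because the available views are derived from the items' attributes rather than hard-coded per app, several of them can be dragged open over the same set at once.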
As the email item view gets bigger, the preview text of the email’s contents eventually turns into the fully-rendered email. At smaller sizes, this view makes less sense, so the system can swap it out for the preview text as needed.
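The swap itself could be as simple as thresholding on the component's rendered size; a sketch under that assumption (the pixel breakpoints are invented):

```typescript
type EmailDetail = "subjectOnly" | "previewText" | "fullyRendered";

// Pick the representation that makes sense at the current size, so
// zooming a component trades overview for detail smoothly.
function emailDetailFor(widthPx: number): EmailDetail {
  if (widthPx < 240) return "subjectOnly";
  if (widthPx < 560) return "previewText";
  return "fullyRendered";
}
```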
·alexanderobenauer.com·
How to Make a Great Government Website—Asterisk
Summary: Dave Guarino, who has worked extensively on improving government benefits programs like SNAP in California, discusses the challenges and opportunities in civic technology. He explains how a simplified online application, GetCalFresh.org, was designed to address barriers that prevent eligible people from accessing SNAP benefits, such as a complex application process, required interviews, and document submission. Guarino argues that while technology alone cannot solve institutional problems, it provides valuable tools for measuring and mitigating administrative burdens. He sees promise in using large language models to help navigate complex policy rules. Guarino also reflects on California's ambitious approach to benefits policy and the structural challenges, like Prop 13 property tax limits, that impact the state's ability to build up implementation capacity.
there are three big categories of barriers. The application barrier, the interview barrier, and the document barrier. And that’s what we spent most of our time iterating on and building a system that could slowly learn about those barriers and then intervene against them.
The application is asking, “Are you convicted of this? Are you convicted of that? Are you convicted of this other thing?” What is that saying to you, as a person, about what the system thinks of you?
Often they’ll call from a blocked number. They’ll send you a notice of when your interview is scheduled for, but this notice will sometimes arrive after the actual date of the interview. Most state agencies are really slammed right now for a bunch of reasons, including Medicaid unwinding. And many of the people assisting on Medicaid are the same workers who process SNAP applications. If you missed your phone interview, you have to call to reschedule it. But in many states, you can’t get through, or you have to call over and over and over again. For a lot of people, if they don’t catch that first interview call, they’re screwed and they’re not going to be approved.
getting to your point about how a website can fix this — the end result was the lowest-burden application form that actually gets a caseworker what they need to efficiently and effectively process it. We did a lot of iteration to figure out that sweet spot.
We didn’t need to do some hard system integration that would potentially take years to develop — we were just using the system as it existed. Another big advantage was that we had to do a lot of built-in data validation because we could not submit anything that was going to fail the county application. We discovered some weird edge cases by doing this.
A lot of times when you want to build a new front end for these programs, it becomes this multiyear, massive project where you’re replacing everything all at once. But if you think about it, there’s a lot of potential in just taking the interfaces you have today, building better ones on top of them, and then using those existing ones as the point of integration.
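A sketch of that "build on top, validate up front" pattern, with an invented form shape and endpoint standing in for a county's existing intake system; none of this reflects GetCalFresh's actual integration:

```typescript
type Application = {
  name: string;
  householdSize: number;
  monthlyIncomeUsd: number;
};

// Mirror the legacy system's constraints client-side so nothing we
// submit can fail downstream; the existing system stays untouched
// and serves as the point of integration.
function validate(app: Application): string[] {
  const errors: string[] = [];
  if (app.name.trim().length === 0) errors.push("Name is required.");
  if (!Number.isInteger(app.householdSize) || app.householdSize < 1)
    errors.push("Household size must be a whole number of at least 1.");
  if (app.monthlyIncomeUsd < 0) errors.push("Income cannot be negative.");
  return errors;
}

async function submit(app: Application): Promise<void> {
  const errors = validate(app);
  if (errors.length > 0) throw new Error(errors.join(" "));
  // Hypothetical endpoint standing in for the existing county system.
  await fetch("https://county.example/api/applications", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(app),
  });
}
```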
Government tends to take a more high-modernist approach to the software it builds, which is like “we’re going to plan and know up front how everything is, and that way we’re never going to have to make changes.” In terms of accreting layers — yes, you can get to that point. But I think a lot of the arguments I hear that call for a fundamental transformation suffer from the same high-modernist thinking that is the source of much of the status quo.
If you slowly do this kind of stuff, you can build resilient and durable interventions in the system without knocking it over wholesale. For example, I mentioned procedural denials. It would be adding regulations, it would be making technology systems changes, blah, blah, blah, to have every state report why people are denied, at what rate, across every state up to the federal government. It would take years to do that, but that would be a really, really powerful change in terms of guiding feedback loops that the program has.
Guarino argues that attempts to fundamentally transform government technology often suffer from the same "high-modernist" thinking that created problematic legacy systems in the first place. He advocates for incremental improvements that provide better measurement and feedback loops.
when you start to read about civic technology, it very, very quickly becomes clear that things that look like they are tech problems are actually about institutional culture, or about policy, or about regulatory requirements.
If you have an application where you think people are struggling, you can measure how much time people take on each page. A lot of what technology provides is more rigorous measurement of the burdens themselves. A lot of these technologies have been developed in commercial software because there’s such a massive incentive to get people who start a transaction to finish it. But we can transplant a lot of those into government services and have orders of magnitude better situational awareness.
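Instrumenting per-page dwell time is one concrete version of that measurement; a sketch assuming a browser context and an invented collection endpoint:

```typescript
// Record how long a user spends on each page of a multi-step
// application, so the most burdensome steps show up in the data.
let pageEnteredAt = Date.now();

function onPageChange(fromPage: string, toPage: string): void {
  const dwellMs = Date.now() - pageEnteredAt;
  pageEnteredAt = Date.now();
  // sendBeacon survives page unloads; a real system would batch and anonymize.
  navigator.sendBeacon(
    "https://metrics.example/dwell",
    JSON.stringify({ page: fromPage, next: toPage, dwellMs })
  );
}
```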
There’s this starting point thesis: Tech can solve these government problems, right? There’s healthcare.gov and the call to bring techies into government, blah, blah, blah. Then there’s the antithesis, where all these people say, well, no, it’s institutional problems. It’s legal problems. It’s political problems. I think either is sort of an extreme distortion of reality. I see a lot of more oblique levers that technology can pull in this area.
LLMs seem to be a fundamental breakthrough in manipulating words, and at the end of the day, a lot of government is words. I’ve been doing some active experimentation with this because I find it very promising. One common question people have is, “Who’s in my household for the purposes of SNAP?” That’s actually really complicated when you think about people who are living in poverty — they might be staying with a neighbor some of the time, or have roommates but don’t share food, or had to move back home because they lost their job.
I’ve been taking verbatim posts from Reddit that are related to the household question and inputting them into LLMs with some custom prompts that I’ve been iterating on, as well as with the full verbatim federal regulations about household definition. And these models do seem pretty capable at doing some base-level reasoning over complex, convoluted policy words in a way that I think could be really promising.
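The experiment described here amounts to careful prompt assembly: pair the verbatim regulation text with a verbatim account and ask the model to reason over both. A sketch of that shape, with the model call left abstract since no particular API is named:

```typescript
const HOUSEHOLD_REGULATION = `...verbatim federal SNAP household-definition
regulations pasted here...`;

function buildHouseholdPrompt(verbatimPost: string): string {
  return [
    "You are helping determine who counts as part of a SNAP household.",
    "Apply only the regulation text below, and cite the relevant clause.",
    "--- REGULATION ---",
    HOUSEHOLD_REGULATION,
    "--- SITUATION (verbatim) ---",
    verbatimPost,
    "--- QUESTION ---",
    "Who is in this person's household for SNAP purposes, and why?",
  ].join("\n");
}

// Stand-in for whichever LLM client is used; not a real library call.
declare function callModel(prompt: string): Promise<string>;

async function analyzeHousehold(post: string): Promise<string> {
  return callModel(buildHouseholdPrompt(post));
}
```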
caseworkers are spending a lot of their time figuring out, wait, what rule in this 200-page policy manual is actually relevant in this specific circumstance? I think LLMs are going to be really impactful there.
It is certainly the case that I’ve seen some productive tensions in counties where there’s more of a mix of that and what you might consider California-style Republicans who are like, “We want to run this like a business, we want to be efficient.” That tension between efficiency and big, ambitious policies can be a healthy, productive one. I don’t know to what extent that exists at the state level, and I think there’s hints of more of an interest in focusing on state-level government working better and getting those fundamentals right, and then doing the more ambitious things on a more steady foundation.
California seemed to really try to take every ambitious option that the feds give us on a whole lot of fronts. I think the corollary of that is that we don’t necessarily get the fundamental operational execution of these programs to a strong place, and we then go and start adding tons and tons of additional complexity on top of them.
·asteriskmag.com·
Malleable software in the age of LLMs
Historically, end-user programming efforts have been limited by the difficulty of turning informal user intent into executable code, but LLMs can help open up this programming bottleneck. However, user interfaces still matter, and while chatbots have their place, they are an essentially limited interaction mode. An intriguing way forward is to combine LLMs with open-ended, user-moldable computational media, where the AI acts as an assistant to help users directly manipulate and extend their tools over time.
LLMs will represent a step change in tool support for end-user programming: the ability of normal people to fully harness the general power of computers without resorting to the complexity of normal programming. Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code; now that bottleneck is rapidly opening up thanks to LLMs.
If this hypothesis indeed comes true, we might start to see some surprising changes in the way people use software:
- One-off scripts: Normal computer users have their AI create and execute scripts dozens of times a day, to perform tasks like data analysis, video editing, or automating tedious tasks.
- One-off GUIs: People use AI to create entire GUI applications just for performing a single specific task—containing just the features they need, no bloat.
- Build don’t buy: Businesses develop more software in-house that meets their custom needs, rather than buying SaaS off the shelf, since it’s now cheaper to get software tailored to the use case.
- Modding/extensions: Consumers and businesses demand the ability to extend and mod their existing software, since it’s now easier to specify a new feature or a tweak to match a user’s workflow.
- Recombination: Take the best parts of the different applications you like best, and create a new hybrid that composes them together.
Chat will never feel like driving a car, no matter how good the bot is. In their 1986 book Understanding Computers and Cognition, Terry Winograd and Fernando Flores elaborate on this point: In driving a car, the control interaction is normally transparent. You do not think “How far should I turn the steering wheel to go around that curve?” In fact, you are not even aware (unless something intrudes) of using a steering wheel…The long evolution of the design of automobiles has led to this readiness-to-hand. It is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road).
Think about how a spreadsheet works. If you have a financial model in a spreadsheet, you can try changing a number in a cell to assess a scenario—this is the inner loop of direct manipulation at work. But, you can also edit the formulas! A spreadsheet isn’t just an “app” focused on a specific task; it’s closer to a general computational medium which lets you flexibly express many kinds of tasks. The “platform developers”—the creators of the spreadsheet—have given you a set of general primitives that can be used to make many tools. We might draw the double loop of the spreadsheet interaction like this: you can edit numbers in the spreadsheet, but you can also edit formulas, which edits the tool itself.
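A toy model of that double loop, assuming a minimal cell type in which the same artifact holds both values (the inner loop) and formulas (the outer loop); everything here is illustrative:

```typescript
type Cell =
  | { kind: "value"; value: number }
  | { kind: "formula"; compute: (get: (ref: string) => number) => number };

const sheet = new Map<string, Cell>();

// Inner loop: edit a number to test a scenario.
sheet.set("A1", { kind: "value", value: 1200 }); // income
sheet.set("A2", { kind: "value", value: 300 }); // expenses

// Outer loop: edit the formula itself, reshaping the tool.
sheet.set("A3", { kind: "formula", compute: (get) => get("A1") - get("A2") });

function evaluate(ref: string): number {
  const cell = sheet.get(ref);
  if (!cell) throw new Error(`Unknown cell ${ref}`);
  return cell.kind === "value" ? cell.value : cell.compute(evaluate);
}

console.log(evaluate("A3")); // 900
```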
what if you had an LLM play the role of the local developer? That is, the user mainly drives the creation of the spreadsheet, but asks for technical help with some of the formulas when needed? The LLM wouldn’t just create an entire solution, it would also teach the user how to create the solution themselves next time.
This picture shows a world that I find pretty compelling. There’s an inner interaction loop that takes advantage of the full power of direct manipulation. There’s an outer loop where the user can also more deeply edit their tools within an open-ended medium. They can get AI support for making tool edits, and grow their own capacity to work in the medium. Over time, they can learn things like the basics of formulas, or how a VLOOKUP works. This structural knowledge helps the user think of possible use cases for the tool, and also helps them audit the output from the LLMs. In a ChatGPT world, the user is left entirely dependent on the AI, without any understanding of its inner mechanism. In a computational medium with AI as assistant, the user’s reliance on the AI gently decreases over time as they become more comfortable in the medium.
·geoffreylitt.com·
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry.
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later filmmakers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
The Mac Turns Forty – Pixel Envy
As for a Hall of Shame thing? That would be the slow but steady encroachment of single-window applications in MacOS, especially via Catalyst and Electron. The reason I gravitated toward MacOS in the first place is the same reason I continue to use it: it fits my mental model of how an operating system ought to work.
·pxlnv.com·
AI Models in Software UI - LukeW
In the first approach, the primary interface affordance is an input that directly (for the most part) instructs one or more AI models. In this paradigm, people are authoring prompts that result in text, image, video, etc. generation. These prompts can be sequential, iterative, or unrelated. Marquee examples are OpenAI's ChatGPT interface and Midjourney's use of Discord as an input mechanism. Since there are few, if any, UI affordances to guide people, these systems need to respond to a very wide range of instructions. Otherwise people get frustrated with their primarily hidden (to the user) limitations.
The second approach doesn't include any UI elements for directly controlling the output of AI models. In other words, there are no input fields for prompt construction. Instead, instructions for AI models are created behind the scenes as people go about using application-specific UI elements. People using these systems could be completely unaware an AI model is responsible for the output they see.
The third approach is application-specific UI with AI assistance. Here people can construct prompts through a combination of application-specific UI and direct model instructions. These could be additional controls that generate portions of those instructions in the background, or the ability to directly guide prompt construction through the inclusion or exclusion of content within the application. Examples of this pattern are Microsoft's Copilot suite of products for GitHub, Office, and Windows.
they could be overlays, modals, inline menus, and more. What they have in common, however, is that they supplement application-specific UIs instead of completely replacing them.
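A sketch of that third pattern, where application-specific controls assemble portions of the model instruction behind the scenes while still accepting direct guidance; the control names here are hypothetical:

```typescript
type EditorSelection = { text: string };

type AssistOptions = {
  tone: "formal" | "casual";
  length: "shorter" | "longer";
  userInstruction?: string; // optional direct guidance typed by the user
};

// Application UI (tone toggles, length buttons, the current selection)
// is translated into a prompt the user never has to write by hand.
function buildAssistPrompt(sel: EditorSelection, opts: AssistOptions): string {
  const parts = [
    `Rewrite the following text in a ${opts.tone} tone.`,
    `Make it ${opts.length}.`,
  ];
  if (opts.userInstruction) parts.push(`Also: ${opts.userInstruction}`);
  parts.push("--- TEXT ---", sel.text);
  return parts.join("\n");
}
```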
·lukew.com·
Elegy for the Native Mac App
Tracing a trendline from the start of the Mac app platform to the future of visionOS
In recent years Sketch’s Mac-ness has become a liability. Requiring every person in a large design organization to use a Mac is not an easy sell. Plus, a new generation of “internet native” users expect different things from their software than old-school Mac connoisseurs: Multiplayer editing, inline commenting, and cloud sync are now table-stakes for any successful creative app.
At the time of Sketch’s launch most UX designers were using Photoshop or Illustrator. Both were expensive and overwrought, and neither was actually created for UX design. Sketch’s innovation wasn’t any particular feature — if anything it was the lack of features. It did a few things really well, and those were exactly the things UX designers wanted. In that way it really embodied the Mac ethos: simple, single-purpose, and fun to use.
Apple pushed hard to attract artists, filmmakers, musicians, and other creative professionals. It started a virtuous cycle. More creatives using Macs meant more potential customers for creative Mac software, which meant more developers started building that software, which in turn attracted even more customers to the platform. And so the Mac ended up with an abundance of improbably-good creative tools. Usually these apps weren’t as feature-rich or powerful as their PC counterparts, but were faster and easier and cheaper and just overall more conducive to the creative process.
Apple is still very interested in selling Macs — precision-milled aluminum computers with custom-designed chips and “XDR” screens. But they no longer care much about The Mac: The operating system, the software platform, its design sensibilities, its unique features, its vibes.
The term-of-art for this style is “skeuomorphism”: modern designs inspired by their antecedents — calculator apps that look like calculators, password-entry fields that look like bank vaults, reminders that look like sticky notes, etc. This skeuomorphic playfulness made downloading a new Mac app delightful. The discomfort of opening a new unfamiliar piece of software was totally offset by the joy of seeing a glossy pixel-perfect rendition of a bookshelf or a bodega or a poker table, complete with surprising little animations.
There are literally dozens of ways to develop cross-platform apps, including Apple’s own Catalyst — but so far, none of these tools can create anything quite as polished as native implementations. So it comes down to user preference: Would you rather have the absolute best app experience, or do you want the ability to use an acceptably-functional app from any of your devices? It seems that users have shifted to prefer the latter.
Unfortunately the appeal of native Mac software was, at its core, driven by brand strategy. Mac users were sold on the idea that they were buying not just a device but an ecosystem, an experience. Apple extended this branding for third-party developers with its yearly Apple Design Awards.
for the first time since the introduction of the original Mac, they’re just computers. Yes, they were technically always “just computers”, but they used to feel like something bigger. Now Macs have become just another way, perhaps the best way, to use Slack or VSCode or Figma or Chrome or Excel.
visionOS’s story diverges from that of the Mac. Apple is no longer a scrappy upstart. Rather, they’re the largest company in the world by market cap. It’s not so much that Apple doesn’t care about indie developers anymore, it’s just that indie developers often end up as the ants crushed beneath Apple’s giant corporate feet.
I think we’ll see a lot of cool indie software for visionOS, but also I think most of it will be small utilities or toys. It takes a lot of effort to build and support apps that people rely on for their productivity or creativity. If even the wildly-popular Mac platform can’t support those kinds of projects anymore, what chance does a luxury headset have?
·medium.com·
The Case of The Traveling Text Message — Michele Tepper
John Watson looks down at his screen, and we see the message he’s reading on our screen as well. Now, we’re used to seeing extradiegetic text appear on screen with the characters: titles like “Three Years Earlier” or “Lisbon” serve to orient us in a scene. Those titles can even help set the tone of the narrative - think of the snarky humor of the character introduction chyrons on Burn Notice. But this is different: this is capturing the viewer’s screen as part of the narrative itself. [1] It’s a remarkably elegant solution from director Paul McGuigan. And it works because we, the viewing audience, have been trained to understand it by the last several years of service-driven, multi-platform, multi-screen applications.
The connection between Sherlock’s intellect and a computer’s becomes more explicit in one of my favorite scenes, later in the episode. Sherlock is called to the scene of the murder from which the episode takes its title.[3] We watch him process the clues from the scene and as he takes them in, that same titling style appears, now employed in a more conventional-seeming expositional mode.
But then the shot reverses, and it’s not quite so conventional after all. The titling isn’t just what Sherlock is understanding, it’s what he’s seeing. In the same way that text-message titling can take over our screens because whatever we’re watching TV on is just another screen in a multiplatform computing system, this scene tells us that Sherlock views the whole world through the head-up display of his own genius.
·micheletepper.com·
Design can be free (part 3) - Scott Jenson
as I’ve wrestled with writing this, it’s clear that many just don’t see the problem, as they assume a cheap button is nearly as good as a proper dial. They’ll openly admit a dial is indeed better but a cheap button is “good enough” and that a dial is “just too expensive.” That actually may be true! There are cases when using a push button is the right choice. But not always. We need to understand when to try a bit harder. Yes, you’re spending a tiny bit more on hardware, but you’re creating a product that is usually much easier to use, reduces returns, and builds your brand which improves sales. Is this positive outcome a given? Of course not, nothing is guaranteed but we need to stop pretending there is NO COST to cheaping out on buttons.
The dial changes the frequency with a simple twist. The push button device “Deconstructs” the twist dial into two up/down buttons. Each press increments the frequency a tiny amount. This means a twist is replaced with many button presses. Again, they are ‘functionally equivalent’ but the expression and ease of use are quite different.
“Adding a feature” is never free. Always start with the user’s problems first. If pressed into using one of these four abuses, make sure to fully appreciate its impact, the friction it creates, and what you can do to work around it. Adding a feature shouldn’t also “add a problem.”
As a professional UX Designer, I want devices to offer more. But UX Design isn’t about cramming everything into your product in the vague Hail Mary hope it’ll ship a few more units. That’s the sales team speaking, not the user. It’s the wrong motivation and creates monsters.
·jenson.org·
The State of UX in 2023
When content is shorter and maximized for engagement, we often lose track of the origin, history, and context behind it: a new designer is more likely to hear about a UX law from a UX influencer on an Instagram carousel than through the actual research which brought it about. The lack of nuance from algorithm-suggested posts undermines any value we could get from them. For a discipline known for asking "why" and for striving to understand users’ context, it’s time we become more intentional about our own information sources.
Shifts in visual narratives happen every decade or so, so it’s not surprising that the design world is moving away from the corporate flatness of web2. Instead of reminding us of the problems of our current world and the harm that’s been caused by Big Tech, the new, abstract forms of web3 distract us from the crises of the day with the promise of a new virtual world.
·trends.uxdesign.cc·
UX design is becoming a commodity — here’s how we can break the mold
TikTok looked at what makes their content unique. Applying an OOUX mindset, the most interesting object is the “post” populating the feed. Two things stand out. First, the videos are very short, with only a couple of seconds of runtime, which meant the usual distinction between browsing and watching made little sense. Second, opting for a truly mobile experience, their videos would be portrait mode. This meant users could browse and watch in the same orientation, one video at a time. The design decision to merge the browse and watch experience into one stream with autoplay broke all kinds of conventions. Yet, by doing so, it created a unique and engaging experience that is even borderline addictive.
Tinder understood that the selection moment is what makes them unique. They wanted to provide a quick and easy method for their key interaction to decide if a user is a match or not.
·uxdesign.cc·
Creating interface studies
Avoid getting too specific at a feature level. For example, it's too specific if you say "Page navigator" and it's too high level if you try to explore "A blog builder app." The sweet spot to go for is something that is conceptual where you can explore an interaction for a concept, such as, "Exploring spatial viewing of pages".
·proofofconcept.pub·
Folk Interfaces
You can look at an interface and see it as a clearly signposted user journey you should follow. Or you can see it as a collection of functions and affordances to repurpose. As raw material, rather than a guided path.
·maggieappleton.com·
The World's Most Satisfying Checkbox - (Not Boring) Software
The industrial designers talked about contours that felt gratifying in the hand and actions that provided a fidget-like comfort such as flipping the lid of a Zippo lighter or the satisfying click of a pen.
In video games, the button you press to make a character jump is often a simple binary input (pressed or not), and yet the output combines a very finely-tuned choreography of interactions, animations, sounds, particles, and camera shake to create a rich composition of sensations. The same jump button can feel like a dainty hop or a powerful leap. “Game feel” (a.k.a. “juice”) is the “aesthetic sensation of control” (Steve Swink, Game Feel) you have when playing a game.
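A sketch of that fan-out from one binary input to layered feedback; the engine hooks here are invented placeholders, not a real game API:

```typescript
const JUMP_FORCE = 12;

type Vec2 = { x: number; y: number };

interface Character {
  applyImpulse(force: Vec2): void;
  feetPosition(): Vec2;
}

interface Effects {
  playAnimation(c: Character, name: string): void;
  playSound(name: string, opts: { pitchJitter: number }): void;
  emitParticles(name: string, at: Vec2): void;
  shakeCamera(opts: { amplitude: number; durationMs: number }): void;
  hapticPulse(opts: { intensity: number; durationMs: number }): void;
}

// One button press triggers a whole choreography of feedback; the
// tuning of these layers is what makes a jump feel like a hop or a leap.
function onJumpPressed(character: Character, fx: Effects): void {
  character.applyImpulse({ x: 0, y: JUMP_FORCE });
  fx.playAnimation(character, "squash-then-stretch"); // windup and release
  fx.playSound("jump-whoosh", { pitchJitter: 0.05 }); // slight variation per press
  fx.emitParticles("dust-puff", character.feetPosition());
  fx.shakeCamera({ amplitude: 0.5, durationMs: 80 }); // subtle, not nauseating
  fx.hapticPulse({ intensity: 0.4, durationMs: 30 }); // controller rumble
}
```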
The difference comes down to choice—which is to say, Design (with a capital “D”). Game feel is what makes some games feel gratifying to play (a character gliding down a sand dune) and others feel frustrating (sticky jumping, sliding). These decisions become a signature part of a game’s aesthetic feel and gameplay.
The Browser Company has written that software can optimize for emotional needs rather than just functional needs. Jason Yuan has promoted the idea of “fidgetability” where, similar to a key fob or lighter, digital actions can be designed to feel satisfying. Rahul Vohra has talked about making interfaces that are first fun as a toy—enjoyable to use without any greater aim.
The 2D portion is a particle simulation that “feeds” the growing sphere made with Lottie. It’s inspired by the charging animation common in games before your character delivers a big blow. Every action needs a windup. A big action—in order to feel big—needs a big windup.
This is the big moment—it has to feel gratifying. We again combine 2D and 3D elements. The sphere and checkmark pop in and a massive starburst fills the screen like an enemy hit in Hollow Knight.
Our digital products are trapped behind a hard pane of glass. We use the term “touch”, but we never really touch them. To truly Feel a digital experience and have an app reach through that glass requires the Designer to employ many redundant techniques. Video games figured this out decades ago. What the screen takes away, you have to add back in: animation, sound, and haptics.
·andy.works·