On Nonviolent Communication
if you say “my boss makes me crazy”, you will indeed think your boss is “making” you crazy. If you instead say “I am frustrated because I am wanting stability and consistency in this relationship” you may then think you can control your level of frustration and clearly address what it is you want. If someone else is making you crazy, there’s nothing you can do. If you control your feelings, you can take actions to change how you respond to causes. Words can be windows or they can be walls — they can open doors for compassion or they can do the opposite. NVC uses words as windows. Our language today uses them as walls. More on this later.
If I ask you to meet me at 6:00 and you pick me up at 6:30, how do I feel? It depends. I could be frustrated that you are late because I want to spend my time productively, or scared that you may not know where to find me, or hurt because I need reassurance that you care about me — or, conversely, happy that I get more time to myself.
It’s not enough to blame the feeling on the person whose actions triggered the feeling. That very same action might have inspired completely different feelings in someone else — or even in me, under different circumstances!
Incidents like the friend coming late may stimulate or set the stage for feelings, but they do not *cause* the feelings.
There is a gap between stimulus and cause — and our power lies in how we use that gap. If we truly understood this — the separation between stimulus and cause — and the idea that we are responsible for our own emotions, we would speak very differently.
We wouldn’t say things like “It bugs me when …” or “It makes me angry when …”. These phrases imply, or outright state, that responsibility for your feelings lies outside of yourself. A better statement would be “When I saw you come late, I started to feel scared”. Here, one is at least taking some responsibility for the feeling, rather than simply blaming the latecomer for causing it.
the more we use our language to cede responsibility to others, the less agency we have over our circumstances, and the more we victimize ourselves.
NVC believes that, as human beings, there are only two things that we are basically saying: Please and Thank You. Judgments are distorted attempts to say “Please.”
NVC requires learning how to say what your needs are, what needs are alive in you at a given moment, which ones are getting fulfilled, and which ones are not.
You sacrifice your needs to provide for and take care of your family. Needs are not important. What’s important is obedience to authority. That’s what’s important. With that background and history we’ve been taught a language that doesn’t teach us how to say how we are. It teaches us to worry about what we are in the eyes of authority.
When our minds have been pre-occupied that way we have trouble answering what seems to be a simple question, which is asked in all cultures throughout the world, “How are you?” It is a way of asking what’s alive in you. It’s a critical question. Even though it’s asked in many cultures, people don’t know how to answer it because they haven’t been educated in a culture that cares about how you are.
The shift necessary requires being able to say how you feel at this moment and what the needs behind your feelings are. And when we ask those questions of highly educated people, they cannot answer them. Ask them how they feel, and they say “I feel that that’s wrong”. Wrong isn’t a feeling. Wrong is a thought.
When your mind has been shaped to worry about what people think about you, you lose connection with what’s alive in you.
The underlying philosophy of punishment and reward is that if people are basically evil or selfish, then the correctional process, if they are behaving in a way you don’t like, is to make them hate themselves for what they have done. If a parent, for example, doesn’t like what the child is doing, the parent says something like “Say you’re sorry!” The child says, “I’m sorry.” The parent says, “No! You’re not really sorry!” Then the child starts to cry, “I’m sorry…” The parent says, “Okay, I forgive you.”
Note: I think NVC is productive for friendships and relationships, or anything where connection is the main goal, not for work or organizations that primarily serve another mission.
NVC involves the following: 1) how we express ourselves to other people, 2) how we interpret what people say to us, and most importantly, 3) how we communicate with ourselves.
Some have suggested alternatives such as Compassionate Communication, Authentic Communication, Connected Communication.
·substack.com·
On the necessity of a sin
AI excels at tasks that are intensely human: writing, ideation, faking empathy. However, it struggles with tasks that machines typically excel at, such as repeating a process consistently or performing complex calculations without assistance. In fact, it tends to solve problems that machines are good at in a very human way. When you get GPT-4 to do data analysis of a spreadsheet for you, it doesn’t innately read and understand the numbers. Instead, it uses tools the way we might, glancing at a bit of the data to see what is in it, and then writing Python programs to try to actually do the analysis. And its flaws — making up information, false confidence in wrong answers, and occasional laziness — also seem much more like human errors than machine errors.
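A minimal sketch of that two-step pattern (glance at a bit of the data, then write ordinary Python to do the real analysis), roughly the kind of code the model ends up producing; the file name and the region/revenue columns are hypothetical.

```python
# A sketch of the pattern described above: first "glance" at the
# spreadsheet to see what's in it, then write pandas code to do the
# actual analysis. File name and column names are hypothetical.
import pandas as pd

df = pd.read_excel("sales.xlsx")  # assumed input file

# Step 1: peek at the data, the way a person (or the model) would.
print(df.head())
print(df.dtypes)

# Step 2: having seen the columns, write the actual analysis.
summary = (
    df.groupby("region")["revenue"]
      .agg(["sum", "mean"])
      .sort_values("sum", ascending=False)
)
print(summary)
```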
This quasi-human weirdness is why the best users of AI are often managers and teachers, people who can understand the perspective of others and correct it when it is going wrong.
Rather than focusing purely on teaching people to write good prompts, we might want to spend more time teaching them to manage the AI.
Telling the system “who” it is helps shape the outputs of the system. Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown. This isn’t magical: you can’t say “act as Bill Gates” and get better business advice, or “write like Hemingway” and get amazing prose, but it can help make the tone and direction appropriate for your purpose.
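As a concrete sketch of persona prompting, here is the same question sent with two different system messages via the OpenAI Python client; the model name is an assumption, and the personas echo the MBA-teacher and circus-clown examples above.

```python
# A sketch of persona ("act as ...") prompting with the OpenAI Python
# client. The model name is an assumption; swap in whatever you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "How should I price a new subscription product?"

for persona in ["a teacher of MBA students", "a circus clown"]:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    # Same question, different persona: the tone and framing shift,
    # even though the underlying advice doesn't get any better.
    print(persona, "->", response.choices[0].message.content[:200])
```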
·oneusefulthing.org·
Part 1: How To Be An Adult— Kegan’s Theory of Adult Development
Robert Kegan's theory of adult development proposes that adults go through 5 developmental stages. Becoming an 'adult' means transitioning to higher stages of development, which involves developing an independent sense of self, gaining traits associated with wisdom and social maturity, and becoming more self-aware and in control of one's behavior and relationships. However, most adults never progress past Stage 3, lacking a fully independent sense of self. Progressing requires a "subject-object shift" where one's beliefs, emotions, and behaviors become observable and controllable, rather than subjective forces.
When we’re older, religion becomes more objective — i.e. I’m no longer my beliefs. I am now a human WITH beliefs who can step back, reflect on and decide what to believe in.
Stage 1 — Impulsive mind (early childhood)
Stage 2 — Imperial mind (adolescence, 6% of adult population)
Stage 3 — Socialized mind (58% of the adult population)
Stage 4 — Self-Authoring mind (35% of the adult population)
Stage 5 — Self-Transforming mind (1% of the adult population)
I focus on Stages 2–5, because they’re most applicable to adult development. Most of the time we’re in transition between stages and/or behave at different stages with different people (i.e. Stage 3 with a partner, Stage 4 with a coworker).
·medium.com·
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
Tulpa - Wikipedia
Tulpa is a concept, originally from Tibetan Buddhism and found in later traditions of mysticism and the paranormal, of a materialized being or thought-form.
The Theosophist Annie Besant, in the 1905 book Thought-Forms, divides them into three classes: forms in the shape of the person who creates them, forms that resemble objects or people and may become ensouled by nature spirits or by the dead, and forms that represent inherent qualities from the astral or mental planes, such as emotions.
The Slender Man has been described by some people as a tulpa-effect, and attributed to multiple people's thought processes.
·en.wikipedia.org·
Memetics - Wikipedia
The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene,[1] to illustrate the principle that he later called "Universal Darwinism".
He gave as examples, tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on.
Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes.
Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random.
·en.m.wikipedia.org·
The Mac Turns Forty – Pixel Envy
As for a Hall of Shame thing? That would be the slow but steady encroachment of single-window applications in MacOS, especially via Catalyst and Electron. The reason I gravitated toward MacOS in the first place is the same reason I continue to use it: it fits my mental model of how an operating system ought to work.
·pxlnv.com·
A bicycle for the senses
We can take nature’s superpowers and expand them across many more vectors that are interesting to humans:
Across scale — far and near, binoculars, zoom, telescope, microscope
Across wavelength — UV, IR, heatmaps, nightvision, wifi, magnetic fields, electrical and water currents
Across time — view historical imagery, architectural, terrain, geological, and climate changes
Across culture — experience the relevance of a place in books, movies, photography, paintings, and language
Across space — travel immersively to other locations for tourism, business, and personal connections
Across perspective — upside down, inside out, around corners, top down, wider, narrower, out of body
Across interpretation — alter the visual and artistic interpretation of your environment, color-shifting, saturation, contrast, sharpness
Headset displays connect sensory extensions directly to your vision. Equipped with sensors that perceive beyond human capabilities, and access to the internet, they can provide information about your surroundings wherever you are. Until now, visual augmentation has been constrained by the tiny display on our phone. By virtue of being integrated with your eyesight, headsets can open up new kinds of apps that feel more natural. Every app is a superpower. Sensory computing opens up new superpowers that we can borrow from nature. Animals, plants and other organisms can sense things that humans can’t.
The first mass-market bicycle for the senses was Apple’s AirPods. Its noise cancellation and transparency mode replace and enhance your hearing. Earbuds are turning into ear computers that will become more easily programmable. This can enable many more kinds of hearing. For example, instantaneous translation may soon be a reality
For the past seven decades, computers have been designed to enhance what your brain can do — think and remember. New kinds of computers will enhance what your senses can do — see, hear, touch, smell, taste. The term spatial computing is emerging to encompass both augmented and virtual reality. I believe we are exploring an even broader paradigm: sensory computing. The phone was a keyhole for peering into this world, and now we’re opening the door.
What happens when you put on a headset and open the “Math” app? How could seeing the world through math help you understand both better?
Advances in haptics may open up new kinds of tactile sensations. A kind of second skin, or softwear, if you will. Consider that Apple shipped a feature to help you find lost items that vibrates more strongly as you get closer. What other kinds of data could be translated into haptic feedback?
It may sound far-fetched, but converting olfactory patterns into visual patterns could open up some interesting applications. Perhaps a new kind of cooking experience? Or new medical applications that convert imperceptible scents into visible patterns?
·stephango.com·
AI Models in Software UI - LukeW
In the first approach, the primary interface affordance is an input that directly (for the most part) instructs an AI model(s). In this paradigm, people are authoring prompts that result in text, image, video, etc. generation. These prompts can be sequential, iterative, or un-related. Marquee examples are OpenAI's ChatGPT interface or Midjourney's use of Discord as an input mechanism. Since there are few, if any, UI affordances to guide people, these systems need to respond to a very wide range of instructions. Otherwise people get frustrated with their primarily hidden (to the user) limitations.
The second approach doesn't include any UI elements for directly controlling the output of AI models. In other words, there's no input fields for prompt construction. Instead instructions for AI models are created behind the scenes as people go about using application-specific UI elements. People using these systems could be completely unaware an AI model is responsible for the output they see.
The third approach is application specific UI with AI assistance. Here people can construct prompts through a combination of application-specific UI and direct model instructions. These could be additional controls that generate portions of those instructions in the background. Or the ability to directly guide prompt construction through the inclusion or exclusion of content within the application. Examples of this pattern are Microsoft's Copilot suite of products for GitHub, Office, and Windows.
they could be overlays, modals, inline menus and more. What they have in common, however, is that they supplement application specific UIs instead of completely replacing them.
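A rough sketch of the third approach described above: application-specific controls assemble most of the model instructions behind the scenes, and any free-form input from the user is layered on top. The field names and prompt template here are hypothetical.

```python
# A sketch of "application-specific UI with AI assistance": UI state
# becomes model instructions the user never has to write themselves.
from dataclasses import dataclass

@dataclass
class EditorState:
    selected_text: str   # content the user highlighted in the app
    tone: str            # chosen from a dropdown, e.g. "formal"
    target_length: str   # chosen from a segmented control, e.g. "shorter"

def build_prompt(state: EditorState, user_instruction: str = "") -> str:
    """Turn UI state into model instructions constructed in the background."""
    prompt = (
        f"Rewrite the following text in a {state.tone} tone, "
        f"making it {state.target_length}.\n\n"
        f"Text:\n{state.selected_text}\n"
    )
    if user_instruction:
        prompt += f"\nAdditional instruction from the user: {user_instruction}\n"
    return prompt

# Example: the user clicked "formal" and "shorter" and typed nothing.
print(build_prompt(EditorState("pls fix this para asap", "formal", "shorter")))
```

The point of the design is that the UI, not the user, carries the burden of prompt construction; the user only supplies intent the application can't already infer.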
·lukew.com·
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
This paper maps concepts from AI alignment onto a basic, three-step interaction cycle, yielding a corresponding set of alignment objectives: 1) specification alignment: ensuring the user can efficiently and reliably communicate objectives to the AI, 2) process alignment: providing the ability to verify and optionally control the AI's execution process, and 3) evaluation support: ensuring the user can verify and understand the AI's output.
the notion of a Process Gulf, which highlights how differences between human and AI processes can lead to challenges in AI control.
·arxiv.org·
Feeling through emotional truths

To gain insight into emotional truths, Kasra recommends feeling into strong emotions rather than overthinking them. Some techniques include sentence completion exercises, imagining emotions as characters to dialogue with, focusing on body sensations, and identifying underlying beliefs.

In general, it's about adopting a mindset of curiosity rather than doubt when exploring one's emotions.

Your emotions are a signaling mechanism. They are your subconscious mind’s toolkit for protecting you from dangers, improving your circumstances, and navigating an otherwise incomprehensibly complex world. Every emotion has some adaptive purpose: fear keeps you safe; anger enforces your boundaries; sadness slows you down; joy speeds you up.
The first step towards living better is to recognize that your subconscious mind is trying to tell you things you don’t yet know (primarily through your emotions, but also via other channels like your dreams). A lot of people struggle to realize even this basic fact; they think of emotions as a disruption: a distraction from, say, their career development, or an impediment to their capacity to “be rational.”
your emotions are worth heeding because they carry wisdom your conscious mind doesn’t have access to. And at that point you must embark on the second step—the much harder step—of figuring out what it is that your mind is trying to tell you.
an attitude of curiosity rather than doubt. Embodiment rather than intellect. You find the answer by allowing yourself to be playful, generative, and spontaneous; not by being methodical, intentional, and constricted. Sit back and feel your way to the answer
·bitsofwonder.substack.com·
The Signal and the Corrective

A technical breakdown of 'narratives' and how they operate: narratives simplify issues by focusing on a main "signal" while ignoring other relevant "noise", and this affects discussions between those with opposing preferred signals. It goes into many examples across basically any kind of ideological or cultural divide.

AI summary:

  • The article explores how different people can derive opposing narratives from the same set of facts, with each viewing their interpretation as the "signal" and opposing views as "noise"
  • Key concepts:
    • Signal: The core belief or narrative someone holds as fundamentally true
    • Corrective: The moderating adjustments made to account for exceptions to the core belief
    • Figure-ground inversion: How the same reality can be interpreted in opposite ways
  • Examples of opposing narratives include:
    • Government as public service vs. government as pork distribution
    • Medical care as healing vs. medical care as harmful intervention
    • Capitalism as wealth creation vs. capitalism as exploitation
    • Nature vs. nurture in human behavior
    • Science as gradual progress vs. science as paradigm shifts
  • Communication dynamics:
    • People are more likely to fall back on pure signals (without correctives) when:
      • Discussions become abstract
      • Communication bandwidth is limited
      • Under stress or emotional pressure
      • Speaking to unfamiliar audiences
      • In hostile environments
  • Persuasion insights:
    • It's easier to add correctives to someone's existing signal than to completely change their core beliefs
    • People must feel their fundamental views are respected before accepting criticism
    • Acknowledging partial validity of opposing views is crucial for productive dialogue
  • Problems in modern discourse:
    • Online debates often lack real-world consequences
    • When there's no need for cooperation, people prefer conquest over consensus
    • Lack of real relationships reduces incentives for civility and understanding
  • The author notes that while most people hold moderate views with both signals and correctives, fundamental differences can be masked when discussing specific policies but become apparent in discussions of general principles
  • The piece maintains a thoughtful, analytical tone while acknowledging the complexity and challenges of human communication and belief systems
  • The author expresses personal examples and vulnerability in describing how they themselves react differently to criticism based on whether it comes from those who share their fundamental values
narratives contradicting each other means that they simplify and generalize in different ways and assign goodness and badness to things in opposite directions. While that might look like contradiction it isn’t, because generalizations and value judgments aren’t strictly facts about the world. As a consequence, the more abstracted and value-laden narratives get the more they can contradict each other without any of them being “wrong”.
“The free market is extremely powerful and will work best as a rule, but there are a few outliers where it won’t, and some people will be hurt so we should have a social safety net to contain the bad side effects.” and “Capitalism is morally corrupt and rewards selfishness and greed. An economy run for the people by the people is a moral imperative, but planned economies don’t seem to work very well in practice so we need the market to fuel prosperity even if it is distasteful.” . . . have very different fundamental attitudes but may well come down quite close to each other in terms of supported policies. If you model them as having one “main signal” (basic attitude) paired with a corrective to account for how the basic attitude fails to match reality perfectly, then this kind of difference is understated when the conversation is about specific issues (because then signals plus correctives are compared and the correctives bring “opposite” people closer together) but overstated when the conversation is about general principles — because then it’s only about the signal.
I’ve said that when discussions get abstract and general people tend to go back to their main signals and ignore correctives, which makes participants seem further apart than they really are. The same thing happens when the communication bandwidth is low for some reason. When dealing with complex matters human communication tends not to be super efficient in the first place and if something makes subtlety extra hard — like a 140 character limit, only a few minutes to type during a bathroom break at work, little to no context or a noisy discourse environment — you’re going to fall back to simpler, more basic messages. Internal factors matter too. When you’re stressed, don’t have time to think, don’t know the person you’re talking to and don’t really care about them, when emotions are heated, when you feel attacked, when an audience is watching and you can’t look weak, or when you smell blood in the water, then you’re going to go simple, you’re going to go basic, you’re going to push in a direction rather than trying to hit a target. And whoever you’re talking to is going to do the same. You both fall back in different directions, exactly when you shouldn’t.
It makes sense to think of complex disagreements as not about single facts but about narratives made up of generalizations, abstractions and interpretations of many facts, most of which aren’t currently on the table. And the status of our favorite narratives matters to us, because they say what’s happening, who the heroes are and who the villains are, what’s matters and what doesn’t, who owes and who is owed. Most of us, when not in our very best moods, will make sure our most cherished narratives are safe before we let any others thrive.
Most people will accept that their main signals have correctives, but they will not accept that their main signals have no validity or legitimacy. It’s a lot easier to install a corrective in someone than it is to dislodge their main signal (and that might later lead to a more fundamental change of heart) — but to do that you must refrain from threatening the signal because that makes people defensive. And it’s not so hard. Listen and acknowledge that their view has greater than zero validity.
In an ideal world, any argumentation would start with laying out its own background assumptions, including stating if what it says should be taken as a corrective on top of its opposite or a complete rejection of it.
·everythingstudies.com·
Arrival (film) - Wikipedia
When Banks is able to establish sufficient shared vocabulary to ask why the aliens have come, they answer with a statement that could be translated as "offer weapon". China interprets this as "use weapon", prompting them to break off communications, and other nations follow. Banks argues that the symbol interpreted as "weapon" can be more abstractly related to the concepts of "means" or "tool"; China's translation likely results from interacting with the aliens using mahjong, a highly competitive game.
·en.wikipedia.org·
File over app
That’s why I feel like Obsidian is a truly great company: it has a true mission that’s rooted in human values and human experience. This is well written. It argues for apps that cater to the files and artifacts they produce, rather than files that are catered to their apps and only accessible within them.
File over app is an appeal to tool makers: accept that all software is ephemeral, and give people ownership over their data.
The world is filled with ideas from generations past, transmitted through many mediums, from clay tablets to manuscripts, paintings, sculptures, and tapestries. These artifacts are objects that you can touch, hold, own, store, preserve, and look at. To read something written on paper all you need is eyeballs. Today, we are creating innumerable digital artifacts, but most of these artifacts are out of our control. They are stored on servers, in databases, gated behind an internet connection and a login to a cloud service. Even the files on your hard drive use proprietary formats that make them incompatible with older systems and other tools. Paraphrasing something I wrote recently: if you want your writing to still be readable on a computer from the 2060s or 2160s, it’s important that your notes can be read on a computer from the 1960s.
You should want the files you create to be durable, not only for posterity, but also for your future self. You never know when you might want to go back to something you created years or decades ago. Don’t lock your data into a format you can’t retrieve.
·stephanango.com·
Natural Language Is an Unnatural Interface
On the user experience of interacting with LLMs
Prompt engineers not only need to get the model to respond to a given question but also structure the output in a parsable way (such as JSON), in case it needs to be rendered in some UI components or be chained into the input of a future LLM query. They scaffold the raw input that is fed into an LLM so the end user doesn’t need to spend time thinking about prompting at all.
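A minimal sketch of that scaffolding, assuming the OpenAI Python client, an example model name, and a hypothetical action-item schema: the prompt pins the output to JSON so it can be parsed, rendered in UI components, or chained into a later query.

```python
# A sketch of prompt scaffolding that structures output as parsable JSON,
# so the end user never has to think about prompting at all. The schema
# and model name are assumptions for illustration.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You extract action items from meeting notes. "
    'Respond with JSON only, shaped like {"items": [{"task": str, "owner": str}]}.'
)

def extract_action_items(notes: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": notes},
        ],
        response_format={"type": "json_object"},  # ask for parsable output
    )
    return json.loads(response.choices[0].message.content)["items"]

items = extract_action_items("Dana to send the deck by Friday; Lee books the venue.")
print(items)  # ready to render in UI or feed into the next LLM call
```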
From the user’s side, it’s hard to decide what to ask while providing the right amount of context. From the developer’s side, two problems arise. It’s hard to monitor natural language queries and understand how users are interacting with your product. It’s also hard to guarantee that an LLM can successfully complete an arbitrary query. This is especially true for agentic workflows, which are incredibly brittle in practice.
When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt
most people use LLMs for ~4 basic natural language tasks, rarely taking advantage of the conversational back-and-forth built into chat systems:
Summarization: Summarizing a large amount of information or text into a concise yet comprehensive summary. This is useful for quickly digesting information from long articles, documents or conversations. An AI system needs to understand the key ideas, concepts and themes to produce a good summary.
ELI5 (Explain Like I'm 5): Explaining a complex concept in a simple, easy-to-understand manner without any jargon. The goal is to make an explanation clear and simple enough for a broad, non-expert audience.
Perspectives: Providing multiple perspectives or opinions on a topic. This could include personal perspectives from various stakeholders, experts with different viewpoints, or just a range of ways a topic can be interpreted based on different experiences and backgrounds. In other words, “what would ___ do?”
Contextual Responses: Responding to a user or situation in an appropriate, contextualized manner (via email, message, etc.). Contextual responses should feel organic and on-topic, as if provided by another person participating in the same conversation.
Prompting nearly always gets in the way because it requires the user to think. End users ultimately do not wish to confront an empty text box in accomplishing their goals. Buttons and other interactive design elements make life easier. The interface makes all the difference in crafting an AI system that augments and amplifies human capabilities rather than adding additional cognitive load. Similar to standup comedy, delightful LLM-powered experiences require a subversion of expectation.
Users will expect the usual drudge of drafting an email or searching for a nearby restaurant, but instead will be surprised by the amount of work that has already been done for them from the moment that their intent is made clear. For example, it would be a great experience to discover pre-written email drafts or carefully crafted restaurant and meal recommendations that match your personal taste. If you still need to use a text input box, at a minimum, also provide some buttons to auto-fill the prompt box. The buttons can pass LLM-generated questions to the prompt box.
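A small sketch of that button pattern: suggestions are derived from the user's current context (stubbed here instead of an actual LLM call, to keep it self-contained) and tapping one pre-fills the prompt box rather than presenting an empty one.

```python
# A sketch of "buttons that auto-fill the prompt box". The helper below is
# hypothetical; in a real app it would be an LLM call seeded with context.
def suggest_prompts(context: str) -> list[str]:
    # Hard-coded suggestions stand in for LLM-generated questions.
    return [
        f"Summarize this for me: {context[:40]}...",
        "Draft a polite reply declining the meeting",
        "Find three restaurants near my next appointment",
    ]

class PromptBox:
    def __init__(self) -> None:
        self.text = ""          # what the user would otherwise type from scratch

    def fill(self, suggestion: str) -> None:
        self.text = suggestion  # clicking a button replaces the blank box

box = PromptBox()
buttons = suggest_prompts("Quarterly planning notes from Tuesday's sync")
box.fill(buttons[0])            # user taps the first suggestion
print(box.text)
```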
·varunshenoy.substack.com·
The VR winter — Benedict Evans
When I started my career 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.
The successor to the smartphone will be something that doesn’t just merge AR and VR but make the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space.
·ben-evans.com·
Our web design tools are holding us back ⚒ Nerd
With Photoshop we could come up with things that we couldn’t build with CSS. But nowadays we can build things with CSS that are impossible to create with our design tools. We have scroll-snap, we have complicated animations, we have all kinds of wonderful interaction, grid, flexbox, all kinds of shapes, and so much more that you won’t find in the drop down menus of your tool of choice. Yet our websites still look and behave like they were designed with Photoshop.
·vasilis.nl·