Found 26 bookmarks
BYOM (Bring Your Own Memory) - by David Hoang
Apple introduced Focus modes in iOS 15 as an evolution of Do Not Disturb, letting users filter notifications and even customize Home Screens by context (Work, Personal, Sleep). In iOS 16, Focus became smarter with Lock Screen pairings and filters across apps like Mail, Calendar, and Safari. iOS 17 refined this with more granular notification controls. Taken together, Focus has evolved from muting distractions to a full context-aware filtering system, a model that shows how AI memory could also be partitioned and personalized by mode rather than being “on” or “off.”
That same framing will be essential for AI memory. Not “on” or “off,” but a filter: what memory is relevant in this context?
One way to achieve this is through a memory interpreter—a layer that sits between your raw personal history and the work context you’re stepping into. Imagine you’ve been doing deep personal research on a topic—reading, journaling, exploring ideas in your own voice. When you shift into a professional setting, the interpreter could filter that knowledge, stripping away casual notes, personal anecdotes, or tone, while surfacing only the relevant facts and references in a format appropriate for work.
In practice, it would act like a translator, allowing the richness of your personal exploration to inform your professional contributions without oversharing or leaking unintended details. It’s not about fusing personal and work memory, but about controlled permeability.
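A minimal sketch of what such an interpreter could look like, assuming memories are stored as plain records tagged with contexts and a sensitivity flag. All names and the filtering rule below are hypothetical illustrations, not from the article:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    contexts: set[str]       # e.g. {"personal", "work"}
    sensitive: bool = False  # journal entries, personal anecdotes, casual notes

def interpret(memories: list[Memory], target_context: str) -> list[str]:
    """Filter raw personal history down to what is appropriate for the
    context being entered: controlled permeability, not fusion."""
    surfaced = []
    for m in memories:
        if target_context not in m.contexts:
            continue  # not relevant to this context
        if target_context == "work" and m.sensitive:
            continue  # strip private details before surfacing at work
        surfaced.append(m.text)
    return surfaced

# Personal research informs a work setting without leaking the journal entry.
history = [
    Memory("Key references and findings from my reading on the topic", {"personal", "work"}),
    Memory("Journal entry on why this topic matters to me", {"personal"}, sensitive=True),
]
print(interpret(history, "work"))
```

A real interpreter would likely use a model to rewrite tone and summarize rather than a tag check, but the core idea is the same: memory retrieval is gated by the context you are stepping into.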
·proofofconcept.pub·
BYOM (Bring Your Own Memory) - by David Hoang
Have We Been Thinking About A.D.H.D. All Wrong?
Skeptics argue that many of the classic symptoms of the disorder — fidgeting, losing things, not following instructions — are simply typical, if annoying, behaviors of childhood. In response, others point to the serious consequences that can result when those symptoms grow more intense, including school failure, social rejection and serious emotional distress.
There are two main kinds of A.D.H.D., inattentive and hyperactive/impulsive, and children in one category often seem to have little in common with children in the other. There are people with A.D.H.D. whom you can’t get to stop talking and others whom you can’t get to start. Some are excessively eager and enthusiastic; others are irritable and moody.
Although the D.S.M. specifies that clinicians shouldn’t diagnose children with A.D.H.D. if their symptoms are better explained by another mental disorder, more than three quarters of children diagnosed with A.D.H.D. do have another mental-health condition as well, according to the C.D.C. More than a third have a diagnosis of anxiety, and a similar fraction have a diagnosed learning disorder. Forty-four percent have been diagnosed with a behavioral disorder like oppositional defiant disorder.
This all complicates the effort to portray A.D.H.D. as a distinct, unique biological disorder. Is a patient with six symptoms really that different from one with five? If a child who experienced early trauma now can’t sit still or stay organized, should she be treated for A.D.H.D.? What about a child with an anxiety disorder who is constantly distracted by her worries? Does she have A.D.H.D., or just A.D.H.D.-like symptoms caused by her anxiety?
The subjects who were given stimulants worked more quickly and intensely than the ones who took the placebo. They dutifully packed and repacked their virtual backpacks, pulling items in and out, trying various combinations. In the end, though, their scores on the knapsack test were no better than the placebo group. The reason? Their strategies for choosing items became significantly worse under the medication. Their choices didn’t make much sense — they just kept pulling random items in and out of the backpack. To an observer, they appeared to be focused, well behaved, on task. But in fact, they weren’t accomplishing anything of much value.
Farah directed me to the work of Scott Vrecko, a sociologist who conducted a series of interviews with students at an American university who used stimulant medication without a prescription. He wrote that the students he interviewed would often “frame the functional benefits of stimulants in cognitive-sounding terms.” But when he dug a little deeper, he found that the students tended to talk about their attention struggles, and the benefits they experienced with medication, in emotional terms rather than intellectual ones. Without the pills, they said, they just didn’t feel interested in the assignments they were supposed to be doing. They didn’t feel motivated. It all seemed pointless.
On stimulant medication, those emotions flipped. “You start to feel such a connection to what you’re working on,” one undergraduate told Vrecko. “It’s almost like you fall in love with it.” As another student put it: On Adderall, “you’re interested in what you’re doing, even if it’s boring.”
Socially, though, there was a price. “Around my friends, I’m usually the most social, but when I’m on it, it feels like my spark is kind of gone,” John said. “I laugh a lot less. I can’t think of anything to say. Life is just less fun. It’s not like I’m sad; I’m just not as happy. It flattens things out.”
John also generally doesn’t take his Adderall during the summer. When he’s not in school, he told me, he doesn’t have any A.D.H.D. symptoms at all. “If I don’t have to do any work, then I’m just a completely regular person,” he said. “But once I have to focus on things, then I have to take it, or else I just won’t get any of my stuff done.”
John’s sense that his A.D.H.D. is situational — that he has it in some circumstances but not in others — is a challenge to some of psychiatry’s longstanding assumptions about the condition. After all, diabetes doesn’t go away over summer vacation. But John’s intuition is supported by scientific evidence. Increasingly, research suggests that for many people A.D.H.D. might be thought of as a condition they experience, sometimes temporarily, rather than a disorder that they have in some unchanging way.
For most of his career, he embraced what he now calls the “medical model” of A.D.H.D — the belief that the brains of people with A.D.H.D. are biologically deficient, categorically different from those of typical, healthy individuals. Now, however, Sonuga-Barke is proposing an alternative model, one that largely sidesteps questions of biology. What matters instead, he says, is the distress children feel as they try to make their way in the world.
Sonuga-Barke’s proposed model locates A.D.H.D. symptoms on a continuum, rather than presenting the condition as a distinct, natural category. And it departs from the medical model in another crucial way: It considers those symptoms not as indications of neurological deficits but as signals of a misalignment between a child’s biological makeup and the environment in which they are trying to function. “I’m not saying it’s not biological,” he says. “I’m just saying I don’t think that’s the right target. Rather than trying to treat and resolve the biology, we should be focusing on building environments that improve outcomes and mental health.”
What the researchers noticed was that their subjects weren’t particularly interested in talking about the specifics of their disorder. Instead, they wanted to talk about the context in which they were now living and how that context had affected their symptoms. Subject after subject spontaneously brought up the importance of finding their “niche,” or the right “fit,” in school or in the workplace. As adults, they had more freedom than they did as children to control the parameters of their lives — whether to go to college, what to study, what kind of career to pursue. Many of them had sensibly chosen contexts that were a better match for their personalities than what they experienced in school, and as a result, they reported that their A.D.H.D. symptoms had essentially disappeared. In fact, some of them were questioning whether they had ever had a disorder at all — or if they had just been in the wrong environment as children.
The work environments where the subjects were thriving varied. For some, the appeal of their new jobs was that they were busy and cognitively demanding, requiring constant multitasking. For others, the right context was physical, hands-on labor. For all of them, what made a difference was having work that to them felt “intrinsically interesting.”
“Rather than a static ‘attention deficit’ that appeared under all circumstances,” the M.T.A. researchers wrote, “our subjects described their propensity toward distraction as contextual. … Believing the problem lay in their environments rather than solely in themselves helped individuals allay feelings of inadequacy: Characterizing A.D.H.D. as a personality trait rather than a disorder, they saw themselves as different rather than defective.”
For the young adults in the “niche” study who were interviewed about their work lives, the transition that helped them overcome their A.D.H.D. symptoms often was leaving academic work for something more kinetic. For Sonuga-Barke, it was the opposite. At university, he would show up at the library at 9 every morning and sit in his carrel working until 5. The next day, he would do it again. Growing up, he says, he had a natural tendency to “hyperfocus,” and back at school in Derby, that tendency looked to his teachers like daydreaming. At university, it became his secret weapon.
I asked Sonuga-Barke what he might have gained if he had grown up in a different time and place — if he had been prescribed Ritalin or Adderall at age 8 instead of just being packed off to the remedial class. “I don’t think I would have gained anything,” he said. “I think without medication, you learn alternative ways of dealing with stuff. In my particular case, there are a lot of characteristics that have helped me. My mind is constantly churning away, thinking of things. I never relax. The way I motivate myself is to turn everything into a problem and to try and solve the problem.”
“The simple model has always been, basically, ‘A.D.H.D. plus medication equals no A.D.H.D.,’” he says. “But that’s not true. Medication is not a silver bullet. It never will be.” What medication can sometimes do, he believes, is allow families more room to communicate. “At its best,” he says, “medication can provide a window for parents to engage with their kids,” by moderating children’s behavior, at least temporarily, so that family life can become more than just endless fights about overdue homework and lost lunchboxes. “If you have a more positive relationship with your child, they’re going to have a better outcome. Not for their A.D.H.D. — it’s probably going to be just the same. But in terms of dealing with the self-hatred and low self-esteem that often goes along with A.D.H.D.”
The alternative model, by contrast, tells a child a very different story: that his A.D.H.D. symptoms exist on a continuum, one on which we all find ourselves; that he may be experiencing those symptoms as much because of where he is as because of who he is; and that next year, if things change in his surroundings, those symptoms might change as well. Armed with that understanding, he and his family can decide whether medication makes sense — whether for him, the benefits are likely to outweigh the drawbacks. At the same time, they can consider whether there are changes in his situation, at school or at home, that might help alleviate his symptoms.
Admittedly, that version of A.D.H.D. has certain drawbacks. It denies parents the clear, definitive explanation for their children’s problems that can come as such a relief, especially after months or years of frustration and uncertainty. It often requires a lot of flexibility and experimentation on the part of patients, families and doctors. But it has two important advantages as well: First, the new model more accurately reflects the latest scientific understanding of A.D.H.D. And second, it gives children a vision of their future in which things might actually improve — not because their brains are chemically refashioned in a way that makes them better able to fit into the world, but because they find a way to make the world fit better around their complicated and distinctive brains.
·nytimes.com·
Have We Been Thinking About A.D.H.D. All Wrong?
Make Something Heavy
The modern makers’ machine does not want you to create heavy things. It runs on the internet—powered by social media, fueled by mass appeal, and addicted to speed. It thrives on spikes, scrolls, and screenshots. It resists weight and avoids friction. It does not care for patience, deliberation, or anything but production. It doesn’t care what you create, only that you keep creating. Make more. Make faster. Make lighter. Make something that can be consumed in a breath and discarded just as quickly. Heavy things take time. And here, time is a tax.
even the most successful Substackers—those who’ve turned newsletters into brands and businesses—eventually want to stop stacking things. They want to make one really, really good thing. One truly heavy thing. A book. A manifesto. A movie. A media company. A monument.
At any given time, you’re either pre–heavy thing or post–heavy thing. You’ve either made something weighty already, or you haven’t. Pre–heavy thing people are still searching, experimenting, iterating. Post–heavy thing people have crossed the threshold. They’ve made something substantial—something that commands respect, inspires others, and becomes a foundation to build on. And it shows. They move with confidence and calm. (But this feeling doesn’t always last forever.)
No one wants to stay in light mode forever. Sooner or later, everyone gravitates toward heavy mode—toward making something with weight. Your life’s work will be heavy. Finding the balance of light and heavy is the game.4 Note: heavy doesn’t have to mean “big.” Heavy can be small, niche, hard to scale. What I’m talking about is more like density. It’s about what is defining, meaningful, durable.
Telling everyone they’re a creator has only fostered a new strain of imposter syndrome. Being called a creator doesn’t make you one or make you feel like one; creating something with weight does. When you’ve made something heavy—something that stands on its own—you don’t need validation. You just know, because you feel its weight in your hands.
It’s not that most people can’t make heavy things. It’s that they don’t notice they aren’t. Lightness has its virtues—it pulls us in, subtly, innocently, whispering, 'Just do things.' The machine rewards movement, so we keep going, collecting badges. One day, we look up and realize we’ve been running in place.
Why does it feel bad to stop posting after weeks of consistency? Because the force of your work instantly drops to zero. It was all motion, no mass—momentum without weight. 99% dopamine, near-zero serotonin, and no trace of oxytocin. This is the contemporary creator’s dilemma—the contemporary generation’s dilemma.
We spend our lives crafting weighted blankets for ourselves—something heavy enough to anchor our ambition and quiet our minds.
Online, by nature, weight is harder to find, harder to hold on to, and only getting harder in a world where it feels like anyone can make anything.
·workingtheorys.com·
Make Something Heavy
On Nonviolent Communication
if you say “my boss makes me crazy”, you will indeed think your boss is “making” you crazy. If you instead say “I am frustrated because I am wanting stability and consistency in this relationship” you may then think you can control your level of frustration and clearly address what it is you want. If someone else is making you crazy, there’s nothing you can do. If you control your feelings, you can take actions to change how you respond to causes. Words can be windows or they can be walls — they can open doors for compassion or they can do the opposite. NVC uses words as windows. Our language today uses them as walls. More on this later.
If I ask you to meet me at 6:00 and you pick me up at 6:30, how do I feel? It depends. I could be frustrated that you are late because I want to spend my time productively, or scared that you may not know where to find me, or hurt because I need reassurance that you care about me — or, conversely, happy that I get more time to myself.
It’s not enough to blame the feeling on the person whose actions triggered the feeling. That very same action might have inspired completely different feelings in someone else — or even in me, under different circumstances!
Incidents like the friend coming late may stimulate or set the stage for feelings, but they do not *cause* the feelings.
There is a gap between stimulus and cause — and our power lies in how we use that gap. If we truly understood this — the separation between stimulus and cause — and the idea that we are responsible for our own emotions, we would speak very differently.
We wouldn’t say things like “It bugs me when …” or “It makes me angry when”. These phrases imply or actually state that responsibility for your feelings lies outside of yourself. A better statement would be “When I saw you come late, I started to feel scared”. Here, one may at least be taking some responsibility for the feeling of anger, and not simply blaming the latecomer for causing such feelings.
the more we use our language to cede responsibility to others, the less agency we have over our circumstances, and the more we victimize ourselves.
NVC believes that, as human beings, there are only two things that we are basically saying: Please and Thank You. Judgments are distorted attempts to say “Please.”
NVC requires learning how to say what your needs are, what needs are alive in you at a given moment, which ones are getting fulfilled, and which ones are not.
You sacrifice your needs to provide for and take care of your family. Needs are not important. What’s important is obedience to authority. That’s what’s important. With that background and history we’ve been taught a language that doesn’t teach us how to say how we are. It teaches us to worry about what we are in the eyes of authority.
When our minds have been pre-occupied that way we have trouble answering what seems to be a simple question, which is asked in all cultures throughout the world, “How are you?” It is a way of asking what’s alive in you. It’s a critical question. Even though it’s asked in many cultures, people don’t know how to answer it because they haven’t been educated in a culture that cares about how you are.
The shift necessary requires being able to say, how do you feel at this moment, and what are the needs behind your feelings? And when we ask those questions to highly educated people, they cannot answer them. Ask them how they feel, and they say “I feel that that’s wrong”. Wrong isn’t a feeling. Wrong is a thought.
When your mind has been shaped to worry about what people think about you, you lose connection with what’s alive in you.
The underlying philosophy of punishment and reward is that if people are basically evil or selfish, then the correctional process if they are behaving in a way you don’t like is to make them hate themselves for what they have done. If a parent, for example, doesn’t like what the child is doing, the parent says something like “Say you’re sorry!!” The child says, “I’m sorry.” The parent says “No! You’re not really sorry!” Then the child starts to cry “I’m sorry. . .” The parent says “Okay, I forgive you.”
Note, I think NVC is productive for friendships and relationships, or anything where connection is the main goal, not for any work or organizations that primarily serve another mission.
NVC involves the following: 1) how we express ourselves to other people, 2) how we interpret what people say to us, and most importantly, 3) how we communicate with ourselves.
Some have suggested alternatives such as Compassionate Communication, Authentic Communication, Connected Communication.
·substack.com·
On Nonviolent Communication
things I learned from my ex-boss Dinesh - @visakanv's blog
all the cliches of bad managers apply internally as well: “My manager doesn’t listen to me, keeps making promises of me he can’t keep, drives me too hard, never gives me a break, doesn’t praise me when I DO get things done, infinitely critical, is somehow both paranoid and clueless, is no help at all, keeps increasing my workload…”
·visakanv.com·
things I learned from my ex-boss Dinesh - @visakanv's blog
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had showed VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
Looking for AI use-cases — Benedict Evans
Tulpa - Wikipedia
Tulpa is a concept, originally from Tibetan Buddhism and found in later traditions of mysticism and the paranormal, of a materialized being or thought-form.
The Theosophist Annie Besant, in the 1905 book Thought-Forms, divides them into three classes: forms in the shape of the person who creates them, forms that resemble objects or people and may become ensouled by nature spirits or by the dead, and forms that represent inherent qualities from the astral or mental planes, such as emotions.
The Slender Man has been described by some people as a tulpa-effect, and attributed to multiple people's thought processes.
·en.wikipedia.org·
Tulpa - Wikipedia
Memetics - Wikipedia
The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene,[1] to illustrate the principle that he later called "Universal Darwinism".
He gave as examples, tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on.
Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes.
Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random.
·en.m.wikipedia.org·
Memetics - Wikipedia
The Mac Turns Forty – Pixel Envy
As for a Hall of Shame thing? That would be the slow but steady encroachment of single-window applications in MacOS, especially via Catalyst and Electron. The reason I gravitated toward MacOS in the first place is the same reason I continue to use it: it fits my mental model of how an operating system ought to work.
·pxlnv.com·
The Mac Turns Forty – Pixel Envy
A bicycle for the senses
We can take nature’s superpowers and expand them across many more vectors that are interesting to humans:
  • Across scale — far and near, binoculars, zoom, telescope, microscope
  • Across wavelength — UV, IR, heatmaps, nightvision, wifi, magnetic fields, electrical and water currents
  • Across time — view historical imagery, architectural, terrain, geological, and climate changes
  • Across culture — experience the relevance of a place in books, movies, photography, paintings, and language
  • Across space — travel immersively to other locations for tourism, business, and personal connections
  • Across perspective — upside down, inside out, around corners, top down, wider, narrower, out of body
  • Across interpretation — alter the visual and artistic interpretation of your environment, color-shifting, saturation, contrast, sharpness
Headset displays connect sensory extensions directly to your vision. Equipped with sensors that perceive beyond human capabilities, and access to the internet, they can provide information about your surroundings wherever you are. Until now, visual augmentation has been constrained by the tiny display on our phone. By virtue of being integrated with your eyesight, headsets can open up new kinds of apps that feel more natural. Every app is a superpower. Sensory computing opens up new superpowers that we can borrow from nature. Animals, plants and other organisms can sense things that humans can’t.
The first mass-market bicycle for the senses was Apple’s AirPods. Its noise cancellation and transparency mode replace and enhance your hearing. Earbuds are turning into ear computers that will become more easily programmable. This can enable many more kinds of hearing. For example, instantaneous translation may soon be a reality.
For the past seven decades, computers have been designed to enhance what your brain can do — think and remember. New kinds of computers will enhance what your senses can do — see, hear, touch, smell, taste. The term spatial computing is emerging to encompass both augmented and virtual reality. I believe we are exploring an even broader paradigm: sensory computing. The phone was a keyhole for peering into this world, and now we’re opening the door.
What happens when you put on a headset and open the “Math” app? How could seeing the world through math help you understand both better?
Advances in haptics may open up new kinds of tactile sensations. A kind of second skin, or softwear, if you will. Consider that Apple shipped a feature to help you find lost items that vibrates more strongly as you get closer. What other kinds of data could be translated into haptic feedback?
It may sound far-fetched, but converting olfactory patterns into visual patterns could open up some interesting applications. Perhaps a new kind of cooking experience? Or new medical applications that convert imperceptible scents into visible patterns?
·stephango.com·
A bicycle for the senses
AI Models in Software UI - LukeW
In the first approach, the primary interface affordance is an input that directly (for the most part) instructs an AI model(s). In this paradigm, people are authoring prompts that result in text, image, video, etc. generation. These prompts can be sequential, iterative, or un-related. Marquee examples are OpenAI's ChatGPT interface or Midjourney's use of Discord as an input mechanism. Since there are few, if any, UI affordances to guide people, these systems need to respond to a very wide range of instructions. Otherwise people get frustrated with their primarily hidden (to the user) limitations.
The second approach doesn't include any UI elements for directly controlling the output of AI models. In other words, there's no input fields for prompt construction. Instead instructions for AI models are created behind the scenes as people go about using application-specific UI elements. People using these systems could be completely unaware an AI model is responsible for the output they see.
The third approach is application specific UI with AI assistance. Here people can construct prompts through a combination of application-specific UI and direct model instructions. These could be additional controls that generate portions of those instructions in the background. Or the ability to directly guide prompt construction through the inclusion or exclusion of content within the application. Examples of this pattern are Microsoft's Copilot suite of products for GitHub, Office, and Windows.
they could be overlays, modals, inline menus and more. What they have in common, however, is that they supplement application specific UIs instead of completely replacing them.
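A rough sketch of the second and third approaches in code, assuming a placeholder `generate()` function standing in for whichever model API an application actually uses. The point is that the model instructions are assembled behind the scenes from application-specific UI state, so the person using the feature never writes or sees a prompt:

```python
def generate(prompt: str) -> str:
    """Placeholder for a real model call (ChatGPT, Claude, a local model, etc.)."""
    raise NotImplementedError

def on_rewrite_clicked(selected_text: str, tone: str) -> str:
    """Invoked by an application-specific control, e.g. a 'Rewrite' button
    with a tone dropdown. The instructions to the model are constructed
    here, in the background, from the UI state."""
    prompt = (
        f"Rewrite the following text in a {tone} tone. "
        "Return only the rewritten text.\n\n"
        f"{selected_text}"
    )
    return generate(prompt)
```

In the third, assisted approach, the same scaffolding would also expose an optional free-text field whose contents get appended to the generated instructions.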
·lukew.com·
AI Models in Software UI - LukeW
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
This paper maps concepts from AI alignment onto a basic, three step interaction cycle, yielding a corresponding set of alignment objectives: 1) specification alignment: ensuring the user can efficiently and reliably communicate objectives to the AI, 2) process alignment: providing the ability to verify and optionally control the AI's execution process, and 3) evaluation support: ensuring the user can verify and understand the AI's output.
the notion of a Process Gulf, which highlights how differences between human and AI processes can lead to challenges in AI control.
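Mapped onto a single interaction, the three objectives could look roughly like the loop below. This is only an illustration with hypothetical method names; the paper describes objectives, not an implementation:

```python
def run_interaction(user_goal: str, assistant) -> None:
    # 1) Specification alignment: restate the interpreted goal so the user
    #    can confirm the AI understood it before anything runs.
    plan = assistant.interpret(user_goal)
    if input(f"Planned approach: {plan}. Proceed? [y/n] ") != "y":
        return

    # 2) Process alignment: surface intermediate steps so the user can
    #    verify, and optionally steer, how the AI is executing.
    for step in assistant.execute(plan):
        print("step:", step)

    # 3) Evaluation support: present the output with enough context for
    #    the user to judge whether it actually meets the goal.
    output, rationale = assistant.result()
    print("output:", output)
    print("why:", rationale)
```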
·arxiv.org·
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
Feeling through emotional truths

To gain insight into emotional truths, Kasra recommends feeling into strong emotions rather than overthinking them. Some techniques include sentence completion exercises, imagining emotions as characters to dialogue with, focusing on body sensations, and identifying underlying beliefs.

In general it's adopting a mindset of curiosity rather than doubt when exploring one's emotions.

Your emotions are a signaling mechanism. They are your subconscious mind’s toolkit for protecting you from dangers, improving your circumstances, and navigating an otherwise incomprehensibly complex world. Every emotion has some adaptive purpose: fear keeps you safe; anger enforces your boundaries; sadness slows you down; joy speeds you up.
The first step towards living better is to recognize that your subconscious mind is trying to tell you things you don’t yet know (primarily through your emotions, but also via other channels like your dreams). A lot of people struggle to realize even this basic fact; they think of emotions as a disruption: a distraction from, say, their career development, or an impediment to their capacity to “be rational.”
your emotions are worth heeding because they carry wisdom your conscious mind doesn’t have access to. And at that point you must embark on the second step—the much harder step—of figuring out what it is that your mind is trying to tell you.
an attitude of curiosity rather than doubt. Embodiment rather than intellect. You find the answer by allowing yourself to be playful, generative, and spontaneous; not by being methodical, intentional, and constricted. Sit back and feel your way to the answer.
·bitsofwonder.substack.com·
Feeling through emotional truths
The Signal and the Corrective

A technical breakdown of 'narratives' and how they operate: narratives simplify issues by focusing on a main "signal" while ignoring other relevant "noise", and this affects discussions between those with opposing preferred signals. It goes into many examples across basically any kind of ideological or cultural divide.

AI summary:

  • The article explores how different people can derive opposing narratives from the same set of facts, with each viewing their interpretation as the "signal" and opposing views as "noise"
  • Key concepts:
    • Signal: The core belief or narrative someone holds as fundamentally true
    • Corrective: The moderating adjustments made to account for exceptions to the core belief
    • Figure-ground inversion: How the same reality can be interpreted in opposite ways
  • Examples of opposing narratives include:
    • Government as public service vs. government as pork distribution
    • Medical care as healing vs. medical care as harmful intervention
    • Capitalism as wealth creation vs. capitalism as exploitation
    • Nature vs. nurture in human behavior
    • Science as gradual progress vs. science as paradigm shifts
  • Communication dynamics:
    • People are more likely to fall back on pure signals (without correctives) when:
      • Discussions become abstract
      • Communication bandwidth is limited
      • Under stress or emotional pressure
      • Speaking to unfamiliar audiences
      • In hostile environments
  • Persuasion insights:
    • It's easier to add correctives to someone's existing signal than to completely change their core beliefs
    • People must feel their fundamental views are respected before accepting criticism
    • Acknowledging partial validity of opposing views is crucial for productive dialogue
  • Problems in modern discourse:
    • Online debates often lack real-world consequences
    • When there's no need for cooperation, people prefer conquest over consensus
    • Lack of real relationships reduces incentives for civility and understanding
  • The author notes that while most people hold moderate views with both signals and correctives, fundamental differences can be masked when discussing specific policies but become apparent in discussions of general principles
  • The piece maintains a thoughtful, analytical tone while acknowledging the complexity and challenges of human communication and belief systems
  • The author expresses personal examples and vulnerability in describing how they themselves react differently to criticism based on whether it comes from those who share their fundamental values
narratives contradicting each other means that they simplify and generalize in different ways and assign goodness and badness to things in opposite directions. While that might look like contradiction it isn’t, because generalizations and value judgments aren’t strictly facts about the world. As a consequence, the more abstracted and value-laden narratives get the more they can contradict each other without any of them being “wrong”.
“The free market is extremely powerful and will work best as a rule, but there are a few outliers where it won’t, and some people will be hurt so we should have a social safety net to contain the bad side effects.” and “Capitalism is morally corrupt and rewards selfishness and greed. An economy run for the people by the people is a moral imperative, but planned economies don’t seem to work very well in practice so we need the market to fuel prosperity even if it is distasteful.” . . . have very different fundamental attitudes but may well come down quite close to each other in terms of supported policies. If you model them as having one “main signal” (basic attitude) paired with a corrective to account for how the basic attitude fails to match reality perfectly, then this kind of difference is understated when the conversation is about specific issues (because then signals plus correctives are compared and the correctives bring “opposite” people closer together) but overstated when the conversation is about general principles — because then it’s only about the signal.
I’ve said that when discussions get abstract and general people tend to go back to their main signals and ignore correctives, which makes participants seem further apart than they really are. The same thing happens when the communication bandwidth is low for some reason. When dealing with complex matters human communication tends not to be super efficient in the first place and if something makes subtlety extra hard — like a 140 character limit, only a few minutes to type during a bathroom break at work, little to no context or a noisy discourse environment — you’re going to fall back to simpler, more basic messages. Internal factors matter too. When you’re stressed, don’t have time to think, don’t know the person you’re talking to and don’t really care about them, when emotions are heated, when you feel attacked, when an audience is watching and you can’t look weak, or when you smell blood in the water, then you’re going to go simple, you’re going to go basic, you’re going to push in a direction rather than trying to hit a target. And whoever you’re talking to is going to do the same. You both fall back in different directions, exactly when you shouldn’t.
It makes sense to think of complex disagreements as not about single facts but about narratives made up of generalizations, abstractions and interpretations of many facts, most of which aren’t currently on the table. And the status of our favorite narratives matters to us, because they say what’s happening, who the heroes are and who the villains are, what’s matters and what doesn’t, who owes and who is owed. Most of us, when not in our very best moods, will make sure our most cherished narratives are safe before we let any others thrive.
Most people will accept that their main signals have correctives, but they will not accept that their main signals have no validity or legitimacy. It’s a lot easier to install a corrective in someone than it is to dislodge their main signal (and that might later lead to a more fundamental change of heart) — but to do that you must refrain from threatening the signal because that makes people defensive. And it’s not so hard. Listen and acknowledge that their view has greater than zero validity.
In an ideal world, any argumentation would start with laying out its own background assumptions, including stating if what it says should be taken as a corrective on top of its opposite or a complete rejection of it.
·everythingstudies.com·
The Signal and the Corrective
Arrival (film) - Wikipedia
When Banks is able to establish sufficient shared vocabulary to ask why the aliens have come, they answer with a statement that could be translated as "offer weapon". China interprets this as "use weapon", prompting them to break off communications, and other nations follow. Banks argues that the symbol interpreted as "weapon" can be more abstractly related to the concepts of "means" or "tool"; China's translation likely results from interacting with the aliens using mahjong, a highly competitive game.
·en.wikipedia.org·
Arrival (film) - Wikipedia
File over app
That’s why I feel like Obsidian is a truly great company as it has a true mission that’s rooted in human values and human experience. This is well written. Having apps that cater to the files and artifacts they produce, rather than files that are catered to their tools and only accessible within those apps.
File over app is an appeal to tool makers: accept that all software is ephemeral, and give people ownership over their data.
The world is filled with ideas from generations past, transmitted through many mediums, from clay tablets to manuscripts, paintings, sculptures, and tapestries. These artifacts are objects that you can touch, hold, own, store, preserve, and look at. To read something written on paper all you need is eyeballs. Today, we are creating innumerable digital artifacts, but most of these artifacts are out of our control. They are stored on servers, in databases, gated behind an internet connection, and login to a cloud service. Even the files on your hard drive use proprietary formats that make them incompatible with older systems and other tools. Paraphrasing something I wrote recently: If you want your writing to still be readable on a computer from the 2060s or 2160s, it’s important that your notes can be read on a computer from the 1960s.
You should want the files you create to be durable, not only for posterity, but also for your future self. You never know when you might want to go back to something you created years or decades ago. Don’t lock your data into a format you can’t retrieve.
·stephanango.com·
File over app
Natural Language Is an Unnatural Interface
On the user experience of interacting with LLMs
Prompt engineers not only need to get the model to respond to a given question but also structure the output in a parsable way (such as JSON), in case it needs to be rendered in some UI components or be chained into the input of a future LLM query. They scaffold the raw input that is fed into an LLM so the end user doesn’t need to spend time thinking about prompting at all.
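A rough illustration of that scaffolding, using a placeholder `call_llm()` rather than any specific vendor API. The end user supplies only their text; the wrapper owns the prompt, the output format, and the parsing:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns the model's raw text."""
    raise NotImplementedError

def summarize_for_ui(user_text: str) -> dict:
    # Scaffolded prompt the end user never sees: it fixes the task and
    # demands machine-parsable output so the result can be rendered in UI
    # components or chained into a later LLM query.
    prompt = (
        "Summarize the text below in exactly 3 bullet points.\n"
        'Respond with JSON only, in the form {"bullets": ["...", "...", "..."]}.\n\n'
        f"{user_text}"
    )
    raw = call_llm(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes ignore format instructions; degrade gracefully.
        return {"bullets": [raw.strip()]}
```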
From the user’s side, it’s hard to decide what to ask while providing the right amount of context. From the developer’s side, two problems arise. It’s hard to monitor natural language queries and understand how users are interacting with your product. It’s also hard to guarantee that an LLM can successfully complete an arbitrary query. This is especially true for agentic workflows, which are incredibly brittle in practice.
When we speak to other people, there is a shared context that we communicate under. We’re not just exchanging words, but a larger information stream that also includes intonation while speaking, hand gestures, memories of each other, and more. LLMs unfortunately cannot understand most of this context and therefore, can only do as much as is described by the prompt
most people use LLMs for ~4 basic natural language tasks, rarely taking advantage of the conversational back-and-forth built into chat systems:
  • Summarization: Summarizing a large amount of information or text into a concise yet comprehensive summary. This is useful for quickly digesting information from long articles, documents or conversations. An AI system needs to understand the key ideas, concepts and themes to produce a good summary.
  • ELI5 (Explain Like I'm 5): Explaining a complex concept in a simple, easy-to-understand manner without any jargon. The goal is to make an explanation clear and simple enough for a broad, non-expert audience.
  • Perspectives: Providing multiple perspectives or opinions on a topic. This could include personal perspectives from various stakeholders, experts with different viewpoints, or just a range of ways a topic can be interpreted based on different experiences and backgrounds. In other words, “what would ___ do?”
  • Contextual Responses: Responding to a user or situation in an appropriate, contextualized manner (via email, message, etc.). Contextual responses should feel organic and on-topic, as if provided by another person participating in the same conversation.
Prompting nearly always gets in the way because it requires the user to think. End users ultimately do not wish to confront an empty text box in accomplishing their goals. Buttons and other interactive design elements make life easier. The interface makes all the difference in crafting an AI system that augments and amplifies human capabilities rather than adding additional cognitive load. Similar to standup comedy, delightful LLM-powered experiences require a subversion of expectation.
Users will expect the usual drudge of drafting an email or searching for a nearby restaurant, but instead will be surprised by the amount of work that has already been done for them from the moment that their intent is made clear. For example, it would be a great experience to discover pre-written email drafts or carefully crafted restaurant and meal recommendations that match your personal taste. If you still need to use a text input box, at a minimum, also provide some buttons to auto-fill the prompt box. The buttons can pass LLM-generated questions to the prompt box.
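One lightweight version of that suggestion, sketched with hypothetical names: the UI renders buttons from prompt templates (which could themselves be LLM-generated), and clicking one simply fills the prompt box so the user never starts from an empty text field:

```python
# Button labels mapped to prompt templates. Clicking a button fills the
# prompt box with a ready-made request instead of a blank slate.
PROMPT_BUTTONS = {
    "Summarize this": "Summarize the following in five sentences:\n\n{selection}",
    "Explain like I'm 5": "Explain this simply, without jargon:\n\n{selection}",
    "Draft a reply": "Draft a short, friendly reply to this message:\n\n{selection}",
}

def on_button_click(label: str, selection: str) -> str:
    """Return the text to place in the prompt box when a button is pressed."""
    return PROMPT_BUTTONS[label].format(selection=selection)
```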
·varunshenoy.substack.com·
Natural Language Is an Unnatural Interface
Our web design tools are holding us back ⚒ Nerd
With photoshop we could come up with things that we couldn’t build with CSS. But nowadays we can build things with CSS that are impossible to create with our design tools. We have scroll-snap, we have complicated animations, we have all kinds of wonderful interaction, grid, flexbox, all kinds of shapes, and so much more that you won’t find in the drop down menus of your tool of choice. Yet our websites still look and behave like they were designed with photoshop.
·vasilis.nl·
Our web design tools are holding us back ⚒ Nerd
When social media controls the nuclear codes
David Foster Wallace once said that: The language of images. . . maybe not threatens, but completely changes actual lived life. When you consider that my grandparents, by the time they got married and kissed, I think they had probably seen maybe a hundred kisses. They'd seen people kiss a hundred times. My parents, who grew up with mainstream Hollywood cinema, had seen thousands of kisses by the time they ever kissed. Before I kissed anyone I had seen tens of thousands of kisses. I know that the first time I kissed much of my thought was, “Am I doing it right? Am I doing it according to how I've seen it?”
A lot of the 80s and 90s critiques of postmodernity did have a point—our experience really is colored by media. Having seen a hundred movies about nuclear apocalypse, the entire time we’ll be looking over our shoulder for the camera, thinking: “Am I doing it right?”
·erikhoel.substack.com·
When social media controls the nuclear codes
Exapt existing infrastructure
Here are the adoption curves for a handful of major technologies in the United States. There are big differences in the speeds at which these technologies were absorbed. Landline telephones took about 86 years to hit 80% adoption. Flush toilets took 96 years to hit 80% adoption. Refrigerators took about 25 years. Microwaves took 17 years. Smartphones took just 12 years. Why these wide differences in adoption speed? Conformability with existing infrastructure. Flush toilets required the build-out of water and sewage utility systems. They also meant adding a new room to the house—the bathroom—and running new water and sewage lines underneath and throughout the house. That’s a lot of systems to line up. By contrast, refrigerators replaced iceboxes, and could fit into existing kitchens without much work. Microwaves could sit on a countertop. Smartphones could slip into your pocket.
·subconscious.substack.com·
Exapt existing infrastructure
The path of evolution is always through the adjacent possible
Complex systems can evolve from simple systems only if there are stable intermediate forms. (Donella Meadows, 2008. Thinking in Systems)
So, to survive, you might say a design has to discover an evolutionary path from the familiar to the new, from the present to the future, through a series of steps into the adjacent possible.
The principle holds for infrastructure too. The internet started with the adjacent possible: coopting telephones. The internet didn’t have to deploy expensive new hardware, or lay down new cables to get off the ground. It was conformable to existing infrastructure. It worked with the way the world was already, exapting whatever was available, like dinosaurs exapting feathers for flight. (Exapt Existing Infrastructure)
·subconscious.substack.com·
The path of evolution is always through the adjacent possible
I Didn’t Want It to Be True, but the Medium Really Is the Message
it’s the common rules that govern all creation and consumption across a medium that change people and society. Oral culture teaches us to think one way, written culture another. Television turned everything into entertainment, and social media taught us to think with the crowd.
There is a grammar and logic to the medium, enforced by internal culture and by ratings reports broken down by the quarter-hour. You can do better cable news or worse cable news, but you are always doing cable news.
Don’t just look at the way things are being expressed; look at how the way things are expressed determines what’s actually expressible.” In other words, the medium blocks certain messages.
Television teaches us to expect that anything and everything should be entertaining. But not everything should be entertainment, and the expectation that it will be is a vast social and even ideological change.
Television, he writes, “serves us most ill when it co-opts serious modes of discourse — news, politics, science, education, commerce, religion — and turns them into entertainment packages.
The border between entertainment and everything else was blurring, and entertainers would be the only ones able to fulfill our expectations for politicians. He spends considerable time thinking, for instance, about the people who were viable politicians in a textual era and who would be locked out of politics because they couldn’t command the screen.
As a medium, Twitter nudges its users toward ideas that can survive without context, that can travel legibly in under 280 characters. It encourages a constant awareness of what everyone else is discussing. It makes the measure of conversational success not just how others react and respond but how much response there is. It, too, is a mold, and it has acted with particular force on some of our most powerful industries — media and politics and technology.
I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.
·nytimes.com·
I Didn’t Want It to Be True, but the Medium Really Is the Message
What comes after Zoom? — Benedict Evans
If you’d looked at Skype in 2004 and argued that it would own ‘voice’ on ‘computers’, that would not have been the right mental model. I think this is where we’ll go with video - there will continue to be hard engineering, but video itself will be a commodity and the question will be how you wrap it. There will be video in everything, just as there is voice in everything, and there will be a great deal of proliferation into industry verticals on one hand and into unbundling pieces of the tech stack on the other. On one hand video in healthcare, education or insurance is about the workflow, the data model and the route to market, and lots more interesting companies will be created, and on the other hand Slack is deploying video on top of Amazon’s building blocks, and lots of interesting companies will be created here as well. There’s lots of bundling and unbundling coming, as always. Everything will be ‘video’ and then it will disappear inside.
the calendar is often the aggregation layer - you don’t need to know what service the next call uses, just when it is. Skype needed both an account and an app, so had a network effect (and lost even so). WhatsApp uses the telephone numbering system as an address and so piggybacked on your phone’s contact list - effectively, it used the PSTN as the social graph rather than having to build its own. But a group video call is a URL and a calendar invitation - it has no graph of its own.
one of the ways that this all feels very 1.0 is the rather artificial distinction between calls that are based on a ‘room’, where the addressing system is a URL and anyone can join without an account, and calls that are based on ‘people’, where everyone joining needs their own address, whether it’s a phone number, an account or something else. Hence Google has both Meet (URLs) and Duo (people) - Apple’s FaceTime is only people (no URLs).
When Snap launched, there were already infinite ways to share images, but Snap asked a bunch of weird questions that no-one had really asked before. Why do you have to press the camera button - why doesn’t the app open in the camera? Why are you saving your messages - isn’t that like saving all your phone calls? Fundamentally, Snap asked ‘why, exactly, are you sending a picture? What is the underlying social purpose?’ You’re not really sending someone a sheet of pixels - you’re communicating.
That’s the question Zoom and all its competitors haven’t really asked. Zoom has done a good job of asking why it was hard to get into a call, but it hasn’t asked why you’re in the call in the first place. Why, exactly, are you sending someone a video stream and watching another one? Why am I looking at a grid of little thumbnails of faces? Is that the purpose of this moment? What is the ‘mute’ button for - background noise, or so I can talk to someone else, or is it so I can turn it off to raise my hand? What social purpose is ‘mute’ actually serving? What is screen-sharing for? What other questions could one ask? And so if Zoom is the Dropbox or Skype of video, we are waiting for the Snap, Clubhouse and Yo.
·ben-evans.com·
What comes after Zoom? — Benedict Evans