Have We Been Thinking About A.D.H.D. All Wrong?
Skeptics argue that many of the classic symptoms of the disorder — fidgeting, losing things, not following instructions — are simply typical, if annoying, behaviors of childhood. In response, others point to the serious consequences that can result when those symptoms grow more intense, including school failure, social rejection and serious emotional distress.
There are two main kinds of A.D.H.D., inattentive and hyperactive/impulsive, and children in one category often seem to have little in common with children in the other. There are people with A.D.H.D. whom you can’t get to stop talking and others whom you can’t get to start. Some are excessively eager and enthusiastic; others are irritable and moody.
Although the D.S.M. specifies that clinicians shouldn’t diagnose children with A.D.H.D. if their symptoms are better explained by another mental disorder, more than three quarters of children diagnosed with A.D.H.D. do have another mental-health condition as well, according to the C.D.C. More than a third have a diagnosis of anxiety, and a similar fraction have a diagnosed learning disorder. Forty-four percent have been diagnosed with a behavioral disorder like oppositional defiant disorder.
This all complicates the effort to portray A.D.H.D. as a distinct, unique biological disorder. Is a patient with six symptoms really that different from one with five? If a child who experienced early trauma now can’t sit still or stay organized, should she be treated for A.D.H.D.? What about a child with an anxiety disorder who is constantly distracted by her worries? Does she have A.D.H.D., or just A.D.H.D.-like symptoms caused by her anxiety?
The subjects who were given stimulants worked more quickly and intensely than the ones who took the placebo. They dutifully packed and repacked their virtual backpacks, pulling items in and out, trying various combinations. In the end, though, their scores on the knapsack test were no better than those of the placebo group. The reason? Their strategies for choosing items became significantly worse under the medication. Their choices didn’t make much sense — they just kept pulling random items in and out of the backpack. To an observer, they appeared to be focused, well behaved, on task. But in fact, they weren’t accomplishing anything of much value.
Farah directed me to the work of Scott Vrecko, a sociologist who conducted a series of interviews with students at an American university who used stimulant medication without a prescription. He wrote that the students he interviewed would often “frame the functional benefits of stimulants in cognitive-sounding terms.” But when he dug a little deeper, he found that the students tended to talk about their attention struggles, and the benefits they experienced with medication, in emotional terms rather than intellectual ones. Without the pills, they said, they just didn’t feel interested in the assignments they were supposed to be doing. They didn’t feel motivated. It all seemed pointless.
On stimulant medication, those emotions flipped. “You start to feel such a connection to what you’re working on,” one undergraduate told Vrecko. “It’s almost like you fall in love with it.” As another student put it: On Adderall, “you’re interested in what you’re doing, even if it’s boring.”
Socially, though, there was a price. “Around my friends, I’m usually the most social, but when I’m on it, it feels like my spark is kind of gone,” John said. “I laugh a lot less. I can’t think of anything to say. Life is just less fun. It’s not like I’m sad; I’m just not as happy. It flattens things out.”
John also generally doesn’t take his Adderall during the summer. When he’s not in school, he told me, he doesn’t have any A.D.H.D. symptoms at all. “If I don’t have to do any work, then I’m just a completely regular person,” he said. “But once I have to focus on things, then I have to take it, or else I just won’t get any of my stuff done.”
John’s sense that his A.D.H.D. is situational — that he has it in some circumstances but not in others — is a challenge to some of psychiatry’s longstanding assumptions about the condition. After all, diabetes doesn’t go away over summer vacation. But John’s intuition is supported by scientific evidence. Increasingly, research suggests that for many people A.D.H.D. might be thought of as a condition they experience, sometimes temporarily, rather than a disorder that they have in some unchanging way.
For most of his career, he embraced what he now calls the “medical model” of A.D.H.D. — the belief that the brains of people with A.D.H.D. are biologically deficient, categorically different from those of typical, healthy individuals. Now, however, Sonuga-Barke is proposing an alternative model, one that largely sidesteps questions of biology. What matters instead, he says, is the distress children feel as they try to make their way in the world.
Sonuga-Barke’s proposed model locates A.D.H.D. symptoms on a continuum, rather than presenting the condition as a distinct, natural category. And it departs from the medical model in another crucial way: It considers those symptoms not as indications of neurological deficits but as signals of a misalignment between a child’s biological makeup and the environment in which they are trying to function. “I’m not saying it’s not biological,” he says. “I’m just saying I don’t think that’s the right target. Rather than trying to treat and resolve the biology, we should be focusing on building environments that improve outcomes and mental health.”
What the researchers noticed was that their subjects weren’t particularly interested in talking about the specifics of their disorder. Instead, they wanted to talk about the context in which they were now living and how that context had affected their symptoms. Subject after subject spontaneously brought up the importance of finding their “niche,” or the right “fit,” in school or in the workplace. As adults, they had more freedom than they did as children to control the parameters of their lives — whether to go to college, what to study, what kind of career to pursue. Many of them had sensibly chosen contexts that were a better match for their personalities than what they experienced in school, and as a result, they reported that their A.D.H.D. symptoms had essentially disappeared. In fact, some of them were questioning whether they had ever had a disorder at all — or if they had just been in the wrong environment as children.
The work environments where the subjects were thriving varied. For some, the appeal of their new jobs was that they were busy and cognitively demanding, requiring constant multitasking. For others, the right context was physical, hands-on labor. For all of them, what made a difference was having work that to them felt “intrinsically interesting.”
“Rather than a static ‘attention deficit’ that appeared under all circumstances,” the M.T.A. researchers wrote, “our subjects described their propensity toward distraction as contextual. … Believing the problem lay in their environments rather than solely in themselves helped individuals allay feelings of inadequacy: Characterizing A.D.H.D. as a personality trait rather than a disorder, they saw themselves as different rather than defective.”
For the young adults in the “niche” study who were interviewed about their work lives, the transition that helped them overcome their A.D.H.D. symptoms was often leaving academic work for something more kinetic. For Sonuga-Barke, it was the opposite. At university, he would show up at the library at 9 every morning and sit in his carrel working until 5. The next day, he would do it again. Growing up, he says, he had a natural tendency to “hyperfocus,” and back at school in Derby, that tendency looked to his teachers like daydreaming. At university, it became his secret weapon.
I asked Sonuga-Barke what he might have gained if he had grown up in a different time and place — if he had been prescribed Ritalin or Adderall at age 8 instead of just being packed off to the remedial class. “I don’t think I would have gained anything,” he said. “I think without medication, you learn alternative ways of dealing with stuff. In my particular case, there are a lot of characteristics that have helped me. My mind is constantly churning away, thinking of things. I never relax. The way I motivate myself is to turn everything into a problem and to try and solve the problem.”
“The simple model has always been, basically, ‘A.D.H.D. plus medication equals no A.D.H.D.,’” he says. “But that’s not true. Medication is not a silver bullet. It never will be.” What medication can sometimes do, he believes, is allow families more room to communicate. “At its best,” he says, “medication can provide a window for parents to engage with their kids,” by moderating children’s behavior, at least temporarily, so that family life can become more than just endless fights about overdue homework and lost lunchboxes. “If you have a more positive relationship with your child, they’re going to have a better outcome. Not for their A.D.H.D. — it’s probably going to be just the same. But in terms of dealing with the self-hatred and low self-esteem that often goes along with A.D.H.D.”
The alternative model, by contrast, tells a child a very different story: that his A.D.H.D. symptoms exist on a continuum, one on which we all find ourselves; that he may be experiencing those symptoms as much because of where he is as because of who he is; and that next year, if things change in his surroundings, those symptoms might change as well. Armed with that understanding, he and his family can decide whether medication makes sense — whether for him, the benefits are likely to outweigh the drawbacks. At the same time, they can consider whether there are changes in his situation, at school or at home, that might help alleviate his symptoms.
Admittedly, that version of A.D.H.D. has certain drawbacks. It denies parents the clear, definitive explanation for their children’s problems that can come as such a relief, especially after months or years of frustration and uncertainty. It often requires a lot of flexibility and experimentation on the part of patients, families and doctors. But it has two important advantages as well: First, the new model more accurately reflects the latest scientific understanding of A.D.H.D. And second, it gives children a vision of their future in which things might actually improve — not because their brains are chemically refashioned in a way that makes them better able to fit into the world, but because they find a way to make the world fit better around their complicated and distinctive brains.
·nytimes.com·
Make Something Heavy
The modern makers’ machine does not want you to create heavy things. It runs on the internet—powered by social media, fueled by mass appeal, and addicted to speed. It thrives on spikes, scrolls, and screenshots. It resists weight and avoids friction. It does not care for patience, deliberation, or anything but production. It doesn’t care what you create, only that you keep creating. Make more. Make faster. Make lighter. Make something that can be consumed in a breath and discarded just as quickly. Heavy things take time. And here, time is a tax.
even the most successful Substackers—those who’ve turned newsletters into brands and businesses—eventually want to stop stacking things. They want to make one really, really good thing. One truly heavy thing. A book. A manifesto. A movie. A media company. A monument.
At any given time, you’re either pre–heavy thing or post–heavy thing. You’ve either made something weighty already, or you haven’t. Pre–heavy thing people are still searching, experimenting, iterating. Post–heavy thing people have crossed the threshold. They’ve made something substantial—something that commands respect, inspires others, and becomes a foundation to build on. And it shows. They move with confidence and calm. (But this feeling doesn’t always last forever.)
No one wants to stay in light mode forever. Sooner or later, everyone gravitates toward heavy mode—toward making something with weight. Your life’s work will be heavy. Finding the balance of light and heavy is the game. Note: heavy doesn’t have to mean “big.” Heavy can be small, niche, hard to scale. What I’m talking about is more like density. It’s about what is defining, meaningful, durable.
Telling everyone they’re a creator has only fostered a new strain of imposter syndrome. Being called a creator doesn’t make you one or make you feel like one; creating something with weight does. When you’ve made something heavy—something that stands on its own—you don’t need validation. You just know, because you feel its weight in your hands.
It’s not that most people can’t make heavy things. It’s that they don’t notice they aren’t. Lightness has its virtues—it pulls us in, subtly, innocently, whispering, ‘Just do things.’ The machine rewards movement, so we keep going, collecting badges. One day, we look up and realize we’ve been running in place.
Why does it feel bad to stop posting after weeks of consistency? Because the force of your work instantly drops to zero. It was all motion, no mass—momentum without weight. 99% dopamine, near-zero serotonin, and no trace of oxytocin. This is the contemporary creator’s dilemma—the contemporary generation’s dilemma.
We spend our lives crafting weighted blankets for ourselves—something heavy enough to anchor our ambition and quiet our minds.
Online, by nature, weight is harder to find, harder to hold on to, and only getting harder in a world where it feels like anyone can make anything.
·workingtheorys.com·
On the necessity of a sin
AI excels at tasks that are intensely human: writing, ideation, faking empathy. However, it struggles with tasks that machines typically excel at, such as repeating a process consistently or performing complex calculations without assistance. In fact, it tends to solve problems that machines are good at in a very human way. When you get GPT-4 to do data analysis of a spreadsheet for you, it doesn’t innately read and understand the numbers. Instead, it uses tools the way we might, glancing at a bit of the data to see what is in it, and then writing Python programs to try to actually do the analysis. And its flaws — making up information, false confidence in wrong answers, and occasional laziness — also seem much more like human errors than machine ones.
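As a rough illustration of that two-step pattern, here is a minimal pandas sketch of what such a generated program tends to look like. The file name and column names are invented for illustration; none of this code is from the article.

```python
import pandas as pd

# Step 1: "glance" at a bit of the data to see what is in it, the way a
# person scans the top of a spreadsheet before deciding what to do.
df = pd.read_csv("sales.csv")   # hypothetical file
print(df.head())                # inspect a sample, not every number
print(df.dtypes)                # learn the column types

# Step 2: write an ordinary program to do the actual analysis, instead of
# trying to "read" the numbers directly.
totals = df.groupby("region")["revenue"].sum()  # hypothetical columns
print(totals.sort_values(ascending=False))
```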
This quasi-human weirdness is why the best users of AI are often managers and teachers, people who can understand the perspective of others and correct it when it is going wrong.
Rather than focusing purely on teaching people to write good prompts, we might want to spend more time teaching them to manage the AI.
Telling the system “who” it is helps shape the outputs of the system. Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown. This isn’t magical — you can’t say “act as Bill Gates” and get better business advice, or “write like Hemingway” and get amazing prose — but it can help make the tone and direction appropriate for your purpose.
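A minimal sketch of that persona pattern, assuming the OpenAI Python SDK (v1-style client); the model name and personas are placeholders, and the point is only that the system message sets tone and direction:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(persona: str, question: str) -> str:
    """Ask the same question under a different 'who you are' framing."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat model works
        messages=[
            {"role": "system", "content": f"You are {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Explain sunk costs."
print(ask("a teacher of MBA students", question))  # measured, framework-heavy
print(ask("a circus clown", question))             # playful, metaphor-heavy
```

Same question, two registers: the persona changes the framing and tone of the answer, not the underlying competence.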
·oneusefulthing.org·
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
AI Models in Software UI - LukeW
In the first approach, the primary interface affordance is an input that directly (for the most part) instructs one or more AI models. In this paradigm, people are authoring prompts that result in text, image, video, etc. generation. These prompts can be sequential, iterative, or unrelated. Marquee examples are OpenAI’s ChatGPT interface or Midjourney’s use of Discord as an input mechanism. Since there are few, if any, UI affordances to guide people, these systems need to respond to a very wide range of instructions. Otherwise people get frustrated with their primarily hidden (to the user) limitations.
The second approach doesn't include any UI elements for directly controlling the output of AI models. In other words, there's no input fields for prompt construction. Instead instructions for AI models are created behind the scenes as people go about using application-specific UI elements. People using these systems could be completely unaware an AI model is responsible for the output they see.
The third approach is application specific UI with AI assistance. Here people can construct prompts through a combination of application-specific UI and direct model instructions. These could be additional controls that generate portions of those instructions in the background. Or the ability to directly guide prompt construction through the inclusion or exclusion of content within the application. Examples of this pattern are Microsoft's Copilot suite of products for GitHub, Office, and Windows.
they could be overlays, modals, inline menus and more. What they have in common, however, is that they supplement application specific UIs instead of completely replacing them.
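A minimal sketch of how the second and third approaches might assemble model instructions behind the scenes; the widget names, fields, and prompt wording are invented for illustration, not taken from any of the products mentioned:

```python
from dataclasses import dataclass

@dataclass
class DraftControls:
    """State captured from ordinary application UI widgets."""
    tone: str = "friendly"       # a dropdown
    length: str = "short"        # a segmented control
    selection: str = ""          # text the user highlighted in the document
    extra_instruction: str = ""  # optional free-form guidance (third approach)

def build_prompt(ui: DraftControls) -> str:
    """Compile UI state into model instructions the user never sees directly."""
    parts = [
        f"Rewrite the reply below in a {ui.tone} tone.",
        f"Keep it {ui.length}.",
    ]
    if ui.selection:
        parts.append(f"Reply to this excerpt:\n{ui.selection}")
    if ui.extra_instruction:  # mixing in direct guidance makes it the third approach
        parts.append(ui.extra_instruction)
    return "\n".join(parts)

# With no free-form field exposed, this is the second approach; expose the
# field and it becomes the third.
prompt = build_prompt(DraftControls(tone="formal", selection="Thanks, got it."))
```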
·lukew.com·
The VR winter — Benedict Evans
When I started my career, 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.
The successor to the smartphone will be something that doesn’t just merge AR and VR but make the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space.
·ben-evans.com·
When social media controls the nuclear codes
David Foster Wallace once said that: “The language of images … maybe not threatens, but completely changes actual lived life. When you consider that my grandparents, by the time they got married and kissed, I think they had probably seen maybe a hundred kisses. They’d seen people kiss a hundred times. My parents, who grew up with mainstream Hollywood cinema, had seen thousands of kisses by the time they ever kissed. Before I kissed anyone I had seen tens of thousands of kisses. I know that the first time I kissed much of my thought was, ‘Am I doing it right? Am I doing it according to how I’ve seen it?’”
A lot of the 80s and 90s critiques of postmodernity did have a point—our experience really is colored by media. Having seen a hundred movies about nuclear apocalypse, the entire time we’ll be looking over our shoulder for the camera, thinking: “Am I doing it right?”
·erikhoel.substack.com·
Exapt existing infrastructure
Here are the adoption curves for a handful of major technologies in the United States. There are big differences in the speeds at which these technologies were absorbed. Landline telephones took about 86 years to hit 80% adoption. Flush toilets took 96 years to hit 80% adoption. Refrigerators took about 25 years. Microwaves took 17 years. Smartphones took just 12 years. Why these wide differences in adoption speed? Conformability with existing infrastructure. Flush toilets required the build-out of water and sewage utility systems. They also meant adding a new room to the house—the bathroom—and running new water and sewage lines underneath and throughout the house. That’s a lot of systems to line up. By contrast, refrigerators replaced iceboxes, and could fit into existing kitchens without much work. Microwaves could sit on a countertop. Smartphones could slip into your pocket.
·subconscious.substack.com·
I Didn’t Want It to Be True, but the Medium Really Is the Message
it’s the common rules that govern all creation and consumption across a medium that change people and society. Oral culture teaches us to think one way, written culture another. Television turned everything into entertainment, and social media taught us to think with the crowd.
There is a grammar and logic to the medium, enforced by internal culture and by ratings reports broken down by the quarter-hour. You can do better cable news or worse cable news, but you are always doing cable news.
“Don’t just look at the way things are being expressed; look at how the way things are expressed determines what’s actually expressible.” In other words, the medium blocks certain messages.
Television teaches us to expect that anything and everything should be entertaining. But not everything should be entertainment, and the expectation that it will be is a vast social and even ideological change.
Television, he writes, “serves us most ill when it co-opts serious modes of discourse — news, politics, science, education, commerce, religion — and turns them into entertainment packages.”
The border between entertainment and everything else was blurring, and entertainers would be the only ones able to fulfill our expectations for politicians. He spends considerable time thinking, for instance, about the people who were viable politicians in a textual era and who would be locked out of politics because they couldn’t command the screen.
As a medium, Twitter nudges its users toward ideas that can survive without context, that can travel legibly in under 280 characters. It encourages a constant awareness of what everyone else is discussing. It makes the measure of conversational success not just how others react and respond but how much response there is. It, too, is a mold, and it has acted with particular force on some of our most powerful industries — media and politics and technology.
I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.
·nytimes.com·