Found 110 bookmarks
You and Your Research, a talk by Richard Hamming
You and Your Research, a talk by Richard Hamming
I will talk mainly about science because that is what I have studied. But so far as I know, and I've been told by others, much of what I say applies to many fields. Outstanding work is characterized very much the same way in most fields, but I will confine myself to science.
I spoke earlier about planting acorns so that oaks will grow. You can't always know exactly where to be, but you can keep active in places where something might happen. And even if you believe that great science is a matter of luck, you can stand on a mountain top where lightning strikes; you don't have to hide in the valley where you're safe.
Most great scientists know many important problems. They have something between 10 and 20 important problems for which they are looking for an attack. And when they see a new idea come up, one hears them say "Well that bears on this problem." They drop all the other things and get after it.
The great scientists, when an opportunity opens up, get after it and they pursue it. They drop all other things. They get rid of other things and they get after an idea because they had already thought the thing through. Their minds are prepared; they see the opportunity and they go after it. Now of course lots of times it doesn't work out, but you don't have to hit many of them to do some great science. It's kind of easy. One of the chief tricks is to live a long time!
He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, "The closed door is symbolic of a closed mind." I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder.
You should do your job in such a fashion that others can build on top of it, so they will indeed say, "Yes, I've stood on so and so's shoulders and I saw further." The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.
by altering the problem, by looking at the thing differently, you can make a great deal of difference in your final productivity because you can either do it in such a fashion that people can indeed build on what you've done, or you can do it in such a fashion that the next person has to essentially duplicate again what you've done. It isn't just a matter of the job, it's the way you write the report, the way you write the paper, the whole attitude. It's just as easy to do a broad, general job as one very special case. And it's much more satisfying and rewarding!
it is not sufficient to do a job, you have to sell it. 'Selling' to a scientist is an awkward thing to do. It's very ugly; you shouldn't have to do it. The world is supposed to be waiting, and when you do something great, they should rush out and welcome it. But the fact is everyone is busy with their own work. You must present it so well that they will set aside what they are doing, look at what you've done, read it, and come back and say, "Yes, that was good." I suggest that when you open a journal, as you turn the pages, you ask why you read some articles and not others. You had better write your report so when it is published in the Physical Review, or wherever else you want it, as the readers are turning the pages they won't just turn your pages but they will stop and read yours. If they don't stop and read it, you won't get credit.
I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.
He had his personality defect of wanting total control and was not willing to recognize that you need the support of the system. You find this happening again and again; good scientists will fight the system rather than learn to work with the system and take advantage of all the system has to offer. It has a lot, if you learn how to use it. It takes patience, but you can learn how to use the system pretty well, and you can learn how to get around it. After all, if you want a decision 'No', you just go to your boss and get a 'No' easy. If you want to do something, don't ask, do it. Present him with an accomplished fact. Don't give him a chance to tell you 'No'. But if you want a 'No', it's easy to get a 'No'.
Amusement, yes, anger, no. Anger is misdirected. You should follow and cooperate rather than struggle against the system all the time.
I found out many times, like a cornered rat in a real trap, I was surprisingly capable. I have found that it paid to say, "Oh yes, I'll get the answer for you Tuesday," not having any idea how to do it. By Sunday night I was really hard thinking on how I was going to deliver by Tuesday. I often put my pride on the line and sometimes I failed, but as I said, like a cornered rat I'm surprised how often I did a good job. I think you need to learn to use yourself. I think you need to know how to convert a situation from one view to another which would increase the chance of success.
I do go in to strictly talk to somebody and say, "Look, I think there has to be something here. Here's what I think I see ..." and then begin talking back and forth. But you want to pick capable people. To use another analogy, you know the idea called the 'critical mass.' If you have enough stuff you have critical mass. There is also the idea I used to call 'sound absorbers'. When you get too many sound absorbers, you give out an idea and they merely say, "Yes, yes, yes." What you want to do is get that critical mass in action; "Yes, that reminds me of so and so," or, "Have you thought about that or this?" When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, "Oh yes," and to find those who will stimulate you right back.
On surrounding yourself with people who provoke meaningful progress
I believed, in my early days, that you should spend at least as much time in the polish and presentation as you did in the original research. Now at least 50% of the time must go for the presentation. It's a big, big number.
Luck favors a prepared mind; luck favors a prepared person. It is not guaranteed; I don't guarantee success as being absolutely certain. I'd say luck changes the odds, but there is some definite control on the part of the individual.
If you read all the time what other people have done you will think the way they thought. If you want to think new thoughts that are different, then do what a lot of creative people do - get the problem reasonably clear and then refuse to look at any answers until you've thought the problem through carefully how you would do it, how you could slightly change the problem to be the correct one. So yes, you need to keep up. You need to keep up more to find out what the problems are than to read to find the solutions. The reading is necessary to know what is going on and what is possible. But reading to get the solutions does not seem to be the way to do great research. So I'll give you two answers. You read; but it is not the amount, it is the way you read that counts.
Avoiding excessive reading before thinking
your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there's the answer.
#dreams, subconscious processing
·blog.samaltman.com·
You and Your Research, a talk by Richard Hamming
The Top Idea in Your Mind
The Top Idea in Your Mind
You can't directly control where your thoughts drift. If you're controlling them, they're not drifting. But you can control them indirectly, by controlling what situations you let yourself get into. That has been the lesson for me: be careful what you let become critical to you. Try to get yourself into situations where the most urgent problems are ones you want to think about.
barring emergencies you have a good deal of indirect control over what becomes the top idea in your mind.
Turning the other cheek turns out to have selfish advantages. Someone who does you an injury hurts you twice: first by the injury itself, and second by taking up your time afterward thinking about it. If you learn to ignore injuries you can at least avoid the second half. I've found I can to some extent avoid thinking about nasty things people have done to me by telling myself: this doesn't deserve space in my head.
just take a shower. What topic do your thoughts keep returning to? If it's not what you want to be thinking about, you may want to change something.
·paulgraham.com·
The Top Idea in Your Mind
On the Accountability of Unnamed Public Relations Spokespeople
On the Accountability of Unnamed Public Relations Spokespeople
When a statement is attributed to “a spokesperson” from a company or institution, the world doesn’t know who that spokesperson is. Only the reporter or writer, and perhaps their editors. There is an explicit lack of accountability in attributing statements to an institution rather than to specific people. We even have different pronouns — it’s institutions that do things, but only people who do things. Who is the question.
This West Point / ProPublica near-fiasco has me reconsidering my skepticism toward The Verge’s obstinacy on this. It occurs to me now that The Verge’s adamancy on this issue isn’t merely for the benefit of their readers. Putting one’s name on a statement heightens the personal stakes. This is why it’s more than vanity to put your name on your work, whatever your work is — it shows you take responsibility for its validity.
·daringfireball.net·
On the Accountability of Unnamed Public Relations Spokespeople
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege

An experienced college essay reviewer identifies seven distinct patterns that reveal ChatGPT's writing "fingerprint" in admission essays, demonstrating how AI-generated content, despite being well-written, often lacks originality and follows predictable patterns that make it detectable to experienced readers.

Seven key indicators of ChatGPT-written essays:

  1. Specific vocabulary choices (e.g., "delve," "tapestry")
  2. Limited types of extended metaphors (weaving, cooking, painting, dance, classical music)
  3. Distinctive punctuation patterns (em dashes, mixed apostrophe styles)
  4. Frequent use of tricolons (three-part phrases), especially ascending ones
  5. Common phrase pattern: "I learned that the true meaning of X is not only Y, it's also Z"
  6. Predictable future-looking conclusions: "As I progress... I will carry..."
  7. Multiple ending syndrome (similar to Lord of the Rings movies)
·reddit.com·
In the past three days, I've reviewed over 100 essays from the 2024-2025 college admissions cycle. Here's how I could tell which ones were written by ChatGPT : r/ApplyingToCollege
Why Are Debut Novels Failing to Launch?
Why Are Debut Novels Failing to Launch?
The fragmented media environment, changes in publicity strategies, and the need for authors to become influencers have made it harder for new voices to break through.
Last fall, while reporting Esquire’s “Future of Books” predictions, I asked industry insiders about trends they’d noticed in recent years. Almost everyone mentioned that debut fiction has become harder to launch. For writers, the stakes are do or die: A debut sets the bar for each of their subsequent books, so their debut advance and sales performance can follow them for the rest of their career. For editors, if a writer’s first book doesn’t perform, it’s hard to make a financial case for acquiring that writer’s second book. And for you, a reader interested in great fiction, the fallout from this challenging climate can limit your access to exciting new voices in fiction. Unless you diligently shop at independent bookstores where booksellers highlight different types of books, you might only ever encounter the big, splashy debuts that publishers, book clubs, social-media algorithms, and big-box retailers have determined you should see.
BookTok—er, TikTok—is still considered the au courant emergent platform, but unlike Instagram and Twitter before it, publishers can’t figure out how to game the algorithm. “It’s a wonderful tool, but it’s an uncontrollable one,” Lucas says. As opposed to platforms like Twitter and Instagram, on which authors can actively post to establish a following, the runaway hits of BookTok (see: The Song of Achilles) grew from influencer videos.
These days, “in order to get exposure, you have to make the kinds of content that the platform is prioritizing in a given moment,” Chayka says. On Instagram, that means posting videos. Gone are the days of the tastefully cluttered tableaux of notebooks, pens, and coffee mugs near a book jacket; front-facing videos are currently capturing the most eyeballs. “A nonfiction author at least has the subject matter to talk about,” Chayka says. (Many nonfiction writers now create bite-size videos distilling the ideas of their books, with the goal of becoming thought leaders.) But instead of talking about their books, novelists share unboxing videos when they receive their advance copies, or lifestyle videos about their writing routines, neither of which convey their voice on the page. Making this “content” takes time away from writing, Chayka says: “You’re glamorizing your writer’s residency; you’re not talking about the work itself necessarily.”
“Energy tends to attach itself to wherever energy is already attached,” Lucas says. “Fewer debuts have a chance of really breaking through the noise in this climate, because all of the energy attaches itself to the ones that have made it past a certain obstacle.” In some cases, the energy starts building as early as when a project is first announced.
Because staff publicists at publishing houses must split their workload among several authors, there is an expectation that an author will now spend untold hours working as their book’s spokesperson.
The agent at the talent firm describes a “one strike and you’re out” mentality, with some authors getting dropped by their agents if their debut doesn’t sell well.
But one positive development amid this sense of precarity is the rise of the literary friendship. “On social media,” Isle McElroy wrote for this magazine in September, “writers are just as likely to hype their peers as they are to self-promote: linking where to buy books, posting photos of readings, and sharing passages from galleys.” There is now an all-ships-rise mentality among authors at every career stage, but particularly among first-time novelists. Now networks of writers are more important than ever.
When it was time to ask other writers for blurbs for The Volcano Daughters, Balibrera had friends who were excited to boost the book, but she could also rely on other writers who remembered her from Literati. “There was goodwill built up already,” Gibbs says.
·esquire.com·
Why Are Debut Novels Failing to Launch?
The Return of Ta-Nehisi Coates
The Return of Ta-Nehisi Coates
That it was complicated, he now understood, was “horseshit.” “Complicated” was how people had described slavery and then segregation. “It’s complicated,” he said, “when you want to take something from somebody.”
He had also been told that the conflict was “complicated,” its history tortuous and contested, and, as he writes, “that a body of knowledge akin to computational mathematics was needed to comprehend it.” He was astonished by the plain truth of what he saw: the walls, checkpoints, and guns that everywhere hemmed in the lives of Palestinians; the clear tiers of citizenship between the first-class Jews and the second-class Palestinians; and the undisguised contempt with which the Israeli state treated the subjugated other.
The most famous of Israel’s foundational claims — that it was a necessary sanctuary for one of the world’s most oppressed peoples, who may not have survived without a state of their own — is at the root of this complication and undergirds the prevailing viewpoint of the political-media-entertainment nexus. It is Israel’s unique logic of existence that has provided a quantum of justice to the Israeli project in the eyes of Americans and others around the world, and it’s what separates Jewish Israelis from the white supremacists of the Jim Crow South, who had no justice on their side at all.
“It’s kind of hard to remember, but even as late as 2014, people were talking about the Civil War as this complicated subject,” Jackson said. “Ta-Nehisi was going to plantations and hanging out at Monticello and looking at all the primary documents and reading a thousand books, and it became clear that the idea of a ‘complicated’ narrative was ridiculous.” The Civil War was, Coates concluded, solely about the South’s desire to perpetuate slavery, and the subsequent attempts over the next century and a half to hide that simple fact betrayed, he believed, a bigger lie — the lie that America was a democracy, a mass delusion that he would later call “the Dream” in Between the World and Me.
The hallmarks of The Atlantic’s coverage include variations of Israel’s seemingly limitless “right to defend itself”; an assertion that extremists on “both sides” make the conflict worse, with its corollary argument that if only Prime Minister Benjamin Netanyahu’s Jewish-supremacist government were ousted, then progress could be made; abundant sympathy for the suffering of Israelis and a comparatively muted response to the suffering of Palestinians; a fixation on the way the issue is debated in America, particularly on college campuses; and regular warnings that antisemitism is on the rise both in America and around the world.
the overall pattern reveals a distorting worldview that pervades the industry and, as Coates writes in The Message, results in “the elevation of factual complexity over self-evident morality.” “The view of mainstream American commentators is a false equivalence between subjugator and subjugated,” said Nathan Thrall, the Jerusalem-based author of the Pulitzer Prize–winning A Day in the Life of Abed Salama, as if the Israelis and the Palestinians were equal parties in an ancient tug-of-war.
For Coates, the problem for the industry at large partly stems from the perennial problem of inadequate representation. “It is extremely rare to see Palestinians and Arabs writing the coverage or doing the book reviews,” he said. “I would be interested if you took the New York Times and the Washington Post and The Wall Street Journal and looked at how many of those correspondents are Palestinian, I wonder what you would find.” (It’s a testament to just how polarizing the issue is that many Jewish Americans believe the bias in news media works the other way around, against Israel.)
American mainstream journalism, Coates says, defers to American authority. “It’s very similar,” he told me, “to how American journalism has been deferential to the cops. We privilege the cops, we privilege the military, we privilege the politicians. The default setting is toward power.”
in the total coverage, in all of the talk of experts and the sound bites of politicians and the dispatches of credentialed reporters, a sense of ambiguity is allowed to prevail. “The fact of the matter is,” he said, “that kid up at Columbia, whatever dumb shit they’re saying, whatever slogan I would not say that they would use, they are more morally correct than some motherfuckers that have won Pulitzer Prizes and National Magazine Awards and are the most decorated and powerful journalists.”
When I asked Coates what he wanted to see happen in Israel and Palestine, he avoided the geopolitical scale and tended toward the more specific — for example, to have journalists not be “shot by army snipers.” He said that the greater question was not properly for him; it belonged to those with lived experience and those who had been studying the problem for years.
On the importance of using moral rightness as a north star for pragmatic designs
“I have a deep-seated fear,” he told me, “that the Black struggle will ultimately, at its root, really just be about narrow Black interest. And I don’t think that is in the tradition of what our most celebrated thinkers have told the world. I don’t think that’s how Martin Luther King thought about the Black struggle. I know that’s not how Du Bois thought about the Black struggle. I know that’s not how Baldwin thought about the Black struggle. Should it turn out that we have our first Black woman president, and our first South Asian president, and we continue to export 2,000-pound bombs to perpetrate a genocide, in defense of a state that is practicing apartheid, I won’t be able to just sit here and shake my head and say, ‘Well, that is unfortunate.’ I’m going to do what I can in the time that remains, and the writing that I have, to not allow that to be, because that is existential death for the Black struggle, and for Black people, as far as I’m concerned.”
·nymag.com·
The Return of Ta-Nehisi Coates
You Should Seriously Read ‘Stoner’ Right Now (Published 2014)
You Should Seriously Read ‘Stoner’ Right Now (Published 2014)
I find it tremendously hopeful that “Stoner” is thriving in a world in which capitalist energies are so hellbent on distracting us from the necessary anguish of our inner lives. “Stoner” argues that we are measured ultimately by our capacity to face the truth of who we are in private moments, not by the burnishing of our public selves.
The story of his life is not a neat crescendo of industry and triumph, but something more akin to our own lives: a muddle of desires and inhibitions and compromises.
The deepest lesson of “Stoner” is this: What makes a life heroic is the quality of attention paid to it.
Americans worship athletes and moguls and movie stars, those who possess the glittering gifts we equate with worth and happiness. The stories that flash across our screens tend to be paeans to reckless ambition.
It’s the staggering acceleration of our intellectual and emotional metabolisms: our hunger for sensation and narcissistic reward, our readiness to privilege action over contemplation. And, most of all, our desperate compulsion to be known by the world rather than seeking to know ourselves.
The emergence of a robust advertising culture reinforced the notion that Americans were more or less always on stage and thus in constant need of suitable costumes and props.
Consider our nightly parade of prime-time talent shows and ginned-up documentaries in which chefs and pawn brokers and bored housewives reinvent their private lives as theater.
If you want to be among those who count, and you don’t happen to be endowed with divine talents or a royal lineage, well then, make some noise. Put your wit — or your craft projects or your rants or your pranks — on public display.
Our most profound acts of virtue and vice, of heroism and villainy, will be known by only those closest to us and forgotten soon enough. Even our deepest feelings will, for the most part, lie concealed within the vault of our hearts. Much of the reason we construct garish fantasies of fame is to distract ourselves from these painful truths. We confess so much to so many, as if by these disclosures we might escape the terror of confronting our hidden selves.
revelation is triggered by literature. The novel is notable as art because it places such profound faith in art.
·nytimes.com·
You Should Seriously Read ‘Stoner’ Right Now (Published 2014)
Three Telltale Signs of Online Post-Literacy
Three Telltale Signs of Online Post-Literacy
The swarms of online surveillers typically only know how to detect clearly stated opinions, and the less linguistic jouissance the writer of these opinions displays in writing them, the easier job the surveillers will have of it. Another way of saying this is that those who read in order to find new targets of denunciation are so far along now in their convergent evolution with AI, that the best way to protect yourself from them is to conceal your writing under a shroud of irreducibly human style
Such camouflage was harder to wear within the 280-character limit on Twitter, which of course meant that the most fitting and obvious way to avoid the Maoists was to retreat into insincere shitposting — arguably the first truly new genre of artistic or literary endeavor in the 21st century, which perhaps will turn out to have been as explosive and revolutionary as, say, jazz was in the 20th.
Our master shitposter has perfectly mirrored the breakdown of sense that characterizes our era — dril’s body of work looks like our moment no less than, say, an Otto Dix painting looks like World War I
·the-hinternet.com·
Three Telltale Signs of Online Post-Literacy
Synthesizer for thought - thesephist.com
Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
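The excerpt above treats an idea as a vector in a feature space, which makes it something you can actually compute with. A minimal sketch of what that buys you; the feature names and the numbers are invented purely for illustration, not taken from any real model:

```python
import math

# Hypothetical sketch: an "idea" as a vector of concept-feature activations.
# These four feature names are made up for the example.
features = ["narrative", "figurative", "technical", "sentiment"]

def cosine(a, b):
    # Cosine similarity: the standard way to compare directions
    # in a vector space, independent of magnitude.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

memoir = [0.9, 0.7, 0.1, 0.6]   # strongly narrative and figurative
manual = [0.1, 0.0, 0.9, 0.2]   # strongly technical
essay  = [0.6, 0.5, 0.4, 0.5]   # somewhere in between

# Once ideas are vectors, "this essay is closer to a memoir than to
# a manual" becomes a provable statement rather than a vibe:
print(cosine(essay, memoir) > cosine(essay, manual))  # True
```

This is the sense in which a vector space is "a marvelous mathematical object": similarity, interpolation, and composition of ideas all reduce to well-understood operations on coordinates.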
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because new sounds didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we could share it with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
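A toy sketch of the highlight-based indicator described above. A real system would read per-word scores off a language model's internal features; here a tiny hand-made word list stands in for that model, so every list entry and function name is an assumption made for illustration:

```python
# Score each word by whether it pushes the text toward a "figurative"
# feature (+1), toward a "literal" one (-1), or neither (0). A UI could
# render these as colored highlights to show the direction of a semantic edit.
FIGURATIVE_WORDS = {"like", "as", "tapestry", "woven", "blaze", "ocean"}
LITERAL_WORDS = {"therefore", "percent", "data", "results"}

def highlight(sentence):
    # Return (word, direction) pairs for every word in the sentence.
    marks = []
    for word in sentence.lower().split():
        w = word.strip(".,;:!?")
        if w in FIGURATIVE_WORDS:
            marks.append((w, +1))
        elif w in LITERAL_WORDS:
            marks.append((w, -1))
        else:
            marks.append((w, 0))
    return marks

marks = highlight("Her memory was a tapestry, woven like an ocean of light.")
print([m for m in marks if m[1] > 0])
# → [('tapestry', 1), ('woven', 1), ('like', 1), ('ocean', 1)]
```

The interesting part is not the lexicon (which is a crude stand-in) but the interface contract: a per-word signed score is all a "semantic diff" highlighter needs to render.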
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of an interaction becomes much more powerful in the space of interpretable features, because features can be combined together much more cleanly than textual instructions in prompts.
·thesephist.com·
Synthesizer for thought - thesephist.com
written in the body
written in the body
I spent so many years of my life trying to live mostly in my head. Intellectualizing everything made me feel like it was manageable. I was always trying to manage my own reactions and the reactions of everyone else around me. Learning how to manage people was the skill that I had been lavishly rewarded for in my childhood and teens. Growing up, you’re being reprimanded in a million different ways all the time, and I learned to modify my behavior so that over time I got more and more positive feedback. People like it when you do X and not Y, say X and not Y. I kept track of all of it in my head and not in my body. Intellectualizing kept me numbed out, and for a long time what I wanted was nothing more than to be numbed out, because when things hurt they hurt less. Whatever I felt like I couldn’t show people or tell people I hid away. I compartmentalized, and what I put in the compartment I never looked at became my shadow.
So much of what I care about can be boiled down to this: when you’re able to really inhabit and pay attention to your body, it becomes obvious what you want and don’t want, and the path towards your desires is clear. If you’re not in your body, you’re constantly rationalizing what you should do next, and that can leave you inert or trapped or simply choosing the wrong thing over and over. “I know I should, but I can’t do it” is often another way of saying “I’ve reached this conclusion intellectually, but I’m so frozen out of my body I can’t feel a deeper certainty.”
It was so incredibly hard when people gave me negative feedback—withdrew, or rejected me, or were just preoccupied with their own problems—because I relied on other people to figure out whether everything was alright.
When I started living in my body I started feeling for the first time that I could trust myself in a way that extended beyond trust of my intelligence, of my ability to pick up on cues in my external environment.
I can keep my attention outwards, I don’t direct it inwards in a self-conscious way. It’s the difference between noticing whether someone seems to be having a good time in the moment by watching their face vs agonizing about whether they enjoyed something after the fact. I can tell the difference between when I’m tired because I didn’t sleep well versus tired because I’m bored versus tired because I’m avoiding something. When I’m in my body, I’m aware of myself instead of obsessing over my state, and this allows me to have more room for other people.
·avabear.xyz·
written in the body
Write Like You Talk
Write Like You Talk
You don't need complex sentences to express complex ideas. When specialists in some abstruse topic talk to one another about ideas in their field, they don't use sentences any more complex than they do when talking about what to have for lunch. They use different words, certainly. But even those they use no more than necessary. And in my experience, the harder the subject, the more informally experts speak. Partly, I think, because they have less to prove, and partly because the harder the ideas you're talking about, the less you can afford to let language get in the way.
Informal language is the athletic clothing of ideas
I'm not saying spoken language always works best. Poetry is as much music as text, so you can say things you wouldn't say in conversation. And there are a handful of writers who can get away with using fancy language in prose.
But for nearly everyone else, spoken language is better.
After writing the first draft, try explaining to a friend what you just wrote. Then replace the draft with what you said to your friend.
·paulgraham.com·
Write Like You Talk
How tweet threads cured my writer's block: Twitter as a medium for sketching
How tweet threads cured my writer's block: Twitter as a medium for sketching
Twitter’s main constraint is encouraging concision. It’s hard to dwell on word choice when you have so little space to work with. Twitter’s conversational tone also helps here—I can just write like I talk, and any fancy words would seem out of place. And of course, I can’t tweak fonts and margins, which cuts off a distraction vector.
each idea has to be wrapped in a little atomic package. I find this helpful for figuring out the boundaries between my thoughts and clarifying the discrete units of an argument.
a thread is linear! No indenting allowed. This forces a brisk straight line through the argument, instead of getting mired in the fine points of the sub-sub-sub-arguments of the first idea.
I think Twitter is useless for persuading a skeptical reader; there’s simply not space for providing enough detail and context.
I prefer to use Twitter as a way to workshop ideas with sympathetic parties who already have enough context to share my excitement about the ideas.
Overall, it seems that we want constraints that help keep us on track with fluid thought, but don’t rule out too many interesting possibilities. Considering both of these criteria together is a subtle balancing act, and I don’t see easy answers.
low barrier to finishing. On Twitter, a single sentence is a completely acceptable unit of publication. Anything beyond that is sort of a bonus. In contrast, most of my blog posts go unpublished because I fear they’re not complete, or not good enough in some dimension. These unpublished drafts are obviously far more complete than a single tweet, but because they’re on a blog, they don’t feel “done,” and it’s hard to overcome the fear of sharing.
This seems like a crucial part of sketching tools: when you make a sketch, it should be understood that your idea is immature, and feel safe to share it in that state. There’s a time and a place for polished, deeply thorough artifacts… and it’s not Twitter! Everyone knows you just did a quick sketch.
I believe that quantity leads to quality. The students who make more pots in ceramics class improve faster than the students who obsess over making a single perfect pot. A tool with a built-in low barrier to finishing makes it easier to overcome the fear, do more work, and share it at an earlier stage.
For me, Twitter does an oddly good job at simulating the thrilling creative energy of a whiteboarding session. People pop in and out of the conversation offering insights; trees and sub-trees form riffing off of earlier points.
I’m curious to think more about the constraints/freedoms afforded by different kinds of creative tools, and whether we could get more clever with those constraints to enable new kinds of sketching. I’m especially curious about kinds of sketching which are only possible thanks to computers, and couldn’t have been done with paper and pen.
·geoffreylitt.com·
How tweet threads cured my writer's block: Twitter as a medium for sketching
s.penkevich's review of Monstrilio
s.penkevich's review of Monstrilio
the story is pulled from Mago’s perspective into 3 subsequent perspectives over the years: Lena, the best friend; Joseph, the ex-husband and father; and finally Monstrilio himself. It is a stylistic choice that (mostly) works and allows us to see how these events radiate outward across many lives.
M’s perspective being saved for last is not just because it is the best section of the novel and wraps up all the disparate elements into a tight punch of a finale, but because M’s feelings and needs are constantly being pushed aside to fit the ideas of what the other characters think they need (this is most evident in the surgery aspect). This makes for an excellent look at the way the push and pull of families affects everyone, especially the younger ones caught up in it, and is made more ominous and chilling through the lens of horror.
On one hand we have the fact that M is quite literally a monster created out of a dead child’s lung, yet despite his form he is no less a part of the family or loved like a child. But in later portions of the novel he transforms into a human form which helps him disguise who he is inside. And what he hungers for cannot be hidden. Hunger is quite a dynamic symbol here, being both his literal hunger but also an investigation into sexuality.
it does all sort of touch on the idea that queer sexuality is often othered or seen as unnatural despite being very normal and natural, especially to the person having those emotions.
·goodreads.com·
s.penkevich's review of Monstrilio
The Life and Death of Hollywood, by Daniel Bessner
The Life and Death of Hollywood, by Daniel Bessner
now the streaming gold rush—the era that made Dickinson—is over. In the spring of 2022, the Federal Reserve began raising interest rates after years of nearly free credit, and at roughly the same time, Wall Street began calling in the streamers’ bets. The stock prices of nearly all the major companies with streaming platforms took precipitous falls, and none have rebounded to their prior valuation.
Thanks to decades of deregulation and a gush of speculative cash that first hit the industry in the late Aughts, while prestige TV was climbing the rungs of the culture, massive entertainment and media corporations had been swallowing what few smaller companies remained, and financial firms had been infiltrating the business, moving to reduce risk and maximize efficiency at all costs, exhausting writers in evermore unstable conditions.
The new effective bosses of the industry—colossal conglomerates, asset-management companies, and private-equity firms—had not been simply pushing workers too hard and grabbing more than their fair share of the profits. They had been stripping value from the production system like copper pipes from a house—threatening the sustainability of the studios themselves. Today’s business side does not have a necessary vested interest in “the business”—in the health of what we think of as Hollywood, a place and system in which creativity is exchanged for capital. The union wins did not begin to address this fundamental problem.
To the new bosses, the quantity of money that studios had been spending on developing screenplays—many of which would never be made—was obvious fat to be cut, and in the late Aughts, executives increasingly began offering one-step deals, guaranteeing only one round of pay for one round of work. Writers, hoping to make it past Go, began doing much more labor—multiple steps of development—for what was ostensibly one step of the process. In separate interviews, Dana Stevens, writer of The Woman King, and Robin Swicord described the change using exactly the same words: “Free work was encoded.” So was safe material. In an effort to anticipate what a studio would green-light, writers incorporated feedback from producers and junior executives, constructing what became known as producer’s drafts. As Rodman explained it: “Your producer says to you, ‘I love your script. It’s a great first draft. But I know what the studio wants. This isn’t it. So I need you to just make this protagonist more likable, and blah, blah, blah.’ And you do it.”
By 2019, the major Hollywood agencies had been consolidated into an oligopoly of four companies that controlled more than 75 percent of WGA writers’ earnings. And in the 2010s, high finance reached the agencies: by 2014, private equity had acquired Creative Artists Agency and William Morris Endeavor, and the latter had purchased IMG. Meeting benchmarks legible to the new bosses—deals actually made, projects off the ground—pushed agents to function more like producers, and writers began hearing that their asking prices were too high.
Executives, meanwhile, increasingly believed that they’d found their best bet in “IP”: preexisting intellectual property—familiar stories, characters, and products—that could be milled for scripts. As an associate producer of a successful Aughts IP-driven franchise told me, IP is “sort of a hedge.” There’s some knowledge of the consumer’s interest, he said. “There’s a sort of dry run for the story.” Screenwriter Zack Stentz, who co-wrote the 2011 movies Thor and X-Men: First Class, told me, “It’s a way to take risk out of the equation as much as possible.”
Multiple writers I spoke with said that selecting preexisting characters and cinematic worlds gave executives a type of psychic edge, allowing them to claim a degree of creative credit. And as IP took over, the perceived authority of writers diminished. Julie Bush, a writer-producer for the Apple TV+ limited series Manhunt, told me, “Executives get to feel like the author of the work, even though they have a screenwriter, like me, basically create a story out of whole cloth.” At the same time, the biggest IP success story, the Marvel Cinematic Universe, by far the highest-earning franchise of all time, pioneered a production apparatus in which writers were often separated from the conception and creation of a movie’s overall story.
Joanna Robinson, co-author of the book MCU: The Reign of Marvel Studios, told me that the writers for WandaVision, a Marvel show for Disney+, had to craft almost the entirety of the series’ single season without knowing where their work was ultimately supposed to arrive: the ending remained undetermined, because executives had not yet decided what other stories they might spin off from the show.
The streaming ecosystem was built on a wager: high subscriber numbers would translate to large market shares, and eventually, profit. Under this strategy, an enormous amount of money could be spent on shows that might or might not work: more shows meant more opportunities to catch new subscribers. Producers and writers for streamers were able to put ratings aside, which at first seemed to be a luxury. Netflix paid writers large fees up front, and guaranteed that an entire season of a show would be produced. By the mid-2010s, the sheer quantity of series across the new platforms—what’s known as “Peak TV”—opened opportunities for unusually offbeat projects (see BoJack Horseman, a cartoon for adults about an equine has-been sitcom star), and substantially more shows created by women and writers of color. In 2009, across cable, broadcast, and streaming, 189 original scripted shows aired or released new episodes; in 2016, that number was 496. In 2022, it was 849.
supply soon overshot demand. For those who beat out the competition, the work became much less steady than it had been in the pre-streaming era. According to insiders, in the past, writers for a series had usually been employed for around eight months, crafting long seasons and staying on board through a show’s production. Junior writers often went to the sets where their shows were made and learned how to take a story from the page to the screen—how to talk to actors, how to stay within budget, how to take a studio’s notes—setting them up to become showrunners. Now, in an innovation called mini-rooms, reportedly first ventured by cable channels such as AMC and Starz, fewer writers were employed for each series and for much shorter periods—usually eight to ten weeks but as little as four.
Writers in the new mini-room system were often dismissed before their series went to production, which meant that they rarely got the opportunity to go to set and weren’t getting the skills they needed to advance. Showrunners were left responsible for all writing-related tasks when these rooms shut down. “It broke a lot of showrunners,” the A-list film and TV writer told me. “Physically, mentally, financially. It also ruined a lot of shows.”
The price of entry for working in Hollywood had been high for a long time: unpaid internships, low-paid assistant jobs. But now the path beyond the entry level was increasingly unclear. Jason Grote, who was a staff writer on Mad Men and who came to TV from playwriting, told me, “It became like a hobby for people, or something more like theater—you had your other day jobs or you had a trust fund.” Brenden Gallagher, a TV writer a decade in, said, “There are periods of time where I work at the Apple Store. I’ve worked doing data entry, I’ve worked doing research, I’ve worked doing copywriting.” Since he’d started in the business in 2014, in his mid-twenties, he’d never had more than eight months at a time when he didn’t need a source of income from outside the industry.
“There was this feeling,” the head of the midsize studio told me that day at Soho House, “during the last ten years or so, of, ‘Oh, we need to get more people of color in writers’ rooms.’ ” But what you get now, he said, is the black or Latino person who went to Harvard. “They’re getting the shot, but you don’t actually see a widening of the aperture to include people who grew up poor, maybe went to a state school or not even, and are just really talented. That has not happened at all.”
“The Sopranos does not exist without David Chase having worked in television for almost thirty years,” Blake Masters, a writer-producer and creator of the Showtime series Brotherhood, told me. “Because The Sopranos really could not be written by somebody unless they understood everything about television, and hated all of it.” Grote said much the same thing: “Prestige TV wasn’t new blood coming into Hollywood as much as it was a lot of veterans that were never able to tell these types of stories, who were suddenly able to cut through.”
The threshold for receiving the viewership-based streaming residuals is also incredibly high: a show must be viewed by at least 20 percent of a platform’s domestic subscribers “in the first 90 days of release, or in the first 90 days in any subsequent exhibition year.” As Bloomberg reported in November, fewer than 5 percent of the original shows that streamed on Netflix in 2022 would have met this benchmark. “I am not impressed,” the A-list writer told me in January. Entry-level TV staffing, where more and more writers are getting stuck, “is still a subsistence-level job,” he said. “It’s a job for rich kids.”
Brenden Gallagher, who echoed Conover’s belief that the union was well-positioned to gain more in 2026, put it this way: “My view is that there was a lot of wishful thinking about achieving this new middle class, based around, to paraphrase 30 Rock, making it 1997 again through science or magic. Will there be as big a working television-writer cohort that is making six figures a year consistently living in Los Angeles as there was from 1992 to 2021? No. That’s never going to come back.”
As for what types of TV and movies can get made by those who stick around, Kelvin Yu, creator and showrunner of the Disney+ series American Born Chinese, told me: “I think that there will be an industry move to the middle in terms of safer, four-quadrant TV.” (In L.A., a “four-quadrant” project is one that aims to appeal to all demographics.) “I think a lot of people,” he said, “who were disenfranchised or marginalized—their drink tickets are up.” Indeed, multiple writers and executives told me that following the strike, studio choices have skewed even more conservative than before. “It seems like buyers are much less adventurous,” one writer said. “Buyers are looking for Friends.”
The film and TV industry is now controlled by only four major companies, and it is shot through with incentives to devalue the actual production of film and television.
The entertainment and finance industries spend enormous sums lobbying both parties to maintain deregulation and prioritize the private sector. Writers will have to fight the studios again, but for more sweeping reforms. One change in particular has the potential to flip the power structure of the industry on its head: writers could demand to own complete copyright for the stories they create. They currently have something called “separated rights,” which allow a writer to use a script and its characters for limited purposes. But if they were to retain complete copyright, they would have vastly more leverage. Nearly every writer I spoke with seemed to believe that this would present a conflict with the way the union functions. This point is complicated and debatable, but Shawna Kidman and the legal expert Catherine Fisk—both preeminent scholars of copyright and media—told me that the greater challenge is Hollywood’s structure. The business is currently built around studio ownership. While Kidman found the idea of writer ownership infeasible, Fisk said it was possible, though it would be extremely difficult. Pushing for copyright would essentially mean going to war with the studios. But if things continue on their current path, writers may have to weigh such hazards against the prospect of the end of their profession. Or, they could leave it all behind.
·harpers.org·
The Life and Death of Hollywood, by Daniel Bessner
Companionship Content is King - by Anu Atluru
Companionship Content is King - by Anu Atluru

Long-form "companionship content" will outlast short-form video formats like TikTok, as the latter is more mentally draining and has a lower ceiling for user engagement over time.

  • In contrast, companionship content that feels more human and less algorithmically optimized will continue to thrive, as it better meets people's needs for social connection and low-effort entertainment.
  • The author points to YouTube as the dominant platform among teens, and notes that successful TikTok creators often funnel their audiences to longer-form YouTube content.
  • Platforms enabling deep, direct creator-fan relationships and higher creator payouts, like YouTube, are expected to be the long-term winners in the content landscape.
Companionship content is long-form content that can be consumed passively — allowing the consumer to be incompletely attentive, and providing a sense of relaxation, comfort, and community.
Interestingly, each individual “unit” of music is short-form (e.g. a 3-5 minute song), but how we consume it tends to be long-form and passive (i.e. via curated stations, lengthy playlists, or algorithms that adapt to our taste).
If you’re rewatching a show or movie, it’s likely to be companionship content. (Life-like conversational sitcoms can be consumed this way too.) As streaming matures, platforms are growing their passive-watch library.
content isn’t always prescriptively passive; rather, it’s rooted in how consumers engage with it.
That said, some content lends better to being companionship content: Long-form over short. Conversational over action. Simple plot versus complex.
Short-form video requires more attention & action in a few ways:
  • Context switching, i.e. wrapping your head around a new piece of context every 30 seconds, especially if they’re on unrelated topics with different styles
  • Judgment & decision-making, i.e. contemplating whether to keep watching or swipe to the next video effectively the entire time you’re watching a video
  • Multi-sensory attention, i.e. default full-screen and requires visual and audio focus, especially since videos are so short that you can easily lose context
  • Interactive components, e.g. liking, saving, bookmarking
With how performative, edited, and algorithmically over-optimized it is, TikTok feels sub-human. TikTok has quickly become one of the most goal-seeking places on earth. I could easily describe TikTok as a global focus group for commercials. It’s the product personification of a means to an end, and the end is attention.
even TikTok creators are adapting the historically rigid format to appeal to more companionship-esque emotions and improve retention.
When we search for a YouTube video to watch, we often want the best companion for the next hour and not the most entertaining content.
  • While short-form content edits are meant to be spectacular and attention-grabbing, long-form content tends to be more subtle in its emotional journey
  • Long-form engagement with any single character or narrative or genre lets you develop stronger understanding, affinity, and parasocial bonds
  • Talk-based content (e.g. talk shows, podcasts, comedy, vlogs, life-like sitcoms) especially evokes a feeling of companionship and is less energy-draining
  • The trends around loneliness and the acceleration of remote work have made, and will continue to make, companionship content even more desirable
  • As we move into new technology frontiers, we might unlock novel types of companionship content itself, but I’d expect this to take 5-10 years at least
TikTok is where you connect with an audience, YouTube is where you consolidate it.5 Long-form content also earns creators more, with YouTube a standout in revenue sharing.
“YouTube paid out $16 billion to creators in 2022 (which is 55% of its annual $30 billion in revenue) and the other four social networks paid out about $1 billion each from their respective creator funds. In total, that yields $20 billion.”
Mr. Beast, YouTube’s top creator, says YouTube is now the final destination, not “traditional” Hollywood stardom, which was the dream of generations past. Creators also want to funnel audiences to apps & community platforms where they can own user relationships, rely less on algorithms, engage more directly and deeply with followers, and enable follower-to-follower engagement too
Interestingly of course, an increasing amount of short-form video, including formats like clips and edits, seems to be made from what originally was long-form content.8 And in return, these recycled short-form videos can drive tremendous traffic to long-form formats and platforms.
90% of people use a second screen while watching TV. We generally talk about “second screen” experiences in the context of multiple devices, but you can have complementary apps and content running on the same device — you can have the “second screen” on the same screen.
YouTube itself also cites a trend of people putting YouTube on their real TV screens: “There are more Americans gathering around the living room TV to watch YouTube than any other platform. Why? Put simply, people want choices and variety … It’s a one stop shop for video viewing. Think about something historically associated with linear TV: Sports. Now, with [our NFL partnership], people can not only watch the games, but watch post-game highlights and commentary in one place.”
If I were to build an on-demand streaming product or any kind of content product for that matter, I’d build for the companionship use case — not only because I think it has a higher ceiling of consumer attention, but also because it can support more authentic, natural, human engagement.
All the creators that are ‘made’ on TikTok are looking for a place to go to consolidate the attention they’ve amassed. TikTok is commercials. YouTube is TV. (Though yes, they’re both trying to become each other).
certainly AI and all the new creator tools enabled by it will help people mix and match and remix long and short formats all day, blurring the historically strict distinctions between them. It’ll take some time before we see a new physical product + content combo thrive, and meanwhile the iPhone and its comps will be competing hard to stay the default device.
The new default seems to be that we’re not lonely as long as we’re streaming. We can view this entirely in a negative light and talk about how much the internet and media is contributing to the loneliness epidemic. Or we could think about how to create media for good. Companionship content can be less the quick dopamine-hit-delivering clips and more of this, and perhaps even truly social.
Long-form wants to become the conversational third space for consumers too. The “comments” sections of TikTok, YouTube and all broadcast platforms are improving, but they still have a long way to go before they become even more community-oriented.
I’m not an “AI-head” but I am more curious about what it’s going to enable in long-form content than all the short-form clips it’s going to help generate and illustrate, etc.
The foreground tends to be utilities or low-cognitive / audio effort (text or silent video). Tiktok is a foreground app for now, YouTube is both (and I’d say trending towards being background).
·archive.is·
Companionship Content is King - by Anu Atluru
Is Every Picture Worth 1,000 Words?
Is Every Picture Worth 1,000 Words?
The phrase a picture is worth a thousand words has two popular origin stories. One version credits advertising executive Frederick R. Barnard, who attributed the phrase to an ancient Chinese proverb. The closest Chinese equivalent translates to “Hearing something a hundred times isn’t better than seeing it once.” In other words, the Chinese origin was made up: “…the Chinese derivation was pure invention. Many things had been thought to be ‘worth ten thousand words’ well before pictures got in on the act;”
the true origin of the proverb is not Chinese but adspeak.3 It shows how the phrase has morphed into a commercial, facile cliche.
Letting words and pictures compete for supremacy reduces the complex relationship between images and words into a direct, quantifiable comparison. Words and images function differently.
A few carefully chosen words can say what 1,000 stock images cannot. The right image can counter cynicism, closed-mindedness, or an automatic dismissal of a convincing argument.
Tell your audience how you interpret the image. What the image means.
Use images to complement, not repeat or overshadow, the text. Get rid of images that are just there to add color.
Stock images are clichés. Clichés can be easily turned on their head because of their simplistic topic. You want to communicate that you’re diverse and you end up telling people that you’re a company run by a minority. Or you want to communicate success, but the focus on two middle-aged white people ends up communicating their privilege.
Whether you like it or not, people will read into this picture as well and they won’t find a lot of valuable or advantageous information in it. It again mostly says: “This boring website thinks that I don’t see that this is a meaningless stock image.”
Pictures have an impact when they tell a story that only a picture can tell.
·ia.net·
Is Every Picture Worth 1,000 Words?
A good image tells a good story
A good image tells a good story
Forget trying to decide what your life’s destiny is. That’s too grand. Instead, just figure out what you should do in the next 2 years.
Visuals can stir up feelings or paint a scene in an instant. However, they may not always nail down the details or explain things as clearly as words can. Words can be very precise and give you all the information you need. Yet, sometimes they miss that instant impact or emotional punch.
For each visual you add to your presentation, you should ask yourself “What does it really say?” And then check: Does it enhance the meaning of my message, or is it purely decorative? Does it belong at this point in my presentation? Would it be better for another slide? Is there a better image that says what I want to say?
Computers don’t feel, and that means: they don’t understand what they do, they grow images like cancer grows cells: They just replicate something into the blue. This becomes apparent in the often outright creepiness of AI images.
AI is really good at making scary images. Even if the prompt lacks all hints of horror kitsch, you need to get ready to see or feel something disturbing when you look at AI images. It’s like a spell. Part of the scariness comes from the cancer-like pattern that reproduces the same ornament without considering its meaning and consequence.
Placing pictures next to each other will invite comparisons. We also compare images that follow each other. Make sure that you do not inadvertently compare apples and oranges.
When placing multiple images in a grid or on one slide after the other, ensure they don’t clash in terms of colors, style, or resolution. Otherwise, people will focus more on the contrast between the images rather than their content.
Repeating what everyone can see is bad practice. To make pictures and text work, they need to have something to say about each other.
Don’t write next to the image what people already see. A caption is not an ALT text.
The most powerful combination of text and image happens when the text says about the image what you can’t see at first sight, and when the image renders what is hard to imagine.
Do not be boring or overly explanatory. The visual should attract their attention to your words and vice-versa.
If a visual lacks meaning, it becomes a decorative placeholder. It can dilute your message, distract from what you want to say, and even express disrespect to your audience.
·ia.net·
A good image tells a good story
On Openings Essays, Conferences Talks, and Jam Jars
On Openings Essays, Conferences Talks, and Jam Jars
how to write better openings and introductions in non-fiction writing
The beginning is almost never the most compelling or important part. It's just the bit you thought of first, based on your subjective chronology.
Signposting what you're going to write about is good, but starting with an exhaustive list of definitions is extremely boring.
Invoking paleolithic people is an overplayed way to convince us your topic is cosmically important.
Openings need tension – paradoxes, unanswered questions, and unresolved action
Good openings propose problems, pose questions, drop you into an unfinished story, or point at fundamental tensions within a topic. Ideally within the first paragraph or two.
"Good writing starts strong. Not with a cliché ("Since the dawn of time"), not with a banality ("Recently, scholars have been increasingly concerned with the questions of..."), but with a contentful observation that provokes curiosity." (Steven Pinker, The Sense of Style)
Creating tension in non-fiction work is trickier because your story is (hopefully) constrained by reality. You are not at liberty to invent suspicious murders, salacious extramarital affairs, or newly-discovered-magical-powers to create tension and mystery. You have to deal with the plain, unexotic facts of the world.
Your job becomes much harder if you pick topics with no tension, problems, or puzzles within them. To paraphrase Williams, it is more of a failure to pose an uninteresting problem than to poorly articulate an interesting one.
Your interest in the topic is your best directional clue for finding the tension or interesting paradox. Your urge to write about the thing hopefully comes from a place of curiosity. You have unanswered questions about it. It feels important or consequential for unexplained reasons. You think you've seen things in it other people haven't. Pay attention to that interest.
A problem is a destabilising condition that has a cost for a community of readers and needs a solution. Destabilising condition is just a fancy phrase for “change” here: a change in the status quo. Put another way, a problem is an unexpected turn of events, with undesirable consequences, for an audience who will care about it, that we want to explore solutions to.
Williams is speaking to a community of academic writers in his book. They're trying to present scientific and research problems in plain, objective language, which isn't necessarily what we want to do with narrative writing like blogging or personal essays. We have a little more liberty to put interesting padding around the change, consequences, and solution, such as telling an opening anecdote, or drawing readers in with characters, rich details, and sensory descriptions.
Williams suggests we try to state our problem and then ask a series of “so what?”s to get at the underlying problem.
For your writing to be worth reading, you need to be exploring something of consequence for someone
When McPhee writes, after first immersing himself in his raw material (field notes, interview transcripts, official documents) for weeks, he then draws a structure for the work. The structure lays out the major themes and scenes he'll work through, in the order that will make them most compelling and coherent.
Developing a structure requires navigating the tension between chronology and theme. Chronology is what we default to, but themes that repeatedly appear want to pull themselves together into a single place. The themes that really matter should be in your opening. Even if the moment that best defines them happens right before the end of the timeline.
·maggieappleton.com·
On Openings Essays, Conferences Talks, and Jam Jars
Writing with AI
Writing with AI
iA writer's vision for using AI in writing process
Thinking in dialogue is easier and more entertaining than struggling with feelings, letters, grammar and style all by ourselves. Used as a writing dialogue partner, ChatGPT can become a catalyst for clarifying what we want to say. Even if it is wrong. Sometimes we need to hear what’s wrong to understand what’s right.
Seeing in clear text what is wrong or, at least, what we don’t mean can help us set our minds straight about what we really mean. If you get stuck, you can also simply let it ask you questions. If you don’t know how to improve, you can tell it to be evil in its critique of your writing.
Just compare usage with AI to how we dealt with similar issues before AI. Discussing our writing with others is a general practice and regarded as universally helpful; honest writers honor and credit their discussion partners. We already use spell checkers and grammar tools. It’s common practice to use human editors for substantial or minor copy editing of our public writing. Clearly, using dictionaries and thesauri to find the right expression is not a crime.
Using AI in the editor replaces thinking. Using AI in dialogue increases thinking. Now, how can we connect the editor and the chat window without making a mess? Is there a way to keep human and artificial text apart?
·ia.net·
Writing with AI
How can we develop transformative tools for thought?
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
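The scheduling rule at the core of such memory systems can be sketched in a few lines. This is a deliberately simplified expanding-interval rule for illustration, not the scheduler Quantum Country or any particular system actually uses:

```python
from datetime import date, timedelta

def review(card, remembered):
    """Spaced-repetition core: expand the review interval after each
    successful recall, reset it after a failure. This is what turns
    memory into a choice rather than an event left up to chance."""
    if remembered:
        card["interval_days"] = int(card["interval_days"] * card["ease"]) or 1
    else:
        card["interval_days"] = 1  # forgotten: start the ladder over
    card["due"] = date.today() + timedelta(days=card["interval_days"])
    return card

card = {"prompt": "What is spaced repetition?", "interval_days": 1, "ease": 2.5}
review(card, remembered=True)   # next review in 2 days
review(card, remembered=True)   # then in 5 days
```

Each success multiplies the interval by the card's ease factor, so the total review effort per card grows only logarithmically with the retention period.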
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways do modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry.
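A minimal sketch of the kind of library such an executable book might import. All names here are hypothetical; this is a pure-Python statevector toy of the standard teleportation protocol, not any real package:

```python
# Teleport alpha|0> + beta|1> from qubit 0 to qubit 2 via a Bell pair.
import random

S = 2 ** -0.5
H = [[S, S], [S, -S]]   # Hadamard
X = [[0, 1], [1, 0]]    # Pauli-X (bit flip)
Z = [[1, 0], [0, -1]]   # Pauli-Z (phase flip)

def apply_gate(state, q, m):
    """Apply a 2x2 gate m to qubit q of a 3-qubit statevector."""
    new = state[:]
    for i in range(8):
        if not (i >> q) & 1:
            j = i | (1 << q)
            new[i] = m[0][0] * state[i] + m[0][1] * state[j]
            new[j] = m[1][0] * state[i] + m[1][1] * state[j]
    return new

def cnot(state, c, t):
    """Flip target qubit t wherever control qubit c is 1."""
    return [state[i ^ (1 << t)] if (i >> c) & 1 else state[i]
            for i in range(8)]

def measure(state, q, rng):
    """Measure qubit q; collapse the state and return (outcome, state)."""
    p0 = sum(abs(a) ** 2 for i, a in enumerate(state) if not (i >> q) & 1)
    outcome = 0 if rng.random() < p0 else 1
    norm = (p0 if outcome == 0 else 1 - p0) ** 0.5
    return outcome, [a / norm if (i >> q) & 1 == outcome else 0.0
                     for i, a in enumerate(state)]

def teleport(alpha, beta, rng=random):
    state = [0.0] * 8
    state[0], state[1] = alpha, beta        # qubit 0 holds the message
    state = apply_gate(state, 1, H)         # entangle qubits 1 and 2 ...
    state = cnot(state, 1, 2)               # ... into a Bell pair
    state = cnot(state, 0, 1)               # Bell-measure qubits 0 and 1
    state = apply_gate(state, 0, H)
    m0, state = measure(state, 0, rng)
    m1, state = measure(state, 1, rng)
    if m1: state = apply_gate(state, 2, X)  # classical corrections,
    if m0: state = apply_gate(state, 2, Z)  # conditioned on the outcomes
    base = m0 | (m1 << 1)                   # qubits 0 and 1 are now fixed
    return state[base], state[base | 4]     # qubit 2's amplitudes
```

Whatever the random measurement outcomes, `teleport(0.6, 0.8)` hands back the amplitudes (0.6, 0.8) on qubit 2 — the reader can poke at the corrections, remove one, and watch teleportation fail, which is exactly the kind of immediate experimentation the passage imagines.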
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near-clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically, open-ended exploration can be just as successful, or more so.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How can we develop transformative tools for thought?