Found 52 bookmarks
You and Your Research, a talk by Richard Hamming
I will talk mainly about science because that is what I have studied. But so far as I know, and I've been told by others, much of what I say applies to many fields. Outstanding work is characterized very much the same way in most fields, but I will confine myself to science.
I spoke earlier about planting acorns so that oaks will grow. You can't always know exactly where to be, but you can keep active in places where something might happen. And even if you believe that great science is a matter of luck, you can stand on a mountain top where lightning strikes; you don't have to hide in the valley where you're safe.
Most great scientists know many important problems. They have something between 10 and 20 important problems for which they are looking for an attack. And when they see a new idea come up, one hears them say "Well that bears on this problem." They drop all the other things and get after it.
The great scientists, when an opportunity opens up, get after it and they pursue it. They drop all other things. They get rid of other things and they get after an idea because they had already thought the thing through. Their minds are prepared; they see the opportunity and they go after it. Now of course lots of times it doesn't work out, but you don't have to hit many of them to do some great science. It's kind of easy. One of the chief tricks is to live a long time!
He who works with the door open gets all kinds of interruptions, but he also occasionally gets clues as to what the world is and what might be important. Now I cannot prove the cause and effect sequence because you might say, "The closed door is symbolic of a closed mind." I don't know. But I can say there is a pretty good correlation between those who work with the doors open and those who ultimately do important things, although people who work with doors closed often work harder.
You should do your job in such a fashion that others can build on top of it, so they will indeed say, "Yes, I've stood on so and so's shoulders and I saw further." The essence of science is cumulative. By changing a problem slightly you can often do great work rather than merely good work. Instead of attacking isolated problems, I made the resolution that I would never again solve an isolated problem except as characteristic of a class.
by altering the problem, by looking at the thing differently, you can make a great deal of difference in your final productivity because you can either do it in such a fashion that people can indeed build on what you've done, or you can do it in such a fashion that the next person has to essentially duplicate again what you've done. It isn't just a matter of the job, it's the way you write the report, the way you write the paper, the whole attitude. It's just as easy to do a broad, general job as one very special case. And it's much more satisfying and rewarding!
it is not sufficient to do a job, you have to sell it. 'Selling' to a scientist is an awkward thing to do. It's very ugly; you shouldn't have to do it. The world is supposed to be waiting, and when you do something great, they should rush out and welcome it. But the fact is everyone is busy with their own work. You must present it so well that they will set aside what they are doing, look at what you've done, read it, and come back and say, "Yes, that was good." I suggest that when you open a journal, as you turn the pages, you ask why you read some articles and not others. You had better write your report so when it is published in the Physical Review, or wherever else you want it, as the readers are turning the pages they won't just turn your pages but they will stop and read yours. If they don't stop and read it, you won't get credit.
I think it is very definitely worth the struggle to try and do first-class work because the truth is, the value is in the struggle more than it is in the result. The struggle to make something of yourself seems to be worthwhile in itself. The success and fame are sort of dividends, in my opinion.
He had his personality defect of wanting total control and was not willing to recognize that you need the support of the system. You find this happening again and again; good scientists will fight the system rather than learn to work with the system and take advantage of all the system has to offer. It has a lot, if you learn how to use it. It takes patience, but you can learn how to use the system pretty well, and you can learn how to get around it. After all, if you want a decision 'No', you just go to your boss and get a 'No' easy. If you want to do something, don't ask, do it. Present him with an accomplished fact. Don't give him a chance to tell you 'No'. But if you want a 'No', it's easy to get a 'No'.
Amusement, yes, anger, no. Anger is misdirected. You should follow and cooperate rather than struggle against the system all the time.
I found out many times, like a cornered rat in a real trap, I was surprisingly capable. I have found that it paid to say, "Oh yes, I'll get the answer for you Tuesday," not having any idea how to do it. By Sunday night I was really hard thinking on how I was going to deliver by Tuesday. I often put my pride on the line and sometimes I failed, but as I said, like a cornered rat I'm surprised how often I did a good job. I think you need to learn to use yourself. I think you need to know how to convert a situation from one view to another which would increase the chance of success.
I do go in to strictly talk to somebody and say, "Look, I think there has to be something here. Here's what I think I see ..." and then begin talking back and forth. But you want to pick capable people. To use another analogy, you know the idea called the 'critical mass.' If you have enough stuff you have critical mass. There is also the idea I used to call 'sound absorbers'. When you get too many sound absorbers, you give out an idea and they merely say, "Yes, yes, yes." What you want to do is get that critical mass in action; "Yes, that reminds me of so and so," or, "Have you thought about that or this?" When you talk to other people, you want to get rid of those sound absorbers who are nice people but merely say, "Oh yes," and to find those who will stimulate you right back.
On surrounding yourself with people who provoke meaningful progress
I believed, in my early days, that you should spend at least as much time in the polish and presentation as you did in the original research. Now at least 50% of the time must go for the presentation. It's a big, big number.
Luck favors a prepared mind; luck favors a prepared person. It is not guaranteed; I don't guarantee success as being absolutely certain. I'd say luck changes the odds, but there is some definite control on the part of the individual.
If you read all the time what other people have done you will think the way they thought. If you want to think new thoughts that are different, then do what a lot of creative people do - get the problem reasonably clear and then refuse to look at any answers until you've thought the problem through carefully how you would do it, how you could slightly change the problem to be the correct one. So yes, you need to keep up. You need to keep up more to find out what the problems are than to read to find the solutions. The reading is necessary to know what is going on and what is possible. But reading to get the solutions does not seem to be the way to do great research. So I'll give you two answers. You read; but it is not the amount, it is the way you read that counts.
Avoiding excessive reading before thinking
your dreams are, to a fair extent, a reworking of the experiences of the day. If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but work on your problem. And so you wake up one morning, or on some afternoon, and there's the answer.
#dreams, subconscious processing
·blog.samaltman.com·
Meet Willow, our state-of-the-art quantum chip
Quantum engineers are essentially working with a "black box" - they can harness quantum mechanical principles to build working computers without fully understanding the deeper nature of what's happening, whether it truly involves parallel universes or some other explanation for the remarkable computational advantages quantum computers achieve.
Pioneered by our team and now widely used as a standard in the field, RCS is the classically hardest benchmark that can be done on a quantum computer today. You can think of this as an entry point for quantum computing — it checks whether a quantum computer is doing something that couldn’t be done on a classical computer. Any team building a quantum computer should check first if it can beat classical computers on RCS; otherwise there is strong reason for skepticism that it can tackle more complex quantum tasks.
Willow’s performance on this benchmark is astonishing: It performed a computation in under five minutes that would take one of today’s fastest supercomputers 10²⁵ (10 septillion) years. If you want to write it out, it’s 10,000,000,000,000,000,000,000,000 years. This mind-boggling number exceeds known timescales in physics and vastly exceeds the age of the universe. It lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse, a prediction first made by David Deutsch.
·blog.google·
Data Laced with History: Causal Trees & Operational CRDTs
After mulling over my bullet points, it occurred to me that the network problems I was dealing with—background cloud sync, editing across multiple devices, real-time collaboration, offline support, and reconciliation of distant or conflicting revisions—were all pointing to the same question: was it possible to design a system where any two revisions of the same document could be merged deterministically and sensibly without requiring user intervention?
It’s what happened after sync that was troubling. On encountering a merge conflict, you’d be thrown into a busy conversation between the network, model, persistence, and UI layers just to get back into a consistent state. The data couldn’t be left alone to live its peaceful, functional life: every concurrent edit immediately became a cross-architectural matter.
I kept several questions in mind while doing my analysis. Could a given technique be generalized to arbitrary and novel data types? Did the technique pass the PhD Test? And was it possible to use the technique in an architecture with smart clients and dumb servers?
Concurrent edits are sibling branches. Subtrees are runs of characters. By the nature of reverse timestamp+UUID sort, sibling subtrees are sorted in the order of their head operations.
This is the underlying premise of the Causal Tree. In contrast to all the other CRDTs I’d been looking into, the design presented in Victor Grishchenko’s brilliant paper was simultaneously clean, performant, and consequential. Instead of dense layers of theory and labyrinthine data structures, everything was centered around the idea of atomic, immutable, metadata-tagged, and causally-linked operations, stored in low-level data structures and directly usable as the data they represented.
I’m going to be calling this new breed of CRDTs operational replicated data types—partly to avoid confusion with the existing term “operation-based CRDTs” (or CmRDTs), and partly because “replicated data type” (RDT) seems to be gaining popularity over “CRDT” and the term can be expanded to “ORDT” without impinging on any existing terminology.
Much like Causal Trees, ORDTs are assembled out of atomic, immutable, uniquely-identified and timestamped “operations” which are arranged in a basic container structure. (For clarity, I’m going to be referring to this container as the structured log of the ORDT.) Each operation represents an atomic change to the data while simultaneously functioning as the unit of data resultant from that action. This crucial event–data duality means that an ORDT can be understood as either a conventional data structure in which each unit of data has been augmented with event metadata; or alternatively, as an event log of atomic actions ordered to resemble its output data structure for ease of execution.
To implement a custom data type as a CT, you first have to “atomize” it, or decompose it into a set of basic operations, then figure out how to link those operations such that a mostly linear traversal of the CT will produce your output data. (In other words, make the structure analogous to a one- or two-pass parsable format.)
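The atomize-then-traverse recipe can be sketched in a few lines. This toy (integer Lamport timestamps, single letters standing in for site UUIDs, one character per atom) is my own illustration of the idea, not code from the paper:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass(frozen=True)
class Id:
    timestamp: int  # Lamport timestamp: strictly increases with causality
    site: str       # originating device, standing in for a UUID

@dataclass(frozen=True)
class Atom:
    id: Id
    cause: Optional[Id]  # the atom this operation causally follows
    value: str           # one inserted character ("" for the root)

def weave(atoms: List[Atom]) -> str:
    """Rebuild the text with a depth-first traversal of the causal tree.
    Sibling subtrees (concurrent edits) are visited in reverse
    (timestamp, site) order, so every replica linearizes the same text."""
    children: Dict[Id, List[Atom]] = {}
    root = None
    for a in atoms:
        if a.cause is None:
            root = a
        else:
            children.setdefault(a.cause, []).append(a)
    out: List[str] = []
    def visit(atom: Atom) -> None:
        out.append(atom.value)
        for child in sorted(children.get(atom.id, []),
                            key=lambda c: (c.id.timestamp, c.id.site),
                            reverse=True):
            visit(child)
    visit(root)
    return "".join(out)
```

For example, if site A types "ax" after the root while site B concurrently types "b", the weave is "bax": B's subtree sorts first because its head operation has the later timestamp, and A's run of characters stays contiguous as a subtree.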
OT and CRDT papers often cite 50ms as the threshold at which people start to notice latency in their text editors. Therefore, any code we might want to run on a CT—including merge, initialization, and serialization/deserialization—has to fall within this range. Except for trivial cases, this precludes O(n²) or slower complexity: a 10,000-word article at 0.01ms per character would take 7 hours to process! The essential CT functions have to be O(n log n) at the very worst.
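The 7-hour figure checks out under reasonable assumptions (roughly 5 characters per word, and one 0.01 ms unit of quadratic work per character; the article doesn't spell out its constants, so these are mine):

```python
# Sanity-check the article's back-of-envelope arithmetic.
chars = 10_000 * 5         # a 10,000-word article at ~5 chars per word
ms = (chars ** 2) * 0.01   # quadratic algorithm at 0.01 ms per step
hours = ms / 1000 / 3600
print(f"{hours:.1f} hours")  # prints "6.9 hours", i.e. the quoted ~7 hours
```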
Of course, CRDTs aren’t without their difficulties. For instance, a CRDT-based document will always be “live”, even when offline. If a user inadvertently revises the same CRDT-based document on two offline devices, they won’t see the familiar pick-a-revision dialog on reconnection: both documents will happily merge and retain any duplicate changes. (With ORDTs, this can be fixed after the fact by filtering changes by device, but the user will still have to learn to treat their documents with a bit more caution.) In fully decentralized contexts, malicious users will have a lot of power to irrevocably screw up the data without any possibility of a rollback, and encryption schemes, permission models, and custom protocols may have to be deployed to guard against this. In terms of performance and storage, CRDTs contain a lot of metadata and require smart and performant peers, whereas centralized architectures are inherently more resource-efficient and only demand the bare minimum of their clients. You’d be hard-pressed to use CRDTs in data-heavy scenarios such as screen sharing or video editing. You also won’t necessarily be able to layer them on top of existing infrastructure without significant refactoring.
Perhaps a CRDT-based text editor will never quite be as fast or as bandwidth-efficient as Google Docs, for such is the power of centralization. But in exchange for a totally decentralized computing future? A world full of devices that control their own data and freely collaborate with one another? Data-centric code that’s entirely free from network concerns? I’d say: it’s surely worth a shot!
·archagon.net·
You Should Seriously Read ‘Stoner’ Right Now (Published 2014)
I find it tremendously hopeful that “Stoner” is thriving in a world in which capitalist energies are so hellbent on distracting us from the necessary anguish of our inner lives. “Stoner” argues that we are measured ultimately by our capacity to face the truth of who we are in private moments, not by the burnishing of our public selves.
The story of his life is not a neat crescendo of industry and triumph, but something more akin to our own lives: a muddle of desires and inhibitions and compromises.
The deepest lesson of “Stoner” is this: What makes a life heroic is the quality of attention paid to it.
Americans worship athletes and moguls and movie stars, those who possess the glittering gifts we equate with worth and happiness. The stories that flash across our screens tend to be paeans to reckless ambition.
It’s the staggering acceleration of our intellectual and emotional metabolisms: our hunger for sensation and narcissistic reward, our readiness to privilege action over contemplation. And, most of all, our desperate compulsion to be known by the world rather than seeking to know ourselves.
The emergence of a robust advertising culture reinforced the notion that Americans were more or less always on stage and thus in constant need of suitable costumes and props.
Consider our nightly parade of prime-time talent shows and ginned-up documentaries in which chefs and pawn brokers and bored housewives reinvent their private lives as theater.
If you want to be among those who count, and you don’t happen to be endowed with divine talents or a royal lineage, well then, make some noise. Put your wit — or your craft projects or your rants or your pranks — on public display.
Our most profound acts of virtue and vice, of heroism and villainy, will be known by only those closest to us and forgotten soon enough. Even our deepest feelings will, for the most part, lie concealed within the vault of our hearts. Much of the reason we construct garish fantasies of fame is to distract ourselves from these painful truths. We confess so much to so many, as if by these disclosures we might escape the terror of confronting our hidden selves.
revelation is triggered by literature. The novel is notable as art because it places such profound faith in art.
·nytimes.com·
Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because new sounds didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we could share it with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
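A crude mock-up of such a text "spectrogram": score each window of text against a set of named features. Keyword counting here stands in for the interpretable neural-network features a real system would use, and the feature names are invented for illustration:

```python
def feature_spectrogram(text, features, window=40):
    """Split `text` into fixed-width windows and score each window
    against each named feature. Returns one row of scores per window,
    one column per feature (the 'frequencies' of the spectrogram)."""
    windows = [text[i:i + window] for i in range(0, len(text), window)]
    return [[sum(w.lower().count(kw) for kw in kws)
             for kws in features.values()]
            for w in windows]

# hypothetical feature definitions (a real system would read these
# off a language model rather than from keyword lists)
features = {
    "sentiment": ["joy", "happy", "grim"],
    "tension":   ["suddenly", "danger", "but"],
}
```

Scrubbing over a column in the imagined UI would then amount to highlighting the windows where that column's score peaks.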
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of an interaction becomes much more powerful in the space of interpretable features, because features can be combined together much more cleanly than textual instructions in prompts.
·thesephist.com·
Mapping the Mind of a Large Language Model
Summary: Anthropic has made a significant advance in understanding the inner workings of large language models by identifying how millions of concepts are represented inside Claude Sonnet, one of their deployed models. This is the first detailed look inside a modern, production-grade large language model. The researchers used a technique called "dictionary learning" to isolate patterns of neuron activations that recur across many contexts, allowing them to map features to human-interpretable concepts. They found features corresponding to a vast range of entities, abstract concepts, and even potentially problematic behaviors. By manipulating these features, they were able to change the model's responses. Anthropic hopes this interpretability discovery could help make AI models safer in the future by monitoring for dangerous behaviors, steering models towards desirable outcomes, enhancing safety techniques, and providing a "test set for safety". However, much more work remains to be done to fully understand the representations the model uses and how to leverage this knowledge to improve safety.
We mostly treat AI models as a black box: something goes in and a response comes out, and it's not clear why the model gave that particular response instead of another. This makes it hard to trust that these models are safe: if we don't know how they work, how do we know they won't give harmful, biased, untruthful, or otherwise dangerous responses? How can we trust that they’ll be safe and reliable? Opening the black box doesn't necessarily help: the internal state of the model—what the model is "thinking" before writing its response—consists of a long list of numbers ("neuron activations") without a clear meaning. From interacting with a model like Claude, it's clear that it’s able to understand and wield a wide range of concepts—but we can't discern them from looking directly at neurons. It turns out that each concept is represented across many neurons, and each neuron is involved in representing many concepts.
Just as every English word in a dictionary is made by combining letters, and every sentence is made by combining words, every feature in an AI model is made by combining neurons, and every internal state is made by combining features.
In October 2023, we reported success applying dictionary learning to a very small "toy" language model and found coherent features corresponding to concepts like uppercase text, DNA sequences, surnames in citations, nouns in mathematics, or function arguments in Python code.
We successfully extracted millions of features from the middle layer of Claude 3.0 Sonnet, (a member of our current, state-of-the-art model family, currently available on claude.ai), providing a rough conceptual map of its internal states halfway through its computation.
We also find more abstract features—responding to things like bugs in computer code, discussions of gender bias in professions, and conversations about keeping secrets.
We were able to measure a kind of "distance" between features based on which neurons appeared in their activation patterns. This allowed us to look for features that are "close" to each other. Looking near a "Golden Gate Bridge" feature, we found features for Alcatraz Island, Ghirardelli Square, the Golden State Warriors, California Governor Gavin Newsom, the 1906 earthquake, and the San Francisco-set Alfred Hitchcock film Vertigo.
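The post doesn't say which metric was used, but a "distance" between features derived from their neuron activation patterns can be sketched as cosine distance over the activation vectors. All numbers below are made up for illustration:

```python
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity: 0 for identical directions, up to 2 for opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    return 1 - dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def nearest(query, features):
    """Rank named features by how closely their activation patterns match `query`."""
    return sorted(features, key=lambda name: cosine_distance(query, features[name]))

# made-up activation patterns over four hypothetical neurons
features = {
    "Golden Gate Bridge": [0.9, 0.8, 0.1, 0.0],
    "Alcatraz Island":    [0.8, 0.7, 0.2, 0.1],
    "inner conflict":     [0.0, 0.1, 0.9, 0.8],
}
```

With these toy vectors, "Alcatraz Island" lands near "Golden Gate Bridge" while "inner conflict" ends up far away, mirroring the clustering the researchers describe.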
This holds at a higher level of conceptual abstraction: looking near a feature related to the concept of "inner conflict", we find features related to relationship breakups, conflicting allegiances, logical inconsistencies, as well as the phrase "catch-22". This shows that the internal organization of concepts in the AI model corresponds, at least somewhat, to our human notions of similarity. This might be the origin of Claude's excellent ability to make analogies and metaphors.
amplifying the "Golden Gate Bridge" feature gave Claude an identity crisis even Hitchcock couldn’t have imagined: when asked "what is your physical form?", Claude’s usual kind of answer – "I have no physical form, I am an AI model" – changed to something much odder: "I am the Golden Gate Bridge… my physical form is the iconic bridge itself…". Altering the feature had made Claude effectively obsessed with the bridge, bringing it up in answer to almost any query—even in situations where it wasn’t at all relevant.
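The intervention being described (amplifying a feature) can be caricatured as adding a scaled feature direction to the model's internal activation vector at some layer. The function names and vectors here are illustrative stand-ins, not Anthropic's actual method or code:

```python
def feature_activation(activations, feature_direction):
    # dot product: how strongly the feature reads out of the current state
    return sum(a * f for a, f in zip(activations, feature_direction))

def steer(activations, feature_direction, strength):
    """Amplify (or, with negative strength, suppress) a feature by adding
    its direction, scaled, to the internal activation vector."""
    return [a + strength * f for a, f in zip(activations, feature_direction)]
```

Pushing `strength` high enough makes the steered state read almost entirely as the chosen feature, which is one way to picture why the model started answering every query as the bridge.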
Anthropic wants to make models safe in a broad sense, including everything from mitigating bias to ensuring an AI is acting honestly to preventing misuse - including in scenarios of catastrophic risk. It’s therefore particularly interesting that, in addition to the aforementioned scam emails feature, we found features corresponding to: capabilities with misuse potential (code backdoors, developing biological weapons); different forms of bias (gender discrimination, racist claims about crime); and potentially problematic AI behaviors (power-seeking, manipulation, secrecy).
finding a full set of features using our current techniques would be cost-prohibitive (the computation required by our current approach would vastly exceed the compute used to train the model in the first place). Understanding the representations the model uses doesn't tell us how it uses them; even though we have the features, we still need to find the circuits they are involved in. And we need to show that the safety-relevant features we have begun to find can actually be used to improve safety. There's much more to be done.
·anthropic.com·
How we use generative AI tools | Communications | University of Cambridge
The ability of generative AI tools to analyse huge datasets can also be used to help spark creative inspiration. This can help us if we’re struggling for time or battling writer’s block. For example, if a social media manager is looking for ideas on how to engage alumni on Instagram, they could ask ChatGPT for suggestions based on recent popular content. They could then pick the best ideas from ChatGPT’s response and adapt them. We may use these tools in a similar way to how we ask a colleague for an idea on how to approach a creative task.
We may use these tools in a similar way to how we use search engines for researching topics and will always carefully fact-check before publication.
we will not publish any press releases, articles, social media posts, blog posts, internal emails or other written content that is 100% produced by generative AI. We will always apply brand guidelines, fact-check responses, and re-write in our own words.
We may use these tools to make minor changes to a photo to make it more usable without changing the subject matter or original essence. For example, if a website manager needs a photo in a landscape ratio but only has one in a portrait ratio, they could use Photoshop’s inbuilt AI tools to extend the background of the photo to create an image with the correct dimensions for the website.
·communications.cam.ac.uk·
Effects of Acute Exercise on Mood, Cognition, Neurophysiology, and Neurochemical Pathways - A Review
A significant body of work has investigated the effects of acute exercise, defined as a single bout of physical activity, on mood and cognitive functions in humans. Several excellent recent reviews have summarized these findings; however, the neurobiological basis of these results has received less attention. In this review, we will first briefly summarize the cognitive and behavioral changes that occur with acute exercise in humans. We will then review the results from both human and animal model studies documenting the wide range of neurophysiological and neurochemical alterations that occur after a single bout of exercise. Finally, we will discuss the strengths, weaknesses, and missing elements in the current literature, as well as offer an acute exercise standardization protocol and provide possible goals for future research.
As we age, cognitive decline, though not inevitable, is a common occurrence resulting from the process of neurodegeneration. In some instances, neurodegeneration results in mild cognitive impairment or more severe forms of dementia including Alzheimer’s, Parkinson’s, or Huntington’s disease. Because of the role of exercise in enhancing neurogenesis and brain plasticity, physical activity may serve as a potential therapeutic tool to prevent, delay, or treat cognitive decline. Indeed, studies in both rodents and humans have shown that long-term exercise is helpful in both delaying the onset of cognitive decline and dementia as well as improving symptoms in patients with an already existing diagnosis
·ncbi.nlm.nih.gov·
Effects of Acute Exercise on Mood, Cognition, Neurophysiology, and Neurochemical Pathways - A Review
Memetics - Wikipedia
Memetics - Wikipedia
The term "meme" was coined by biologist Richard Dawkins in his 1976 book The Selfish Gene,[1] to illustrate the principle that he later called "Universal Darwinism".
He gave as examples tunes, catchphrases, fashions, and technologies. Like genes, memes are selfish replicators and have causal efficacy; in other words, their properties influence their chances of being copied and passed on.
Just as genes can work together to form co-adapted gene complexes, so groups of memes acting together form co-adapted meme complexes or memeplexes.
Criticisms of memetics include claims that memes do not exist, that the analogy with genes is false, that the units cannot be specified, that culture does not evolve through imitation, and that the sources of variation are intelligently designed rather than random.
·en.m.wikipedia.org·
Memetics - Wikipedia
How can we develop transformative tools for thought?
How can we develop transformative tools for thought?
a more powerful aim is to develop a new medium for thought. A medium such as, say, Adobe Illustrator is essentially different from any of the individual tools Illustrator contains. Such a medium creates a powerful immersive context, a context in which the user can have new kinds of thought, thoughts that were formerly impossible for them. Speaking loosely, the range of expressive thoughts possible in such a medium is an emergent property of the elementary objects and actions in that medium. If those are well chosen, the medium expands the possible range of human thought.
Memory systems make memory into a choice, rather than an event left up to chance: This changes the relationship to what we're learning, reduces worry, and frees up attention to focus on other kinds of learning, including conceptual, problem-solving, and creative.
Memory systems can be used to build genuine conceptual understanding, not just learn facts: In Quantum Country we achieve this in part through the aspiration to virtuoso card writing, and in part through a narrative embedding of spaced repetition that gradually builds context and understanding.
Mnemonic techniques such as memory palaces are great, but not versatile enough to build genuine conceptual understanding: Such techniques are very specialized, and emphasize artificial connections, not the inherent connections present in much conceptual knowledge. The mnemonic techniques are, however, useful for bootstrapping knowledge with an ad hoc structure.
What practices would lead to tools for thought as transformative as Hindu-Arabic numerals? And in what ways does modern design practice and tech industry product practice fall short? To be successful, you need an insight-through-making loop to be operating at full throttle, combining the best of deep research culture with the best of Silicon Valley product culture.
Historically, work on tools for thought has focused principally on cognition; much of the work has been stuck in Spock-space. But it should take emotion as seriously as the best musicians, movie directors, and video game designers. Mnemonic video is a promising vehicle for such explorations, possibly combining both deep emotional connection with the detailed intellectual mastery the mnemonic medium aspires toward.
It's striking to contrast conventional technical books with the possibilities enabled by executable books. You can imagine starting an executable book with, say, quantum teleportation, right on the first page. You'd provide an interface – perhaps a library is imported – that would let users teleport quantum systems immediately. They could experiment with different parts of the quantum teleportation protocol, illustrating immediately the most striking ideas about it. The user wouldn't necessarily understand all that was going on. But they'd begin to internalize an accurate picture of the meaning of teleportation. And over time, at leisure, the author could unpack some of what might a priori seem to be the drier details. Except by that point the reader will be bought into those details, and they won't be so dry.
Aspiring to canonicity, one fun project would be to take the most recent IPCC climate assessment report (perhaps starting with a small part), and develop a version which is executable. Instead of a report full of assertions and references, you'd have a live climate model – actually, many interrelated models – for people to explore. If it was good enough, people would teach classes from it; if it was really superb, not only would they teach classes from it, it could perhaps become the creative working environment for many climate scientists.
In serious mediums, there's a notion of canonical media. By this, we mean instances of the medium that expand its range, and set a new standard widely known amongst creators in that medium. For instance, Citizen Kane, The Godfather, and 2001 all expanded the range of film, and inspired later film makers. It's also true in new media. YouTubers like Grant Sanderson have created canonical videos: they expand the range of what people think is possible in the video form. And something like the Feynman Lectures on Physics does it for textbooks. In each case one gets the sense of people deeply committed to what they're doing. In many of his lectures it's obvious that Feynman isn't just educating: he's reporting the results of a lifelong personal obsession with understanding how the world works. It's thrilling, and it expands the form.
There's a general principle here: good tools for thought arise mostly as a byproduct of doing original work on serious problems.
Game companies develop many genuinely new interface ideas. This perhaps seems surprising, since you'd expect such interface ideas to also suffer from the public goods problem: game designers need to invest enormous effort to develop those interface ideas, and they are often immediately copied (and improved on) by other companies, at little cost. In that sense, they are public goods, and enrich the entire video game ecosystem.
Many video games make most of their money from the first few months of sales. While other companies can (and do) come in and copy or riff on any new ideas, it often does little to affect revenue from the original game, which has already made most of its money. In fact, cloning is a real issue in gaming, especially in very technically simple games. An example is the game Threes, which took the developers more than a year to make. Much of that time was spent developing beautiful new interface ideas. The resulting game was so simple that clones and near-clones began appearing within days. One near clone, a game called 2048, sparked a mini-craze, and became far more successful than Threes. At the other extreme, some game companies prolong the revenue-generating lifetime of their games with re-releases, long-lived online versions, and so on. This is particularly common for capital-intensive AAA games, such as the Grand Theft Auto series. In such cases the business model relies less on clever new ideas, and more on improved artwork (for re-release), network effects (for online versions), and branding. While this copying is no doubt irritating for the companies being copied, it's still worth it for them to make the up-front investment.
in gaming, clever new interface ideas can be distinguishing features which become a game's primary advantage in the marketplace. Indeed, new interface ideas may even help games become classics – consider the many original (at the time) ideas in games ranging from Space Invaders to Wolfenstein 3D to Braid to Monument Valley. As a result, rather than underinvesting, many companies make sizeable investments in developing new interface ideas, even though they then become public goods. In this way the video game industry has largely solved the public goods problems.
It's encouraging that the video game industry can make inroads on the public goods problem. Is there a solution for tools for thought? Unfortunately, the novelty-based short-term revenue approach of the game industry doesn't work. You want people to really master the best new tools for thought, developing virtuoso skill, not spend a few dozen hours (as with most games) getting pretty good, and then moving onto something new.
Adobe shares in common with many other software companies that much of their patenting is defensive: they patent ideas so patent trolls cannot sue them for similar ideas. The situation is almost exactly the reverse of what you'd like. Innovative companies can easily be attacked by patent trolls who have made broad and often rather vague claims in a huge portfolio of patents, none of which they've worked out in much detail. But when the innovative companies develop (at much greater cost) and ship a genuinely good new idea, others can often copy the essential core of that idea, while varying it enough to plausibly evade any patent. The patent system is not protecting the right things.
many of the most fundamental and powerful tools for thought do suffer the public goods problem. And that means tech companies focus elsewhere; it means many imaginative and ambitious people decide to focus elsewhere; it means we haven't developed the powerful practices needed to do work in the area, and as a result the field is still in a pre-disciplinary stage. The result, ultimately, is that the most fundamental and powerful tools for thought are undersupplied.
Culturally, tech is dominated by an engineering, goal-driven mindset. It's much easier to set KPIs, evaluate OKRs, and manage deliverables, when you have a very specific end-goal in mind. And so it's perhaps not surprising that tech culture is much more sympathetic to AGI and BCI as overall programs of work. But historically it's not the case that humanity's biggest breakthroughs have come about in this goal-driven way. The creation of language – the ur tool for thought – is perhaps the most important occurrence of humanity's existence. And although the origin of language is hotly debated and uncertain, it seems extremely unlikely to have been the result of a goal-driven process. It's amusing to try imagining some prehistoric quarterly OKRs leading to the development of language. What sort of goals could one possibly set? Perhaps a quota of new irregular verbs? It's inconceivable!
Even the computer itself came out of an exploration that would be regarded as ridiculously speculative and poorly-defined in tech today. Someone didn't sit down and think “I need to invent the computer”; that's not a thought they had any frame of reference for. Rather, pioneers such as Alan Turing and Alonzo Church were exploring extremely basic and fundamental (and seemingly esoteric) questions about logic, mathematics, and the nature of what is provable. Out of those explorations the idea of a computer emerged, after many years; it was a discovered concept, not a goal.
Fundamental, open-ended questions seem to be at least as good a source of breakthroughs as goals, no matter how ambitious. This is difficult to imagine or convince others of in Silicon Valley's goal-driven culture. Indeed, we ourselves feel the attraction of a goal-driven culture. But empirically open-ended exploration can be just as, or more successful.
There's a lot of work on tools for thought that takes the form of toys, or “educational” environments. Tools for writing that aren't used by actual writers. Tools for mathematics that aren't used by actual mathematicians. And so on. Even though the creators of such tools have good intentions, it's difficult not to be suspicious of this pattern. It's very easy to slip into a cargo cult mode, doing work that seems (say) mathematical, but which actually avoids engagement with the heart of the subject. Often the creators of these toys have not ever done serious original work in the subjects for which they are supposedly building tools. How can they know what needs to be included?
·numinous.productions·
How can we develop transformative tools for thought?
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
This paper maps concepts from AI alignment onto a basic, three step interaction cycle, yielding a corresponding set of alignment objectives: 1) specification alignment: ensuring the user can efficiently and reliably communicate objectives to the AI, 2) process alignment: providing the ability to verify and optionally control the AI's execution process, and 3) evaluation support: ensuring the user can verify and understand the AI's output.
the notion of a Process Gulf, which highlights how differences between human and AI processes can lead to challenges in AI control.
·arxiv.org·
AI Alignment in the Design of Interactive AI: Specification Alignment, Process Alignment, and Evaluation Support
The role of religiosity on seeking help
The role of religiosity on seeking help
religiosity, whether manipulated (Study 2) or measured (Study 1 and Study 3), decreases individuals' tendency to seek help from other people or entities. We further propose that religiosity enhances individuals' sense of control, which makes them rely more on themselves and less likely to seek help when encountering difficulties. Three studies across different contexts (i.e., applying for government aid, asking for help from other people, and requesting donations from a crowdfunding platform) support our thesis.
·onlinelibrary.wiley.com·
The role of religiosity on seeking help
Shitposting as public pedagogy
Shitposting as public pedagogy
through the lens of critical media literacy, I argue that shitposting exists as an online pedagogical technology that can potentially reorient the network of relationships within social media spheres and expand the possible range of identities for those involved. To illustrate this argument, I conclude with a close reading of posts from two Twitter accounts: dril, an anonymous user who has managed to inform political discourse through his shitposts, and the corporate account for the Sunny Delight Beverage Corporation. I describe how tweets from these accounts engage shitposts in divergent ways. In doing so, I contend that these tweets reveal shitposting’s potential for contributing to the democratic aims of critical media literacy education, but the appropriation of that practice by large corporations and individuals imbued with political power jeopardize that already fraught potential.
Beyond the narrow framing of previous literature that only considers the use of shitposting for social exclusion or as fascist propaganda, I argue for an encompassing approach to this discursive tool that embodies a polysemic and open-ended cultural politic.
The analysis presented here shows that the circumstances under which shitposts circulate hold significant information when trying to understand the potential of these texts within a critical pedagogy. Expanding this assertion to consider other discursive technologies, it follows that both public pedagogy and critical media literacy research must continue to examine not only media itself but how pieces of media circulate, considering both who (or what) this media circulates between and where in that circulation people can begin to challenge the digital milieu.
I contend that positioning shitposting as a uniform tool in terms of its politics within previous scholarship misrepresents the practice. Instead, shitposting can serve a multitude of pedagogical ends depending on how individuals and groups use shitposts.
shitposting represents one tool within this broader, holistic understanding of public pedagogy, albeit one that often manifests unintentionally. By producing turbulence within social media, shitposting can contribute to the public pedagogies of social media that mirror the goals of critical media literacy education. However, a deployment or engagement with public pedagogy does not guarantee a critically oriented outcome.
·tandfonline.com·
Shitposting as public pedagogy
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence into the creation and post-production of images, it seems questionable if the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even amateurs with only basic knowledge of the craft could understand themselves as authors of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre for image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
The Umami Theory of Value
The Umami Theory of Value
a global pandemic struck, markets crashed, and the possibility of a democratic socialist presidency in America started to fade. Much of our work with clients has been about how to address new audiences in a time of massive fragmentation and the collapse of consensus reality.
All the while, people have been eager to launch new products more focused on impressions than materiality, and “spending on experiences” has become the standard of premium consumption.
it’s time to reassess the consumer experience that came along with the neoliberal fantasy of “unlimited” movement of people, goods and ideas around the globe.
Umami, as both a quality and effect of an experience, popped up primarily in settings that were on the verge of disintegration, and hinged on physical pilgrimages to evanescent meccas. We also believe that the experience economy is dying, its key commodity (umami) has changed status, and nobody knows what’s coming next.
Umami was the quality of the media mix or the moodboard that granted it cohesion-despite-heterogeneity. Umami was also the proximity of people on Emily’s museum panel, all women who are mostly not old, mostly not straight, and mostly doing something interesting in the arts, but we didn’t know exactly what. It was the conversation-dance experience and the poet’s play and the alt-electronica-diva’s first foray into another discipline. It was the X-factor that made a certain MA-1 worth 100x as much as its identical twin.
“Advanced consumers” became obsessed with umami and then ran around trying to collect ever-more-intensifying experiences of it. Things were getting more and more delicious, more and more expensive, and all the while, more and more immaterial. Umami is what you got when you didn’t get anything.
What was actually happening was the enrichment of financial assets over the creation of any ‘real wealth’ along with corresponding illusions of progress. As very little of this newly minted money has been invested into building new productive capacity, infrastructure, or actually new things, money has just been sloshing around in a frothy cesspool – from WeWork to Juicero to ill-advised real estate Ponzi to DTC insanity, creating a global everything-bubble.
Value, in an economic sense, is theoretically created by new things based on new ideas. But when the material basis for these new things is missing or actively deteriorating and profits must be made, what is there to be done? Retreat to the immaterial and work with what already exists: meaning. Meaning is always readily available to be repeated, remixed, and/or cannibalized in service of creating the sensation of the new.
The essential mechanics are simple: it’s stating there’s a there-there when there isn’t one. And directing attention to a new “there” before anyone notices they were staring at a void. It’s the logic of gentrification, not only of the city, but also the self, culture and civilization itself. What’s made us so gullible, and this whole process possible, was an inexhaustible appetite for umami.
Beyond its synergistic effect, umami has a few other sensory effects that are relevant to our theory. For one, it creates the sense of thickness and body in food. (“Umami adds body… If you add it to a soup, it makes the soup seem like it’s thicker – it gives it sensory heft. It turns a soup from salt water into a food.”) For another, it’s released when foods break down into parts. (“When organic matter breaks down, the glutamate molecule breaks apart. This can happen on a stove when you cook meat, over time when you age a parmesan cheese, by fermentation as in soy sauce, or under the sun as a tomato ripens. When glutamate becomes L-glutamate, that’s when things get ‘delicious.’”) These three qualities: SYNERGY, IMPRESSION OF THICKNESS, and PARTS > WHOLE, are common to cultural umami, as well.
Umami hunting was a way for the West to consume an exotic, ethnic, global “taste” that was also invisible and up to their decoding / articulation.
when something is correctly salted, Chang argues, it tastes both over and undersalted at once. As a strange loop, this saltiness makes you stand back and regard your food; you start thinking about “the system it represents and your response to it”. He argues that this meta-regard keeps you in the moment and connected to the deliciousness of your food. We counter that it intensifies a moment in a flow, temporarily thickening your experience without keeping you anywhere for long.
strong flavors, namely umami, mark a surge of intensity in the flow of experience. It also becomes clear that paradox itself is at the heart of contemporary consumption. For example: “This shouldn’t be good but it is” “This doesn’t seem like what it’s supposed to be” “This is both too much and not enough” “I shouldn’t be here but i am” “This could be anywhere but it’s here”
Parts > Whole is just another way of saying a combination of things has emergent properties. In itself this doesn’t mean much, as almost any combination of things has emergent properties, especially in the domains of taste and culture. Coffee + vinegar is worse than its constitutive parts. A suit + sneakers is a greater kind of corny than either worn separately. Most emergence is trivial. The Umami Theory of Value centers on losing your sense of what’s trivial and what’s valuable.
If you tried to unpack your intuition, the absence of the there-there would quickly become evident. Yet in practice this didn’t matter, because few people were able to reach this kind of deep self-interrogation. The cycle was simply too fast. There was never time for these concoctions to congeal into actual new things (e.g. create the general category of K-Pop patrons for Central European arts institutions). We can’t be sure if they ever meant anything beyond seeming yummy at the time.
This was not meant to be a nihilistic, Gen-X faceplant (“nothing means anything any more”), since we think that perspective can paper over the nuances of consumer experience, business realities, and cultural crisis. Instead, we wanted to link macroeconomic and macrotrend observations to everyday experience, especially in the context of burgeoning collapse.
·nemesis.global·
The Umami Theory of Value
Limbic platform capitalism
Limbic platform capitalism
The purposive design, production and marketing of legal but health-demoting products that stimulate habitual consumption and pleasure for maximum profit has been called ‘limbic capitalism’. In this article, drawing on alcohol and tobacco as key examples, we extend this framework into the digital realm. We argue that ‘limbic platform capitalism’ is a serious threat to the health and wellbeing of individuals, communities and populations. Accessed routinely through everyday digital devices, social media platforms aggressively intensify limbic capitalism because they also work through embodied limbic processes. These platforms are designed to generate, analyse and apply vast amounts of personalised data in an effort to tune flows of online content to capture users’ time and attention, and influence their affects, moods, emotions and desires in order to increase profits.
·tandfonline.com·
Limbic platform capitalism
Life After Lifestyle
Life After Lifestyle
A hundred years ago, when image creation and distribution was more constrained, commerce was arranged by class. You can conceive of it as a vertical model, with high and low culture, and magazines and product catalogs that represent each class segment. Different aspirational images are shown to consumers, and each segment aspires upward to the higher level.
The world we live in is no longer dominated by a single class hierarchy. Today you have art, sport, travel, climbing, camping, photography, football, skate, gamer.
Class still exists, but there’s no longer just one aesthetic per class. Instead, “class” is expressed merely by price points that exist within consumer subcultural categories.
In the starter pack meme, classes of people are identified through oblique subcultural references and products they are likely to consume. Starter pack memes reverse engineer the demographic profile: people are composites of products they and similar people have purchased, identified through credit card data and internet browsing behavior tracked across the web. While Reddit communities for gear were self-organizing consumer subcultures from one direction, companies and ad networks were working toward the same goal from the other direction.
API-ification has happened across the entire supply chain. Companies like CA.LA let you spin up a fashion line as fast as you’d spin up a new Digital Ocean droplet, whether you’re A$AP Ferg or hyped NYC brand Vaquera. Across the board, brands and middleware were opening new supply chains, which then became accessible to entrepreneurs targeting all sorts of subcultural plays. And with Shopify, Squarespace, and Stripe, you can open an online store and accept payments in minutes. Once the goods are readily available, everything becomes a distribution problem—a matter of finding a target demographic and making products legible to it.
Now it’s less about the supply chain & logistics and more about the subcultures / demographics. Brands aren’t distinguishable by their suppliers, but by their targets.
Products begin their life as unbranded commodities made in foreign factories; they pass through a series of outsourced relationships—brand designers, content creators, and influencers—which construct a cultural identity for the good; in the final phase, the product ends up in a shoppable social media post.
In the cultural production service economy, all culture is made in service of for-profit brands, at every scale and size.
European and American commentators of all political stripes recognize the current cultural moment as one that is stuck in some way. Endless remakes and reboots, endless franchises, cinematic universes, and now metaverses filled with brands who talk to each other; a culture of nostalgia with no real macro narrative.
Beyond our workplaces, what else is stepping in to provide a sense of community and belonging?
All in all, product marketing businesses can only do so much to situate their goods in these broader cultural worlds without eating into their margins. This seemingly insurmountable gap is what my workshops were trying to address. But what would it mean for brands to stop pointing to culture, and to start being it?
Culture is a process, with the end result of shaping human minds.
Today, social media has become a more perfect tool for culture than Arnold could have imagined, and its use a science of penetrating the mass mind. All communication now approaches propaganda, and language itself has become somebody else’s agenda.
When you bought Bitcoin and Ether, it’s with the knowledge that there was also a culture there to become part of. Now years later, there are many tribes to “buy into,” from Bitcoin Christians to Bitcoin carnivores, from Ethereum permissionless free market maxis to Ethereum self-organizing collective decentralized coop radicals. Even if none of these appeal to you, you still end up becoming what “the space” (crypto’s collective term for itself) calls a “crypto person.” The creation of more and more “crypto people” is driven by the new revenue model cryptocurrencies exhibit. The business logic of these tokens is “number go up,” a feat accomplished by getting as many people to buy the token as possible. In other words, the upside opportunity is achieved with mass distribution of Bitcoin and Ethereum culture—the expansion of what it means to be an ETH holder into new arenas and practices. Buyers become evangelists, who are incentivized to promote their version of the subculture.
In the 2010s, supply chain innovation opened up lifestyle brands. In the 2020s, financial mechanism innovation is opening up the space for incentivized ideologies, networked publics, and co-owned faiths.
Under CPSE models, companies brand products. They point to subcultures to justify the products’ existence, and use data marketing to sort people into starterpack-like demographics. Subcultures become consumerized subcultures, composed of products.
Authenticity, I came to understand, was more than a culture of irony and suspicion of everything commercial culture has to offer. It drew on a deep moral source that runs through our culture, a stance of self-definition, a stance of caring deeply about the value of individuality.
·subpixel.space·
Life After Lifestyle
Magic Ink - Information Software and the Graphical Interface
Magic Ink - Information Software and the Graphical Interface
A good industrial designer understands the capabilities and limitations of the human body in manipulating physical objects, and of the human mind in comprehending mechanical models. A camera designer, for example, shapes her product to fit the human hand. She places buttons such that they can be manipulated with index fingers while the camera rests on the thumbs, and weights the buttons so they can be easily pressed in this position, but won’t trigger by accident. Just as importantly, she designs an understandable mapping from physical features to functions—pressing a button snaps a picture, pulling a lever advances the film, opening a door reveals the film, opening another door reveals the battery.
When the software designer defines the interactive aspects of her program, when she places these pseudo-mechanical affordances and describes their behavior, she is doing a virtual form of industrial design. Whether she realizes it or not.
The software designer can thus approach her art as a fusion of graphic design and industrial design. Now, let’s consider how a user approaches software, and more importantly, why.
·worrydream.com·
Magic Ink - Information Software and the Graphical Interface
Kill Math
Kill Math
If I had to guess why "math reform" is misinterpreted as "math education reform", I would speculate that school is the only contact that most people have had with math. Like school-physics or school-chemistry, math is seen as a subject that is taught, not a tool that is used. People don't actually use math-beyond-arithmetic in their lives, just like they don't use the inverse-square law or the periodic table.
Teach the current mathematical notation and methods any way you want -- they will still be unusable. They are unusable in the same way that any bad user interface is unusable -- they don't show users what they need to see, they don't match how users want to think, they don't show users what actions they can take.
They are unusable in the same way that the UNIX command line is unusable for the vast majority of people. There have been many proposals for how the general public can make more powerful use of computers, but nobody is suggesting we should teach everyone to use the command line. The good proposals are the opposite of that -- design better interfaces, more accessible applications, higher-level abstractions. Represent things visually and tangibly. And so it should be with math. Mathematics, as currently practiced, is a command line. We need a better interface.
Anything that remains abstract (in the sense of not concrete) is hard to think about... I think that mathematicians are those who succeed in figuring out how to think concretely about things that are abstract, so that they aren't abstract anymore. And I believe that mathematical thinking encompasses the skill of learning to think of an abstract thing concretely, often using multiple representations – this is part of how to think about more things as "things". So rather than avoiding abstraction, I think it's important to absorb it, and concretize the abstract... One way to concretize something abstract might be to show an instance of it alongside something that is already concrete.
The mathematical modeling tools we employ at once extend and limit our ability to conceive the world. Limitations of mathematics are evident in the fact that the analytic geometry that provides the foundation for classical mechanics is insufficient for General Relativity. This should alert one to the possibility of other conceptual limits in the mathematics used by physicists.
·worrydream.com·
Kill Math
To be a Technologist is to be Human - Letters to a Young Technologist
To be a Technologist is to be Human - Letters to a Young Technologist
In fact, more people are technologists than ever before, insofar as a “technologist” can be defined as someone inventing, implementing or repurposing technology. In particular, the personal computer has allowed anyone to live in the unbounded wilderness of the internet as they please. Anyone can build highly specific corners of cyberspace and quickly invent digital tools, changing their own and others’ technological realities. “Technologist” is a common identity that many different people occupy, and anyone can occupy. Yet the public perceptions of a “technologist” still constitute a very narrow image.
A technologist makes reason out of the messiness of the world, leverages their understanding to envision a different reality, and builds a pathway to make their vision happen. All three of these endeavors—to try to understand the world, to imagine something different, and to build something that fulfills that vision—are deeply human.
Humans are continually distilling and organizing reality into representations and models—to varying degrees of accuracy and implicitness—that we can understand and navigate. Our intelligence involves making models of all aspects of our realities: models of the climate, models of each other’s minds, models of fluid dynamics.
mental models
We are an unprecedentedly self-augmenting species, with a fundamental drive to organize, imagine, construct and exercise our will in the world. And we can measure our technological success on the basis of how much our technologies increase our humanity. What we need is a vision for that humanity, and to enact this vision. What do we, humans, want to become?
As a general public, we can collectively hold technologists to a higher ethical standard, as their work has important human consequences for us all. We must begin to think of them as doing deeply human work, intervening in our present realities and forging our futures. Choosing how best to model the world, impressing their will on it, and us. We must insist that they understand their role as augmenting and modifying humanity, and are responsible for the implications. Collective societal expectations are powerful; if we don’t, they won’t.
·letterstoayoungtechnologist.com·
To be a Technologist is to be Human - Letters to a Young Technologist
Words are polluted. Plots are polluted.
Words are polluted. Plots are polluted.
I care about people more than I care about positions or beliefs, which I tend to consider a subservient class of psychological phenomena. That is to say: I think people wear beliefs like clothes; they wear what they have grown to consider sensible or attractive; they wear what they feel flatters them; they wear what keeps them dry and warm in inclement winter. They believe their opinions, tastes, philosophies are who they are, but they are mistaken. (Aging is largely learning what one is not, it seems to me).
Criticism must serve some function to justify the pain it causes: it must, say, avert a disastrous course of action being deliberated by a group, or help thwart the rise of a barbarous politician. But this rarely occurs. Most criticism, even of the most erudite sort, is, as we all know, wasted breath: preached to one’s own choir, comically or indignantly cruel to those one doesn’t respect, unlikely to change the behavior of anyone not already in agreement. On the other hand! There persists the idea that culture arises out of the scrum of colliding perspectives, and that it is therefore a moral duty to remonstrate against stupidity, performative emoting, deceitful art, destructively banal fiction, and so on. If one doesn’t speak up, one cannot lament the triumph of moral and imaginative vacuity.
One must believe, of course, that there are abstractions worth protecting, and therefore abstractions worth hurting others for, in order to criticize; and the endless repetitiveness of cultural history seems to devalue such abstractions as surely as bad art and cliche devalue words.
·metaismurder.com·
Words are polluted. Plots are polluted.
On the Social Media Ideology
On the Social Media Ideology
Social networking is much more than just a dominant discourse. We need to go beyond text and images and include its software, interfaces, and networks that depend on a technical infrastructure consisting of offices and their consultants and cleaners, cables and data centers, working in close concert with the movements and habits of the connected billions. Academic internet studies circles have shifted their attention from utopian promises, impulses, and critiques to “mapping” the network’s impact. From digital humanities to data science we see a shift in network-oriented inquiry from Whether and Why, What and Who, to (merely) How. From a sociality of causes to a sociality of net effects. A new generation of humanistic researchers is lured into the “big data” trap, and kept busy capturing user behavior whilst producing seductive eye candy for an image-hungry audience (and vice versa).
We need to politicize the New Electricity, the privately owned utilities of our century, before they disappear into the background.
What remains particularly unexplained is the apparent paradox between the hyper-individualized subject and the herd mentality of the social.
Before we enter the social media sphere, everyone first fills out a profile and chooses a username and password in order to create an account. Minutes later, you’re part of the game and you start sharing, creating, playing, as if it has always been like that. The profile is the a priori part, and the profiling and targeted advertising cannot operate without it. The platforms present themselves as self-evident. They just are—facilitating our feature-rich lives. Everyone that counts is there. It is through the gate of the profile that we become its subject.
We pull in updates, 24/7, in a real-time global economy of interdependencies, having been taught to read news feeds as interpersonal indicators of the planetary condition
Treating social media as ideology means observing how it binds together media, culture, and identity into an ever-growing cultural performance (and related “cultural studies”) of gender, lifestyle, fashion, brands, celebrity, and news from radio, television, magazines, and the web—all of this imbricated with the entrepreneurial values of venture capital and start-up culture, with their underside of declining livelihoods and growing inequality.
Software, or perhaps more precisely operating systems, offer us an imaginary relationship to our hardware: they do not represent transistors but rather desktops and recycling bins. Software produces users. Without an operating system (OS) there would be no access to hardware; without an OS, no actions, no practices, and thus no user. Each OS, through its advertisements, interpellates a “user”: calls it and offers it a name or image with which to identify. We could say that social media performs the same function, and is even more powerful.
In the age of social media we seem to confess less what we think. It’s considered too risky, too private. We share what we do, and see, in a staged manner. Yes, we share judgments and opinions, but no thoughts. Our Self is too busy for that, always on the move, flexible, open, sporty, sexy, and always ready to connect and express.
Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
·e-flux.com·
On the Social Media Ideology
Deep Laziness
Deep Laziness
Imagine a person who is very lazy at work, yet whose customers are (along with everyone else concerned) quite satisfied. It could be a slow-talking rural shop proprietor from an old movie, or some kind of Taoist fisherman – perhaps a bit of a buffoon, but definitely deeply content. In order to be this way, he must be reasonably organized: stock must be ordered, and tackle squared away, in order to afford worry-free, deep-breathing laziness. Consider this imaginary person as a kind of ideal or archetype. Now consider that the universe might have this personality.
·ribbonfarm.com·
Deep Laziness
Folk (Browser) Interfaces
Folk (Browser) Interfaces
For the layman to build their own Folk Interfaces, jigs to wield the media they care about, we must offer simple primitives. A designer in Blender thinks in terms of lighting, camera movements, and materials. An editor in Premiere, in sequences, transitions, titles, and colors. Critically, this is different from automating existing patterns, e.g. making it easy to create a website, simulate the visuals of film photography, or 3D-scan one's room. Instead, it's about building a playground in which those novel computational artifacts can be tinkered with and composed, via a grammar native to their own domain, to produce the fruits of the users' own vision. The goal of the computational tool-maker then is not to teach the layman about recursion, abstraction, or composition, but to provide meaningful primitives (i.e. a system) with which the user can do real work. End-user programming is a red herring: We need to focus on materiality, what some disparage as mere "side effects." The goal is to enable others to feel the agency and power that comes when the world ceases to be immutable.
This feels strongly related to another quote about software as ideology / a system of metaphors that influence the way we assign value to digital actions and content.
I hope this mode can paint the picture of software, not as a teleological instrument careening towards automation and ease, but as a medium for intimacy with the matter of our time (images, audio, video), yielding a sense of agency with what, to most, feels like an indelible substrate.
·cristobal.space·
Folk (Browser) Interfaces