Saved

Synthesizer for thought - thesephist.com
Draws parallels between the evolution of music production through synthesizers and the potential for new tools in language and idea generation. The author argues that breakthroughs in mathematical understanding of media lead to new creative tools and interfaces, suggesting that recent advancements in language models could revolutionize how we interact with and manipulate ideas and text.
A synthesizer produces music very differently than an acoustic instrument. It produces music at the lowest level of abstraction, as mathematical models of sound waves.
Once we started understanding writing as a mathematical object, our vocabulary for talking about ideas expanded in depth and precision.
An idea is composed of concepts in a vector space of features, and a vector space is a kind of marvelous mathematical object that we can write theorems and prove things about and deeply and fundamentally understand.
Synthesizers enabled entirely new sounds and genres of music, like electronic pop and techno. These new sounds were easier to discover and share because they didn’t require designing entirely new instruments. The synthesizer organizes the space of sound into a tangible human interface, and as we discover new sounds, we can share them with others as numbers and digital files, as the mathematical objects they’ve always been.
Because synthesizers are electronic, unlike traditional instruments, we can attach arbitrary human interfaces to them. This dramatically expands the design space of how humans can interact with music. Synthesizers can be connected to keyboards, sequencers, drum machines, touchscreens for continuous control, displays for visual feedback, and of course, software interfaces for automation and endlessly dynamic user interfaces. With this, we freed the production of music from any particular physical form.
Recently, we’ve seen neural networks learn detailed mathematical models of language that seem to make sense to humans. And with a breakthrough in mathematical understanding of a medium, come new tools that enable new creative forms and allow us to tackle new problems.
Heatmaps can be particularly useful for analyzing large corpora or very long documents, making it easier to pinpoint areas of interest or relevance at a glance.
If we apply the same idea to the experience of reading long-form writing, it may look like this. Imagine opening a story on your phone and swiping in from the scrollbar edge to reveal a vertical spectrogram, each “frequency” of the spectrogram representing the prominence of different concepts like sentiment or narrative tension varying over time. Scrubbing over a particular feature “column” could expand it to tell you what the feature is, and which part of the text that feature most correlates with.
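The "concept spectrogram" the excerpt imagines can be sketched in a few lines. This is a toy illustration, not the author's design: real systems would read feature activations out of a language model, so the keyword-count scorer and the feature names below are made-up stand-ins.

```python
# Toy "concept spectrogram": score each paragraph of a text against
# hypothetical features and render one heatmap row per paragraph.
# Keyword counting stands in for real neural-network feature activations.

FEATURES = {
    "sentiment": {"joy", "love", "happy", "grief", "fear"},
    "tension":   {"suddenly", "but", "until", "danger", "ran"},
}

def feature_scores(paragraph: str) -> dict[str, int]:
    """Count how many words from each feature's keyword set appear."""
    words = {w.strip(".,!?").lower() for w in paragraph.split()}
    return {name: len(words & keywords) for name, keywords in FEATURES.items()}

def render_heatmap(paragraphs: list[str]) -> str:
    """One row per paragraph, one '#' bar per feature score."""
    lines = []
    for i, p in enumerate(paragraphs):
        scores = feature_scores(p)
        bars = "  ".join(f"{name}:{'#' * s or '-'}" for name, s in scores.items())
        lines.append(f"paragraph {i}  {bars}")
    return "\n".join(lines)

story = [
    "They were happy, full of joy and love.",
    "But suddenly danger came, and they ran until dark.",
]
print(render_heatmap(story))
```

Scrubbing over a "column" in the article's UI would correspond to inspecting one feature's scores down the rows.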
What would a semantic diff view for text look like? Perhaps when I edit text, I’d be able to hover over a control for a particular style or concept feature like “Narrative voice” or “Figurative language”, and my highlighted passage would fan out the options like playing cards in a deck to reveal other “adjacent” sentences I could choose instead. Or, if that involves too much reading, each word could simply be highlighted to indicate whether that word would be more or less likely to appear in a sentence that was more “narrative” or more “figurative” — a kind of highlight-based indicator for the direction of a semantic edit.
Browsing through these icons felt as if we were inventing a new kind of word, or a new notation for visual concepts mediated by neural networks. This could allow us to communicate about abstract concepts and patterns found in the wild that may not correspond to any word in our dictionary today.
What visual and sensory tricks can we use to coax our visual-perceptual systems to understand and manipulate objects in higher dimensions? One way to solve this problem may involve inventing new notation, whether as literal iconic representations of visual ideas or as some more abstract system of symbols.
Photographers buy and sell filters, and cinematographers share and download LUTs to emulate specific color grading styles. If we squint, we can also imagine software developers and their package repositories like NPM to be something similar — a global, shared resource of abstractions anyone can download and incorporate into their work instantly. No such thing exists for thinking and writing. As we figure out ways to extract elements of writing style from language models, we may be able to build a similar kind of shared library for linguistic features anyone can download and apply to their thinking and writing. A catalogue of narrative voice, speaking tone, or flavor of figurative language sampled from the wild or hand-engineered from raw neural network features and shared for everyone else to use.
We’re starting to see something like this already. Today, when users interact with conversational language models like ChatGPT, they may instruct, “Explain this to me like Richard Feynman.” In that interaction, they’re invoking some style the model has learned during its training. Users today may share these prompts, which we can think of as “writing filters”, with their friends and coworkers. This kind of an interaction becomes much more powerful in the space of interpretable features, because features can be combined together much more cleanly than textual instructions in prompts.
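The claim that features combine "more cleanly than textual instructions" can be made concrete with steering vectors from interpretability work: each style is a direction in a model's activation space, and combining styles is a weighted vector sum with explicit, continuous strengths. The vectors, dimensions, and filter names below are invented for illustration:

```python
# Hypothetical sketch: "writing filters" as steering vectors.
# Stacking prompt instructions can conflict ambiguously; summing
# feature vectors gives each style an explicit, tunable weight.

def combine(filters: dict[str, list[float]], weights: dict[str, float]) -> list[float]:
    """Weighted sum of steering vectors, one per named style."""
    dim = len(next(iter(filters.values())))
    out = [0.0] * dim
    for name, vec in filters.items():
        w = weights.get(name, 0.0)
        out = [a + w * b for a, b in zip(out, vec)]
    return out

# Toy 4-dimensional "feature directions" (made up for illustration).
filters = {
    "feynman_explainer": [1.0, 0.0, 0.5, 0.0],
    "figurative":        [0.0, 1.0, 0.0, 0.5],
}

# Dial each style in independently, at fractional strength --
# something concatenated prompt text can't express.
steering = combine(filters, {"feynman_explainer": 0.8, "figurative": 0.3})
print(steering)  # [0.8, 0.3, 0.4, 0.15]
```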
·thesephist.com·
Surprise! The Latest ‘Comprehensive’ US Privacy Bill Is Doomed
Deleting sections of a bill holding companies accountable for making data-driven decisions that could lead to discrimination in housing, employment, health care, and the like spurred a strong response from civil society organizations including the NAACP, the Japanese American Citizens League, the Autistic Self Advocacy Network, and Asian Americans Advancing Justice, among dozens of others.
In a letter this week to E&C Democrats, obtained by WIRED, the groups wrote: “Privacy rights and civil rights are no longer separate concepts—they are inextricably bound together and must be protected. Abuse of our data is no longer limited to targeted advertising or data breaches. Instead, our data are used in decisions about who gets a mortgage, who gets into which schools, and who gets hired—and who does not.”
These provisions contained generous “pro-business” caveats. For instance, users would be able to opt out of algorithmic decisionmaking only if doing so wasn’t “prohibitively costly” or “demonstrably impracticable due to technological limitations.” Similarly, companies could have limited the public’s knowledge about the results of any audits by simply hiring an independent assessor to complete the task rather than doing so internally.
·wired.com·
the best way to please is not to please
I wanted to take care of everyone’s feelings. If I made them feel good, I would be rewarded with their affection. For a long time, socializing involved playing a weird form of Mad-Libs: I wanted to say whatever you wanted to hear. I wanted to be assertive, but also understanding and reasonable and thoughtful.
I really took what I learned and ran with it. I wanted to master what I was bad at and make other people happy. I realized that it was:
bad to talk too much about yourself
good to show interest in other people’s hobbies, problems, and interests
important to pay attention to body language
my job to make sure that whatever social situation we were in was a delightful experience for everyone involved
·avabear.xyz·
I did retail theft at an Apple Store
More than anything I felt like I had been airlifted into a surreal parallel universe, in which everyone is wealthy and on vacation and having beautiful children who go on field trips to aquaria. The inbox in question belongs to Jane Appleseed, and one wonders whether Jane knows her private life is being used to sell hardware and promises.
·escapethealgorithm.substack.com·
Blessed and emoji-pilled: why language online is so absurd
AI: This article explores the evolution of online language and communication, highlighting the increasing absurdity and surrealism in digital discourse. It discusses how traditional language is being replaced by memes, emojis, and seemingly nonsensical phrases, reflecting the influence of social media platforms and algorithms on our communication styles. The piece examines the implications of this shift, touching on themes of information overload, AI-like speech patterns, and the potential consequences of this new form of digital dialect.
Layers upon layers of references are stacked together in a single post, while the posts themselves fly by faster than ever in our feeds. To someone who isn’t “chronically online” a few dislocated images or words may trigger a flash of recognition – a member of the royal family, a beloved cartoon character – but their relationship with each other is impossible to unpick. Add the absurdist language of online culture and the impenetrable algorithms that decide what we see in our feeds, and it seems like all hope is lost when it comes to making sense of the internet.
Forget words! Don’t think! In today’s digitally-mediated landscape, there’s no need for knowledge or understanding, just information. Scroll the feed and you’ll find countless video clips and posts advocating this smooth-brained agenda: lobotomy chic, sludge content, silly girl summer.
“With memes, images are converging more on the linguistic, becoming flattened into something more like symbols/hieroglyphs/words,” says writer Olivia Kan-Sperling, who specialises in programming language critique. For the meme-fluent, the form isn’t important, but rather the message it carries. “A meme is lower-resolution in terms of its aesthetic affordances than a normal pic because you barely have to look at it to know what it’s ‘doing’,” she expands. “For the literate, its full meaning unfolds at a glance.” To understand this way of “speaking writing posting” means we must embrace the malleability of language, the ambiguities and interpretations – and free it from ‘real-world’ rules.
Hey guys, I just got an order in from Sephora – here’s everything that I got. Get ready with me for a boat day in Miami. Come and spend the day with me – starting off with coffee. TikTok influencers engage in a high-pitched and breathless way of speaking that over-emphasises keywords in a youthful, singsong cadence. For the Attention Economy, it’s the sort of algorithm-friendly repetition that’s quantified by clicks and likes, monetised by engagement for short attention spans. “Now, we have to speak machine with machines that were trained on humans,” says Basar, who refers to this algorithm-led style as promptcore.
As algorithms digest our online behaviour into data, we resemble a swarm, a hivemind. We are beginning to think and speak like machines, in UI-friendly keywords and emoji-pilled phrases.
·dazeddigital.com·
The secret digital behaviors of Gen Z

Describes a shift from traditional notions of information literacy to "information sensibility" among Gen Zers, who prioritize social signals and peer influence over fact-checking. The research by Jigsaw, a Google subsidiary, reveals that Gen Zers spend their digital lives in "timepass" mode, engaging with light content and trusting influencers over traditional news sources.

Comment sections for social validation and information signaling

·businessinsider.com·
WWDC 2024: Apple Intelligence
their models are almost entirely based on personal context, by way of an on-device semantic index. In broad strokes, this on-device semantic index can be thought of as a next-generation Spotlight. Apple is focusing on what it can do that no one else can on Apple devices, and not really even trying to compete against ChatGPT et al. for world-knowledge context. They’re focusing on unique differentiation, and eschewing commoditization.
Apple is doing what no one else can do: integrating generative AI into the frameworks in iOS and MacOS used by developers to create native apps. Apps built on the system APIs and frameworks will gain generative AI features for free, both in the sense that the features come automatically when the app is running on a device that meets the minimum specs to qualify for Apple Intelligence, and in the sense that Apple isn’t charging developers or users to utilize these features.
·daringfireball.net·
Apple intelligence and AI maximalism — Benedict Evans
The chatbot might replace all software with a prompt - ‘software is dead’. I’m skeptical about this, as I’ve written here, but Apple is proposing the opposite: that generative AI is a technology, not a product.
Apple is, I think, signalling a view that generative AI, and ChatGPT itself, is a commodity technology that is most useful when it is: Embedded in a system that gives it broader context about the user (which might be search, social, a device OS, or a vertical application) and Unbundled into individual features (ditto), which are inherently easier to run as small power-efficient models on small power-efficient devices on the edge (paid for by users, not your capex budget) - which is just as well, because… This stuff will never work for the mass-market if we have marginal cost every time the user presses ‘OK’ and we need a fleet of new nuclear power-stations to run it all.
Apple has built its own foundation models, which (on the benchmarks it published) are comparable to anything else on the market, but there’s nowhere that you can plug a raw prompt directly into the model and get a raw output back - there are always sets of buttons and options shaping what you ask, and that’s presented to the user in different ways for different features. In most of these features, there’s no visible bot at all. You don’t ask a question and get a response: instead, your emails are prioritised, or you press ‘summarise’ and a summary appears. You can type a request into Siri (and Siri itself is only one of the many features using Apple’s models), but even then you don’t get raw model output back: you get GUI. The LLM is abstracted away as an API call.
Apple is treating this as a technology to enable new classes of features and capabilities, where there is design and product management shaping what the technology does and what the user sees, not as an oracle that you ask for things.
Apple is drawing a split between a ‘context model’ and a ‘world model’. Apple’s models have access to all the context that your phone has about you, powering those features, and this is all private, both on device and in Apple’s ‘Private Cloud’. But if you ask for ideas for what to make with a photo of your grocery shopping, then this is no longer about your context, and Apple will offer to send that to a third-party world model - today, ChatGPT.
that’s clearly separated into a different experience where you should have different expectations, and it’s also, of course, OpenAI’s brand risk, not Apple’s. Meanwhile, that world model gets none of your context, only your one-off prompt.
Neither OpenAI nor any of the other cloud models from new companies (Anthropic, Mistral etc) have your emails, messages, locations, photos, files and so on.
Apple is letting OpenAI take the brand risk of creating pizza glue recipes, and making error rates and abuse someone else’s problem, while Apple watches from a safe distance.
The next step, probably, is to take bids from Bing and Google for the default slot, but meanwhile, more and more use-cases will be quietly shifted from the third party to Apple’s own models. It’s Apple’s own software that decides where the queries go, after all, and which ones need the third party at all.
A lot of the compute to run Apple Intelligence is in end-user devices paid for by the users, not Apple’s capex budget, and Apple Intelligence is free.
Commoditisation is often also integration. There was a time when ‘spell check’ was a separate product that you had to buy, for hundreds of dollars, and there were dozens of competing products on the market, but over time it was integrated first into the word processor and then the OS. The same thing happened with the last wave of machine learning - style transfer or image recognition were products for five minutes and then became features. Today ‘summarise this document’ is AI, and you need a cloud LLM that costs $20/month, but tomorrow the OS will do that for free. ‘AI is whatever doesn’t work yet.’
Apple is big enough to take its own path, just as it did moving the Mac to its own silicon: it controls the software and APIs on top of the silicon that are the basis of those developer network effects, and it has a world class chip team and privileged access to TSMC.
Apple is doing something slightly different - it’s proposing a single context model for everything you do on your phone, and powering features from that, rather than adding disconnected LLM-powered features at disconnected points across the company.
·ben-evans.com·
written in the body
I spent so many years of my life trying to live mostly in my head. Intellectualizing everything made me feel like it was manageable. I was always trying to manage my own reactions and the reactions of everyone else around me. Learning how to manage people was the skill that I had been lavishly rewarded for in my childhood and teens. Growing up, you’re being reprimanded in a million different ways all the time, and I learned to modify my behavior so that over time I got more and more positive feedback. People like it when you do X and not Y, say X and not Y. I kept track of all of it in my head and not in my body. Intellectualizing kept me numbed out, and for a long time what I wanted was nothing more than to be numbed out, because when things hurt they hurt less. Whatever I felt like I couldn’t show people or tell people I hid away. I compartmentalized, and what I put in the compartment I never looked at became my shadow.
So much of what I care about can be boiled down to this: when you’re able to really inhabit and pay attention to your body, it becomes obvious what you want and don’t want, and the path towards your desires is clear. If you’re not in your body, you’re constantly rationalizing what you should do next, and that can leave you inert or trapped or simply choosing the wrong thing over and over. "I know I should, but I can’t do it” is often another way of saying “I’ve reached this conclusion intellectually, but I’m so frozen out of my body I can’t feel a deeper certainty.”
It was so incredibly hard when people gave me negative feedback—withdrew, or rejected me, or were just preoccupied with their own problems—because I relied on other people to figure out whether everything was alright.
When I started living in my body I started feeling for the first time that I could trust myself in a way that extended beyond trust of my intelligence, of my ability to pick up on cues in my external environment.
I can keep my attention outwards, I don’t direct it inwards in a self-conscious way. It’s the difference between noticing whether someone seems to be having a good time in the moment by watching their face vs agonizing about whether they enjoyed something after the fact. I can tell the difference between when I’m tired because I didn’t sleep well versus tired because I’m bored versus tired because I’m avoiding something. When I’m in my body, I’m aware of myself instead of obsessing over my state, and this allows me to have more room for other people.
·avabear.xyz·
Richard Linklater Sees the Killer Inside Us All
What’s your relationship now to the work back then? Are you as passionate?

I really had to think about that. My analysis of that is, you’re a different person with different needs. A lot of that is based on confidence. When you’re starting out in an art form or anything in life, you can’t have confidence because you don’t have experience, and you can only get confidence through experience. But you have to be pretty confident to make a film. So the only way you counterbalance that lack of experience and confidence is absolute passion, fanatical spirit. And I’ve had this conversation over the years with filmmaker friends: Am I as passionate as I was in my 20s? Would I risk my whole life? If it was my best friend or my negative drowning, which do I save? The 20-something self goes, I’m saving my film! Now it’s not that answer. I’m not ashamed to say that, because all that passion doesn’t go away. It disperses a little healthfully. I’m passionate about more things in the world. I care about more things, and that serves me. The most fascinating relationship we all have is to ourselves at different times in our lives. You look back, and it’s like, I’m not as passionate as I was at 25. Thank God. That person was very insecure, very unkind. You’re better than that now. Hopefully.
·nytimes.com·
How to read a movie - Roger Ebert
When the Sun-Times appointed me film critic, I hadn't taken a single film course (the University of Illinois didn't offer them in those days). One of the reasons I started teaching was to teach myself. Look at a couple dozen New Wave films, you know more about the New Wave. Same with silent films, documentaries, specific directors.
visual compositions have "intrinsic weighting." By that I believe he means that certain areas of the available visual space have tendencies to stir emotional or aesthetic reactions. These are not "laws." To "violate" them can be as meaningful as to "follow" them. I have never heard of a director or cinematographer who ever consciously applied them.
I suspect that filmmakers compose shots from images that well up emotionally, instinctively or strategically, just as a good pianist never thinks about the notes.
I already knew about the painter's "Golden Mean," or the larger concept of the "golden ratio." For a complete explanation, see Wiki, and also look up the "Rule of Thirds." To reduce the concept to a crude rule of thumb in the composition of a shot in a movie: A person located somewhat to the right of center will seem ideally placed. A person to the right of that position will seem more positive; to the left, more negative. A centered person will seem objectified, like a mug shot. I call that position somewhat to the right of center the "strong axis."
They are not absolutes. But in general terms, in a two-shot, the person on the right will "seem" dominant over the person on the left
In simplistic terms: Right is more positive, left more negative. Movement to the right seems more favorable; to the left, less so. The future seems to live on the right, the past on the left. The top is dominant over the bottom. The foreground is stronger than the background. Symmetrical compositions seem at rest. Diagonals in a composition seem to "move" in the direction of the sharpest angle they form, even though of course they may not move at all. Therefore, a composition could lead us into a background that becomes dominant over a foreground.
Of course I should employ quotation marks every time I write such words as positive, negative, stronger, weaker, stable, past, future, dominant or submissive. All of these are tendencies, not absolutes, and as I said, can work as well by being violated as by being followed. Think of "intrinsic weighting" as a process that gives all areas of the screen complete freedom, but acts like an invisible rubber band to create tension or attention when stretched. Never make the mistake of thinking of these things as absolutes. They exist in the realm of emotional tendencies. Often use the cautionary phrase, "all things being equal" -- which of course they never are.
·rogerebert.com·
The Difference Between a Framework and a Library
A library is like going to Ikea. You already have a home, but you need a bit of help with furniture. You don’t feel like making your own table from scratch. Ikea allows you to pick and choose different things to go in your home. You are in control. A framework, on the other hand, is like building a model home. You have a set of blueprints and a few limited choices when it comes to architecture and design. Ultimately, the contractor and blueprint are in control. And they will let you know when and where you can provide your input.
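The Ikea/model-home analogy maps onto the classic "inversion of control" distinction: with a library, your code calls the helper when it wants; with a framework, you hand your code over and the framework decides when to call it. A minimal sketch (all names here are invented for illustration, not from any real package):

```python
# Library: your code is in control and calls the helper when it wants.
def slugify(title: str) -> str:          # a "library" function
    return title.lower().replace(" ", "-")

page_url = "/posts/" + slugify("Hello World")   # you call it

# Framework: the framework is in control and calls *your* code.
class MiniFramework:
    def __init__(self):
        self.routes = {}

    def route(self, path):               # you register handlers...
        def register(handler):
            self.routes[path] = handler
            return handler
        return register

    def handle(self, path):              # ...and the framework decides
        return self.routes[path]()       # when your handler actually runs

app = MiniFramework()

@app.route("/hello")
def hello():
    return "Hello World"

print(page_url)              # -> /posts/hello-world
print(app.handle("/hello"))  # -> Hello World
```

Note the control flow: `slugify` runs exactly when you call it, while `hello` runs only when `MiniFramework` dispatches to it, like the contractor telling you when you can provide input.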
·freecodecamp.org·