The Vogue Archive - Google Arts & Culture
Playbrary
Calculating Empires: A Genealogy of Technology and Power since 1500
The Bear Is Not a Good Show
WWDC 2024: Apple Intelligence
their models are almost entirely based on personal context, by way of an on-device semantic index. In broad strokes, this on-device semantic index can be thought of as a next-generation Spotlight. Apple is focusing on what it can do that no one else can on Apple devices, and not really even trying to compete against ChatGPT et al. for world-knowledge context. They’re focusing on unique differentiation, and eschewing commoditization.
Apple is doing what no one else can do: integrating generative AI into the frameworks in iOS and MacOS used by developers to create native apps. Apps built on the system APIs and frameworks will gain generative AI features for free, both in the sense that the features come automatically when the app is running on a device that meets the minimum specs to qualify for Apple Intelligence, and in the sense that Apple isn’t charging developers or users to utilize these features.
Apple intelligence and AI maximalism — Benedict Evans
The chatbot might replace all software with a prompt - ‘software is dead’. I’m skeptical about this, as I’ve written here, but Apple is proposing the opposite: that generative AI is a technology, not a product.
Apple is, I think, signalling a view that generative AI, and ChatGPT itself, is a commodity technology that is most useful when it is:
Embedded in a system that gives it broader context about the user (which might be search, social, a device OS, or a vertical application) and
Unbundled into individual features (ditto), which are inherently easier to run as small power-efficient models on small power-efficient devices on the edge (paid for by users, not your capex budget) - which is just as well, because…
This stuff will never work for the mass-market if we have marginal cost every time the user presses ‘OK’ and we need a fleet of new nuclear power-stations to run it all.
Apple has built its own foundation models, which (on the benchmarks it published) are comparable to anything else on the market, but there’s nowhere that you can plug a raw prompt directly into the model and get a raw output back - there are always sets of buttons and options shaping what you ask, and that’s presented to the user in different ways for different features. In most of these features, there’s no visible bot at all. You don’t ask a question and get a response: instead, your emails are prioritised, or you press ‘summarise’ and a summary appears. You can type a request into Siri (and Siri itself is only one of the many features using Apple’s models), but even then you don’t get raw model output back: you get GUI. The LLM is abstracted away as an API call.
Apple is treating this as a technology to enable new classes of features and capabilities, where there is design and product management shaping what the technology does and what the user sees, not as an oracle that you ask for things.
Apple is drawing a split between a ‘context model’ and a ‘world model’. Apple’s models have access to all the context that your phone has about you, powering those features, and this is all private, both on device and in Apple’s ‘Private Cloud’. But if you ask for ideas for what to make with a photo of your grocery shopping, then this is no longer about your context, and Apple will offer to send that to a third-party world model - today, ChatGPT.
that’s clearly separated into a different experience where you should have different expectations, and it’s also, of course, OpenAI’s brand risk, not Apple’s. Meanwhile, that world model gets none of your context, only your one-off prompt.
Neither OpenAI nor any of the other cloud models from new companies (Anthropic, Mistral etc) have your emails, messages, locations, photos, files and so on.
Apple is letting OpenAI take the brand risk of creating pizza glue recipes, and making error rates and abuse someone else’s problem, while Apple watches from a safe distance.
The next step, probably, is to take bids from Bing and Google for the default slot, but meanwhile, more and more use-cases will be quietly shifted from the third party to Apple’s own models. It’s Apple’s own software that decides where the queries go, after all, and which ones need the third party at all.
A lot of the compute to run Apple Intelligence is in end-user devices paid for by the users, not Apple’s capex budget, and Apple Intelligence is free.
Commoditisation is often also integration. There was a time when ‘spell check’ was a separate product that you had to buy, for hundreds of dollars, and there were dozens of competing products on the market, but over time it was integrated first into the word processor and then the OS. The same thing happened with the last wave of machine learning - style transfer or image recognition were products for five minutes and then became features. Today ‘summarise this document’ is AI, and you need a cloud LLM that costs $20/month, but tomorrow the OS will do that for free. ‘AI is whatever doesn’t work yet.’
Apple is big enough to take its own path, just as it did moving the Mac to its own silicon: it controls the software and APIs on top of the silicon that are the basis of those developer network effects, and it has a world class chip team and privileged access to TSMC.
Apple is doing something slightly different - it’s proposing a single context model for everything you do on your phone, and powering features from that, rather than adding disconnected LLM-powered features at disconnected points across the company.
written in the body
I spent so many years of my life trying to live mostly in my head. Intellectualizing everything made me feel like it was manageable. I was always trying to manage my own reactions and the reactions of everyone else around me. Learning how to manage people was the skill that I had been lavishly rewarded for in my childhood and teens. Growing up, you’re being reprimanded in a million different ways all the time, and I learned to modify my behavior so that over time I got more and more positive feedback. People like it when you do X and not Y, say X and not Y. I kept track of all of it in my head and not in my body. Intellectualizing kept me numbed out, and for a long time what I wanted was nothing more than to be numbed out, because when things hurt they hurt less. Whatever I felt like I couldn’t show people or tell people I hid away. I compartmentalized, and what I put in the compartment I never looked at became my shadow.
So much of what I care about can be boiled down to this: when you’re able to really inhabit and pay attention to your body, it becomes obvious what you want and don’t want, and the path towards your desires is clear. If you’re not in your body, you’re constantly rationalizing what you should do next, and that can leave you inert or trapped or simply choosing the wrong thing over and over. “I know I should, but I can’t do it” is often another way of saying “I’ve reached this conclusion intellectually, but I’m so frozen out of my body I can’t feel a deeper certainty.”
It was so incredibly hard when people gave me negative feedback—withdrew, or rejected me, or were just preoccupied with their own problems—because I relied on other people to figure out whether everything was alright.
When I started living in my body I started feeling for the first time that I could trust myself in a way that extended beyond trust of my intelligence, of my ability to pick up on cues in my external environment.
I can keep my attention outwards; I don’t direct it inwards in a self-conscious way. It’s the difference between noticing whether someone seems to be having a good time in the moment by watching their face versus agonizing about whether they enjoyed something after the fact. I can tell the difference between when I’m tired because I didn’t sleep well versus tired because I’m bored versus tired because I’m avoiding something. When I’m in my body, I’m aware of myself instead of obsessing over my state, and this allows me to have more room for other people.
Richard Linklater Sees the Killer Inside Us All
What’s your relationship now to the work back then? Are you as passionate?

I really had to think about that. My analysis of that is, you’re a different person with different needs. A lot of that is based on confidence. When you’re starting out in an art form or anything in life, you can’t have confidence because you don’t have experience, and you can only get confidence through experience. But you have to be pretty confident to make a film. So the only way you counterbalance that lack of experience and confidence is absolute passion, fanatical spirit. And I’ve had this conversation over the years with filmmaker friends: Am I as passionate as I was in my 20s? Would I risk my whole life? If it was my best friend or my negative drowning, which do I save? The 20-something self goes, I’m saving my film! Now it’s not that answer. I’m not ashamed to say that, because all that passion doesn’t go away. It disperses a little healthfully. I’m passionate about more things in the world. I care about more things, and that serves me. The most fascinating relationship we all have is to ourselves at different times in our lives. You look back, and it’s like, I’m not as passionate as I was at 25. Thank God. That person was very insecure, very unkind. You’re better than that now. Hopefully.
Movie Answer Man (05/19/1996) | Movie Answer Man | Roger Ebert
How to read a movie - Roger Ebert
When the Sun-Times appointed me film critic, I hadn't taken a single film course (the University of Illinois didn't offer them in those days). One of the reasons I started teaching was to teach myself. Look at a couple dozen New Wave films, you know more about the New Wave. Same with silent films, documentaries, specific directors.
visual compositions have "intrinsic weighting." By that I believe he means that certain areas of the available visual space have tendencies to stir emotional or aesthetic reactions. These are not "laws." To "violate" them can be as meaningful as to "follow" them. I have never heard of a director or cinematographer who ever consciously applied them.
I suspect that filmmakers compose shots from images that well up emotionally, instinctively or strategically, just as a good pianist never thinks about the notes.
I already knew about the painter's "Golden Mean," or the larger concept of the "golden ratio." For a complete explanation, see Wiki, and also look up the "Rule of Thirds." To reduce the concept to a crude rule of thumb in the composition of a shot in a movie: A person located somewhat to the right of center will seem ideally placed. A person to the right of that position will seem more positive; to the left, more negative. A centered person will seem objectified, like a mug shot. I call that position somewhat to the right of center the "strong axis."
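To put numbers on that rule of thumb (my arithmetic, not Ebert’s): the golden ratio is

```latex
\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618, \qquad \frac{1}{\varphi} \approx 0.618
```

so a subject on the “strong axis” sits roughly 62 percent of the way across the frame, just inside the rule-of-thirds line at 2/3 ≈ 0.667.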
They are not absolutes. But in general terms, in a two-shot, the person on the right will "seem" dominant over the person on the left
In simplistic terms: Right is more positive, left more negative. Movement to the right seems more favorable; to the left, less so. The future seems to live on the right, the past on the left. The top is dominant over the bottom. The foreground is stronger than the background. Symmetrical compositions seem at rest. Diagonals in a composition seem to "move" in the direction of the sharpest angle they form, even though of course they may not move at all. Therefore, a composition could lead us into a background that becomes dominant over a foreground.
Of course I should employ quotation marks every time I write such words as positive, negative, stronger, weaker, stable, past, future, dominant or submissive. All of these are tendencies, not absolutes, and as I said, can work as well by being violated as by being followed. Think of "intrinsic weighting" as a process that gives all areas of the screen complete freedom, but acts like an invisible rubber band to create tension or attention when stretched. Never make the mistake of thinking of these things as absolutes. They exist in the realm of emotional tendencies. I often use the cautionary phrase, "all things being equal" -- which of course they never are.
The Serrated Knife. 1/3
The Difference Between a Framework and a Library
A library is like going to Ikea. You already have a home, but you need a bit of help with furniture. You don’t feel like making your own table from scratch. Ikea allows you to pick and choose different things to go in your home. You are in control.
A framework, on the other hand, is like building a model home. You have a set of blueprints and a few limited choices when it comes to architecture and design. Ultimately, the contractor and blueprint are in control. And they will let you know when and where you can provide your input.
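The same distinction in code, as a minimal sketch (my example, using Python and Flask for illustration; neither appears in the original): with a library, your code calls the library when it wants; with a framework, the framework calls your code at points it defines.

```python
# Library: your code is in charge and calls into the library as needed.
import json

def save_order(order):
    # You decide when and how to use the library.
    return json.dumps(order)

# Framework: the framework is in charge and calls your code at points
# it defines (inversion of control). A tiny Flask-style app:
from flask import Flask

app = Flask(__name__)

@app.route("/order")
def show_order():
    # The framework decides when this runs, in response to a request.
    return save_order({"table": 7, "item": "bookshelf"})

# You hand control over to the framework; it runs the event loop.
# app.run()
```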
Language is primarily a tool for communication rather than thought
I Will Fucking Piledrive You If You Mention AI Again — Ludicity
Fast Crimes at Lambda School
Arc Browser is a Fantastic Browser. I Can't Recommend It. Ever.
The human cost of an Apple update
Dating apps don’t work, and meeting people in person seems foreign, even impossible. But it was dating apps that drove IRL connections nearly extinct. In other words, dating apps did work, for almost a decade, by promising to cut out all the things about in-person dating that made us feel vulnerable or uncomfortable. Rejection now happens with a swipe, out of sight, with neither party the wiser. If you match and then change your mind, you can just unmatch without explanation.
This arc plays out across all kinds of apps, and all kinds of human relationships, as tech companies seek to find and solve every type of “friction” and discomfort. But those efforts are rooted in the mistaken idea that being a person shouldn’t come with difficult emotions—that we aren’t often, in fact, served by hard conversations or uncomfortable feelings.
Useful and Overlooked Skills
A diplomatic “no” is when you’re clear about your feelings but empathetic to how the person on the receiving end might interpret those feelings.
What Is Going On With Next-Generation Apple CarPlay?
I’d posit that a reason why people love CarPlay so much is because the media, communication, and navigation experiences have traditionally been pretty poor. CarPlay supplants those, and it does so with aplomb because people use those same media, communication, and navigation features that are personalized to them with their phones when they’re not in their cars.
No one is walking around with a speedometer and a tachometer on their iPhone that need to have a familiar look and feel, rendered exclusively in San Francisco.
As long as automakers supply the existing level of CarPlay support, which isn’t a given, customers like us would be content with the status quo, or even a slight improvement.
In my humble opinion, Next-Gen CarPlay is dead on arrival. Too late, too complicated, and it doesn’t solve the needs of automakers or customers.
Instead of letting the vehicle’s interface peek through, Apple should consider letting CarPlay peek through for the non-critical systems people prefer to use with CarPlay.
Design a CarPlay that can output multiple display streams (which Apple already over-designed) and display that in the cluster. Integrate with the existing controls for managing the interfaces in the vehicle. When the phone isn’t there, the vehicle will still be the same vehicle. When the phone is there, it’s got Apple Maps right in the cluster how you like it without changing the gauges, or the climate controls, or where the seat massage button is.
The everyday irritations people have are mundane, practical, and are not related to how Apple-like their car displays can look.
ivanreese/visual-programming-codex: Waypoints to the past and future of visual programming.
The best WWDC videos about interface design
We've spent billions to fix our medical records, and they're still a mess. Here's why.
Despite the U.S. government spending billions to digitize medical records through the HITECH Act, the system remains fragmented, with doctors unable to easily exchange patient information across different practices and hospitals. This is largely due to a lack of interoperability between the proprietary software of electronic health record (EHR) vendors like Epic Systems. Epic has grown into the leading EHR vendor, but its software doesn't readily connect with competing systems, hindering the original goals of digitization. Patients are hurt by this inability to ensure their complete records are accessible to all their doctors. While Epic says it supports data sharing, it has charged additional fees and allegedly engaged in information blocking. The government has started pushing back on Epic's practices, but with many hospitals deeply invested in Epic's system, the issues persist, and the promised cost savings and benefits of EHRs have yet to fully materialize.
A 2014 RAND report singled out Epic as a roadblock to interoperability. With the company’s rise, researchers wrote, came an increasingly walled-off system. “By subsidizing ‘where the industry is’ rather than where it needed to go,” the report said, the government propped up an EHR market “that did not have the level of connectivity envisioned by the authors of the HITECH Act.”
In terms of bringing digital records to practices across the country, the HITECH Act has unquestionably succeeded: The percentage of US hospitals using digital records skyrocketed from 9.4 to 75.5 percent between 2008 and 2014. But the HITECH Act didn’t prioritize “interoperability”—the ability to transfer a medical file from one hospital to another. Unless programmers ensure that their system properly integrates with another, a doctor’s computer might spit out something akin to emoticons when queried for key test results.
Epic does work with hospitals and practices to link its system with competing ones, but it usually charges top dollar to do so.
A recent study by the American Medical Association and the online network AmericanEHR Partners found that 43 percent of physicians thought their software actually made their jobs more difficult. Doctors are investing the time to input data, but their offices are still having to fax and mail records like they did a decade ago. Less than 10 percent of hospitals say they’ve been able to trade records entirely through their digital systems.
Altogether, it’s like the Microsoft Office of health care software—more comprehensive than any of its competitors, even if its individual components are kind of meh.
“What you hear is that, if you were to buy the best of breed—the best cardiology system, or the best chemotherapy system—no one would ever choose Epic,” says Julia Adler-Milstein, a University of Michigan researcher who studies health care IT. As it stands, she says, using Epic is easier than trying to piece together better options from various software vendors. On top of that, Epic will tailor each installation on-site to a customer’s specific needs. What it doesn’t have—and ditto systems created by competitors Cerner and Meditech, the other bigwigs in EHR—is a framework to connect to other facilities using competing EHR systems.
Epic is probably here to stay, especially given the incredible investments hospitals have made to implement its system—Duke University, for example, reportedly spent $700 million on its Epic installation. That doesn’t mean Americans have to accept the status quo. According to Adler-Milstein, the University of Michigan researcher, “What we can do is force them to open up their system a little more, so that it plays better with others.” She hopes increased scrutiny pushes the company to publish its API—the code that lets others access information in its system—to allow other firms to build more user-friendly software.
Luma Dream Machine
text to video AI generator
What Is the Best Way to Cut an Onion?
As it turns out, cutting radially is, in fact, marginally worse than the traditional method. With all your knife strokes converging at a single central point, the thin wedges of onion that you create with your first strokes taper drastically as they get toward the center, resulting in large dice cut from the outer layers and much larger dice from the center. But even the classic method doesn’t produce particularly even dice, with a standard deviation of about 48 percent.
For the next set of simulations, I wondered what would happen if, instead of making radial cuts with the knife pointed directly at the circle’s center, we aimed our knife at an imaginary point somewhere below the surface of the cutting board, producing cuts somewhere between perfectly vertical and completely radial.
This proved to be key. By plotting the standard deviation of the onion pieces against the point below the cutting board surface at which your knife is aimed, Dr. Poulsen produced a chart that revealed the ideal point to be exactly .557 onion radiuses below the surface of the cutting board. Or, if it’s easier: Angle your knife toward a point roughly six-tenths of an onion’s height below the surface of the cutting board. If you want to be even more lax about it, making sure your knife isn’t quite oriented vertically or radially for those initial cuts is enough to make a measurable difference in dice evenness.
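For the curious, here is a rough Monte Carlo sketch of that setup (my own simplification in Python, not Dr. Poulsen’s actual model): treat the halved onion as a unit half disk of concentric layers, aim equally spaced cuts at a point a depth d below the board, and estimate each piece’s area by the share of sample points landing in it. The layer and cut counts are arbitrary assumptions.

```python
import math
import random

def dice_spread(d, n_layers=10, n_cuts=10, samples=200_000):
    """Relative std dev of piece areas for cuts aimed at depth d below the board."""
    counts = {}
    lo = math.atan2(d, 1.0)   # angle of the rightmost cut, hitting the board at (1, 0)
    hi = math.pi - lo         # angle of the leftmost cut, at (-1, 0)
    for _ in range(samples):
        x, y = random.uniform(-1, 1), random.uniform(0, 1)
        if x * x + y * y > 1:
            continue                                     # outside the half onion
        layer = int(math.hypot(x, y) * n_layers)         # which concentric layer
        theta = math.atan2(y + d, x)                     # angle seen from (0, -d)
        wedge = int((theta - lo) / (hi - lo) * n_cuts)   # which cut wedge
        wedge = min(max(wedge, 0), n_cuts - 1)
        counts[(layer, wedge)] = counts.get((layer, wedge), 0) + 1
    sizes = list(counts.values())
    mean = sum(sizes) / len(sizes)
    var = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    return math.sqrt(var) / mean

# d = 0 is purely radial; a very large d approximates vertical cuts.
# The spread should dip somewhere near the article's d ≈ 0.557, though
# this toy version won't reproduce the exact figure.
for d in (0.0, 0.25, 0.55, 0.85, 10.0):
    print(f"d = {d:5.2f}   relative spread ≈ {dice_spread(d):.2f}")
```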
‘We cannot simply go, go, go.’ What is girl mossing, the wellness trend that rejects hustle culture?
Girl mossing recognises a need to step away from the pressures of modern, urban life, promoting spending time in nature as a restorative practice.
The fast pace and pressure of neoliberal capitalism take an enormous toll on wellbeing: not just personal, but social and planetary. These pressures are most acutely felt by women – whose labour remains, in large part, undervalued and underpaid – and by young people, who are often in precarious work, priced out of the housing market. Yet they’re still bombarded with images of unattainable success on social media. Not so the moss selfies.
Girl rotting is another subversive form of rest and retreat, focused on being intentionally “unproductive” at home.
In China, there’s a parallel rise in “tangping/lying flat” among Chinese young people who are “rejecting high-pressure jobs” in favour of a “low-pressure life”, and in “bai lan” (letting things rot), “a voluntary retreat” from pursuing goals that are now seen as “too difficult to achieve”.
We typically strive for material rewards through hard work and achieve success through doing.
We celebrate the “wins”: the promotion, the new house, marriage, the birth of children. By contrast, we really struggle “when things fall apart”, as they inevitably do, particularly when we are confronted with old age, sickness, and death – basically, with human decomposition.
Spreadsheet Assassins | Matthew King
The real key to SaaS success is often less about innovative software and more about locking in customers and extracting maximum value. Many SaaS products simply digitize spreadsheet workflows into proprietary systems, making it difficult for customers to switch. As SaaS proliferates into every corner of the economy, it imposes a growing "software tax" on businesses and consumers alike. While spreadsheets remain a flexible, interoperable stalwart, the trajectory of SaaS points to an increasingly extractive model prioritizing rent-seeking over genuine productivity gains.
As a SaaS startup scales, sales and customer support staff pay for themselves, and the marginal cost to serve your one-thousandth versus one-millionth user is near-zero. The result? Some SaaS companies achieve gross profit margins of 75 to 90 percent, rivaling Windows in its monopolistic heyday.
Rent-seeking has become an explicit playbook for many shameless SaaS investors. Private equity shop Thoma Bravo has acquired over four hundred software companies, repeatedly mashing products together to amplify lock-in effects so it can slash costs and boost prices—before selling the ravaged Franken-platform to the highest bidder.
In the Kafkaesque realm of health care, software giant Epic’s 1990s-era UI is still widely used for electronic medical records, a nuisance that arguably puts millions of lives at risk, even as it accrues billions in annual revenue and actively resists system interoperability. SAP, the antiquated granddaddy of enterprise resource planning software, has endured for decades within frustrated finance and supply chain teams, even as thousands of SaaS startups try to chip away at its dominance. Salesforce continues to grow at a rapid clip, despite a clunky UI that users say is “absolutely terrible” and “stuck in the 80s”—hence, the hundreds of “SalesTech” startups that simplify a single platform workflow (and pray for a billion-dollar acquihire to Benioff’s mothership). What these SaaS overlords might laud as an ecosystem of startup innovation is actually a reflection of their own technical shortcomings and bloated inertia.
Over 1,500 software startups are focused on billing and invoicing alone. The glut of tools extends to sectors without any clear need for complex software: no fewer than 378 hair salon platforms, 166 parking management solutions, and 70 operating systems for funeral homes and cemeteries are currently on the market. Billions of public pension and university endowment dollars are being burned on what amounts to hackathon curiosities, driven by the machinations of venture capital and private equity. To visit a much-hyped “demo day” at a startup incubator like Y Combinator or Techstars is to enter a realm akin to a high-end art fair—except the objects being admired are not texts or sculptures or paintings but slightly nicer faces for the drudgery of corporate productivity.
As popular as SaaS has become, much of the modern economy still runs on the humble, unfashionable spreadsheet. For all its downsides, there are virtues. Spreadsheets are highly interoperable between firms, partly because of another monopoly (Excel) but also because the generic .csv format is recognized by countless applications. They offer greater autonomy and flexibility, with tabular cells and formulas that can be shaped into workflows, processes, calculators, databases, dashboards, calendars, to-do lists, bug trackers, accounting workbooks—the list goes on. Spreadsheets are arguably the most popular programming language on Earth.
On the necessity of a sin
AI excels at tasks that are intensely human: writing, ideation, faking empathy. However, it struggles with tasks that machines typically excel at, such as repeating a process consistently or performing complex calculations without assistance. In fact, it tends to solve problems that machines are good at in a very human way. When you get GPT-4 to do data analysis of a spreadsheet for you, it doesn’t innately read and understand the numbers. Instead, it uses tools the way we might, glancing at a bit of the data to see what is in it, and then writing Python programs to try to actually do the analysis. And its flaws — making up information, false confidence in wrong answers, and occasional laziness — also seem very much more like human than machine errors.
This quasi-human weirdness is why the best users of AI are often managers and teachers, people who can understand the perspective of others and correct it when it is going wrong.
Rather than focusing purely on teaching people to write good prompts, we might want to spend more time teaching them to manage the AI.
Telling the system “who” it is helps shape the outputs of the system. Telling it to act as a teacher of MBA students will result in a different output than if you ask it to act as a circus clown. This isn’t magical—you can’t say Act as Bill Gates and get better business advice or write like Hemingway and get amazing prose —but it can help make the tone and direction appropriate for your purpose.
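As a concrete illustration (my sketch, using the OpenAI Python SDK; the model name is a placeholder, not something the post specifies), the persona goes in the system message:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(persona: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat model works here
        messages=[
            # The system message sets "who" the model is.
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Same question, two personas: expect a different tone, not better facts.
print(ask("You are a teacher of MBA students.", "Explain network effects."))
print(ask("You are a circus clown.", "Explain network effects."))
```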
What Apple's AI Tells Us: Experimental Models⁴
Companies are exploring various approaches, from large, less constrained frontier models to smaller, more focused models that run on devices. Apple's AI focuses on narrow, practical use cases and strong privacy measures, while companies like OpenAI and Anthropic pursue the goal of AGI.
the most advanced generalist AI models often outperform specialized models, even in the specific domains those specialized models were designed for. That means that if you want a model that can do a lot - reason over massive amounts of text, help you generate ideas, write in a non-robotic way — you want to use one of the three frontier models: GPT-4o, Gemini 1.5, or Claude 3 Opus.
Working with advanced models is more like working with a human being, a smart one that makes mistakes and has weird moods sometimes. Frontier models are more likely to do extraordinary things but are also more frustrating and often unnerving to use. Contrast this with Apple’s narrow focus on making AI get stuff done for you.
Every major AI company argues the technology will evolve further and has teased mysterious future additions to their systems. In contrast, what we are seeing from Apple is a clear and practical vision of how AI can help most users, without a lot of effort, today. In doing so, they are hiding much of the power, and quirks, of LLMs from their users. Having companies take many approaches to AI is likely to lead to faster adoption in the long term. And, as companies experiment, we will learn more about which sets of models are correct.
Carabiner Collection
a celebration of carabiners
BookFinder.com: New & Used Books, Rare Books, Textbooks
A really useful search engine for finding physical copies of hard-to-find books from lesser-known retailers.