Found 35 bookmarks
Zuckerberg officially gives up
Zuckerberg officially gives up
I floated a theory of mine to Atlantic writer Charlie Warzel on this week’s episode of Panic World that content moderation, as we’ve understood it, effectively ended on January 6th, 2021. You can listen to the whole episode here, but the way I look at it is that the Insurrection was the first time Americans could truly see the radicalizing effects of algorithmic platforms like Facebook and YouTube that other parts of the world, particularly the Global South, had dealt with for years. A moment of political violence Silicon Valley could no longer ignore or obfuscate the way it had with similar incidents in countries like Myanmar, India, Ethiopia, or Brazil. And once faced with the cold, hard truth of what their platforms had been facilitating, companies like Google and Meta, at least internally, accepted that they would never be able to moderate them at scale. And so they just stopped.
After 2021, the major tech platforms we’ve relied on since the 2010s could no longer pretend that they would ever be able to properly manage the amount of users, the amount of content, the amount of influence they “need” to exist at the size they “need” to exist at to make the amount of money they “need” to exist.
Under Zuckerberg’s new “censorship”-free plan, Meta’s social networks will immediately fill up with hatred and harassment. Which will make a fertile ground for terrorism and extremism. Scams and spam will clog comments and direct messages. And illicit content, like non-consensual sexual material, will proliferate in private corners of networks like group messages and private Groups. Algorithms will mindlessly spread this slop, boosted by the loudest, dumbest, most reactionary users on the platform, helping it evolve and metastasize into darker, stickier social movements. And the network will effectively break down. But Meta is betting that the average user won’t care or notice. AI profiles will like their posts, comment on them, and even make content for them. A feedback loop of nonsense and violence. Our worst, unmoderated impulses, shared by algorithm and reaffirmed by AI. Where nothing has to be true and everything is popular.
·garbageday.email·
Zuckerberg officially gives up
Local-first software: You own your data, in spite of the cloud
Local-first software: You own your data, in spite of the cloud
While cloud apps have become dominant due to their collaborative features, they often compromise user ownership and data longevity. Local-first software seeks to provide a better alternative by prioritizing local storage and networks while still enabling seamless collaboration. The article outlines seven ideals for local-first software, discusses existing technologies, and proposes Conflict-free Replicated Data Types (CRDTs) as a promising foundation for realizing these ideals.
Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.
In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.
This article has also been published in PDF format in the proceedings of the Onward! 2019 conference. Please cite it as: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. Local-first software: you own your data, in spite of the cloud. 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2019, pages 154–178. doi:10.1145/3359591.3359737
To sum up: the cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds? We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.
In old-fashioned apps, the data lives in files on your local disk, so you have full agency and ownership of that data: you can do anything you like, including long-term archiving, making backups, manipulating the files using other programs, or deleting the files if you no longer want them. You don’t need anybody’s permission to access your files, since they are yours. You don’t have to depend on servers operated by another company.
In cloud apps, the data on the server is treated as the primary, authoritative copy of the data; if a client has a copy of the data, it is merely a cache that is subordinate to the server. Any data modification must be sent to the server, otherwise it “didn’t happen.” In local-first applications we swap these roles: we treat the copy of the data on your local device — your laptop, tablet, or phone — as the primary copy. Servers still exist, but they hold secondary copies of your data in order to assist with access from multiple devices. As we shall see, this change in perspective has profound implications.
For several years the Offline First movement has been encouraging developers of web and mobile apps to improve offline support, but in practice it has been difficult to retrofit offline support to cloud apps, because tools and libraries designed for a server-centric model do not easily adapt to situations in which users make edits while offline.
In local-first apps, our ideal is to support real-time collaboration that is on par with the best cloud apps today, or better. Achieving this goal is one of the biggest challenges in realizing local-first software, but we believe it is possible.
Some file formats (such as plain text, JPEG, and PDF) are so ubiquitous that they will probably be readable for centuries to come. The US Library of Congress also recommends XML, JSON, or SQLite as archival formats for datasets. However, in order to read less common file formats and to preserve interactivity, you need to be able to run the original software (if necessary, in a virtual machine or emulator). Local-first software enables this.
Of these, email attachments are probably the most common sharing mechanism, especially among users who are not technical experts. Attachments are easy to understand and trustworthy. Once you have a copy of a document, it does not spontaneously change: if you view an email six months later, the attachments are still there in their original form. Unlike a web app, an attachment can be opened without any additional login process. The weakest point of email attachments is collaboration. Generally, only one person at a time can make changes to a file, otherwise a difficult manual merge is required. File versioning quickly becomes messy: a back-and-forth email thread with attachments often leads to filenames such as Budget draft 2 (Jane's version) final final 3.xls.
Web apps have set the standard for real-time collaboration. As a user you can trust that when you open a document on any device, you are seeing the most current and up-to-date version. This is so overwhelmingly useful for team work that these applications have become dominant.
The flip side to this is a total loss of ownership and control: the data on the server is what counts, and any data on your client device is unimportant — it is merely a cache.
We think the Git model points the way toward a future for local-first software. However, as it currently stands, Git has two major weaknesses: Git is excellent for asynchronous collaboration, especially using pull requests, which take a coarse-grained set of changes and allow them to be discussed and amended before merging them into the shared master branch. But Git has no capability for real-time, fine-grained collaboration, such as the automatic, instantaneous merging that occurs in tools like Google Docs, Trello, and Figma. Git is highly optimized for code and similar line-based text files; other file formats are treated as binary blobs that cannot meaningfully be edited or merged. Despite GitHub’s efforts to display and compare images, prose, and CAD files, non-textual file formats remain second-class in Git.
A web app in its purest form is usually a Rails, Django, PHP, or Node.js program running on a server, storing its data in a SQL or NoSQL database, and serving web pages over HTTPS. All of the data is on the server, and the user’s web browser is only a thin client. This architecture offers many benefits: zero installation (just visit a URL), and nothing for the user to manage, as all data is stored and managed in one place by the engineering and DevOps professionals who deploy the application. Users can access the application from all of their devices, and colleagues can easily collaborate by logging in to the same application. JavaScript frameworks such as Meteor and ShareDB, and services such as Pusher and Ably, make it easier to add real-time collaboration features to web applications, building on top of lower-level protocols such as WebSocket. On the other hand, a web app that needs to perform a request to a server for every user action is going to be slow. It is possible to hide the round-trip times in some cases by using client-side JavaScript, but these approaches quickly break down if the user’s internet connection is unstable.
As we have shown, none of the existing data layers for application development fully satisfy the local-first ideals. Thus, three years ago, our lab set out to search for a solution that gives seven green checkmarks.
(The seven checkmarks: fast, multi-device, offline, collaboration, longevity, privacy, and user control.)
Thus, CRDTs have some similarity to version control systems like Git, except that they operate on richer data types than text files. CRDTs can sync their state via any communication channel (e.g. via a server, over a peer-to-peer connection, by Bluetooth between local devices, or even on a USB stick). The changes tracked by a CRDT can be as small as a single keystroke, enabling Google Docs-style real-time collaboration. But you could also collect a larger set of changes and send them to collaborators as a batch, more like a pull request in Git. Because the data structures are general-purpose, we can develop general-purpose tools for storage, communication, and management of CRDTs, saving us from having to re-implement those things in every single app.
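To make that concrete, here is a minimal sketch of two replicas diverging and then merging, written in TypeScript against the classic Automerge 1.x JavaScript API (the library discussed in the article; exact call names may differ in newer releases, and the document shape here is invented for illustration):

```ts
import * as Automerge from "automerge";

type Board = { cards: string[] };

// Each collaborator keeps a full copy of the document on their own device.
let alice = Automerge.from<Board>({ cards: ["Buy milk"] });
let bob = Automerge.merge(Automerge.init<Board>(), alice);

// Both edit independently, e.g. while offline; every change is tracked.
alice = Automerge.change(alice, (doc) => { doc.cards.push("Water plants"); });
bob = Automerge.change(bob, (doc) => { doc.cards.unshift("Call plumber"); });

// The serialized state (or a smaller batch of recent changes) is just bytes,
// so it can travel over a server, a peer-to-peer link, Bluetooth, or a USB stick.
const bytes = Automerge.save(bob);

// Merging is deterministic: every replica converges to the same result.
const merged = Automerge.merge(alice, Automerge.load<Board>(bytes));
console.log(merged.cards); // all three items, in the same order on every device
```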
we believe that CRDTs have the potential to be a foundation for a new generation of software. Just as packet switching was an enabling technology for the Internet and the web, or as capacitive touchscreens were an enabling technology for smartphones, so we think CRDTs may be the foundation for collaborative software that gives users full ownership of their data.
We are often asked about the effectiveness of automatic merging, and many people assume that application-specific conflict resolution mechanisms are required. However, we found that users surprisingly rarely encounter conflicts in their work when collaborating with others, and that generic resolution mechanisms work well. The reasons for this are: Automerge tracks changes at a fine-grained level, and takes datatype semantics into account. For example, if two users concurrently insert items at the same position into an array, Automerge combines these changes by positioning the two new items in a deterministic order. In contrast, a textual version control system like Git would treat this situation as a conflict requiring manual resolution. Users have an intuitive sense of human collaboration and avoid creating conflicts with their collaborators. For example, when users are collaboratively editing an article, they may agree in advance who will be working on which section for a period of time, and avoid concurrently modifying the same section.
Conflicts arise only if users concurrently modify the same property of the same object: for example, if two users concurrently change the position of the same image object on a canvas. In such cases, it is often arbitrary how they are resolved and satisfactory either way.
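A rough sketch of that same-property case, again against Automerge 1.x (the `getConflicts` call and its return shape are taken from that version's documentation and may have changed since):

```ts
import * as Automerge from "automerge";

type Canvas = { imageX: number };

let a = Automerge.from<Canvas>({ imageX: 10 });
let b = Automerge.merge(Automerge.init<Canvas>(), a);

// Two users concurrently set the same property of the same object.
a = Automerge.change(a, (doc) => { doc.imageX = 100; });
b = Automerge.change(b, (doc) => { doc.imageX = 200; });

// One value wins arbitrarily but deterministically, so every replica agrees;
// the losing concurrent value is preserved and can be surfaced if needed.
const merged = Automerge.merge(a, b);
console.log(merged.imageX);                            // 100 or 200, identical everywhere
console.log(Automerge.getConflicts(merged, "imageX")); // the other concurrent value
```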
We experimented with a number of mechanisms for sharing documents with other users, and found that a URL model, inspired by the web, makes the most sense to users and developers. URLs can be copied and pasted, and shared via communication channels such as email or chat. Access permissions for documents beyond secret URLs remain an open research question.
As with a Git repository, what a particular user sees in the “master” branch is a function of the last time they communicated with other users. Newly arriving changes might unexpectedly modify parts of the document you are working on, but manually merging every change from every user is tedious. Decentralized documents enable users to be in control over their own data, but further study is needed to understand what this means in practical user-interface terms.
Performance and memory/disk usage quickly became a problem because CRDTs store all history, including character-by-character text edits. These pile up, but can’t easily be truncated because it’s impossible to know when someone might reconnect to your shared document after six months away and need to merge changes from that point forward.
Servers thus have a role to play in the local-first world — not as central authorities, but as “cloud peers” that support client applications without being on the critical path. For example, a cloud peer that stores a copy of the document, and forwards it to other peers when they come online, could solve the closed-laptop problem above.
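As an illustration of the idea (not a description of any particular product), a cloud peer can be little more than storage plus a message relay. A sketch in TypeScript on Node using the `ws` WebSocket package, with an invented message format:

```ts
import { WebSocketServer, WebSocket } from "ws";

// The cloud peer is just another replica that happens to stay online:
// it remembers the latest state per document and forwards updates.
const latest = new Map<string, string>();             // docId -> serialized document
const listeners = new Map<string, Set<WebSocket>>();  // docId -> connected devices

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString()) as
      { type: "subscribe" | "update"; docId: string; payload?: string };

    const peers = listeners.get(msg.docId) ?? new Set<WebSocket>();
    peers.add(socket);
    listeners.set(msg.docId, peers);

    if (msg.type === "subscribe") {
      // A device coming back online catches up on what it missed
      // (the "closed-laptop problem" described above).
      const state = latest.get(msg.docId);
      if (state) socket.send(JSON.stringify({ docId: msg.docId, payload: state }));
    } else if (msg.type === "update" && msg.payload) {
      // Store the newest state and fan it out. The relay assists availability;
      // it is never the authoritative copy.
      latest.set(msg.docId, msg.payload);
      for (const peer of peers) {
        if (peer !== socket && peer.readyState === WebSocket.OPEN) {
          peer.send(JSON.stringify({ docId: msg.docId, payload: msg.payload }));
        }
      }
    }
  });
});
```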
These experiments suggest that local-first software is possible. Collaboration and ownership are not at odds with each other — we can get the best of both worlds, and users can benefit. However, the underlying technologies are still a work in progress. They are good for developing prototypes, and we hope that they will evolve and stabilize in the coming years, but realistically, it is not yet advisable to replace a proven product like Firebase with an experimental project like Automerge in a production setting today.
Most CRDT research operates in a model where all collaborators immediately apply their edits to a single version of a document. However, practical local-first applications require more flexibility: users must have the freedom to reject edits made by another collaborator, or to make private changes to a version of the document that is not shared with others. A user might want to apply changes speculatively or reformat their change history. These concepts are well understood in the distributed source control world as “branches,” “forks,” “rebasing,” and so on. There is little work to date on understanding the algorithms and programming models for collaboration in situations where multiple document versions and branches exist side-by-side.
Different collaborators may be using different versions of an application, potentially with different features. As there is no central database server, there is no authoritative “current” schema for the data. How can we write software so that varying application versions can safely interoperate, even as data formats evolve? This question has analogues in cloud-based API design, but a local-first setting provides additional challenges.
When every document can develop a complex version history, simply through daily operation, an acute problem arises: how do we communicate this version history to users? How should users think about versioning, share and accept changes, and understand how their documents came to be a certain way when there is no central source of truth? Today there are two mainstream models for change management: a source-code model of diffs and patches, and a Google Docs model of suggestions and comments. Are these the best we can do? How do we generalize these ideas to data formats that are not text?
We believe that the assumption of centralization is deeply ingrained in our user experiences today, and we are only beginning to discover the consequences of changing that assumption. We hope these open questions will inspire researchers to explore what we believe is an untapped area.
some strategies for improving each area:

Fast. Aggressive caching and downloading resources ahead of time can be a way to prevent the user from seeing spinners when they open your app or a document they previously had open. Trust the local cache by default instead of making the user wait for a network fetch.

Multi-device. Syncing infrastructure like Firebase and iCloud make multi-device support relatively painless, although they do introduce longevity and privacy concerns. Self-hosted infrastructure like Realm Object Server provides an alternative trade-off.

Offline. In the web world, Progressive Web Apps offer features like Service Workers and app manifests that can help (a minimal Service Worker sketch follows this list). In the mobile world, be aware of WebKit frames and other network-dependent components. Test your app by turning off your WiFi, or using traffic shapers such as the Chrome Dev Tools network condition simulator or the iOS network link conditioner.

Collaboration. Besides CRDTs, the more established technology for real-time collaboration is Operational Transformation (OT), as implemented e.g. in ShareDB.

Longevity. Make sure your software can easily export to flattened, standard formats like JSON or PDF. For example: mass export such as Google Takeout; continuous backup into stable file formats such as in GoodNotes; and JSON download of documents such as in Trello.

Privacy. Cloud apps are fundamentally non-private, with employees of the company and governments able to peek at user data at any time. But for mobile or desktop applications, try to make clear to users when the data is stored only on their device versus being transmitted to a backend.

User control. Can users easily back up, duplicate, or delete some or all of their documents within your application? Often this involves re-implementing all the basic filesystem operations, as Google Docs has done with Google Drive.
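The "trust the local cache" and offline advice above can be as simple as a cache-first Service Worker. A minimal sketch (the file names and precache list are placeholders):

```ts
// sw.ts — cache-first Service Worker: serve from the local cache by default,
// fall back to the network, so the app shell opens instantly and offline.
declare const self: ServiceWorkerGlobalScope;

const CACHE = "app-shell-v1";
const PRECACHE = ["/", "/index.html", "/app.js", "/styles.css"];

self.addEventListener("install", (event) => {
  // Download resources ahead of time so the first offline load still works.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```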
If you are an entrepreneur interested in building developer infrastructure, all of the above suggests an interesting market opportunity: “Firebase for CRDTs.” Such a startup would need to offer a great developer experience and a local persistence library (something like SQLite or Realm). It would need to be available for mobile platforms (iOS, Android), native desktop (Windows, Mac, Linux), and web technologies (Electron, Progressive Web Apps). User control, privacy, multi-device support, and collaboration would all be baked in. Application developers could focus on building their app, knowing that the easiest implementation path would also give them top marks on the local-first scorecard. As a litmus test to see if you have succeeded, we suggest: do all your customers’ apps continue working in perpetuity, even if all servers are shut down? We believe the “Firebase for CRDTs” opportunity will be huge as CRDTs come of age.
In the pursuit of better tools we moved many applications to the cloud. Cloud software is in many regards superior to “old-fashioned” software: it offers collaborative, always-up-to-date applications, accessible from anywhere in the world. We no longer worry about what software version we are running, or what machine a file lives on. However, in the cloud, ownership of data is vested in the servers, not the users, and so we became borrowers of our own data. The documents created in cloud apps are destined to disappear when the creators of those services cease to maintain them. Cloud services defy long-term preservation. No Wayback Machine can restore a sunsetted web application. The Internet Archive cannot preserve your Google Docs.

In this article we explored a new way forward for software of the future. We have shown that it is possible for users to retain ownership and control of their data, while also benefiting from the features we associate with the cloud: seamless collaboration and access from anywhere. It is possible to get the best of both worlds. But more work is needed to realize the local-first approach in practice. Application developers can take incremental steps, such as improving offline support and making better use of on-device storage. Researchers can continue improving the algorithms, programming models, and user interfaces for local-first software. Entrepreneurs can develop foundational technologies such as CRDTs and peer-to-peer networking into mature products able to power the next generation of applications.

Today it is easy to create a web application in which the server takes ownership of all the data. But it is too hard to build collaborative software that respects users’ ownership and agency. In order to shift the balance, we need to improve the tools for developing local-first software. We hope that you will join us.
·inkandswitch.com·
Local-first software: You own your data, in spite of the cloud
The Collapse of Self-Worth in the Digital Age - The Walrus
The Collapse of Self-Worth in the Digital Age - The Walrus
My problems were too complex and modern to explain. So I skated across parking lots, breezeways, and sidewalks, I listened to the vibration of my wheels on brick, I learned the names of flowers, I put deserted paths to use. I decided for myself each curve I took, and by the time I rolled home, I felt lighter. One Saturday, a friend invited me to roller-skate in the park. I can still picture her in green protective knee pads, flying past. I couldn’t catch up, I had no technique. There existed another scale to evaluate roller skating, beyond joy, and as Rollerbladers and cyclists overtook me, it eclipsed my own. Soon after, I stopped skating.
the end point for the working artist is to create an object for sale. Once the art object enters the market, art’s intrinsic value is emptied out, compacted by the market’s logic of ranking, until there’s only relational worth, no interior worth. Two novelists I know publish essays one week apart; in a grim coincidence, each writer recounts their own version of the same traumatic life event. Which essay is better, a friend asks. I explain they’re different; different life circumstances likely shaped separate approaches. Yes, she says, but which one is better?
we are inundated with cold, beautiful stats, some publicized by trade publications or broadcast by authors themselves on all socials. How many publishers bid? How big is the print run? How many stops on the tour? How many reviews on Goodreads? How many mentions on Bookstagram, BookTok? How many bloggers on the blog tour? How exponential is the growth in follower count? Preorders? How many printings? How many languages in translation? How many views on the unboxing? How many mentions on most-anticipated lists?
A starred review from Publisher’s Weekly, but I wasn’t in “Picks of the Week.” A mention from Entertainment Weekly, but last on a click-through list.
There must exist professions that are free from capture, but I’m hard pressed to find them. Even non-remote jobs, where work cannot pursue the worker home, are dogged by digital tracking: a farmer says Instagram Story views directly correlate to farm subscriptions, a server tells me her manager won’t give her the Saturday-night money shift until she has more followers.
What we hardly talk about is how we’ve reorganized not just industrial activity but any activity to be capturable by computer, a radical expansion of what can be mined. Friendship is ground zero for the metrics of the inner world, the first unquantifiable shorn into data points: Friendster testimonials, the MySpace Top 8, friending. Likewise, the search for romance has been refigured by dating apps that sell paid-for rankings and paid access to “quality” matches. Or, if there’s an off-duty pursuit you love—giving tarot readings, polishing beach rocks—it’s a great compliment to say: “You should do that for money.” Join the passion economy, give the market final say on the value of your delights. Even engaging with art—say, encountering some uncanny reflection of yourself in a novel, or having a transformative epiphany from listening, on repeat, to the way that singer’s voice breaks over the bridge—can be spat out as a figure, on Goodreads or your Spotify year in review.
And those ascetics who disavow all socials? They are still caught in the network. Acts of pure leisure—photographing a sidewalk cat with a camera app or watching a video on how to make a curry—are transmuted into data to grade how well the app or the creators’ deliverables are delivering. If we’re not being tallied, we affect the tally of others. We are all data workers.
In a nightmarish dispatch in Esquire on how hard it is for authors to find readers, Kate Dwyer argues that all authors must function like influencers now, which means a fire sale on your “private” life. As internet theorist Kyle Chayka puts it to Dwyer: “Influencers get attention by exposing parts of their life that have nothing to do with the production of culture.”
what happens to artists is happening to all of us. As data collection technology hollows out our inner worlds, all of us experience the working artist’s plight: our lot is to numericize and monetize the most private and personal parts of our experience.
We are not giving away our value, as a puritanical grandparent might scold; we are giving away our facility to value. We’ve been cored like apples, a dependency created, hooked on the public internet to tell us the worth.
When we scroll, what are we looking for?
While other fast fashion brands wait for high-end houses to produce designs they can replicate cheaply, Shein has completely eclipsed the runway, using AI to trawl social media for cues on what to produce next. Shein’s site operates like a casino game, using “dark patterns”—a countdown clock puts a timer on an offer, pop-ups say there’s only one item left in stock, and the scroll of outfits never ends—so you buy now, ask if you want it later. Shein’s model is dystopic: countless reports detail how it puts its workers in obscene poverty in order to sell a reprieve to consumers who are also moneyless—a saturated plush world lasting as long as the seams in one of their dresses. Yet the day to day of Shein’s target shopper is so bleak, we strain our moral character to cosplay a life of plenty.
Why are we letting algorithms rewrite the rules of art, work, and life? By Thea Lim.

WHEN I WAS TWELVE, I used to roller-skate in circles for hours. I was at another new school, the odd man out, bullied by my desk mate. My problems were too complex and modern to explain. So I skated across parking lots, breezeways, and sidewalks, I listened to the vibration of my wheels on brick, I learned the names of flowers, I put deserted paths to use. I decided for myself each curve I took, and by the time I rolled home, I felt lighter. One Saturday, a friend invited me to roller-skate in the park. I can still picture her in green protective knee pads, flying past. I couldn’t catch up, I had no technique. There existed another scale to evaluate roller skating, beyond joy, and as Rollerbladers and cyclists overtook me, it eclipsed my own. Soon after, I stopped skating.

YEARS AGO, I worked in the backroom of a Tower Records. Every few hours, my face-pierced, gunk-haired co-workers would line up by my workstation, waiting to clock in or out. When we typed in our staff number at 8:59 p.m., we were off time, returned to ourselves, free like smoke. There are no words to describe the opposite sensations of being at-our-job and being not-at-our-job even if we know the feeling of crossing that threshold by heart. But the most essential quality that makes a job a job is that when we are at work, we surrender the power to decide the worth of what we do. At-job is where our labour is appraised by an external meter: the market. At-job, our labour is never a means to itself but a means to money; its value can be expressed only as a number—relative, fluctuating, out of our control. At-job, because an outside eye measures us, the workplace is a place of surveillance. It’s painful to have your sense of worth extracted. For Marx, the poet of economics, when a person’s innate value is replaced with exchange value, it is as if we’ve been reduced to “a mere jelly.”

Not-job, or whatever name you prefer—“quitting time,” “off duty,” “downtime”—is where we restore ourselves from a mere jelly, precisely by using our internal meter to determine the criteria for success or failure. Find the best route home—not the one that optimizes cost per minute but the one that offers time enough to hear an album from start to finish. Plant a window garden, and if the plants are half dead, try again. My brother-in-law found a toy loom in his neighbour’s garbage, and nightly he weaves tiny technicolour rugs. We do these activities for the sake of doing them, and their value can’t be arrived at through an outside, top-down measure. It would be nonsensical to treat them as comparable and rank them from one to five. We can assess them only by privately and carefully attending to what they contain and, on our own, concluding their merit.

And so artmaking—the cultural industries—occupies the middle of an uneasy Venn diagram. First, the value of an artwork is internal—how well does it fulfill the vision that inspired it? Second, a piece of art is its own end. Third, a piece of art is, by definition, rare, one of a kind, nonfungible. Yet the end point for the working artist is to create an object for sale. Once the art object enters the market, art’s intrinsic value is emptied out, compacted by the market’s logic of ranking, until there’s only relational worth, no interior worth. Two novelists I know publish essays one week apart; in a grim coincidence, each writer recounts their own version of the same traumatic life event. Which essay is better, a friend asks. I explain they’re different; different life circumstances likely shaped separate approaches. Yes, she says, but which one is better?

I GREW UP a Catholic, a faithful, an anachronism to my friends. I carried my faith until my twenties, when it finally broke. Once I couldn’t gain comfort from religion anymore, I got it from writing. Sitting and building stories, side by side with millions of other storytellers who have endeavoured since the dawn of existence to forge meaning even as reality proves endlessly senseless, is the nearest thing to what it felt like back when I was a believer. I spent my thirties writing a novel and paying the bills as low-paid part-time faculty at three different colleges. I could’ve studied law or learned to code. Instead, I manufactured sentences. Looking back, it baffles me that I had the wherewithal to commit to a project with no guaranteed financial value, as if I was under an enchantment. Working on that novel was like visiting a little town every day for four years, a place so dear and sweet. Then I sold it. As the publication date advanced, I was awash with extrinsic measures. Only twenty years ago, there was no public, complete data on book sales.
·thewalrus.ca·
The Collapse of Self-Worth in the Digital Age - The Walrus
How Elon Musk Got Tangled Up in Blue
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up. “There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol. Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?” Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
How Elon Musk Got Tangled Up in Blue
The cult of Obsidian: Why people are obsessed with the note-taking app
The cult of Obsidian: Why people are obsessed with the note-taking app
Even Obsidian’s most dedicated users don’t expect it to take on Notion and other note-taking juggernauts. They see Obsidian as having a different audience with different values.
Obsidian is in some ways the opposite of a quintessential MacStories app—the site often spotlights apps that are tailored exclusively for Apple platforms, whereas Obsidian is built on a web-based technology called Electron—but Voorhees says it’s his favorite writing tool regardless.
·fastcompany.com·
The cult of Obsidian: Why people are obsessed with the note-taking app
Generative AI’s Act Two
Generative AI’s Act Two
This page also has many infographics providing an overview of different aspects of the AI industry at time of writing.
We still believe that there will be a separation between the “application layer” companies and foundation model providers, with model companies specializing in scale and research and application layer companies specializing in product and UI. In reality, that separation hasn’t cleanly happened yet. In fact, the most successful user-facing applications out of the gate have been vertically integrated.
We predicted that the best generative AI companies could generate a sustainable competitive advantage through a data flywheel: more usage → more data → better model → more usage. While this is still somewhat true, especially in domains with very specialized and hard-to-get data, the “data moats” are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundation models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage.
Some of the best consumer companies have 60-65% DAU/MAU; WhatsApp’s is 85%. By contrast, generative AI apps have a median of 14% (with the notable exception of Character and the “AI companionship” category). This means that users are not finding enough value in Generative AI products to use them every day yet.
generative AI’s biggest problem is not finding use cases or demand or distribution, it is proving value. As our colleague David Cahn writes, “the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?”
·sequoiacap.com·
Generative AI’s Act Two
Welcome in my mind 🧠 - My second-brain
Welcome in my mind 🧠 - My second-brain
I consider myself an internet offspring. I had the chance to access computers very early in my life, and I think it had a big influence on who I am right now. Like a lot of us, internet citizens, what I value the most is learning. Whatever the subject, whatever it takes, whatever it costs, money or time, what I like most is learning. That's, I think, the biggest reason why I'm starting this "Limitless Exploration" project.
·anthonyamar.fr·
Welcome in my mind 🧠 - My second-brain
Inside the AI Factory
Inside the AI Factory
Over the past six months, I spoke with more than two dozen annotators from around the world, and while many of them were training cutting-edge chatbots, just as many were doing the mundane manual labor required to keep AI running. There are people classifying the emotional content of TikTok videos, new variants of email spam, and the precise sexual provocativeness of online ads. Others are looking at credit-card transactions and figuring out what sort of purchase they relate to or checking e-commerce recommendations and deciding whether that shirt is really something you might like after buying that other shirt. Humans are correcting customer-service chatbots, listening to Alexa requests, and categorizing the emotions of people on video calls. They are labeling food so that smart refrigerators don’t get confused by new packaging, checking automated security cameras before sounding alarms, and identifying corn for baffled autonomous tractors.
·nymag.com·
Inside the AI Factory
The VR winter — Benedict Evans
The VR winter — Benedict Evans
When I started my career 3G was the hot topic, and every investor kept asking ‘what’s the killer app for 3G?’ It turned out that the killer app for having the internet in your pocket was, well, having the internet in your pocket. But with each of those, we knew what to build next, and with VR we don’t. That tells me that VR has a place in the future. It just doesn’t tell me what kind of place.
The successor to the smartphone will be something that doesn’t just merge AR and VR but make the distinction irrelevant - something that you can wear all day every day, and that can seamlessly both occlude and supplement the real world and generate indistinguishable volumetric space.
·ben-evans.com·
The VR winter — Benedict Evans
Vision Pro — Benedict Evans
Vision Pro — Benedict Evans
Meta, today, has roughly the right price and is working forward to the right device: Apple has started with the right device and will work back to the right price. Meta is trying to catalyse an ecosystem while we wait for the right hardware - Apple is trying to catalyse an ecosystem while we wait for the right price.
one of the things I wondered before the event was how Apple would show a 3D experience in 2D. Meta shows either screenshots from within the system (with the low visual quality inherent in the spec you can make and sell for $500) or shots of someone wearing the headset and grinning - neither are satisfactory. Apple shows the person in the room, with the virtual stuff as though it was really there, because it looks as though it is.
A lot of what Apple shows is possibility and experiment - it could be this, this or that, just as when Apple launched the watch it suggested it as fitness, social or fashion, and it turn out to work best for fitness (and is now a huge business).
Mark Zuckerberg, speaking to a Meta all-hands after Apple’s event, made the perfectly reasonable point that Apple hasn’t shown much that no-one had thought of before - there’s no ‘magic’ invention. Everyone already knows we need better screens, eye-tracking and hand-tracking, in a thin and light device.
It’s worth remembering that Meta isn’t in this to make a games device, nor really to sell devices per se - rather, the thesis is that if VR is the next platform, Meta has to make sure it isn’t controlled by a platform owner who can screw them, as Apple did with IDFA in 2021.
On the other hand, the Vision Pro is an argument that current devices just aren’t good enough to break out of the enthusiast and gaming market, incremental improvement isn’t good enough either, and you need a step change in capability.
Apple’s privacy positioning, of course, has new strategic value now that it’s selling a device you wear that’s covered in cameras
the genesis of the current wave of VR was the realisation a decade ago that the VR concepts of the 1990s would work now, and with nothing more than off-the-shelf smartphone components and gaming PCs, plus a bit more work. But ‘a bit more work’ turned out to be thirty or forty billion dollars from Meta and God only knows how much more from Apple - something over $100bn combined, almost certainly.
So it might be that a wearable screen of any kind, no matter how good, is just a staging post - the summit of a foothill on the way to the top of Everest. Maybe the real Reality device is glasses, or contact lenses projecting onto your retina, or some kind of neural connection, all of which might be a decade or decades away again, and the piece of glass in our pocket remains the right device all the way through.
I think the price and the challenge of category creation are tightly connected. Apple has decided that the capabilities of the Vision Pro are the minimum viable product - that it just isn’t worth making or selling a device without a screen so good you can’t see the pixels, pass-through where you can’t see any lag, perfect eye-tracking and perfect hand-tracking. Of course the rest of the industry would like to do that, and will in due course, but Apple has decided you must do that.
For VR, better screens are merely better, but for AR Apple thinks this level of display system is a base below which you don’t have a product at all.
For Meta, the device places you in ‘the metaverse’ and there could be many experiences within that. For Apple, this device itself doesn’t take you anywhere - it’s a screen and there could be five different ‘metaverse’ apps. The iPhone was a piece of glass that could be anything - this is trying to be a piece of glass that can show anything.
This reminds me a little of when Meta tried to make a phone, and then a Home Screen for a phone, and Mark Zuckerberg said “your phone should be about people.” I thought “no, this is a computer, and there are many apps, some of which are about people and some of which are not.” Indeed there’s also an echo of telco thinking: on a feature phone, ‘internet stuff’ was one or two icons on your portable telephone, but on the iPhone the entire telephone was just one icon on your computer. On a Vision Pro, the ‘Meta Metaverse’ is one app amongst many. You have many apps and panels, which could be 2D or 3D, or could be spaces.
·ben-evans.com·
Vision Pro — Benedict Evans
Leviathan Wakes: the case for Apple's Vision Pro - Redeem Tomorrow
Leviathan Wakes: the case for Apple's Vision Pro - Redeem Tomorrow
The existing VR hardware has not received sufficient investment to fully demonstrate the potential of this technology. It is unclear whether the issues lie with augmented reality (AR) itself or the technology used to deliver it. However, Apple has taken a different approach by investing significantly in creating a serious computer with an optical overlay as its primary interface. Unlike other expensive headsets, Apple has integrated the ecosystem to make it appealing right out of the box, allowing users to watch movies, view photos, and run various apps. This comprehensive solution aims to address the uncertainties surrounding AR. The display quality is top-notch, finger-based interaction replaces clunky joysticks, and performance is optimized to minimize motion sickness. Furthermore, a large and experienced developer community stands ready to create apps, supported by mature tools and extensive documentation. With these factors in place, there is anticipation for a new paradigm enabled by a virtually limitless monitor. The author expresses eagerness to witness how this technology unfolds.
What can you do with this thing? There’s a good chance that, whatever killer apps may emerge, they don’t need the entire complement of sensors and widgets to deliver a great experience. As that’s discovered, Apple will be able to open a second tier in this category and sell you a simplified model at a lower cost. Meanwhile, the more they manufacture the essentials—high density displays, for example—the higher their yields will become, the more their margins will increase. It takes time to perfect manufacturing processes and build up capacity. Vision Pro isn’t just about 2024’s model. It’s setting up the conditions for Apple to build the next five years of augmented reality wearable technology.
VR/AR doesn’t have to suck ass. It doesn’t have to give you motion sickness. It doesn’t have to use these awkward, stupid controllers you accidentally swap into the wrong hand. It doesn’t have to be fundamentally isolating. If this paradigm shift could have been kicked off by cheap shit, we’d be there already. May as well pursue the other end of the market.
what starts as clunky needn’t remain so. As the technology for augmented reality becomes more affordable, more lightweight, more energy efficient, more stylish, it will be more feasible for more people to use. In the bargain, we’ll get a display technology entirely unshackled from the constraints of a monitor stand. We’ll have much broader canvases subject to the flexibility of digital creativity, collaboration and expression. What this unlocks, we can’t say.
·redeem-tomorrow.com·
Leviathan Wakes: the case for Apple's Vision Pro - Redeem Tomorrow
Making Our Hearts Sing
Making Our Hearts Sing
One thing I learned long ago is that people who prioritize design, UI, and UX in the software they prefer can empathize with and understand the choices made by people who prioritize other factors (e.g. raw feature count, or the ability to tinker with their software at the system level, or software being free-of-charge). But it doesn’t work the other way: most people who prioritize other things can’t fathom why anyone cares deeply about design/UI/UX because they don’t perceive it. Thus they chalk up iOS and native Mac-app enthusiasm to being hypnotized by marketing, Pied Piper style.
Those who see and value the artistic value in software and interface design have overwhelmingly wound up on iOS; those who don’t have wound up on Android. Of course there are exceptions. Of course there are iOS users and developers who are envious of Android’s more open nature. Of course there are Android users and developers who do see how crude the UIs are for that platform’s best-of-breed apps. But we’re left with two entirely different ecosystems with entirely different cultural values — nothing like (to re-use my example from yesterday) the Coke-vs.-Pepsi state of affairs in console gaming platforms.
·daringfireball.net·
Making Our Hearts Sing
How the Find My App Became an Accidental Friendship Fixture
How the Find My App Became an Accidental Friendship Fixture
The impact is particularly noticeable among Generation Z and millennials, the first generations to come of age with the possibility of knowing where their peers are at all times. It has changed how friends communicate with one another and blurred lines of privacy. Friends now, sometimes unwittingly yet obsessively, check one another’s locations and bypass whole conversations — about where somebody is, what they are doing or how their days are going — when socializing. All of that information can be gleaned from Find My.
Although Find My is not marketed as a social experience, sharing locations has become a test of sorts, much like being included on a close friends list on Instagram or on a private story on Snapchat can signal closer friendships.
With Find My, “you aren’t actively choosing to do something as you reach a certain location because you’re constantly sharing your location,” said Michael Saker, a senior lecturer in digital sociology at City, University of London. As a result, “there’s an intimacy that’s intertwined with that act,” he added. “There’s a verification of being friends.”
·nytimes.com·
How the Find My App Became an Accidental Friendship Fixture
Instagram, TikTok, and the Three Trends
Instagram, TikTok, and the Three Trends
In other words, when Kylie Jenner posts a petition demanding that Meta “Make Instagram Instagram again”, the honest answer is that changing Instagram is the most Instagram-like behavior possible.
The first trend is the shift towards ever more immersive mediums. Facebook, for example, started with text but exploded with the addition of photos. Instagram started with photos and expanded into video. Gaming was the first to make this progression, and is well into the 3D era. The next step is full immersion — virtual reality — and while the format has yet to penetrate the mainstream this progression in mediums is perhaps the most obvious reason to be bullish about the possibility.
The second trend is the increase in artificial intelligence. I’m using the term colloquially to refer to the overall trend of computers getting smarter and more useful, even if those smarts are a function of simple algorithms, machine learning, or, perhaps someday, something approaching general intelligence.
The third trend is the change in interaction models from user-directed to computer-controlled. The first version of Facebook relied on users clicking on links to visit different profiles; the News Feed changed the interaction model to scrolling. Stories reduced that to tapping, and Reels/TikTok is about swiping. YouTube has gone further than anyone here: Autoplay simply plays the next video without any interaction required at all.
·stratechery.com·
Instagram, TikTok, and the Three Trends
One startup's quest to take on Chrome and reinvent the web browser
One startup's quest to take on Chrome and reinvent the web browser
Miller is the CEO of a new startup called The Browser Company, and he wants to change the way people think about browsers altogether. He sees browsers as operating systems, and likes to wonder aloud what "iOS for the web" might look like. What if your browser could build you a personalized news feed because it knows the sites you go to? What if every web app felt like a native app, and the browser itself was just the app launcher? What if you could drag a file from one tab to another, and it just worked? What if the web browser was a shareable, synced, multiplayer experience?
Miller became convinced that the next big platform was right in front of his face: the open web. The underlying infrastructure worked, the apps were great, there were no tech giants in the way imposing rules and extracting huge commissions. The only thing missing was a tool to bring it all together in a user-friendly way, and make the web more than the sum of its parts.
Browser's team instead spent its time thinking about how to solve things like tab overload, that all-too-familiar feeling of not being able to find anything in a sea of tiny icons at the top of the screen. That's something Nate Parrott, a designer on the team, had been thinking about for a long time. "Before I met Josh," he said, "I had this fascination with browsers, because it's the window through which you experience so much of the web, and yet it feels like no one is working on web browsers." Outside of his day job at Snap, he was also building a web browser with some new interaction ideas. "A big one for me was that I wanted to get rid of the distinction between open and closed tabs," he said. "I wanted to encourage tab-hoarding behavior, where you can open as many tabs as you want and organize them so you're not constantly overwhelmed seeing them all at the same time."
One of Arc's most immediately noticeable features is that it combines bookmarks and tabs. Clicking an icon in the sidebar opens the app, just like on iOS or Android. When users navigate somewhere else, they don't have to close the tab; it just waits in the background until it's needed again, and Arc manages its background performance so it doesn't use too much memory. Instead of opening Gmail in a tab, users just … open Gmail.
Everyone at The Browser Company swears there's no Master Plan, or much of a roadmap. What they have is a lot of ideas, a base on which they can develop really quickly, and a deep affinity for prototypes. "You can't just think really hard and design the best web browser," Parrott said. "You have to feel it and put it in front of people and get them to react to it."
The Browser Company could become an R&D shop, full of interesting ideas but unable to build a browser that anyone actually uses. The company does have plenty of runway: It recently raised more than $13 million in funding from investors including Jeff Weiner, Eric Yuan, Patrick Collison, Fidji Simo and a number of other people with long experience building for the internet, in a round that values The Browser Company at $100 million. Still, Agrawal said, "We're paranoid that we could end up in this world of just having a Bell Labs kind of situation, where you have a lot of interesting stuff, but it's not monetizable, it's not sticky, any of that." That's why they're religious about talking to users all the time, getting feedback on everything, making sure that the stuff they're building is genuinely useful. And when it's not, they pivot fast.
·protocol.com·
One startup's quest to take on Chrome and reinvent the web browser
What comes after Zoom? — Benedict Evans
What comes after Zoom? — Benedict Evans
If you’d looked at Skype in 2004 and argued that it would own ‘voice’ on ‘computers’, that would not have been the right mental model. I think this is where we’ll go with video - there will continue to be hard engineering, but video itself will be a commodity and the question will be how you wrap it. There will be video in everything, just as there is voice in everything, and there will be a great deal of proliferation into industry verticals on one hand and into unbundling pieces of the tech stack on the other. On one hand video in healthcare, education or insurance is about the workflow, the data model and the route to market, and lots more interesting companies will be created, and on the other hand Slack is deploying video on top of Amazon’s building blocks, and lots of interesting companies will be created here as well. There’s lots of bundling and unbundling coming, as always. Everything will be ‘video’ and then it will disappear inside.
the calendar is often the aggregation layer - you don’t need to know what service the next call uses, just when it is. Skype needed both an account and an app, so had a network effect (and lost even so). WhatsApp uses the telephone numbering system as an address and so piggybacked on your phone’s contact list - effectively, it used the PSTN as the social graph rather than having to build its own. But a group video call is a URL and a calendar invitation - it has no graph of its own.
one of the ways that this all feels very 1.0 is the rather artificial distinction between calls that are based on a ‘room’, where the addressing system is a URL and anyone can join without an account, and calls that are based on ‘people’, where everyone joining needs their own address, whether it’s a phone number, an account or something else. Hence Google has both Meet (URLs) and Duo (people) - Apple’s FaceTime is only people (no URLs).
When Snap launched, there were already infinite ways to share images, but Snap asked a bunch of weird questions that no-one had really asked before. Why do you have to press the camera button - why doesn’t the app open in the camera? Why are you saving your messages - isn’t that like saving all your phone calls? Fundamentally, Snap asked ‘why, exactly, are you sending a picture? What is the underlying social purpose?’ You’re not really sending someone a sheet of pixels - you’re communicating.
That’s the question Zoom and all its competitors haven’t really asked. Zoom has done a good job of asking why it was hard to get into a call, but it hasn’t asked why you’re in the call in the first place. Why, exactly, are you sending someone a video stream and watching another one? Why am I looking at a grid of little thumbnails of faces? Is that the purpose of this moment? What is the ‘mute’ button for - background noise, or so I can talk to someone else, or is it so I can turn it off to raise my hand? What social purpose is ‘mute’ actually serving? What is screen-sharing for? What other questions could one ask? And so if Zoom is the Dropbox or Skype of video, we are waiting for the Snap, Clubhouse and Yo.
·ben-evans.com·
What comes after Zoom? — Benedict Evans