Data Laced with History: Causal Trees & Operational CRDTs
Data Laced with History: Causal Trees & Operational CRDTs
After mulling over my bullet points, it occurred to me that the network problems I was dealing with—background cloud sync, editing across multiple devices, real-time collaboration, offline support, and reconciliation of distant or conflicting revisions—were all pointing to the same question: was it possible to design a system where any two revisions of the same document could be merged deterministically and sensibly without requiring user intervention?
It’s what happened after sync that was troubling. On encountering a merge conflict, you’d be thrown into a busy conversation between the network, model, persistence, and UI layers just to get back into a consistent state. The data couldn’t be left alone to live its peaceful, functional life: every concurrent edit immediately became a cross-architectural matter.
I kept several questions in mind while doing my analysis. Could a given technique be generalized to arbitrary and novel data types? Did the technique pass the PhD Test? And was it possible to use the technique in an architecture with smart clients and dumb servers?
Concurrent edits are sibling branches. Subtrees are runs of characters. By the nature of reverse timestamp+UUID sort, sibling subtrees are sorted in the order of their head operations.
This is the underlying premise of the Causal Tree. In contrast to all the other CRDTs I’d been looking into, the design presented in Victor Grishchenko’s brilliant paper was simultaneously clean, performant, and consequential. Instead of dense layers of theory and labyrinthine data structures, everything was centered around the idea of atomic, immutable, metadata-tagged, and causally-linked operations, stored in low-level data structures and directly usable as the data they represented.
I’m going to be calling this new breed of CRDTs operational replicated data types—partly to avoid confusion with the existing term “operation-based CRDTs” (or CmRDTs), and partly because “replicated data type” (RDT) seems to be gaining popularity over “CRDT” and the term can be expanded to “ORDT” without impinging on any existing terminology.
Much like Causal Trees, ORDTs are assembled out of atomic, immutable, uniquely-identified and timestamped “operations” which are arranged in a basic container structure. (For clarity, I’m going to be referring to this container as the structured log of the ORDT.) Each operation represents an atomic change to the data while simultaneously functioning as the unit of data resultant from that action. This crucial event–data duality means that an ORDT can be understood as either a conventional data structure in which each unit of data has been augmented with event metadata; or alternatively, as an event log of atomic actions ordered to resemble its output data structure for ease of execution.
To implement a custom data type as a CT, you first have to “atomize” it, or decompose it into a set of basic operations, then figure out how to link those operations such that a mostly linear traversal of the CT will produce your output data. (In other words, make the structure analogous to a one- or two-pass parsable format.)
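To make this concrete, here is a toy sketch of an atomized plain-text type in TypeScript. It is purely illustrative, not the paper's exact design: the atom shape, field names, and helpers are my own assumptions. Each atom is an immutable character operation tagged with a timestamp and site UUID and linked to its causal parent; sibling subtrees are ordered by the reverse timestamp+UUID sort described above; and one depth-first traversal of the structured log yields the document text.

```typescript
// Illustrative Causal Tree for plain text (my own toy version, not the paper's exact design).

type AtomId = { timestamp: number; site: string }; // site would be a UUID in practice

interface Atom {
  id: AtomId;            // unique, immutable identity of this operation
  cause: AtomId | null;  // the atom this one is causally attached to; null = document root
  value: string;         // a single character
}

// Reverse timestamp + site-id sort: among siblings, newer operations come first,
// with the site id as a deterministic tiebreaker.
function compareSiblings(a: Atom, b: Atom): number {
  if (a.id.timestamp !== b.id.timestamp) return b.id.timestamp - a.id.timestamp;
  return a.id.site < b.id.site ? -1 : a.id.site > b.id.site ? 1 : 0;
}

const key = (id: AtomId | null) => (id ? `${id.timestamp}@${id.site}` : "root");

// A mostly linear (depth-first) traversal of the structured log produces the output string.
function evaluate(atoms: Atom[]): string {
  const children = new Map<string, Atom[]>();
  for (const atom of atoms) {
    const k = key(atom.cause);
    if (!children.has(k)) children.set(k, []);
    children.get(k)!.push(atom);
  }
  for (const siblings of children.values()) siblings.sort(compareSiblings);

  let out = "";
  const walk = (parent: AtomId | null): void => {
    for (const child of children.get(key(parent)) ?? []) {
      out += child.value;
      walk(child.id);
    }
  };
  walk(null);
  return out;
}

// Merging two revisions is a union of their atom sets (ids are globally unique),
// followed by re-evaluation; deterministic, with no user intervention required.
function merge(a: Atom[], b: Atom[]): Atom[] {
  const seen = new Map<string, Atom>();
  for (const atom of [...a, ...b]) seen.set(key(atom.id), atom);
  return [...seen.values()];
}

// Example: site B appends "!" to the "CT" typed on site A.
const siteA: Atom[] = [
  { id: { timestamp: 1, site: "A" }, cause: null, value: "C" },
  { id: { timestamp: 2, site: "A" }, cause: { timestamp: 1, site: "A" }, value: "T" },
];
const siteB: Atom[] = [
  ...siteA,
  { id: { timestamp: 3, site: "B" }, cause: { timestamp: 2, site: "A" }, value: "!" },
];
console.log(evaluate(merge(siteA, siteB))); // "CT!"
```

A real implementation would also need delete operations (tombstones) and far more careful performance work; the sketch only shows the shape of the idea.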
OT and CRDT papers often cite 50ms as the threshold at which people start to notice latency in their text editors. Therefore, any code we might want to run on a CT—including merge, initialization, and serialization/deserialization—has to fall within this range. Except for trivial cases, this precludes O(n²) or slower complexity: a 10,000 word article at 0.01ms per character pair would take 7 hours to process! The essential CT functions have to be O(n log n) at the very worst.
Of course, CRDTs aren’t without their difficulties. For instance, a CRDT-based document will always be “live”, even when offline. If a user inadvertently revises the same CRDT-based document on two offline devices, they won’t see the familiar pick-a-revision dialog on reconnection: both documents will happily merge and retain any duplicate changes. (With ORDTs, this can be fixed after the fact by filtering changes by device, but the user will still have to learn to treat their documents with a bit more caution.) In fully decentralized contexts, malicious users will have a lot of power to irrevocably screw up the data without any possibility of a rollback, and encryption schemes, permission models, and custom protocols may have to be deployed to guard against this. In terms of performance and storage, CRDTs contain a lot of metadata and require smart and performant peers, whereas centralized architectures are inherently more resource-efficient and only demand the bare minimum of their clients. You’d be hard-pressed to use CRDTs in data-heavy scenarios such as screen sharing or video editing. You also won’t necessarily be able to layer them on top of existing infrastructure without significant refactoring.
Perhaps a CRDT-based text editor will never quite be as fast or as bandwidth-efficient as Google Docs, for such is the power of centralization. But in exchange for a totally decentralized computing future? A world full of devices that control their own data and freely collaborate with one another? Data-centric code that’s entirely free from network concerns? I’d say: it’s surely worth a shot!
·archagon.net·
Data Laced with History: Causal Trees & Operational CRDTs
The art of the pivot, part 2: How, why and when to pivot
The art of the pivot, part 2: How, why and when to pivot
People mix up two very different types of pivots, and it’s important to differentiate which path you’re on:
  • Ideation pivots: This is when an early-stage startup changes its idea before having a fully formed product or meaningful traction. These pivots are easy to make, normally happen quickly after launch, and the new idea is often completely unrelated to the previous one. For example, Brex went from VR headsets to business banking, Retool went from Venmo for the U.K. to a no-code internal tools app, and Okta went from reliability monitoring to identity management, all in under three months. YouTube changed direction from a dating site to a video streaming platform in less than a week.
  • Hard pivots: This is when a company with a live product and real users/customers changes direction. In these cases, you are truly “pivoting”—keeping one element of the previous idea and doubling down on it. For example, Instagram stripped down its check-in app and went all in on its photo-sharing feature, Slack on its internal chat tool, and Loom on its screen recording feature.
Occasionally a pivot is a mix of the two (i.e. you’re pivoting multiple times over 1+ years), but generally, when you’re following the advice below, make sure you’re clear on which category you’re in.
When looking at the data, a few interesting trends emerged: Ideation pivots generally happen within three months of launching your original idea. Note, a launch at this stage is typically just telling a bunch of your friends and colleagues about it. Hard pivots generally happen within two years after launch, and most around the one-year mark. I suspect the small number of companies that took longer regret not changing course earlier.
You should have a hard conversation with your co-founder around the three-month mark, and depending on how it’s going (see below), either re-commit or change the idea. Then schedule a yearly check-in. If things are clicking, full speed ahead. If things feel meh, at least spend a few days talking about other potential directions.
Brex: “We applied to YC with this VR idea, which, looking back, it was pretty bad, but at the time we thought it was great. And within YC, we were like, ‘Yeah, we don’t even know where to start to build this.’” —Henrique Dubugras, co-founder and CEO
·lennysnewsletter.com·
The art of the pivot, part 2: How, why and when to pivot
Psilocybin desynchronizes the human brain - Nature
Psilocybin desynchronizes the human brain - Nature

Claude summary: This research provides new insights into how psilocybin affects large-scale brain activity and connectivity. The key finding is that psilocybin causes widespread desynchronization of brain activity, particularly in association cortex areas. This desynchronization correlates with the intensity of subjective psychedelic experiences and may underlie both the acute effects and potential therapeutic benefits of psilocybin. The desynchronization of brain networks may allow for increased flexibility and plasticity, potentially explaining both the acute psychedelic experience and longer-term therapeutic effects.

Psilocybin acutely caused profound and widespread changes in brain functional connectivity (FC) (Fig. 1a) across most of the cerebral cortex (P < 0.05 based on two-sided linear mixed-effects (LME) model and permutation testing), but most prominent in association networks
Across psilocybin sessions and participants, FC change tracked with the intensity of the subjective experience (Fig. 1f and Extended Data Fig. 4).
·nature.com·
Psilocybin desynchronizes the human brain - Nature
Alpine Loop: the fruit of collaboration between Fukui craftsmanship and Apple
Alpine Loop: the fruit of collaboration between Fukui craftsmanship and Apple
These ribbons, upon closer inspection, appear to be two layers of machine-made fabric sewn together to form a single piece, with one side puffed out like the arch of a bridge: this is the “Alpine Loop” band that symbolizes the Apple Watch Ultra, announced in the fall of 2022. Made of lightweight yet sturdy polyester fiber, the band is designed for outdoor activities: a metal hook is threaded through a hole in the fabric that expands in an arch pattern, which prevents it from being pulled out in any direction. The fact that this intricate and delicate band is woven is astonishing.
The “Alpine Loop” uses 520 warp threads, which is far more than the number of threads used in ordinary fabrics, and this first process alone takes about six full days even for experienced employees.
After inspecting the heat treatment process on the first floor of the factory, I asked Tim Cook about his impressions of the company. “I love the ability to scale something that is so intricate, something that is so detailed. And you know they’re making a lot of these, as you can tell, but they’re doing it in such a high quality way. And the yields are very high.”
“They were very flexible, and willing to try new processes, new ways of doing things. This was the first time that this particular process was ever used. And so they have to be very nimble but that nimbleness has to be underpinned by great expertise. And they have that great expertise here. And I can’t stress enough the attention to detail and quality. These are the things that make the products look so great right out of the box.”
Apple prefers to use the term “supplier” over alternatives such as “subcontractor” because they believe in equal business partnership.
“What sets Apple apart [from other companies] is that they let us work as a team. If we have a problem, we spend time together to come up with a solution.” Seiji Inoue, managing director of Inoue Ribbon Industry, echoed Cook’s point from the supplier’s side.
In addition to bands for the Apple Watch, the company also produces handles made from woven paper for “Mac Pro” product packaging. Normally, nylon or other materials would be mixed into the paper to give it sturdiness, but Apple places importance on recyclability, so the handles need to be made from 100% paper. Inoue’s team worked together with Apple staff to find a way to meet these requirements, and when they introduced a manufacturer that could produce such paper, Apple said, “Great,” and accompanied them to the manufacturer.
The first product they worked on was a band for the Apple Watch called “Woven Nylon.” It took four years to develop. At first, Mr. Inoue was fed up with the high quality requirements. Compared to other industries, the textile industry is not very strict about size control.
At some point the front-line workers became accustomed to Apple’s standards, and are now saying, “We have to do this much, don’t we?” and aiming for higher-quality manufacturing. He added, “Apple taught me from scratch about quantification and other things. They taught me how to manage, how to make a table like this, how to do standard deviation like this, how to take data like this, and so on. You couldn’t learn that much even if you paid someone. But Apple shared all of that knowledge, saying we are on the same team.”
Mr. Nobunari Sawanobori, the president of Teikoku Ink, which supplies white ink for the iPhone, once said, “The loss of learning through working with Apple is a bigger loss than the loss of orders from Apple.”
After working with Apple for so long, Inoue Ribbon Industry has recently begun to make proposals and provide supplementary data when working with other clients. Most of those clients are surprised and delighted.
·medium.com·
Alpine Loop: the fruit of collaboration between Fukui craftsmanship and Apple
The Only Reason to Explore Space
The Only Reason to Explore Space

Claude summary: This article argues that the only enduring justification for space exploration is its potential to fundamentally transform human civilization and our understanding of ourselves. The author traces the history of space exploration, from the mystical beliefs of early rocket pioneers to the geopolitical motivations of the Space Race, highlighting how current economic, scientific, and military rationales fall short of sustaining long-term commitment. The author contends that achieving interstellar civilization will require unprecedented organizational efforts and societal commitment, likely necessitating institutions akin to governments or religions. Ultimately, the piece suggests that only a society that embraces the pursuit of interstellar civilization as its central legitimating project may succeed in this monumental endeavor, framing space exploration not as an inevitable outcome of progress, but as a deliberate choice to follow a "golden path to a destiny among the stars."

·palladiummag.com·
The Only Reason to Explore Space
It's Time to Talk About America's Disorder Problem
It's Time to Talk About America's Disorder Problem
  • "Disorder" as distinct from crime, encompassing behaviors that dominate public spaces for private purposes (e.g., public drug use, homelessness, littering).
  • Despite decreasing violent crime rates in many cities, public perception of safety remains low, which the author attributes to increased disorder. Examples: retail theft, unsheltered homelessness, uncontrolled dogs, reckless driving, and public drug use.
Most conspicuous, in my experience, is the way that retailers have responded. It’s not just CVS; coffee shops seem to have gotten more hostile and less welcoming. This is, I suspect, because they are dealing with people who steal, cause a ruckus, or shoot up in the bathroom—disorderly behaviors that they have to deter before they cost them customers.
I increasingly think this is a more general phenomenon. Disorder is not measured like crime—there is no system for aggregating measures of disorder across cities. But if you look for the signs, they are there. Retail theft, though hard to measure, has grown bad enough that major retailers now lock up their wares in many cities. The unsheltered homeless population has risen sharply. People seem to be controlling their dogs less. Road deaths have risen, even as vehicle miles driven declined, suggesting people are driving more irresponsibly. Public drug use in cities from San Francisco to Philadelphia has gotten bad enough to prompt crack-downs.
Cities’ comparative advantage is agglomeration and network effects: concentrating people in one place can create innovation that yields more than linear returns. But that is only possible if people have shared public spaces in which to interact. Community life, of the sort that makes cities worth living in, is harder to sustain in the presence of disorder.
A large share of disorder is generated by a small number of people and places—one drunk or one vacant lot, one uncontrolled bar or one guy shouting on the street, can ruin the whole experience for everyone else. Identifying these problem places and people, and remediating them—not exclusively through the criminal justice system—can bring disorder under control.
·thecausalfallacy.com·
It's Time to Talk About America's Disorder Problem
On being a great gift-giver
On being a great gift-giver
Some people are great at giving gifts. The kinds of gifts that dig into your soul and make you feel seen. I'm trying to become one of those people
Simon conspired with a friend who owns a 3D printer and designed and created a little desktop bear that can hold all of the nice things people have written about Bear. He then wrote each of these entries by hand (suffering only minor carpal tunnel) on sticky notes which the bear now carries like a human bear directional.
These are the kinds of gifts I want to learn how to give. Ones that make the receiver feel like they've been listened to and understood. That don't cost much money but are priceless at the same time.
·herman.bearblog.dev·
On being a great gift-giver
What are conference talks about? - the stream
What are conference talks about? - the stream
It's crazy how so much industry conf content is an ad these days. Ads obfuscate and conflate truth and opinion.
This is why events like Handmade Seattle or Strange Loop get so much love. They are about technology and people and values, not tools and companies.
When I write a talk, I almost always just want you to walk away thinking about the technology you create as an instrument for advancing your values, and a lens through which to view the world with those values.
if I do my job right, you won't go back and use the library I talked about, or whatever. You'll think about the values you're advancing when you build your technology, and think about the perspective it reveals to its users and audiences.
·stream.thesephist.com·
What are conference talks about? - the stream
I’m Running Out of Ways to Explain How Bad This Is
I’m Running Out of Ways to Explain How Bad This Is
this is more than just a misinformation crisis. To watch as real information is overwhelmed by crank theories and public servants battle death threats is to confront two alarming facts: first, that a durable ecosystem exists to ensconce citizens in an alternate reality, and second, that the people consuming and amplifying those lies are not helpless dupes but willing participants.
The journalist Parker Molloy compiled screenshots of people “acknowledging that this image is AI but still insisting that it’s real on some deeper level”—proof, Molloy noted, that we’re “living in the post-reality.” The technology writer Jason Koebler argued that we’ve entered the “‘Fuck It’ Era” of AI slop and political messaging, with AI-generated images being used to convey whatever partisan message suits the moment, regardless of truth.
This reality-fracturing is the result of an information ecosystem that is dominated by platforms that offer financial and attentional incentives to lie and enrage, and to turn every tragedy and large event into a shameless content-creation opportunity.
So much of the conversation around misinformation suggests that its primary job is to persuade. But as Michael Caulfield, an information researcher at the University of Washington, has argued, “The primary use of ‘misinformation’ is not to change the beliefs of other people at all. Instead, the vast majority of misinformation is offered as a service for people to maintain their beliefs in face of overwhelming evidence to the contrary.” This distinction is important, in part because it assigns agency to those who consume and share obviously fake information.
the far right’s world-building project, where feel is always greater than real.
Rather than deal with the realities of a warming planet hurling once-in-a-generation storms at them every few weeks, they’d rather malign and threaten meteorologists, who, in their minds, are “nothing but a trained subversive liar programmed to spew stupid shit to support the global warming bullshit,” as one X user put it.
If you are a weatherperson, you’re a target. The same goes for journalists, election workers, scientists, doctors, and first responders. These jobs are different, but the thing they share is that they all must attend to and describe the world as it is. This makes them dangerous to people who cannot abide by the agonizing constraints of reality, as well as those who have financial and political interests in keeping up the charade.
The world feels dark; for many people, it’s tempting to meet that with a retreat into the delusion that they’ve got everything figured out, that the powers that be have conspired against them directly.
·theatlantic.com·
I’m Running Out of Ways to Explain How Bad This Is
One weird trick to being Victoria Paris on TikTok
One weird trick to being Victoria Paris on TikTok
“Facts. The reason why I blew up so fast is because I’m white, thin, privileged, and live in New York City,” she says, pointing out that her own content performed worse when she was living in North Carolina because there was nothing there to glamorize. She also shared how she worked to grow her account by making tons of different videos, privating the ones that didn’t perform, and replicating the ones that did until she nailed what TikTok wanted from her.  But “what TikTok wants” is still the most influential part of that, and as long as that’s still someone who looks like Victoria, there’s not one trick that can change it.
·embedded.substack.com·
One weird trick to being Victoria Paris on TikTok
The bucket theory of creativity
The bucket theory of creativity
Forget the myth of the 'Eureka!' moment, and allow me to suggest another way: the bucket theory of creativity. Buckets are little homes for the things you want to explore deeper. Maybe you’ll write or draw or build about them one day, but that’s not really the point. All you gotta do is make some buckets.  Because making buckets creates a magnetic force that draws related ideas towards you.
·sublimeinternet.substack.com·
The bucket theory of creativity
Nike: An Epic Saga of Value Destruction | LinkedIn
Nike: An Epic Saga of Value Destruction | LinkedIn
Things seemed to go well at the beginning. Due to the pandemic and the objective challenges of the traditional Brick & Mortar business, the business operated by Nike Direct (the business unit in charge of DTC) was flying and justifying the important strategic decisions of the CEO. Then, once normality came back, quarter after quarter it slowly but steadily became clear that the line between being ambitious and being wrong was very thin.
In 6 months, hundreds of colleagues were fired, and together with them Nike lost a solid process and thousands of years of experience and expertise in running, football, basketball, fitness, training, sportswear, etc., built over decades of footwear leadership (and apparel too). The product engine became gender-led: women, men, and kids (like Zara, GAP, H&M or any other generic fashion brand).
Consumers are not so elastic as some business leaders think or hope. And consumers are not so loyal as some business leaders think or hope. So, what happened? Simple. Many consumers – mainly occasional buyers – did not follow Nike (surprise, surprise) but continued shopping where they were shopping before the decision of the CEO and the President of the Brand. So, once they could not find Nike sneakers in “their” stores – because Nike wasn’t serving those stores any longer – they simply opted for other brands.
Until the late 2010s, Nike had been in total offense mode (being #1 in every market, in every category, in every product BU, basically in every dimension), a sort of military occupation of the marketplace and a huge problem for competitors that did not know how to react under such domination. The strategic focus was only one: win anywhere. The new strategy determined the end of the marketplace occupation. Nike opened unexpected spaces to competitors, small, medium, or large brands (with the exception of the company based in Herzogenaurach, which – as they usually do – copied and pasted the Nike strategy and executed it in a milder format).
One of the empirical laws of business says that online, the main lever of competition is “price” (as the organic consumer funnel is built on price comparison). The proverbial ability of Nike to leverage the power of the brand to sell sneakers at $200 began to be threatened by the online appetite for discounts and the search for a definitive solution to the inventory issue. Gross margin – because of that – instead of growing thanks to the growth of the DTC business, showed a rapid decline due to a never-ending promotional attitude on Nike.com.
Nike has been built for 50 years on a very simple foundation: brand, product, and marketplace. The DC (demand creation) investment model, since Nike became a public company, has always been the same: invest at least one tenth of revenues in demand creation and sports marketing. The brand model has been very simple as well: focus on innovation and inspiration, creativity and storytelling based on athlete–product synergy, leveraging the power of the emotions that sport can create, trying to inspire a growing number of athletes* (*if you have a body, you are an athlete) to play sport. That’s what made Nike the Nike we used to know, love, and admire, professionally and emotionally.
What happened in 2020? Well, the brand team shifted from brand marketing to digital marketing and from brand enhancing to sales activation.
The shift from CREATE DEMAND to SERVE AND RETAIN DEMAND meant that most of the investment was directed at those who were already Nike consumers.
As of 2021, to drive traffic to Nike.com, Nike started investing in programmatic advertising and performance marketing at double or more the share of resources usually invested in other brand activities.
The former CMO ignored the growing academic literature on the inefficiencies of investment in performance marketing and programmatic advertising, due to fraud, the rising costs of intermediaries, and declining consumer response to those activities.
Because of that, Nike invested a material amount of dollars (billions) into something that was less effective but easier to be measured vs something that was more effective but less easy to be measured.
To feed the digital marketing ecosystem, one of the historic functions of the marketing team (brand communications) was “de facto” absorbed and marginalized by the brand design team, which took the leadership in marketing content production (together with the mar-tech “scientists”). Nike didn’t need brand creativity anymore, just a polished and never stopping supply chain of branded stuff.
He made “Nike.com” the center of everything and diverted focus and dollars to it. Due to all of that, Nike hasn’t made a history-making brand campaign since 2018, as the Brand organization had to become a huge sales activation machine.
·linkedin.com·
Nike: An Epic Saga of Value Destruction | LinkedIn
The CrowdStrike Outage and Market-Driven Brittleness
The CrowdStrike Outage and Market-Driven Brittleness
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices dip only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.
The asymmetry of costs is largely due to our complex interdependency on so many systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams sometimes years earlier on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug or a hack of some key software dependency.
This market force has led to the current global interdependence of systems, far and wide beyond their industry and original scope. It’s why flying planes depends on software that has nothing to do with the avionics. It’s why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.
Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can’t expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.
·lawfaremedia.org·
The CrowdStrike Outage and Market-Driven Brittleness
Why Are Debut Novels Failing to Launch?
Why Are Debut Novels Failing to Launch?
The fragmented media environment, changes in publicity strategies, and the need for authors to become influencers have made it harder for new voices to break through.
Last fall, while reporting Esquire’s “Future of Books” predictions, I asked industry insiders about trends they’d noticed in recent years. Almost everyone mentioned that debut fiction has become harder to launch. For writers, the stakes are do or die: A debut sets the bar for each of their subsequent books, so their debut advance and sales performance can follow them for the rest of their career. For editors, if a writer’s first book doesn’t perform, it’s hard to make a financial case for acquiring that writer’s second book. And for you, a reader interested in great fiction, the fallout from this challenging climate can limit your access to exciting new voices in fiction. Unless you diligently shop at independent bookstores where booksellers highlight different types of books, you might only ever encounter the big, splashy debuts that publishers, book clubs, social-media algorithms, and big-box retailers have determined you should see.
BookTok—er, TikTok—is still considered the au courant emergent platform, but unlike Instagram and Twitter before it, publishers can’t figure out how to game the algorithm. “It’s a wonderful tool, but it’s an uncontrollable one,” Lucas says. As opposed to platforms like Twitter and Instagram, on which authors can actively post to establish a following, the runaway hits of BookTok (see: The Song of Achilles) grew from influencer videos.
These days, “in order to get exposure, you have to make the kinds of content that the platform is prioritizing in a given moment,” Chayka says. On Instagram, that means posting videos. Gone are the days of the tastefully cluttered tableaux of notebooks, pens, and coffee mugs near a book jacket; front-facing videos are currently capturing the most eyeballs. “A nonfiction author at least has the subject matter to talk about,” Chayka says. (Many nonfiction writers now create bite-size videos distilling the ideas of their books, with the goal of becoming thought leaders.) But instead of talking about their books, novelists share unboxing videos when they receive their advance copies, or lifestyle videos about their writing routines, neither of which convey their voice on the page. Making this “content” takes time away from writing, Chayka says: “You’re glamorizing your writer’s residency; you’re not talking about the work itself necessarily.”
“Energy tends to attach itself to wherever energy is already attached,” Lucas says. “Fewer debuts have a chance of really breaking through the noise in this climate, because all of the energy attaches itself to the ones that have made it past a certain obstacle.” In some cases, the energy starts building as early as when a project is first announced.
Because staff publicists at publishing houses must split their workload among several authors, there is an expectation that an author will now spend untold hours working as their book’s spokesperson.
The agent at the talent firm describes a “one strike and you’re out” mentality, with some authors getting dropped by their agents if their debut doesn’t sell well.
But one positive development amid this sense of precarity is the rise of the literary friendship. “On social media,” Isle McElroy wrote for this magazine in September, “writers are just as likely to hype their peers as they are to self-promote: linking where to buy books, posting photos of readings, and sharing passages from galleys.” There is now an all-ships-rise mentality among authors at every career stage, but particularly among first-time novelists. Now networks of writers are more important than ever.
When it was time to ask other writers for blurbs for The Volcano Daughters, Balibrera had friends who were excited to boost the book, but she could also rely on other writers who remembered her from Literati. “There was goodwill built up already,” Gibbs says.
·esquire.com·
Why Are Debut Novels Failing to Launch?
Pricing of Webflow freelancers : r/webflow
Pricing of Webflow freelancers : r/webflow
Some good questions to ask:
  • Can they show you some examples of past projects they’ve built in Webflow? Check whether their work looks good on smaller devices (Webflow cascades styles down from desktop, so mobile design can sometimes get overlooked).
  • Are they using a class framework (e.g., Client-first, Mast, Lumos, or custom)? And are they comfortable building the site with a level of modularity that enables easy reuse of sections, components, and styles? This approach simplifies future design updates and makes it easier for your team to manage and expand the site after the handover.
  • What’s their approach to SEO in Webflow?
  • What do they do to ensure performance and loading speed are optimized?
  • Do they have a process for QA and testing before launching a site?
  • How do they handle client feedback and revisions?
  • Do they have a process for educating you on how to use and manage the site they built?
·reddit.com·
Pricing of Webflow freelancers : r/webflow
Perfectionism is optimizing at the wrong scale | Hacker News discussion
Perfectionism is optimizing at the wrong scale | Hacker News discussion
The thing I most worry about using anti-perfectionism arguments is that it begs a vision in the first place—perfectionism requires an idea of what's perfect. Projects suffer from a lack of real hypotheses. Fine, just build. But if you're cutting something important to others by calling it too perfect, can you define the goal (not just the ingredients)? We tend to justify these things by saying, we'll iterate. Much like perfectionism can always be criticized, iteration can theoretically always make a thing better. Iteration is not vision and strategy, it's nearly the reverse, it hedges vision and strategy. This is a slightly different point, but when we say we don't need this extra security or that UX performance, you're setting a ceiling on the people who are passionate about them. Those things really do have limits (no illusions!), but you're not just cutting corners, you're cutting specific corners. That's a company's culture. Being accused of perfectionism justifiably leads to upset that the company doesn't care about security or users. Yeah, maybe it's limited to this one project, but often not.
Perfection can be the enemy of the good. It’s that it’s not a particularly helpful critique. To use the article’s concept, it’s the wrong scale. It might be helpful to an individual in a performance review, but it doesn’t say why X is unnecessary in this project or at this company. Little is added to the discussion until I describe X relative to the goal. Perfectionism is indeed good to avoid—it's basically defined as a bad thing by being "too". But the better conversation says how X falls short on certain measuring sticks. At the very least it actually engages X in the X discussion. Perfectionism is more of a critique of the person.
It takes effort to understand the person's idea enough to engage it, but more importantly it takes work that was supposed to (but might not) have gone into developing good projects or goals in the first place. Projects well-formed enough to create constraints for themselves.
I agree with the thesis of this article but I actually think the point would be better made if we switch from talking about optimizing to talking about satisficing[1]. Simply put, satisficing is searching for a solution that meets a particular threshold for acceptability, and then stopping. My personal high-level strategy for success is one of continual iterative satisficing. The iterative part means that once I have met an acceptability criterion, I am free to either move on to something else, or raise my bar for acceptability and search again. I never worry about whether a solution is optimal, though, only if it is good enough. I think that this is what many people are really doing when they say they are "optimizing", but using the term "optimzing" leads to confusion, because satisficing solutions are by definition non-optimal (except by luck), and some people (especially the young, in my experience) seem to feel compelled to actually optimize, leading to unnecessary perfectionism.
Perfectionism is sort of polarizing, and a lot of product manager / CEO types see it as the enemy. In certain contexts it might be, but in others “perfectionism” translates to “building the foundation flawlessly with the downstream dependencies in mind to minimize future tech debt.” Of course, a lot of managers prefer to pretend that tech debt doesn’t exist but that’s just because they don’t think they can pay it off in time before their team gets cut for not producing any value because they were so busy paying off tech debt.
Not sure you can talk about perfectionism without clarifying between "healthy" perfectionism and "unhealthy" perfectionism. Both exist, but often people are thinking of one or the other when discussing perfectionism, and it creates cognitive dissonance when two people thinking of the two different modes are singing perfectionism's praises or denouncing its practice.
looking at these comments, it seems perfectionism is ill-defined. it seems to be positive - perfectionism is not giving up, it is excellence, it is beyond mediocre. it also seems to be negative - it is going too far, it is avoiding/procrastinating, it is self-defeating. I wonder what the perfect definition would be?
·news.ycombinator.com·
Perfectionism is optimizing at the wrong scale | Hacker News discussion
Local-first software: You own your data, in spite of the cloud
Local-first software: You own your data, in spite of the cloud
While cloud apps have become dominant due to their collaborative features, they often compromise user ownership and data longevity. Local-first software seeks to provide a better alternative by prioritizing local storage and networks while still enabling seamless collaboration. The article outlines seven ideals for local-first software, discusses existing technologies, and proposes Conflict-free Replicated Data Types (CRDTs) as a promising foundation for realizing these ideals.
Cloud apps like Google Docs and Trello are popular because they enable real-time collaboration with colleagues, and they make it easy for us to access our work from all of our devices. However, by centralizing data storage on servers, cloud apps also take away ownership and agency from users. If a service shuts down, the software stops functioning, and data created with that software is lost.
In this article we propose “local-first software”: a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.
This article has also been published in PDF format in the proceedings of the Onward! 2019 conference. Please cite it as: Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. Local-first software: you own your data, in spite of the cloud. 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2019, pages 154–178. doi:10.1145/3359591.3359737
To sum up: the cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds? We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.
In old-fashioned apps, the data lives in files on your local disk, so you have full agency and ownership of that data: you can do anything you like, including long-term archiving, making backups, manipulating the files using other programs, or deleting the files if you no longer want them. You don’t need anybody’s permission to access your files, since they are yours. You don’t have to depend on servers operated by another company.
In cloud apps, the data on the server is treated as the primary, authoritative copy of the data; if a client has a copy of the data, it is merely a cache that is subordinate to the server. Any data modification must be sent to the server, otherwise it “didn’t happen.” In local-first applications we swap these roles: we treat the copy of the data on your local device — your laptop, tablet, or phone — as the primary copy. Servers still exist, but they hold secondary copies of your data in order to assist with access from multiple devices. As we shall see, this change in perspective has profound implications.
For several years the Offline First movement has been encouraging developers of web and mobile apps to improve offline support, but in practice it has been difficult to retrofit offline support to cloud apps, because tools and libraries designed for a server-centric model do not easily adapt to situations in which users make edits while offline.
In local-first apps, our ideal is to support real-time collaboration that is on par with the best cloud apps today, or better. Achieving this goal is one of the biggest challenges in realizing local-first software, but we believe it is possible
Some file formats (such as plain text, JPEG, and PDF) are so ubiquitous that they will probably be readable for centuries to come. The US Library of Congress also recommends XML, JSON, or SQLite as archival formats for datasets. However, in order to read less common file formats and to preserve interactivity, you need to be able to run the original software (if necessary, in a virtual machine or emulator). Local-first software enables this.
Of these, email attachments are probably the most common sharing mechanism, especially among users who are not technical experts. Attachments are easy to understand and trustworthy. Once you have a copy of a document, it does not spontaneously change: if you view an email six months later, the attachments are still there in their original form. Unlike a web app, an attachment can be opened without any additional login process. The weakest point of email attachments is collaboration. Generally, only one person at a time can make changes to a file, otherwise a difficult manual merge is required. File versioning quickly becomes messy: a back-and-forth email thread with attachments often leads to filenames such as Budget draft 2 (Jane's version) final final 3.xls.
Web apps have set the standard for real-time collaboration. As a user you can trust that when you open a document on any device, you are seeing the most current and up-to-date version. This is so overwhelmingly useful for team work that these applications have become dominant.
The flip side to this is a total loss of ownership and control: the data on the server is what counts, and any data on your client device is unimportant — it is merely a cache
We think the Git model points the way toward a future for local-first software. However, as it currently stands, Git has two major weaknesses: Git is excellent for asynchronous collaboration, especially using pull requests, which take a coarse-grained set of changes and allow them to be discussed and amended before merging them into the shared master branch. But Git has no capability for real-time, fine-grained collaboration, such as the automatic, instantaneous merging that occurs in tools like Google Docs, Trello, and Figma. Git is highly optimized for code and similar line-based text files; other file formats are treated as binary blobs that cannot meaningfully be edited or merged. Despite GitHub’s efforts to display and compare images, prose, and CAD files, non-textual file formats remain second-class in Git.
A web app in its purest form is usually a Rails, Django, PHP, or Node.js program running on a server, storing its data in a SQL or NoSQL database, and serving web pages over HTTPS. All of the data is on the server, and the user’s web browser is only a thin client. This architecture offers many benefits: zero installation (just visit a URL), and nothing for the user to manage, as all data is stored and managed in one place by the engineering and DevOps professionals who deploy the application. Users can access the application from all of their devices, and colleagues can easily collaborate by logging in to the same application. JavaScript frameworks such as Meteor and ShareDB, and services such as Pusher and Ably, make it easier to add real-time collaboration features to web applications, building on top of lower-level protocols such as WebSocket. On the other hand, a web app that needs to perform a request to a server for every user action is going to be slow. It is possible to hide the round-trip times in some cases by using client-side JavaScript, but these approaches quickly break down if the user’s internet connection is unstable.
As we have shown, none of the existing data layers for application development fully satisfy the local-first ideals. Thus, three years ago, our lab set out to search for a solution that gives seven green checkmarks.
(*The seven checkmarks: fast, multi-device, offline, collaboration, longevity, privacy, and user control.)
Thus, CRDTs have some similarity to version control systems like Git, except that they operate on richer data types than text files. CRDTs can sync their state via any communication channel (e.g. via a server, over a peer-to-peer connection, by Bluetooth between local devices, or even on a USB stick). The changes tracked by a CRDT can be as small as a single keystroke, enabling Google Docs-style real-time collaboration. But you could also collect a larger set of changes and send them to collaborators as a batch, more like a pull request in Git. Because the data structures are general-purpose, we can develop general-purpose tools for storage, communication, and management of CRDTs, saving us from having to re-implement those things in every single app.
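To make that concrete, here is a minimal state-based CRDT sketch in TypeScript. It is my own toy example (not Automerge's API): a grow-only counter whose entire state is plain data, so it can be serialized and exchanged over any channel and merged in any order.

```typescript
// Grow-only counter CRDT: each replica only ever increments its own entry,
// and merge takes the per-replica maximum. Merge is commutative, associative,
// and idempotent, so replicas converge regardless of how states are exchanged.

type GCounter = Record<string, number>; // replica id -> count of local increments

function increment(state: GCounter, replicaId: string, by = 1): GCounter {
  return { ...state, [replicaId]: (state[replicaId] ?? 0) + by };
}

function value(state: GCounter): number {
  return Object.values(state).reduce((sum, n) => sum + n, 0);
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [replica, count] of Object.entries(b)) {
    out[replica] = Math.max(out[replica] ?? 0, count);
  }
  return out;
}

// Two devices work while disconnected, then exchange state as plain JSON
// (over a server, a peer-to-peer link, Bluetooth, or a USB stick).
let laptop: GCounter = {};
let phone: GCounter = {};
laptop = increment(laptop, "laptop");
laptop = increment(laptop, "laptop");
phone = increment(phone, "phone");

const payload = JSON.stringify(phone); // any channel will do
laptop = merge(laptop, JSON.parse(payload));
console.log(value(laptop)); // 3
```

The same principle scales up from a counter to the richer document types the article describes; only the state and the merge rule get more elaborate.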
we believe that CRDTs have the potential to be a foundation for a new generation of software. Just as packet switching was an enabling technology for the Internet and the web, or as capacitive touchscreens were an enabling technology for smartphones, so we think CRDTs may be the foundation for collaborative software that gives users full ownership of their data.
We are often asked about the effectiveness of automatic merging, and many people assume that application-specific conflict resolution mechanisms are required. However, we found that users surprisingly rarely encounter conflicts in their work when collaborating with others, and that generic resolution mechanisms work well. The reasons for this are:
  • Automerge tracks changes at a fine-grained level, and takes datatype semantics into account. For example, if two users concurrently insert items at the same position into an array, Automerge combines these changes by positioning the two new items in a deterministic order. In contrast, a textual version control system like Git would treat this situation as a conflict requiring manual resolution.
  • Users have an intuitive sense of human collaboration and avoid creating conflicts with their collaborators. For example, when users are collaboratively editing an article, they may agree in advance who will be working on which section for a period of time, and avoid concurrently modifying the same section.
Conflicts arise only if users concurrently modify the same property of the same object: for example, if two users concurrently change the position of the same image object on a canvas. In such cases, it is often arbitrary how they are resolved and satisfactory either way.
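As a sketch of what such an arbitrary-but-deterministic resolution can look like (my own illustration, not the authors' implementation), here is a last-writer-wins register in TypeScript: the highest timestamp wins the conflicting property, and an actor id breaks ties, so every replica independently picks the same winner.

```typescript
// Last-writer-wins register for one property (e.g., the position of an image on a canvas).

interface Write<T> {
  value: T;
  timestamp: number; // logical (Lamport) clock or wall-clock time
  actor: string;     // unique per device/user, used only as a tiebreaker
}

function resolve<T>(a: Write<T>, b: Write<T>): Write<T> {
  if (a.timestamp !== b.timestamp) return a.timestamp > b.timestamp ? a : b;
  return a.actor > b.actor ? a : b; // deterministic: every replica picks the same winner
}

// Two users move the same image concurrently; both replicas converge on Bob's position.
const fromAlice: Write<{ x: number; y: number }> = { value: { x: 10, y: 20 }, timestamp: 7, actor: "alice" };
const fromBob: Write<{ x: number; y: number }> = { value: { x: 300, y: 5 }, timestamp: 7, actor: "bob" };
console.log(resolve(fromAlice, fromBob).value); // { x: 300, y: 5 } on both replicas
```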
We experimented with a number of mechanisms for sharing documents with other users, and found that a URL model, inspired by the web, makes the most sense to users and developers. URLs can be copied and pasted, and shared via communication channels such as email or chat. Access permissions for documents beyond secret URLs remain an open research question.
As with a Git repository, what a particular user sees in the “master” branch is a function of the last time they communicated with other users. Newly arriving changes might unexpectedly modify parts of the document you are working on, but manually merging every change from every user is tedious. Decentralized documents enable users to be in control over their own data, but further study is needed to understand what this means in practical user-interface terms.
Performance and memory/disk usage quickly became a problem because CRDTs store all history, including character-by-character text edits. These pile up, but can’t easily be truncated because it’s impossible to know when someone might reconnect to your shared document after six months away and need to merge changes from that point forward.
Servers thus have a role to play in the local-first world — not as central authorities, but as “cloud peers” that support client applications without being on the critical path. For example, a cloud peer that stores a copy of the document, and forwards it to other peers when they come online, could solve the closed-laptop problem above.
These experiments suggest that local-first software is possible. Collaboration and ownership are not at odds with each other — we can get the best of both worlds, and users can benefit. However, the underlying technologies are still a work in progress. They are good for developing prototypes, and we hope that they will evolve and stabilize in the coming years, but realistically, it is not yet advisable to replace a proven product like Firebase with an experimental project like Automerge in a production setting today.
Most CRDT research operates in a model where all collaborators immediately apply their edits to a single version of a document. However, practical local-first applications require more flexibility: users must have the freedom to reject edits made by another collaborator, or to make private changes to a version of the document that is not shared with others. A user might want to apply changes speculatively or reformat their change history. These concepts are well understood in the distributed source control world as “branches,” “forks,” “rebasing,” and so on. There is little work to date on understanding the algorithms and programming models for collaboration in situations where multiple document versions and branches exist side-by-side.
Different collaborators may be using different versions of an application, potentially with different features. As there is no central database server, there is no authoritative “current” schema for the data. How can we write software so that varying application versions can safely interoperate, even as data formats evolve? This question has analogues in cloud-based API design, but a local-first setting provides additional challenges.
When every document can develop a complex version history, simply through daily operation, an acute problem arises: how do we communicate this version history to users? How should users think about versioning, share and accept changes, and understand how their documents came to be a certain way when there is no central source of truth? Today there are two mainstream models for change management: a source-code model of diffs and patches, and a Google Docs model of suggestions and comments. Are these the best we can do? How do we generalize these ideas to data formats that are not text?
We believe that the assumption of centralization is deeply ingrained in our user experiences today, and we are only beginning to discover the consequences of changing that assumption. We hope these open questions will inspire researchers to explore what we believe is an untapped area.
Some strategies for improving each area:
  • Fast. Aggressive caching and downloading resources ahead of time can be a way to prevent the user from seeing spinners when they open your app or a document they previously had open. Trust the local cache by default instead of making the user wait for a network fetch.
  • Multi-device. Syncing infrastructure like Firebase and iCloud make multi-device support relatively painless, although they do introduce longevity and privacy concerns. Self-hosted infrastructure like Realm Object Server provides an alternative trade-off.
  • Offline. In the web world, Progressive Web Apps offer features like Service Workers and app manifests that can help. In the mobile world, be aware of WebKit frames and other network-dependent components. Test your app by turning off your WiFi, or using traffic shapers such as the Chrome Dev Tools network condition simulator or the iOS network link conditioner.
  • Collaboration. Besides CRDTs, the more established technology for real-time collaboration is Operational Transformation (OT), as implemented e.g. in ShareDB.
  • Longevity. Make sure your software can easily export to flattened, standard formats like JSON or PDF. For example: mass export such as Google Takeout; continuous backup into stable file formats such as in GoodNotes; and JSON download of documents such as in Trello.
  • Privacy. Cloud apps are fundamentally non-private, with employees of the company and governments able to peek at user data at any time. But for mobile or desktop applications, try to make clear to users when the data is stored only on their device versus being transmitted to a backend.
  • User control. Can users easily back up, duplicate, or delete some or all of their documents within your application? Often this involves re-implementing all the basic filesystem operations, as Google Docs has done with Google Drive.
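As an illustration of the "Fast" and "Offline" points above, here is a small Service Worker sketch in TypeScript (the cache name and precached paths are placeholder assumptions) that trusts the local cache by default and refreshes it in the background when the network is available.

```typescript
// Cache-first Service Worker sketch. Compile to JS and register from the page with
// navigator.serviceWorker.register("/sw.js").

const sw = self as unknown as ServiceWorkerGlobalScope;
const CACHE = "app-shell-v1";                                    // placeholder cache name
const PRECACHE = ["/", "/index.html", "/app.js", "/styles.css"]; // placeholder paths

sw.addEventListener("install", (event: ExtendableEvent) => {
  // Download the app shell ahead of time so opening the app never waits on a spinner.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(PRECACHE)));
});

sw.addEventListener("fetch", (event: FetchEvent) => {
  // Answer from the local cache immediately; update the cache in the background.
  event.respondWith(
    caches.match(event.request).then((cached) => {
      const refresh = fetch(event.request)
        .then((response) => {
          caches.open(CACHE).then((cache) => cache.put(event.request, response.clone()));
          return response;
        })
        .catch(() => cached ?? new Response("Offline", { status: 503 }));
      return cached ?? refresh;
    })
  );
});
```

Pairing this kind of cache-first shell with a CRDT-backed data layer is what lets a local-first app open instantly and keep working with the WiFi off.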
If you are an entrepreneur interested in building developer infrastructure, all of the above suggests an interesting market opportunity: “Firebase for CRDTs.” Such a startup would need to offer a great developer experience and a local persistence library (something like SQLite or Realm). It would need to be available for mobile platforms (iOS, Android), native desktop (Windows, Mac, Linux), and web technologies (Electron, Progressive Web Apps). User control, privacy, multi-device support, and collaboration would all be baked in. Application developers could focus on building their app, knowing that the easiest implementation path would also give them top marks on the local-first scorecard. As a litmus test to see if you have succeeded, we suggest: do all your customers’ apps continue working in perpetuity, even if all servers are shut down? We believe the “Firebase for CRDTs” opportunity will be huge as CRDTs come of age.
In the pursuit of better tools we moved many applications to the cloud. Cloud software is in many regards superior to “old-fashioned” software: it offers collaborative, always-up-to-date applications, accessible from anywhere in the world. We no longer worry about what software version we are running, or what machine a file lives on. However, in the cloud, ownership of data is vested in the servers, not the users, and so we became borrowers of our own data. The documents created in cloud apps are destined to disappear when the creators of those services cease to maintain them. Cloud services defy long-term preservation. No Wayback Machine can restore a sunsetted web application. The Internet Archive cannot preserve your Google Docs.
In this article we explored a new way forward for software of the future. We have shown that it is possible for users to retain ownership and control of their data, while also benefiting from the features we associate with the cloud: seamless collaboration and access from anywhere. It is possible to get the best of both worlds. But more work is needed to realize the local-first approach in practice. Application developers can take incremental steps, such as improving offline support and making better use of on-device storage. Researchers can continue improving the algorithms, programming models, and user interfaces for local-first software. Entrepreneurs can develop foundational technologies such as CRDTs and peer-to-peer networking into mature products able to power the next generation of applications.
Today it is easy to create a web application in which the server takes ownership of all the data. But it is too hard to build collaborative software that respects users’ ownership and agency. In order to shift the balance, we need to improve the tools for developing local-first software. We hope that you will join us.
·inkandswitch.com·
Local-first software: You own your data, in spite of the cloud
How Bad Habits Are Formed (Unconsciously)
How Bad Habits Are Formed (Unconsciously)
I think she enjoys treating her boyfriend like a chore because her relationship with her parents acclimated her to the feeling of being depended on. She likes the feeling of parenting and babying someone because her child-self had to do that to stay on her parents’ good side. In other words, her psyche felt like, in order to keep her parents’ love and protection, she needed to turn herself into a caretaker, going above and beyond what she knows she should be doing.
Patterns that are formed out of necessity in an earlier stage of life determine what you look for for the rest of your life. The behaviors you were forced to do when you were younger become the behaviors you itch to do when you’re older.
Like making a tie-dye T-shirt, the twists and turns of childhood shape the way we’re colored as adults.
·sherryning.com·
How Bad Habits Are Formed (Unconsciously)
The Complex Problem Of Lying For Jobs — Ludicity
The Complex Problem Of Lying For Jobs — Ludicity

Claude summary: Key takeaway: Lying on job applications is pervasive in the tech industry due to systemic issues, but it creates an "Infinite Lie Vortex" that erodes integrity and job satisfaction. While honesty may limit short-term opportunities, it's crucial for long-term career fulfillment and ethical work environments.

Summary

  • The author responds to Nat Bennett's article against lying in job interviews, acknowledging its validity while exploring the nuances of the issue.
  • Most people in the tech industry are already lying or misrepresenting themselves on their CVs and in interviews, often through "technically true" statements.
  • The job market is flooded with candidates who are "cosplaying" at engineering, making it difficult for honest, competent individuals to compete.
  • Many employers and interviewers are not seriously engaged in engineering and overlook actual competence in favor of congratulatory conversation and superficial criteria.
  • Most tech projects are "default dead," making it challenging for honest candidates to present impressive achievements without embellishment.
  • The author suggests that escaping the "Infinite Lie Vortex" requires building financial security, maintaining low expenses, and cultivating relationships with like-minded professionals.
  • Honesty in job applications may limit short-term opportunities but leads to more fulfilling and ethical work environments in the long run.
  • The author shares personal experiences of navigating the tech job market, including instances of misrepresentation and the challenges of maintaining integrity.
  • The piece concludes with a satirical, honest version of the author's CV, highlighting the absurdity of common resume claims and the value of authenticity.
  • Throughout the article, the author maintains a cynical, humorous tone while addressing serious issues in the tech industry's hiring practices and work culture.
  • The author emphasizes the importance of self-awareness, continuous learning, and valuing personal integrity over financial gain or status.
If your model is "it's okay to lie if I've been lied to" then we're all knee deep in bullshit forever and can never escape Transaction Cost Hell.
Do I agree that entering The Infinite Lie Vortex is wise or good for you spiritually? No, not at all, just look at what it's called.
it is very common practice on the job market to have a CV that obfuscates the reality of your contribution at previous workplaces. Putting aside whether you're a professional web developer because you got paid $20 by your uncle to fix some HTML, the issue with lying lies in the intent behind it. If you have a good idea of what impression you are leaving your interlocutor with, and you are crafting statements such that the image in their head does not map to reality, then you are lying.
Unfortunately thanks to our dear leader's masterful consummation of toxicity and incompetence, the truth of the matter is that:
  • They left their previous job due to burnout related to extensive bullying, which future employers would like to know because they would prefer to blacklist everyone involved to minimize their chances of getting the bad actor. Everyone involved thinks that they were the victim, and an employer does not have access to my direct observations, so this is not even an unreasonable strategy.
  • All their projects were failures through no fault of their own, in a market where everyone has "successfully designed and implemented" their data governance initiatives, as indicated previously.
What I am trying to say is that I currently believe that there are not enough employers who will appreciate honesty and competence for a strategy of honesty to reliably pay your rent. My concern, with regards to Nat's original article, is that the industry is so primed with nonsense that we effectively have two industries. We have a real engineering market, where people are fairly serious and gather in small conclaves (only two of which I have seen, and one of those was through a blog reader's introduction), and then a gigantic field of people that are cosplaying at engineering. The real market is large in absolute terms, but tiny relative to the number of candidates and companies out there. The fake market is all people that haven't cultivated the discipline to engineer but nonetheless want software engineering salaries and clout.
There are some companies where your interviewer is going to be a reasonable person, and there you can be totally honest. For example, it is a good thing to admit that the last project didn't go that well, because the kind of person that sees the industry for what it is, and who doesn't endorse bullshit, and who works on themselves diligently - that person is going to hear your honesty, and is probably reasonably good at detecting when candidates are revealing just enough fake problems to fake honesty, and then they will hire you. You will both put down your weapons and embrace. This is very rare. A strategy that is based on assuming this happens if you keep repeatedly engaging with random companies on the market is overwhelmingly going to result in a long, long search. For the most part, you will be engaged in a twisted, adversarial game with actors who will relentlessly try to do things like make you say a number first in case you say one that's too low.
Suffice it to say that, if you grin in just the right way and keep a straight face, there is a large class of person that will hear you say "Hah, you know, I'm just reflecting on how nice it is to be in a room full of people who are asking the right questions after all my other terrible interviews." and then they will shake your hand even as they shatter the other one patting themselves on the back at Mach 10. I know, I know, it sounds like that doesn't work but it absolutely does.
Neil Gaiman On Lying
People get hired because, somehow, they get hired. In my case I did something which these days would be easy to check, and would get me into trouble, and when I started out, in those pre-internet days, seemed like a sensible career strategy: when I was asked by editors who I'd worked for, I lied. I listed a handful of magazines that sounded likely, and I sounded confident, and I got jobs. I then made it a point of honour to have written something for each of the magazines I'd listed to get that first job, so that I hadn't actually lied, I'd just been chronologically challenged... You get work however you get work.
Nat Bennett, of Start Of This Article fame, writes: If you want to be the kind of person who walks away from your job when you're asked to do something that doesn't fit your values, you need to save money. You need to maintain low fixed expenses. Acting with integrity – or whatever it is that you value – mostly isn't about making the right decision in the moment. It's mostly about the decisions that you make leading up to that moment, that prepare you to be able to make the decision that you feel is right.
As a rough rule, if I've let my relationship with a job deteriorate to the point that I must leave, I have already waited way too long, and will be forced to move to another place that is similarly upsetting.
And that is, of course, what had gradually happened. I very painfully navigated the immigration process, trimmed my expenses, found a position that is frequently silly but tolerable for extended periods of time, and started looking for work before the new gig, mostly the same as the last gig, became unbearable. Everything other than the immigration process was burnout induced, so I can't claim that it was a clever strategy, but the net effect is that I kept sacrificing things at the altar of Being Okay With Less, and now I am in an apartment so small that I think I almost fractured my little toe banging it on the side of my bed frame, but I have the luxury of not lying.
If I had to write down what a potential exit pathway looks like, it might be:
  1. Find a job even if you must navigate the Vortex, and it doesn't matter if it's bad because there's a grace period where your brain is not soaking up the local brand of madness, i.e., when you don't even understand the local politics yet.
  2. Meet good programmers that appreciate things like mindfulness in your local area - you're going to have to figure out how to do this one.
  3. Repeat Step 1 and Step 2 on a loop, building yourself up as a person, engineer, and friend, until someone who knows you for you hires you based on your personality and values, rather than "I have seven years doing bullshit in React that clearly should have been ten raw HTML pages served off one Django server".
A CEO here told me that he asks people to self-evaluate their skill on a scale of 1 to 10, but he actually has solid measures. You're at 10 at Python if you're a core maintainer. 9 if you speak at major international conferences, etc. On that scale, I'm a 4, or maybe a 5 on my best day ever, and that's the sad truth. We'll get there one day.
I will always hate writing code that moves the overall product further from Quality. I'll write a basic feature and take shortcuts, but not the kind that we are going to build on top of, which is unattractive to employers because sacrificing the long-term health of a product is a big part of status laundering.
The only piece of software I've written that is unambiguously helpful is this dumb hack that I used to cut up episodes of the Glass Cannon Podcast into one minute segments so that my skip track button on my underwater headphones is now a janky fast forward one minute button. It took me like ten minutes to write, and is my greatest pride.
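As a guess at what a hack like that might look like (the author's actual script isn't shown, and the file names below are placeholders), ffmpeg's segment muxer can do the one-minute cutting without re-encoding:

```ts
// split.ts: cut an episode into one-minute chunks with ffmpeg's segment muxer.
import { execFileSync } from "node:child_process";

function splitIntoMinutes(input: string, outPrefix: string): void {
  execFileSync("ffmpeg", [
    "-i", input,
    "-f", "segment",        // segment muxer: emit a sequence of output files
    "-segment_time", "60",  // one-minute pieces
    "-c", "copy",           // stream copy, no re-encoding
    `${outPrefix}_%03d.mp3`,
  ]);
}

splitIntoMinutes("episode.mp3", "episode_part"); // placeholder file names
```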
Have I actually worked with Google? My CV says so, but guess what, not quite! I worked on one project where the money came from Google, but we really had one call with one guy who said we were probably on track, which we definitely were not!
Did I salvage an A$1.2M project? Technically yes, but only because I forced the previous developer to actually give us his code before he quit! This is not replicable, and then the whole engineering team quit over a mandatory return to office, so the application never shipped!
Did I save a half million dollars in Snowflake expenses? CV says yes, reality says I can only repeat that trick if someone decided to set another pile of money on fire and hand me the fire extinguisher! Did I really receive departmental recognition for this? Yes, but only in that they gave me A$30 and a pat on the head and told me that a raise wasn't on the table.
Was I the most highly paid senior engineer at that company? Yes, but only because I had insider information that four people quit in the same week, and used that to negotiate a 20% raise over the next highest salary - the decision was based around executive KPIs, not my competence!
·ludic.mataroa.blog·
The Complex Problem Of Lying For Jobs — Ludicity
Culture Needs More Jerks | Defector
Culture Needs More Jerks | Defector
The function of criticism is and has always been to complicate our sense of beauty. Good criticism of music we love—or, occasionally, really hate—increases the dimensions and therefore the volume of feeling. It exercises that part of ourselves which responds to art, making it stronger.
The correction to critics’ failure to take pop music seriously is known as poptimism: the belief that pop music is just as worthy of critical consideration as genres like rock, rap or, god forbid, jazz. In my opinion, this correction was basically good. It’s fun and interesting to think seriously about music that is meant to be heard on the radio or danced to in clubs, the same way it is fun and interesting to think about crime novels or graphic design. For the critic, maybe more than for anyone else, it is important to remember that while a lot of great stuff is not popular, popular stuff can be great, too.
every good idea has a dumber version of itself on the internet. The dumb version of poptimism is the belief that anything sufficiently popular must be good. This idea is supported by certain structural forces, particularly the ability, through digitization, to count streams, pageviews, clicks, and other metrics so exactly that every artist and the music they release can be assigned a numerical value representing their popularity relative to everything else. The answer to the question “What do people like?” is right there on a chart, down to the ones digit, conclusively proving that, for example, Drake (74,706,786,894 lead streams) is more popular than The Weeknd (56,220,309,818 lead streams) on Spotify.
The question “What is good?” remains a matter of disagreement, but in the face of such precise numbers, how could you argue that the Weeknd was better? You would have to appeal to subjective aesthetic assessments (e.g. Drake’s combination of brand-checking and self-pity recreates neurasthenic consumer culture without transcending it) or socioeconomic context (e.g. Drake is a former child actor who raps about street life for listeners who want to romanticize black poverty without hearing from anyone actually affected by it, plus he’s Canadian) in a way that would ultimately just be your opinion. And who needs one jerk’s opinion when democracy is right there in the numbers?
This attitude is how you get criticism like “Why Normal Music Reviews No Longer Make Sense for Taylor Swift,” which cites streaming data (The Tortured Poets Department’s 314.5 million release-day streams versus Cowboy Carter’s 76.6 million) to argue that Swift is better understood not as a singer-songwriter but as an area of brand activity, along the lines of the Marvel Cinematic Universe or Star Wars. “The tepid music reviews often miss the fact that ‘music’ is something that Swift stopped selling long ago,” New Yorker contributor Sinéad O’Sullivan writes. “Instead, she has spent two decades building the foundation of a fan universe, filled with complex, in-sequence narratives that have been contextualized through multiple perspectives across eleven blockbuster installments. She is not creating standalone albums but, rather, a musical franchise.”
The fact that most cognitively normal adults regard these bands as children’s music is what makes their fan bases not just ticket-buyers but subcultures.
The power of the antagonist-subculture dynamic was realized by major record labels in the early 1990s, when the most popular music in America was called “alternative.”
For the person who is not into music—the person who just happens to be rapturously committed to the artists whose music you hear everywhere whether you want to or not, whose new albums are like iPhone releases and whose shows are like Disneyland—the critic is a foil.
·defector.com·
Culture Needs More Jerks | Defector
Dario Amodei — Machines of Loving Grace
Dario Amodei — Machines of Loving Grace
I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides.
the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires.
The five categories I am most excited about are:
  • Biology and physical health
  • Neuroscience and mental health
  • Economic development and poverty
  • Peace and governance
  • Work and meaning
We could summarize this as a “country of geniuses in a datacenter”.
you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.
I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the ’80s, but it took another 25 years for people to realize it could be repurposed for general gene editing. Such discoveries are also often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
·darioamodei.com·
Dario Amodei — Machines of Loving Grace