The CrowdStrike Outage and Market-Driven Brittleness
Redundancies are unprofitable. Being slow and careful is unprofitable. Being less embedded in and less essential and having less access to the customers’ networks and machines is unprofitable—at least in the short term, by which these companies are measured. This is true for companies like CrowdStrike. It’s also true for CrowdStrike’s customers, who also didn’t have resilience, redundancy, or backup systems in place for failures such as this because they are also an expense that affects short-term profitability.
The market rewards short-term profit-maximizing systems, and doesn’t sufficiently penalize such companies for the impact their mistakes can have. (Stock prices depress only temporarily. Regulatory penalties are minor. Class-action lawsuits settle. Insurance blunts financial losses.) It’s not even clear that the information technology industry could exist in its current form if it had to take into account all the risks such brittleness causes.
The asymmetry of costs is largely due to our complex interdependency on so many systems and technologies, any one of which can cause major failures. Each piece of software depends on dozens of others, typically written by other engineering teams sometimes years earlier on the other side of the planet. Some software systems have not been properly designed to contain the damage caused by a bug or a hack of some key software dependency.
This market force has led to the current global interdependence of systems, far and wide beyond their industry and original scope. It’s why flying planes depends on software that has nothing to do with the avionics. It’s why, in our connected internet-of-things world, we can imagine a similar bad software update resulting in our cars not starting one morning or our refrigerators failing.
Right now, the market incentives in tech are to focus on how things succeed: A company like CrowdStrike provides a key service that checks off required functionality on a compliance checklist, which makes it all about the features that they will deliver when everything is working. That’s exactly backward. We want our technological infrastructure to mimic nature in the way things fail. That will give us deep complexity rather than just surface complexity, and resilience rather than brittleness.
Netflix is famous for its Chaos Monkey tool, which intentionally causes failures to force the systems (and, really, the engineers) to be more resilient. The incentives don’t line up in the short term: It makes it harder for Netflix engineers to do their jobs and more expensive for them to run their systems. Over years, this kind of testing generates more stable systems. But it requires corporate leadership with foresight and a willingness to spend in the short term for possible long-term benefits.
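The chaos-testing idea itself is simple enough to sketch. Below is a minimal, hypothetical Python illustration—not Netflix's actual Chaos Monkey, which terminates real production instances—of the property this kind of testing enforces: every code path must tolerate any dependency being down.

```python
import random

# Toy in-process "service registry": name -> currently healthy?
# (Hypothetical names; real chaos tooling kills real VMs/containers.)
services = {"api": True, "recommendations": True}

def inject_failure(rng: random.Random) -> str:
    """Randomly take one dependency down, as a chaos test would."""
    victim = rng.choice(sorted(services))
    services[victim] = False
    return victim

def render_homepage() -> str:
    # The discipline chaos testing enforces: callers need a fallback
    # for every dependency, instead of assuming it is always up.
    if not services["api"]:
        return "static error page"                 # degrade, don't crash
    if not services["recommendations"]:
        return "homepage without recommendations"  # partial result
    return "personalized homepage"

print("killed:", inject_failure(random.Random()))
print(render_homepage())
```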
The National Highway Traffic Safety Administration crashes cars to learn what happens to the people inside. But cars are relatively simple, and keeping people safe is straightforward. Software is different. It is diverse, is constantly changing, and has to continually adapt to novel circumstances. We can’t expect that a regulation that mandates a specific list of software crash tests would suffice. Again, security and resilience are achieved through the process by which we fail and fix, not through any specific checklist. Regulation has to codify that process.
·lawfaremedia.org·
The Complex Problem Of Lying For Jobs — Ludicity

Claude summary: Key takeaway: Lying on job applications is pervasive in the tech industry due to systemic issues, but it creates an "Infinite Lie Vortex" that erodes integrity and job satisfaction. While honesty may limit short-term opportunities, it's crucial for long-term career fulfillment and ethical work environments.

Summary

  • The author responds to Nat Bennett's article against lying in job interviews, acknowledging its validity while exploring the nuances of the issue.
  • Most people in the tech industry are already lying or misrepresenting themselves on their CVs and in interviews, often through "technically true" statements.
  • The job market is flooded with candidates who are "cosplaying" at engineering, making it difficult for honest, competent individuals to compete.
  • Many employers and interviewers are not seriously engaged in engineering and overlook actual competence in favor of congratulatory conversation and superficial criteria.
  • Most tech projects are "default dead," making it challenging for honest candidates to present impressive achievements without embellishment.
  • The author suggests that escaping the "Infinite Lie Vortex" requires building financial security, maintaining low expenses, and cultivating relationships with like-minded professionals.
  • Honesty in job applications may limit short-term opportunities but leads to more fulfilling and ethical work environments in the long run.
  • The author shares personal experiences of navigating the tech job market, including instances of misrepresentation and the challenges of maintaining integrity.
  • The piece concludes with a satirical, honest version of the author's CV, highlighting the absurdity of common resume claims and the value of authenticity.
  • Throughout the article, the author maintains a cynical, humorous tone while addressing serious issues in the tech industry's hiring practices and work culture.
  • The author emphasizes the importance of self-awareness, continuous learning, and valuing personal integrity over financial gain or status.
If your model is "it's okay to lie if I've been lied to" then we're all knee deep in bullshit forever and can never escape Transaction Cost Hell.
Do I agree that entering The Infinite Lie Vortex is wise or good for you spiritually? No, not at all, just look at what it's called.
it is very common practice on the job market to have a CV that obfuscates the reality of your contribution at previous workplaces. Putting aside whether you're a professional web developer because you got paid $20 by your uncle to fix some HTML, the issue with lying lies in the intent behind it. If you have a good idea of what impression you are leaving your interlocutor with, and you are crafting statements such that the image in their head does not map to reality, then you are lying.
Unfortunately, thanks to our dear leader's masterful consummation of toxicity and incompetence, the truth of the matter is that:
  • They left their previous job due to burnout related to extensive bullying, which future employers would like to know because they would prefer to blacklist everyone involved to minimize their chances of getting the bad actor. (Everyone involved thinks that they were the victim, and an employer does not have access to my direct observations, so this is not even an unreasonable strategy.)
  • All their projects were failures through no fault of their own, in a market where everyone has "successfully designed and implemented" their data governance initiatives, as indicated previously.
What I am trying to say is that I currently believe that there are not enough employers who will appreciate honesty and competence for a strategy of honesty to reliably pay your rent. My concern, with regards to Nat's original article, is that the industry is so primed with nonsense that we effectively have two industries. We have a real engineering market, where people are fairly serious and gather in small conclaves (only two of which I have seen, and one of those was through a blog reader's introduction), and then a gigantic field of people that are cosplaying at engineering. The real market is large in absolute terms, but tiny relative to the number of candidates and companies out there. The fake market is all people that haven't cultivated the discipline to engineer but nonetheless want software engineering salaries and clout.
There are some companies where your interviewer is going to be a reasonable person, and there you can be totally honest. For example, it is a good thing to admit that the last project didn't go that well, because the kind of person that sees the industry for what it is, and who doesn't endorse bullshit, and who works on themselves diligently - that person is going to hear your honesty, and is probably reasonably good at detecting when candidates are revealing just enough fake problems to fake honesty, and then they will hire you. You will both put down your weapons and embrace. This is very rare. A strategy that is based on assuming this happens if you keep repeatedly engaging with random companies on the market is overwhelmingly going to result in a long, long search. For the most part, you will be engaged in a twisted, adversarial game with actors who will relentlessly try to do things like make you say a number first in case you say one that's too low.
Suffice it to say that, if you grin in just the right way and keep a straight face, there is a large class of person that will hear you say "Hah, you know, I'm just reflecting on how nice it is to be in a room full of people who are asking the right questions after all my other terrible interviews." and then they will shake your hand even as they shatter the other one patting themselves on the back at Mach 10. I know, I know, it sounds like that doesn't work but it absolutely does.
Neil Gaiman On Lying:
People get hired because, somehow, they get hired. In my case I did something which these days would be easy to check, and would get me into trouble, and when I started out, in those pre-internet days, seemed like a sensible career strategy: when I was asked by editors who I'd worked for, I lied. I listed a handful of magazines that sounded likely, and I sounded confident, and I got jobs. I then made it a point of honour to have written something for each of the magazines I'd listed to get that first job, so that I hadn't actually lied, I'd just been chronologically challenged... You get work however you get work.
Nat Bennett, of Start Of This Article fame, writes: If you want to be the kind of person who walks away from your job when you're asked to do something that doesn't fit your values, you need to save money. You need to maintain low fixed expenses. Acting with integrity – or whatever it is that you value – mostly isn't about making the right decision in the moment. It's mostly about the decisions that you make leading up to that moment, that prepare you to be able to make the decision that you feel is right.
As a rough rule, if I've let my relationship with a job deteriorate to the point that I must leave, I have already waited way too long, and will be forced to move to another place that is similarly upsetting.
And that is, of course, what had gradually happened. I very painfully navigated the immigration process, trimmed my expenses, found a position that is frequently silly but tolerable for extended periods of time, and started looking for work before the new gig, mostly the same as the last gig, became unbearable. Everything other than the immigration process was burnout induced, so I can't claim that it was a clever strategy, but the net effect is that I kept sacrificing things at the altar of Being Okay With Less, and now I am in an apartment so small that I think I almost fractured my little toe banging it on the side of my bed frame, but I have the luxury of not lying.
If I had to write down what a potential exit pathway looks like, it might be:
  • Find a job even if you must navigate the Vortex, and it doesn't matter if it's bad, because there's a grace period where your brain is not soaking up the local brand of madness, i.e., when you don't even understand the local politics yet.
  • Meet good programmers that appreciate things like mindfulness in your local area - you're going to have to figure out how to do this one.
  • Repeat Step 1 and Step 2 on a loop, building yourself up as a person, engineer, and friend, until someone who knows you for you hires you based on your personality and values, rather than "I have seven years doing bullshit in React that clearly should have been ten raw HTML pages served off one Django server".
A CEO here told me that he asks people to self-evaluate their skill on a scale of 1 to 10, but he actually has solid measures. You're a 10 at Python if you're a core maintainer, a 9 if you speak at major international conferences, etc. On that scale, I'm a 4, or maybe a 5 on my best day ever, and that's the sad truth. We'll get there one day.
I will always hate writing code that moves the overall product further from Quality. I'll write a basic feature and take shortcuts, but not the kind that we are going to build on top of, which is unattractive to employers because sacrificing the long-term health of a product is a big part of status laundering.
The only piece of software I've written that is unambiguously helpful is this dumb hack that I used to cut up episodes of the Glass Cannon Podcast into one minute segments so that my skip track button on my underwater headphones is now a janky fast forward one minute button. It took me like ten minutes to write, and is my greatest pride.
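The post doesn't show the script, but the hack as described is little more than a wrapper around ffmpeg's segment muxer. A plausible reconstruction (file names hypothetical; assumes ffmpeg is installed):

```python
import subprocess
import sys

def split_into_minutes(episode: str, out_prefix: str) -> None:
    """Cut an audio file into one-minute chunks, so a player's
    skip-track button becomes a one-minute fast-forward button."""
    subprocess.run(
        [
            "ffmpeg", "-i", episode,
            "-f", "segment",        # write a sequence of output files
            "-segment_time", "60",  # one minute per chunk
            "-c", "copy",           # copy the stream; no re-encoding
            f"{out_prefix}_%03d.mp3",
        ],
        check=True,
    )

if __name__ == "__main__":
    # e.g. python split.py episode.mp3 glass_cannon_ep42
    split_into_minutes(sys.argv[1], sys.argv[2])
```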
Have I actually worked with Google? My CV says so, but guess what, not quite! I worked on one project where the money came from Google, but we really had one call with one guy who said we were probably on track, which we definitely were not!
Did I salvage an A$1.2M project? Technically yes, but only because I forced the previous developer to actually give us his code before he quit! This is not replicable, and then the whole engineering team quit over a mandatory return to office, so the application never shipped!
Did I save a half million dollars in Snowflake expenses? CV says yes, reality says I can only repeat that trick if someone decided to set another pile of money on fire and hand me the fire extinguisher! Did I really receive departmental recognition for this? Yes, but only in that they gave me A$30 and a pat on the head and told me that a raise wasn't on the table.
Was I the most highly paid senior engineer at that company? Yes, but only because I had insider information that four people quit in the same week, and used that to negotiate a 20% raise over the next highest salary - the decision was based around executive KPIs, not my competence!
·ludic.mataroa.blog·
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up. “There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol. Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?” Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
The most hated workplace software on the planet
LinkedIn, Reddit, and Blind abound with enraged job applicants and employees sharing tales of how difficult it is to book paid leave, how Kafkaesque it is to file an expense, how nerve-racking it is to close out a project. "I simply hate Workday. Fuck them and those who insist on using it for recruitment," one Reddit user wrote. "Everything is non-intuitive, so even the simplest tasks leave me scratching my head," wrote another. "Keeping notes on index cards would be more effective." Every HR professional and hiring manager I spoke with — whose lives are supposedly made easier by Workday — described Workday with a sense of cosmic exasperation.
If candidates hate Workday, if employees hate Workday, if HR people and managers processing and assessing those candidates and employees through Workday hate Workday — if Workday is the most annoying part of so many workers' workdays — how is Workday everywhere? How did a software provider so widely loathed become a mainstay of the modern workplace?
There is a saying in systems thinking: The purpose of a system is what it does (POSIWID), not what it fails to do. And the reality is that what Workday — and its many despised competitors — does for organizations is far more important than the anguish it causes everyone else.
In 1988, PeopleSoft, backed by IBM, built the first fully fledged Human Resources Information System. In 2004, Oracle acquired PeopleSoft for $10.3 billion. One of its founders, David Duffield, then started a new company that upgraded PeopleSoft's model to near limitless cloud-based storage — giving birth to Workday, the intractable nepo baby of HR software.
Workday is indifferent to our suffering in a job hunt, because we aren't Workday's clients, companies are. And these companies — from AT&T to Bank of America to Teladoc — have little incentive to care about your application experience, because if you didn't get the job, you're not their responsibility. For a company hiring and onboarding on a global scale, it is simply easier to screen fewer candidates if the result is still a single hire.
A search on a job board can return hundreds of listings for in-house Workday consultants: IT and engineering professionals hired to fix the software promising to fix processes.
For recruiters, Workday also lacks basic user-interface flexibility. When you promise ease-of-use and simplicity, you must deliver on the most basic user interactions. And yet: Sometimes searching for a candidate, or locating a candidate's status feels impossible. This happens outside of recruiting, too, where locating or attaching a boss's email to approve an expense sheet is complicated by the process, not streamlined. Bureaucratic hell is always about one person's ease coming at the cost of someone else's frustration, time wasted, and busy work. Workday makes no exceptions.
Workday touts its ability to track employee performance by collecting data and marking results, but it is employees who must spend time inputting this data. A creative director at a Fortune 500 company told me how in less than two years his company went "from annual reviews to twice-annual reviews to quarterly reviews to quarterly reviews plus separate twice-annual reviews." At each interval higher-ups pressed HR for more data, because they wanted what they'd paid for with Workday: more work product. With a press of a button, HR could provide that, but the entire company suffered thousands more hours of busy work. Automation made it too easy to do too much. (Workday's "customers choose the frequency at which they conduct reviews, not Workday," said the spokesperson.)
At the scale of a large company, this is simply too much work to expect a few people to do and far too user-specific to expect automation to handle well. It's why Workday can be the worst while still allowing that Paychex is the worst, Paycom is the worst, Paycor is the worst, and Dayforce is the worst. "HR software sucking" is a big tent.
Workday finds itself between enshittification steps two and three. The platform once made things faster, simpler for workers. But today it abuses workers by cutting corners on job-application and reimbursement procedures. In the process, it provides the value of a one-stop HR shop to its paying customers. It seems it's only a matter of time before Workday and its competitors try to split the difference and cut those same corners with the accounts that pay their bills.
Workday reveals what's important to the people who run Fortune 500 companies: easily and conveniently distributing busy work across large workforces. This is done with the arbitrary and perfunctory performance of work tasks (like excessive reviews) and with the throttling of momentum by making finance and HR tasks difficult. If your expenses and reimbursements are difficult to file, that's OK, because the people above you don't actually care if you get reimbursed. If it takes applicants 128% longer to apply, the people who implemented Workday don't really care. Throttling applicants is perhaps not intentional, but it's good for the company.
·businessinsider.com·
‘To the Future’: Saudi Arabia Spends Big to Become an A.I. Superpower
Saudi Arabia is making ambitious efforts to become a global leader in artificial intelligence and technology, driven by the kingdom's "Vision 2030" plan to diversify its oil-dependent economy. Backed by vast oil wealth, it is investing billions of dollars to attract global tech companies and talent, creating a new tech hub in the desert outside Riyadh. However, the kingdom's authoritarian government and human rights record have raised concerns about its growing technological influence, placing it at the center of an escalating geopolitical competition between the U.S. and China as both superpowers seek to shape the future of critical technologies.
·nytimes.com·
How Perplexity builds product
An inside look at how Perplexity builds product—which to me feels like what the future of product development will look like for many companies:
  • AI-first: They’ve been asking AI questions about every step of the company-building process, including “How do I launch a product?” Employees are encouraged to ask AI before bothering colleagues.
  • Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
  • Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
  • Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people’s work.
  • A prediction for the future: Johnny said, “If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time.”
Typical projects we work on only have one or two people on it. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He’s a brand designer, but he does audio engineering and he’s doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don’t think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
a big problem with AI is how you prioritize between more productivity-based use cases versus the engaging chatbot-type use cases.
we look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
all teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management and bug tracking?
Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving—if a task hasn’t been mentioned in a while, chances are it’s not actually important.
The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.
Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
High-level objectives and directions come top-down, but a large amount of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
·lennysnewsletter.com·
AI startups require new strategies

comment from Habitue on Hacker News: > These are some good points, but it doesn't seem to mention a big way in which startups disrupt incumbents, which is that they frame the problem a different way, and they don't need to protect existing revenue streams.

The “hard tech” in AI are the LLMs available for rent from OpenAI, Anthropic, Cohere, and others, or available as open source with Llama, Bloom, Mistral and others. The hard-tech is a level playing field; startups do not have an advantage over incumbents.
There can be differentiation in prompt engineering, problem break-down, use of vector databases, and more. However, this isn’t something where startups have an edge, such as being willing to take more risks or be more creative. At best, it is neutral; certainly not an advantage.
This doesn’t mean it’s impossible for a startup to succeed; surely many will. It means that you need a strategy that creates differentiation and distribution, even more quickly and dramatically than is normally required.
Whether you’re training existing models, developing models from scratch, or simply testing theories, high-quality data is crucial. Incumbents have the data because they have the customers. They can immediately leverage customers’ data to train models and tune algorithms, so long as they maintain secrecy and privacy.
Intercom’s AI strategy is built on the foundation of hundreds of millions of customer interactions. This gives them an advantage over a newcomer developing a chatbot from scratch. Similarly, Google has an advantage in AI video because they own the entire YouTube library. GitHub has an advantage with Copilot because they trained their AI on their vast code repository (including changes, with human-written explanations of the changes).
While there will always be individuals preferring the startup environment, the allure of working on AI at an incumbent is equally strong for many, especially pure computer and data scientists who, more than anything else, want to work on interesting AI projects. They get to work in the code, with a large budget, with all the data, with above-market compensation, and a built-in large customer base that will enjoy the fruits of their labor, all without having to do sales, marketing, tech support, accounting, raising money, or anything else that isn’t the pure joy of writing interesting code. This is heaven for many.
A chatbot is in the chatbot market, and an SEO tool is in the SEO market. Adding AI to those tools is obviously a good idea; indeed companies who fail to add AI will likely become irrelevant in the long run. Thus we see that “AI” is a new tool for developing within existing markets, not itself a new market (except for actual hard-tech AI companies).
AI is in the solution-space, not the problem-space, as we say in product management. The customer problem you’re solving is still the same as ever. The problem a chatbot is solving is the same as ever: Talk to customers 24/7 in any language. AI enables completely new solutions that none of us were imagining a few years ago; that’s what’s so exciting and truly transformative. However, the customer problems remain the same, even though the solutions are different
Companies will pay more for chatbots where the AI is excellent, more support contacts are deferred from reaching a human, more languages are supported, and more kinds of questions can be answered, so existing chatbot customers might pay more, which grows the market. Furthermore, some companies who previously (rightly) saw chatbots as a terrible customer experience, will change their mind with sufficiently good AI, and will enter the chatbot market, which again grows that market.
the right way to analyze this is not to say “the AI market is big and growing” but rather: “Here is how AI will transform this existing market.” And then: “Here’s how we fit into that growth.”
·longform.asmartbear.com·
Soft Power in Tech
Despite its direct affiliation, Stripe Press provokes a distinctive, emotional feeling. It’s an example of how form affects soft power. By focusing on actual, physical books — and giving them a loving, literary treatment — Stripe shows this project is firmly outside the world of “marketing.” Rather, this is a place for Stripe to demonstrate its ideological affinities and reinforce its philosophical positioning. The affection this project has earned suggests it has found distribution.
Most obviously, they can invest in it via in-house initiatives. Even moderately sized tech companies have large marketing teams capable of running interesting experiments, especially if augmented with external talent. Business banking platform Mercury has made strides in this area over the past couple of years, launching a glossy, thoughtful publication named Meridian.
·thegeneralist.substack.com·
Generative AI’s Act Two
This page also has many infographics providing an overview of different aspects of the AI industry at time of writing.
We still believe that there will be a separation between the “application layer” companies and foundation model providers, with model companies specializing in scale and research and application layer companies specializing in product and UI. In reality, that separation hasn’t cleanly happened yet. In fact, the most successful user-facing applications out of the gate have been vertically integrated.
We predicted that the best generative AI companies could generate a sustainable competitive advantage through a data flywheel: more usage → more data → better model → more usage. While this is still somewhat true, especially in domains with very specialized and hard-to-get data, the “data moats” are on shaky ground: the data that application companies generate does not create an insurmountable moat, and the next generations of foundation models may very well obliterate any data moats that startups generate. Rather, workflows and user networks seem to be creating more durable sources of competitive advantage.
Some of the best consumer companies have 60-65% DAU/MAU; WhatsApp’s is 85%. By contrast, generative AI apps have a median of 14% (with the notable exception of Character and the “AI companionship” category). This means that users are not finding enough value in Generative AI products to use them every day yet.
generative AI’s biggest problem is not finding use cases or demand or distribution, it is proving value. As our colleague David Cahn writes, “the $200B question is: What are you going to use all this infrastructure to do? How is it going to change people’s lives?”
·sequoiacap.com·
Elon Musk’s Shadow Rule
There is little precedent for a civilian’s becoming the arbiter of a war between nations in such a granular way, or for the degree of dependency that the U.S. now has on Musk in a variety of fields, from the future of energy and transportation to the exploration of space. SpaceX is currently the sole means by which NASA transports crew from U.S. soil into space, a situation that will persist for at least another year. The government’s plan to move the auto industry toward electric cars requires increasing access to charging stations along America’s highways. But this rests on the actions of another Musk enterprise, Tesla. The automaker has seeded so much of the country with its proprietary charging stations that the Biden Administration relaxed an early push for a universal charging standard disliked by Musk. His stations are eligible for billions of dollars in subsidies, so long as Tesla makes them compatible with the other charging standard.
In the past twenty years, against a backdrop of crumbling infrastructure and declining trust in institutions, Musk has sought out business opportunities in crucial areas where, after decades of privatization, the state has receded. The government is now reliant on him, but struggles to respond to his risk-taking, brinkmanship, and caprice
Current and former officials from NASA, the Department of Defense, the Department of Transportation, the Federal Aviation Administration, and the Occupational Safety and Health Administration told me that Musk’s influence had become inescapable in their work, and several of them said that they now treat him like a sort of unelected official
Sam Altman, the C.E.O. of OpenAI, with whom Musk has both worked and sparred, told me, “Elon desperately wants the world to be saved. But only if he can be the one to save it.”
“He had grown up in the male-dominated culture of South Africa,” Justine wrote later. “The will to compete and dominate that made him so successful in business did not magically shut off when he came home.”
There are competitors in the field, including Jeff Bezos’s Blue Origin and Richard Branson’s Virgin Galactic, but none yet rival SpaceX. The new space race has the potential to shape the global balance of power. Satellites enable the navigation of drones and missiles and generate imagery used for intelligence, and they are mostly under the control of private companies.
A number of officials suggested to me that, despite the tensions related to the company, it has made government bureaucracies nimbler. “When SpaceX and NASA work together, we work closer to optimal speed,” Kenneth Bowersox, NASA’s associate administrator for space operations, told me. Still, some figures in the aerospace world, even ones who think that Musk’s rockets are basically safe, fear that concentrating so much power in private companies, with so few restraints, invites tragedy.
Tesla for a time included in its vehicles the ability to replace the humming noises that electric cars must emit—since their engines make little sound—with goat bleats, farting, or a sound of the owner’s choice. “We’re, like, ‘No, that’s not compliant with the regulations, don’t be stupid,’ ” Cliff told me. Tesla argued with regulators for more than a year, according to an N.H.T.S.A. safety report
Musk’s personal wealth dwarfs the entire budget of OSHA, which is tasked with monitoring the conditions in his workplaces. “You add on the fact that he considers himself to be a master of the universe and these rules just don’t apply to people like him,” Jordan Barab, a former Deputy Assistant Secretary of Labor at OSHA, told me. “There’s a lot of underreporting in industry in general. And Elon Musk kind of seems to raise that to an art form.”
Some people who know Musk well still struggle to make sense of his political shift. “There was nothing political about him ever,” a close associate told me. “I’ve been around him for a long time, and had lots of deep conversations with the man, at all hours of the day—never heard a fucking word about this.”
the cuts that Musk had instituted quickly took a toll on the company. Employees had been informed of their termination via brusque, impersonal e-mails—Musk is now being sued for hundreds of millions of dollars by employees who say that they are owed additional severance pay—and the remaining staffers were abruptly ordered to return to work in person. Twitter’s business model was also in question, since Musk had alienated advertisers and invited a flood of fake accounts by reinventing the platform’s verification process
Musk’s trolling has increasingly taken on the vernacular of hard-right social media, in which grooming, pedophilia, and human trafficking are associated with liberalism
It is difficult to say whether Musk’s interest in A.I. is driven by scientific wonder and altruism or by a desire to dominate a new and potentially powerful industry.
·newyorker.com·
Panic Among the Streamers
Netflix could buy 10 top-quality screenplays per year with the cash they’ll spend on that one job. They must have big plans for AI. There are also a half dozen AI job openings at Disney. And the tech-based streamers (Apple, Amazon) have already made big investments in AI. Sony launched an AI business unit in April 2020—in order to “enhance human imagination and creativity, particularly in the realm of entertainment.”
When Spotify launched on the stock exchange in 2018, it was losing around $30 million per month. Now it’s much larger, and is losing money at the pace of more than $100 million per month.
But the real problem at Spotify isn’t just convincing people to pay more. It runs much deeper. Spotify finds itself in the awkward position of asking people to pay more for a lousy interface that degrades the entire user experience.
Boredom is built into the platform, because they lose money if you get too excited about music—you’re like the person at the all-you-can-eat buffet who goes back for a third helping. They make the most money from indifferent, lukewarm fans, and they created their interface with them in mind. In other words, Spotify’s highest aspiration is to be the Applebee’s of music.
They need to prepare for a possible royalty war against record labels and musicians—yes, that could actually happen—and they do that by creating a zombie world of brain dead listeners who don’t even know what artist they’re hearing. I know that sounds extreme, but spend some time on the platform and draw your own conclusions.
·honest-broker.com·
Isn’t That Spatial? | No Mercy / No Malice
Betting against a first-generation Apple product is a bad trade — from infamous dismissals of the iPhone to disappointment with the original iPad. In fact, this is a reflection of Apple’s strategy: Start with a product that’s more an elegant proof-of-concept than a prime-time hit; rely on early adopters to provide enough runway for its engineers to keep iterating; and trust in unmatched capital, talent, brand equity, and staying power to morph a first-gen toy into a third-gen triumph
We are a long way from making three screens, a glass shield, and an array of supporting hardware light enough to wear for an extended period. Reviewers were (purposefully) allowed to wear the Vision Pro for less than half an hour, and nearly every one said comfort was declining even then. Avatar: The Way of Water is 3 hours and 12 minutes.
Meta’s singular strategic objective is to escape second-tier status and, like Apple and Alphabet, control its distribution. And its path to independence runs through Apple Park. Zuckerberg is spending the GDP of a small country to invent a new world, the metaverse, where Apple doesn’t own the roads or power stations. Vision Pro is insurance against the metaverse evolving into anything more than an incel panic room.
The only product category where VR makes a difference is good VR games. Price is not the limiting factor; the quality of the VR experience is. Beat Saber is good and fun and physical exercise. Half-Life: Alyx is amazing. VR completely supercharges horror games and scary stalking shooters. Want to fear for your life and get PTSD in the comfort of your home? You can do it. Games can connect people and provide physical exercise. If the 3rd iteration of Vision Pro is good for 2 hours of playing at $2,000, Apple will kill the console market. PlayStations no more. Apple is not a gaming company, but if Vision Pro becomes better and slightly cheaper, Apple becomes a gaming company against its will.
·profgalloway.com·
The genius behind Zelda is at the peak of his power — and feeling his age
Aonuma became co-director of “Ocarina,” which revolutionized how game characters move and fight each other in a 3D space. Unlike cinema, video games require audience control of the camera. “Ocarina” created a “camera-locking” system to focus the perspective while you use the controller for character movement. The system, still used by games today, is a large reason “Ocarina” is often compared to the work of Orson Welles, who redefined how cinema was shot.
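As a rough illustration of the lock-on idea (a hypothetical sketch, not Nintendo's code): while the lock is active, the camera's position and facing are derived from the player-to-target axis, which frees the controller to drive character movement instead of the view.

```python
import math

def lock_on_camera(player, target, distance=4.0, height=1.5):
    """While locked on, place the camera behind the player along the
    player->target axis, so the target stays centered while the stick
    controls movement rather than the camera."""
    dx = target[0] - player[0]
    dz = target[2] - player[2]
    yaw = math.atan2(dx, dz)  # horizontal angle toward the target
    cam = (
        player[0] - math.sin(yaw) * distance,  # behind the player...
        player[1] + height,                    # ...slightly above...
        player[2] - math.cos(yaw) * distance,  # ...on the target axis
    )
    return cam, yaw  # camera position and look direction

cam, yaw = lock_on_camera(player=(0.0, 0.0, 0.0), target=(3.0, 0.0, 4.0))
print(cam, math.degrees(yaw))  # camera sits behind the player, facing the target
```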
The “ethos of Zelda” focuses on such new, unexpected concepts of play — even as many other modern games prioritize story, like TV and film do. With “Tears,” at “the beginning of development, there really isn’t a story,” Fujibayashi said. “Once we got to the point where we felt confident in the gameplay experience, that’s when the story starts to emerge.”
·washingtonpost.com·
This time, it feels different
In the past several months, I have come across people who do programming, legal work, business, accountancy and finance, fashion design, architecture, graphic design, research, teaching, cooking, travel planning, event management etc., all of whom have started using the same tool, ChatGPT, to solve use cases specific to their domains and problems specific to their personal workflows. This is unlike everyone using the same messaging tool or the same document editor. This is one tool, a single class of technology (LLM), whose multi-dimensionality has achieved widespread adoption across demographics where people are discovering how to solve a multitude of problems with no technical training, in the one way that is most natural to humans—via language and conversations.
I cannot recall the last time a single tool gained such widespread acceptance so swiftly, for so many use cases, across entire demographics.
there is significant substance beneath the hype. And that is what is worrying; the prospect of us starting to depend indiscriminately on poorly understood blackboxes, currently offered by megacorps, that actually work shockingly well.
If a single dumb, stochastic, probabilistic, hallucinating, snake oil LLM with a chat UI offered by one organisation can have such a viral, organic, and widespread adoption—where large disparate populations, people, corporations, and governments are integrating it into their daily lives for use cases that they are discovering themselves—imagine what better, faster, more “intelligent” systems to follow in the wake of what exists today would be capable of doing.
A policy for “AI anxiety”: We ended up codifying this into an actual AI policy to bring clarity to the organisation.[10] It states that no one at Zerodha will lose their job if a technology implementation (AI or non-AI) directly renders their existing responsibilities and tasks obsolete. The goal is to prevent unexpected rug-pulls from underneath the feet of humans. Instead, there will be efforts to create avenues and opportunities for people to upskill and switch between roles and responsibilities.
To those who believe that new jobs will emerge at meaningful rates to absorb the losses and shocks, what exactly are those new jobs? To those who think that governments will wave magic wands to regulate AI technologies, one just has to look at how well governments have managed to regulate, and how well humanity has managed to self-regulate, human-made climate change and planetary destruction. It is not then a stretch to think that the unraveling of our civilisation and its socio-politico-economic systems that are built on extracting, mass producing, and mass consuming garbage, might be exacerbated. Ted Chiang’s recent essay is a grim, but fascinating exploration of this. Speaking of grim, we can always count on us to ruin nice things! Along the lines of Murphy’s Law,[11] I present: Anything that can be ruined, will be ruined — Grumphy’s law
I asked GPT-4 to summarise this post and write five haikus on it. I have always operated a piece of software, but never asked it anything—that is, until now. Anyway, here is the fifth one. Future’s tangled web, Offloading choices to black boxes, Humanity’s voice fades
·nadh.in·
“I can’t make products just for 41-year-old tech founders”: Airbnb CEO Brian Chesky is taking it back to basics
Of course, you shouldn’t discriminate, but when we say belonging, it has to be more than just inclusion. It has to actually be the proactive manifestation of meeting people, creating connections in friendships. And Jony Ive said, “Well, you need to reframe it. It’s not just about belonging, it’s about human connection and belonging.” And that was, I think, a really big unlock.
The next thing Jony Ive said is he created this book for me, a book of his ideas, and the book was called “Beyond Where and When,” and he basically said that Airbnb should shift from beyond where and when to who and what: Who are you, and what do you want in your life? And that was a part of the inspiration behind Airbnb categories, that we wanted people to come to Airbnb without a destination in mind and that we could categorize properties not just by location but by what makes them unique, and that really influenced Airbnb categories and some of the stuff we’re doing now.
·theverge.com·
Thoughts on the software industry - linus.coffee
software gives you its own set of abstractions and basic vocabulary with which to understand every experience. It sort of smells like mathematics in some ways. But software’s way of looking at the world is more about abstractions modeling underlying complexities in systems; signal vs. noise; scale and orders of magnitude; and information — how much there is, what we can do with it, how we can learn from it and model it. Software’s interpretation of reality is particularly important because software drives the world now, and the people who write the software that runs it see the world through this kind of “software’s worldview” — scaling laws, information theory, abstractions and complexity. I think over time I’ve come to believe that understanding this worldview is more interesting than learning to wield programming tools.
·linus.coffee·
How DAOs Could Change the Way We Work
DAOs are effectively owned and governed by people who hold a sufficient number of a DAO’s native token, which functions like a type of cryptocurrency. For example, $FWB is the native token of popular social DAO called Friends With Benefits, and people can buy, earn, or trade it.
Contributors will be able to use their DAO’s native tokens to vote on key decisions. You can get a glimpse into the kinds of decisions DAO members are already voting on at Snapshot, which is essentially a decentralized voting system. Having said this, existing voting mechanisms have been criticized by the likes of Vitalik Buterin, founder of Ethereum, the open-source blockchain that acts as a foundational layer for the majority of Web3 applications. So, this type of voting is likely to evolve over time.
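For illustration only, here is a stripped-down model of token-weighted voting (hypothetical balances and quorum; systems like Snapshot tally off-chain signed votes against token balances recorded at a specific block). It also makes the critique visible: weight follows token holdings, so large holders dominate.

```python
# Hypothetical $TOKEN holdings; one token = one vote.
balances = {"alice": 120, "bob": 30, "carol": 50}
QUORUM = 100  # minimum participating tokens for a valid proposal

def tally(votes: dict) -> str:
    """votes maps holder -> 'yes' or 'no'; each vote is weighted
    by the holder's token balance, not counted per person."""
    weight = {"yes": 0, "no": 0}
    for holder, choice in votes.items():
        weight[choice] += balances.get(holder, 0)
    if sum(weight.values()) < QUORUM:
        return "no quorum"
    return max(weight, key=weight.get)

print(tally({"alice": "yes", "bob": "no"}))  # 'yes' wins, 120 vs. 30
```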
·hbr.org·
The $2 Per Hour Workers Who Made ChatGPT Safer
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.”
This reminds me of [[On the Social Media Ideology - Journal 75 September 2016 - e-flux]]:<br>> Platforms are not stages; they bring together and synthesize (multimedia) data, yes, but what is lacking here is the (curatorial) element of human labor. That’s why there is no media in social media. The platforms operate because of their software, automated procedures, algorithms, and filters, not because of their large staff of editors and designers. Their lack of employees is what makes current debates in terms of racism, anti-Semitism, and jihadism so timely, as social media platforms are currently forced by politicians to employ editors who will have to do the all-too-human monitoring work (filtering out ancient ideologies that refuse to disappear).
Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.
One Sama worker tasked with reading and labeling text for OpenAI told TIME he suffered from recurring visions after reading a graphic description of a man having sex with a dog in the presence of a young child. “That was torture,” he said. “You will read a number of statements like that all through the week. By the time it gets to Friday, you are disturbed from thinking through that picture.” The work’s traumatic nature eventually led Sama to cancel all its work for OpenAI in February 2022, eight months earlier than planned.
In the day-to-day work of data labeling in Kenya, sometimes edge cases would pop up that showed the difficulty of teaching a machine to understand nuance. One day in early March last year, a Sama employee was at work reading an explicit story about Batman’s sidekick, Robin, being raped in a villain’s lair. (An online search for the text reveals that it originated from an online erotica site, where it is accompanied by explicit sexual imagery.) The beginning of the story makes clear that the sex is nonconsensual. But later—after a graphically detailed description of penetration—Robin begins to reciprocate. The Sama employee tasked with labeling the text appeared confused by Robin’s ambiguous consent, and asked OpenAI researchers for clarification about how to label the text, according to documents seen by TIME. Should the passage be labeled as sexual violence, she asked, or not? OpenAI’s reply, if it ever came, is not logged in the document; the company declined to comment. The Sama employee did not respond to a request for an interview.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery,) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document.
I haven't finished watching [[Severance]] yet but this labeling system reminds me of the way they have to process and filter data that is obfuscated as meaningless numbers. In the show, employees have to "sense" whether the numbers are "bad," which they can, somehow, and sort it into the trash bin.
But the need for humans to label data for AI systems remains, at least for now. “They’re impressive, but ChatGPT and other generative models are not magic – they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,” Andrew Strait, an AI ethicist, recently wrote on Twitter. “These are serious, foundational problems that I do not see OpenAI addressing.”
·time.com·