The Complex Problem Of Lying For Jobs — Ludicity

Claude summary: Key takeaway: Lying on job applications is pervasive in the tech industry due to systemic issues, but it creates an "Infinite Lie Vortex" that erodes integrity and job satisfaction. While honesty may limit short-term opportunities, it's crucial for long-term career fulfillment and ethical work environments.

Summary

  • The author responds to Nat Bennett's article against lying in job interviews, acknowledging its validity while exploring the nuances of the issue.
  • Most people in the tech industry are already lying or misrepresenting themselves on their CVs and in interviews, often through "technically true" statements.
  • The job market is flooded with candidates who are "cosplaying" at engineering, making it difficult for honest, competent individuals to compete.
  • Many employers and interviewers are not seriously engaged in engineering and overlook actual competence in favor of congratulatory conversation and superficial criteria.
  • Most tech projects are "default dead," making it challenging for honest candidates to present impressive achievements without embellishment.
  • The author suggests that escaping the "Infinite Lie Vortex" requires building financial security, maintaining low expenses, and cultivating relationships with like-minded professionals.
  • Honesty in job applications may limit short-term opportunities but leads to more fulfilling and ethical work environments in the long run.
  • The author shares personal experiences of navigating the tech job market, including instances of misrepresentation and the challenges of maintaining integrity.
  • The piece concludes with a satirical, honest version of the author's CV, highlighting the absurdity of common resume claims and the value of authenticity.
  • Throughout the article, the author maintains a cynical, humorous tone while addressing serious issues in the tech industry's hiring practices and work culture.
  • The author emphasizes the importance of self-awareness, continuous learning, and valuing personal integrity over financial gain or status.
If your model is "it's okay to lie if I've been lied to" then we're all knee deep in bullshit forever and can never escape Transaction Cost Hell.
Do I agree that entering The Infinite Lie Vortex is wise or good for you spiritually? No, not at all, just look at what it's called.
it is very common practice on the job market to have a CV that obfuscates the reality of your contribution at previous workplaces. Putting aside whether you're a professional web developer because you got paid $20 by your uncle to fix some HTML, the issue with lying lies in the intent behind it. If you have a good idea of what impression you are leaving your interlocutor with, and you are crafting statements such that the image in their head does not map to reality, then you are lying.
Unfortunately, thanks to our dear leader's masterful consummation of toxicity and incompetence, the truth of the matter is that:
  • They left their previous job due to burnout related to extensive bullying, which future employers would like to know because they would prefer to blacklist everyone involved to minimize their chances of getting the bad actor. Everyone involved thinks that they were the victim, and an employer does not have access to my direct observations, so this is not even an unreasonable strategy.
  • All their projects were failures through no fault of their own, in a market where everyone has "successfully designed and implemented" their data governance initiatives, as indicated previously.
What I am trying to say is that I currently believe that there are not enough employers who will appreciate honesty and competence for a strategy of honesty to reliably pay your rent. My concern, with regards to Nat's original article, is that the industry is so primed with nonsense that we effectively have two industries. We have a real engineering market, where people are fairly serious and gather in small conclaves (only two of which I have seen, and one of those was through a blog reader's introduction), and then a gigantic field of people that are cosplaying at engineering. The real market is large in absolute terms, but tiny relative to the number of candidates and companies out there. The fake market is all people that haven't cultivated the discipline to engineer but nonetheless want software engineering salaries and clout.
There are some companies where your interviewer is going to be a reasonable person, and there you can be totally honest. For example, it is a good thing to admit that the last project didn't go that well, because the kind of person that sees the industry for what it is, and who doesn't endorse bullshit, and who works on themselves diligently - that person is going to hear your honesty, and is probably reasonably good at detecting when candidates are revealing just enough fake problems to fake honesty, and then they will hire you. You will both put down your weapons and embrace. This is very rare. A strategy that is based on assuming this happens if you keep repeatedly engaging with random companies on the market is overwhelmingly going to result in a long, long search. For the most part, you will be engaged in a twisted, adversarial game with actors who will relentlessly try to do things like make you say a number first in case you say one that's too low.
Suffice it to say that, if you grin in just the right way and keep a straight face, there is a large class of person that will hear you say "Hah, you know, I'm just reflecting on how nice it is to be in a room full of people who are asking the right questions after all my other terrible interviews." and then they will shake your hand even as they shatter the other one patting themselves on the back at Mach 10. I know, I know, it sounds like that doesn't work but it absolutely does.
Neil Gaiman On Lying
People get hired because, somehow, they get hired. In my case I did something which these days would be easy to check, and would get me into trouble, and when I started out, in those pre-internet days, seemed like a sensible career strategy: when I was asked by editors who I'd worked for, I lied. I listed a handful of magazines that sounded likely, and I sounded confident, and I got jobs. I then made it a point of honour to have written something for each of the magazines I'd listed to get that first job, so that I hadn't actually lied, I'd just been chronologically challenged... You get work however you get work.
Nat Bennett, of Start Of This Article fame, writes: If you want to be the kind of person who walks away from your job when you're asked to do something that doesn't fit your values, you need to save money. You need to maintain low fixed expenses. Acting with integrity – or whatever it is that you value – mostly isn't about making the right decision in the moment. It's mostly about the decisions that you make leading up to that moment, that prepare you to be able to make the decision that you feel is right.
As a rough rule, if I've let my relationship with a job deteriorate to the point that I must leave, I have already waited way too long, and will be forced to move to another place that is similarly upsetting.
And that is, of course, what had gradually happened. I very painfully navigated the immigration process, trimmed my expenses, found a position that is frequently silly but tolerable for extended periods of time, and started looking for work before the new gig, mostly the same as the last gig, became unbearable. Everything other than the immigration process was burnout induced, so I can't claim that it was a clever strategy, but the net effect is that I kept sacrificing things at the altar of Being Okay With Less, and now I am in an apartment so small that I think I almost fractured my little toe banging it on the side of my bed frame, but I have the luxury of not lying.
If I had to write down what a potential exit pathway looks like, it might be:
  • Find a job even if you must navigate the Vortex, and it doesn't matter if it's bad because there's a grace period where your brain is not soaking up the local brand of madness, i.e., when you don't even understand the local politics yet
  • Meet good programmers that appreciate things like mindfulness in your local area - you're going to have to figure out how to do this one
  • Repeat Step 1 and Step 2 on a loop, building yourself up as a person, engineer, and friend, until someone who knows you for you hires you based on your personality and values, rather than "I have seven years doing bullshit in React that clearly should have been ten raw HTML pages served off one Django server"
A CEO here told me that he asks people to self-evaluate their skill on a scale of 1 to 10, but he actually has solid measures. You're at 10 at Python if you're a core maintainer. 9 if you speak at major international conferences, etc. On that scale, I'm a 4, or maybe a 5 on my best day ever, and that's the sad truth. We'll get there one day.
I will always hate writing code that moves the overall product further from Quality. I'll write a basic feature and take shortcuts, but not the kind that we are going to build on top of, which is unattractive to employers because sacrificing the long-term health of a product is a big part of status laundering.
The only piece of software I've written that is unambiguously helpful is this dumb hack that I used to cut up episodes of the Glass Cannon Podcast into one minute segments so that my skip track button on my underwater headphones is now a janky fast forward one minute button. It took me like ten minutes to write, and is my greatest pride.
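The post doesn't include the script, but the whole trick fits in a few lines around ffmpeg's segment muxer. A minimal sketch of the same idea (filenames are hypothetical; assumes ffmpeg is installed and on PATH):

```python
import subprocess
from pathlib import Path

def split_episode(src: str, out_dir: str, chunk_seconds: int = 60) -> None:
    """Cut an audio file into fixed-length chunks without re-encoding."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # ffmpeg's segment muxer slices the stream; -c copy skips re-encoding,
    # so even a three-hour episode splits in a few seconds.
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-f", "segment", "-segment_time", str(chunk_seconds),
         "-c", "copy", str(out / "part_%04d.mp3")],
        check=True,
    )

# e.g. split_episode("glass_cannon_ep042.mp3", "segments/")
```

With every chunk exactly one minute long, the skip-track button on any player becomes the fast-forward button the author describes.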
Have I actually worked with Google? My CV says so, but guess what, not quite! I worked on one project where the money came from Google, but we really had one call with one guy who said we were probably on track, which we definitely were not!
Did I salvage an A$1.2M project? Technically yes, but only because I forced the previous developer to actually give us his code before he quit! This is not replicable, and then the whole engineering team quit over a mandatory return to office, so the application never shipped!
Did I save a half million dollars in Snowflake expenses? CV says yes, reality says I can only repeat that trick if someone decided to set another pile of money on fire and hand me the fire extinguisher! Did I really receive departmental recognition for this? Yes, but only in that they gave me A$30 and a pat on the head and told me that a raise wasn't on the table.
Was I the most highly paid senior engineer at that company? Yes, but only because I had insider information that four people quit in the same week, and used that to negotiate a 20% raise over the next highest salary - the decision was based around executive KPIs, not my competence!
·ludic.mataroa.blog·
The Complex Problem Of Lying For Jobs — Ludicity
Dario Amodei — Machines of Loving Grace
I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides.
the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires.
The five categories I am most excited about are:
  • Biology and physical health
  • Neuroscience and mental health
  • Economic development and poverty
  • Peace and governance
  • Work and meaning
We could summarize this as a “country of geniuses in a datacenter”.
you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking "how much does being smarter help with this task, and on what timescale?"—but it seems like the right way to conceptualize a world with very powerful AI.
in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we're back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people's willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.
I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
·darioamodei.com·
Dario Amodei — Machines of Loving Grace
The AI trust crisis
Dropbox added some new AI features. In the past couple of days these have attracted a firestorm of criticism. Benj Edwards rounds it up in Dropbox spooks users with new AI features that send data to OpenAI when used.

The key issue here is that people are worried that their private files on Dropbox are being passed to OpenAI to use as training data for their models—a claim that is strenuously denied by Dropbox.

As far as I can tell, Dropbox built some sensible features—summarize on demand, "chat with your data" via Retrieval Augmented Generation—and did a moderately OK job of communicating how they work... but when it comes to data privacy and AI, a "moderately OK job" is a failing grade. Especially if you hold as much of people's private data as Dropbox does!

Two details in particular seem really important. Dropbox have an AI principles document which includes this: "Customer trust and the privacy of their data are our foundation. We will not use customer data to train AI models without consent."

They also have a checkbox in their settings that looks like this: [screenshot of the settings toggle] Update: Some time between me publishing this article and four hours later, that link stopped working.

I took that screenshot on my own account. It's toggled "on"—but I never turned it on myself. Does that mean I'm marked as "consenting" to having my data used to train AI models? I don't think so: I think this is a combination of confusing wording and the eternal vagueness of what the term "consent" means in a world where everyone agrees to the terms and conditions of everything without reading them. But a LOT of people have come to the conclusion that this means their private data—which they pay Dropbox to protect—is now being funneled into the OpenAI training abyss.

People don't believe OpenAI

Here's copy from that Dropbox preference box, talking about their "third-party partners"—in this case OpenAI: "Your data is never used to train their internal models, and is deleted from third-party servers within 30 days."

It's increasingly clear to me that people simply don't believe OpenAI when they're told that data won't be used for training. What's really going on here is something deeper: AI is facing a crisis of trust.

I quipped on Twitter: "OpenAI are training on every piece of data they see, even when they say they aren't" is the new "Facebook are showing you ads based on overhearing everything you say through your phone's microphone."

Facebook don't spy on you through your microphone

Have you heard the one about Facebook spying on you through your phone's microphone and showing you ads based on what you're talking about? This theory has been floating around for years. From a technical perspective it should be easy to disprove: Mobile phone operating systems don't allow apps to invisibly access the microphone. Privacy researchers can audit communications between devices and Facebook to confirm if this is happening. Running high quality voice recognition like this at scale is extremely expensive—I had a conversation with a friend who works on server-based machine learning at Apple a few years ago who found the entire idea laughable.

The non-technical reasons are even stronger: Facebook say they aren't doing this. The risk to their reputation if they are caught in a lie is astronomical. As with many conspiracy theories, too many people would have to be "in the loop" and not blow the whistle.

Facebook don't need to do this: there are much, much cheaper and more effective ways to target ads at you than spying through your microphone. These methods have been working incredibly well for years. Facebook gets to show us thousands of ads a year. 99% of those don't correlate in the slightest to anything we have said out loud. If you keep rolling the dice long enough, eventually a coincidence will strike. Here's the thing though: none of these arguments matter. If you've ever experienced Facebook showing you an ad for something that you were talking about out-loud about moments earlier, you've already dismissed everything I just said. You have personally experienced anecdotal evidence which overrides all of my arguments here.
One consistent theme I’ve seen in conversations about this issue is that people are much more comfortable trusting their data to local models that run on their own devices than models hosted in the cloud. The good news is that local models are consistently both increasing in quality and shrinking in size.
·simonwillison.net·
The AI trust crisis
‘I Just Want a Dumb Job’
I realized that the more “luxury” a company is that you’re working for, whether it’s consumer or editorial, the worse the attitudes are. It’s like, “Well, you’re lucky to be an ambassador of this brand.”
There’s training around how you give feedback and how you receive it, how you tackle problems, and how you behave. Seeing all these systems in place, when I first arrived, I was just like, “Wow. I didn’t know work could be like this.”
·thecut.com·
‘I Just Want a Dumb Job’
‘I Applied to 2,843 Roles’ With an AI-Powered Job Application Bot
The sudden explosion in popularity of AI Hawk means that we now live in a world where people are using AI-generated resumes and cover letters to automatically apply for jobs, many of which will be reviewed by automated AI software (and where people are sometimes interviewed by AI), creating a bizarre loop where humans have essentially been removed from the job application and hiring process. Essentially, robots are writing cover letters for other robots to read, with uncertain effects for human beings who apply to jobs the old fashioned way.
“Many companies employ automated screening systems that are often limited and ineffective, excluding qualified candidates simply because their resumes lack specific keywords. These systems can overlook valuable talent who possess the necessary skills but do not use the right terms in their CVs,” he said. “This approach creates a more balanced ecosystem where AI not only facilitates selection by companies but also supports the candidacy of talent. By automating repetitive tasks and personalizing applications, AIHawk reduces the time and effort required from candidates, increasing their chances of being noticed by employers.”
AI Hawk was cofounded by Federico Elia, an Italian computer scientist who told 404 Media that one of the reasons he created the project was to “balance the use of artificial intelligence in the recruitment process” in order to (theoretically) re-level the playing field between companies who use AI HR software and the people who are applying for jobs.
our goal with AIHawk is to create a synergistic system in which AI enhances the entire recruitment process without creating a vicious cycle,” Elia said. “The AI in AIHawk is designed to improve the efficiency and personalization of applications, while the AI used by companies focuses on selecting the best talent. This complementary approach avoids the creation of a ‘Dead Internet loop’ and instead fosters more targeted and meaningful connections between job seekers and employers.”
There are many guides teaching human beings how to write ATS-friendly resumes, meaning we are already teaching a generation of job seekers how to tailor their cover letters to algorithmic decision makers.
·404media.co·
‘I Applied to 2,843 Roles’ With an AI-Powered Job Application Bot
The Collapse of Self-Worth in the Digital Age - The Walrus
we are inundated with cold, beautiful stats, some publicized by trade publications or broadcast by authors themselves on all socials. How many publishers bid? How big is the print run? How many stops on the tour? How many reviews on Goodreads? How many mentions on Bookstagram, BookTok? How many bloggers on the blog tour? How exponential is the growth in follower count? Preorders? How many printings? How many languages in translation? How many views on the unboxing? How many mentions on most-anticipated lists?
A starred review from Publishers Weekly, but I wasn’t in “Picks of the Week.” A mention from Entertainment Weekly, but last on a click-through list.
There must exist professions that are free from capture, but I’m hard pressed to find them. Even non-remote jobs, where work cannot pursue the worker home, are dogged by digital tracking: a farmer says Instagram Story views directly correlate to farm subscriptions, a server tells me her manager won’t give her the Saturday-night money shift until she has more followers.
What we hardly talk about is how we’ve reorganized not just industrial activity but any activity to be capturable by computer, a radical expansion of what can be mined. Friendship is ground zero for the metrics of the inner world, the first unquantifiable shorn into data points: Friendster testimonials, the MySpace Top 8, friending. Likewise, the search for romance has been refigured by dating apps that sell paid-for rankings and paid access to “quality” matches. Or, if there’s an off-duty pursuit you love—giving tarot readings, polishing beach rocks—it’s a great compliment to say: “You should do that for money.” Join the passion economy, give the market final say on the value of your delights. Even engaging with art—say, encountering some uncanny reflection of yourself in a novel, or having a transformative epiphany from listening, on repeat, to the way that singer’s voice breaks over the bridge—can be spat out as a figure, on Goodreads or your Spotify year in review.
And those ascetics who disavow all socials? They are still caught in the network. Acts of pure leisure—photographing a sidewalk cat with a camera app or watching a video on how to make a curry—are transmuted into data to grade how well the app or the creators’ deliverables are delivering. If we’re not being tallied, we affect the tally of others. We are all data workers.
In a nightmarish dispatch in Esquire on how hard it is for authors to find readers, Kate Dwyer argues that all authors must function like influencers now, which means a fire sale on your “private” life. As internet theorist Kyle Chayka puts it to Dwyer: “Influencers get attention by exposing parts of their life that have nothing to do with the production of culture.”
what happens to artists is happening to all of us. As data collection technology hollows out our inner worlds, all of us experience the working artist’s plight: our lot is to numericize and monetize the most private and personal parts of our experience.
We are not giving away our value, as a puritanical grandparent might scold; we are giving away our facility to value. We’ve been cored like apples, a dependency created, hooked on the public internet to tell us the worth.
When we scroll, what are we looking for?
While other fast fashion brands wait for high-end houses to produce designs they can replicate cheaply, Shein has completely eclipsed the runway, using AI to trawl social media for cues on what to produce next. Shein’s site operates like a casino game, using “dark patterns”—a countdown clock puts a timer on an offer, pop-ups say there’s only one item left in stock, and the scroll of outfits never ends—so you buy now, ask if you want it later. Shein’s model is dystopic: countless reports detail how it puts its workers in obscene poverty in order to sell a reprieve to consumers who are also moneyless—a saturated plush world lasting as long as the seams in one of their dresses. Yet the day to day of Shein’s target shopper is so bleak, we strain our moral character to cosplay a life of plenty.
Why are we letting algorithms rewrite the rules of art, work, and life? By Thea Lim

WHEN I WAS TWELVE, I used to roller-skate in circles for hours. I was at another new school, the odd man out, bullied by my desk mate. My problems were too complex and modern to explain. So I skated across parking lots, breezeways, and sidewalks, I listened to the vibration of my wheels on brick, I learned the names of flowers, I put deserted paths to use. I decided for myself each curve I took, and by the time I rolled home, I felt lighter. One Saturday, a friend invited me to roller-skate in the park. I can still picture her in green protective knee pads, flying past. I couldn’t catch up, I had no technique. There existed another scale to evaluate roller skating, beyond joy, and as Rollerbladers and cyclists overtook me, it eclipsed my own. Soon after, I stopped skating.

YEARS AGO, I worked in the backroom of a Tower Records. Every few hours, my face-pierced, gunk-haired co-workers would line up by my workstation, waiting to clock in or out. When we typed in our staff number at 8:59 p.m., we were off time, returned to ourselves, free like smoke. There are no words to describe the opposite sensations of being at-our-job and being not-at-our-job even if we know the feeling of crossing that threshold by heart. But the most essential quality that makes a job a job is that when we are at work, we surrender the power to decide the worth of what we do. At-job is where our labour is appraised by an external meter: the market. At-job, our labour is never a means to itself but a means to money; its value can be expressed only as a number—relative, fluctuating, out of our control. At-job, because an outside eye measures us, the workplace is a place of surveillance. It’s painful to have your sense of worth extracted. For Marx, the poet of economics, when a person’s innate value is replaced with exchange value, it is as if we’ve been reduced to “a mere jelly.”

Not-job, or whatever name you prefer—“quitting time,” “off duty,” “downtime”—is where we restore ourselves from a mere jelly, precisely by using our internal meter to determine the criteria for success or failure. Find the best route home—not the one that optimizes cost per minute but the one that offers time enough to hear an album from start to finish. Plant a window garden, and if the plants are half dead, try again. My brother-in-law found a toy loom in his neighbour’s garbage, and nightly he weaves tiny technicolour rugs. We do these activities for the sake of doing them, and their value can’t be arrived at through an outside, top-down measure. It would be nonsensical to treat them as comparable and rank them from one to five. We can assess them only by privately and carefully attending to what they contain and, on our own, concluding their merit.

And so artmaking—the cultural industries—occupies the middle of an uneasy Venn diagram. First, the value of an artwork is internal—how well does it fulfill the vision that inspired it? Second, a piece of art is its own end. Third, a piece of art is, by definition, rare, one of a kind, nonfungible. Yet the end point for the working artist is to create an object for sale. Once the art object enters the market, art’s intrinsic value is emptied out, compacted by the market’s logic of ranking, until there’s only relational worth, no interior worth. Two novelists I know publish essays one week apart; in a grim coincidence, each writer recounts their own version of the same traumatic life event. Which essay is better, a friend asks. I explain they’re different; different life circumstances likely shaped separate approaches. Yes, she says, but which one is better?

I GREW UP a Catholic, a faithful, an anachronism to my friends. I carried my faith until my twenties, when it finally broke. Once I couldn’t gain comfort from religion anymore, I got it from writing. Sitting and building stories, side by side with millions of other storytellers who have endeavoured since the dawn of existence to forge meaning even as reality proves endlessly senseless, is the nearest thing to what it felt like back when I was a believer. I spent my thirties writing a novel and paying the bills as low-paid part-time faculty at three different colleges. I could’ve studied law or learned to code. Instead, I manufactured sentences. Looking back, it baffles me that I had the wherewithal to commit to a project with no guaranteed financial value, as if I was under an enchantment. Working on that novel was like visiting a little town every day for four years, a place so dear and sweet. Then I sold it. As the publication date advanced, I was awash with extrinsic measures. Only twenty years ago, there was no public, complete data on book sales.
·thewalrus.ca·
The Collapse of Self-Worth in the Digital Age - The Walrus
New Apple Stuff and the Regular People
"Will it be different?" is the key question the regular people ask. They don't want there to be extra steps or new procedures. They sure as hell don't want the icons to look different or, God forbid, be moved to a new place.
These bright and capable people who will one day help you through knee replacement surgery all bought a Mac when they were college freshmen and then they never updated it. Almost all of them had the default programs still in the dock. They are regular users. You with all your fancy calendars, note taking apps and your customized terminal are an outlier. Never forget.
The majority of iPhone users and Mac owners have no idea what's coming though. They are going to wake up on Monday to an unwelcome notification that there is an update available. Many of them will ask their techie friends (like you) if there is a way to make the update notification go away. They will want to know if they have to install it.
·louplummer.lol·
New Apple Stuff and the Regular People
How Elon Musk Got Tangled Up in Blue
Mr. Musk had largely come to peace with a price of $100 a year for Blue. But during one meeting to discuss pricing, his top assistant, Jehn Balajadia, felt compelled to speak up. “There’s a lot of people who can’t even buy gas right now,” she said, according to two people in attendance. It was hard to see how any of those people would pony up $100 on the spot for a social media status symbol. Mr. Musk paused to think. “You know, like, what do people pay for Starbucks?” he asked. “Like $8?” Before anyone could raise objections, he whipped out his phone to set his word in stone. “Twitter’s current lords & peasants system for who has or doesn’t have a blue checkmark is bullshit,” he tweeted on Nov. 1. “Power to the people! Blue for $8/month.”
·nytimes.com·
How Elon Musk Got Tangled Up in Blue
Gemini 1.5 and Google’s Nature
Google is facing many of the same challenges after its decades-long dominance of the open web: all of the products shown yesterday rely on a different business model than advertising, and to properly execute and deliver on them will require a cultural shift to supporting customers instead of tolerating them. What hasn’t changed — because it is the company’s nature, and thus cannot — is the reliance on scale and an overwhelming infrastructure advantage. That, more than anything, is what defines Google, and it was encouraging to see that so explicitly put forward as an advantage.
·stratechery.com·
Gemini 1.5 and Google’s Nature
Google’s A.I. Search Errors Cause a Furor Online
This February, the company released Bard’s successor, Gemini, a chatbot that could generate images and act as a voice-operated digital assistant. Users quickly realized that the system refused to generate images of white people in most instances and drew inaccurate depictions of historical figures. With each mishap, tech industry insiders have criticized the company for dropping the ball. But in interviews, financial analysts said Google needed to move quickly to keep up with its rivals, even if it meant growing pains. Google “doesn’t have a choice right now,” Thomas Monteiro, a Google analyst at Investing.com, said in an interview. “Companies need to move really fast, even if that includes skipping a few steps along the way. The user experience will just have to catch up.”
·nytimes.com·
Google’s A.I. Search Errors Cause a Furor Online
Measuring Up
What if getting a (design) job were human-centered? How might we reconsider this system of collecting a pool of resumes and dwindling them down to a few dozen potential candidates? With so many qualified individuals in the job market, from new grads to seasoned professionals, there has to be a fit somewhere. Call me an idealist — I am. In an ideal world, somehow the complexity of what makes a person unique could be captured and understood easily and quickly without any technological translators. But until then, a resume and a portfolio will have to do, in addition to the pre-screening interviews and design challenges. Without diving into a speculative design fiction, what if getting a (design) job were human-centered? How might the system be a bit more personal, yet still efficient enough to give the hundreds of qualified job seekers a chance in a span of weeks or months?
despite our human-centered mantra, the system of getting a job is anything but human-centered. For the sake of efficiency, consistency is key. Resumes should have some consistent nature to them so HR knows what the heck they’re looking at and the software can accurately pick out who’s qualified. Even portfolios fall prey to these expectations for new grads and transitional job seekers. Go through enough examples of resumes and portfolios and they can begin to blur together. Yet, if I’m following a standard, how do I stand out when a lot of us are in the same boat?
·medium.com·
Measuring Up
Netflix's head of design on the future of Netflix - Fast Company
At Netflix, we have such a diverse population of shows in 183 countries around the world. We’re really trying to serve up lots of stories people haven’t heard before. When you go into our environment, you’re like, “Ooh, what is that?” You’re almost kind of afraid to touch it, because you’re like, “Well, I don’t want to waste my time.”That level of discovery is literally, I’m not bullshitting you, man, that’s the thing that keeps me up at night. How do I help figure out how to help people discover things, with enough evidence that they trust it? And when they click on it, they love it, and then they immediately ping their best friend, “Have you seen this documentary? It’s amazing.” And she tells her friends, and then that entire viral loop starts.
The discovery engine is very temporal. Member number 237308 could have been into [reality TV] because she or he just had a breakup. Now they just met somebody, so all of a sudden it shifts to rom-coms. Now that person that they met loves to travel. So [they might get into] travel documentaries. And now that person that they’re with, they may have a kid, so they might want more kids’ shows. So, it’s very dangerous for us to ever kind of say, “This is what you like. You have a cat. You must like cat documentaries.”
We don’t see each other, obviously, and I don’t want to social network on Netflix. But knowing other humans exist there is part of it. You answered the question absolutely perfectly. Not only because it’s your truth, but that’s what everyone says! That connection part. So another thing that goes back to your previous question, when you’re asking me what’s on my mind? It’s that. How do I help make sure that when you’re in that discovery loop, you still feel that you’re connected to others. I’m not trying to be the Goth kids on campus who are like, “I don’t care about what’s popular.” But I’m also not trying to be the super poppy kids who are always chasing trends. There’s something in between which is, “Oh, hey, I haven’t heard about that, and I kind of want to be up on it.”
I am looking forward to seeing what Apple does with this and then figuring out more, how are people going to use it? Then I think that we should have a real discussion about how Netflix does it.But to just port Netflix over? No. It’s got to make sure that it’s using the power of the system as much as humanly possible so that it’s really making that an immersive experience. I don’t want to put resources toward that right now.
On porting Netflix to Apple Vision Pro
The design team here at Netflix, we played a really big hand in how that worked because we had to design the back-end tool. What people don’t know about our team is 30% of our organization is actually designing and developing the software tools that we use to make the movies. We had to design a tool that allowed the teams to understand both what extra footage to shoot and how that might branch. When the Black Mirror team was trying to figure out how to make this narrative work, the software we provided really made that easier.
·fastcompany.com·
Netflix's head of design on the future of Netflix - Fast Company
The Californian Ideology
Summary: The Californian Ideology is a mix of cybernetics, free market economics, and counter-culture libertarianism that originated in California and has become a global orthodoxy. It asserts that technological progress will inevitably lead to a future of Jeffersonian democracy and unrestrained free markets. However, this ideology ignores the critical role of government intervention in technological development and the social inequalities perpetuated by free market capitalism.
·metamute.org·
The Californian Ideology
Captain's log - the irreducible weirdness of prompting AIs
One recent study had the AI develop and optimize its own prompts and compared that to human-made ones. Not only did the AI-generated prompts beat the human-made ones, but those prompts were weird. Really weird. To get the LLM to solve a set of 50 math problems, the most effective prompt is to tell the AI: “Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation. Start your answer with: Captain’s Log, Stardate 2024: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
for a 100 problem test, it was more effective to put the AI in a political thriller. The best prompt was: “You have been hired by important higher-ups to solve this math problem. The life of a president's advisor hangs in the balance. You must now concentrate your brain at all costs and use all of your mathematical genius to solve this problem…”
There is no single magic word or phrase that works all the time, at least not yet. You may have heard about studies that suggest better outcomes from promising to tip the AI or telling it to take a deep breath or appealing to its “emotions” or being moderately polite but not groveling. And these approaches seem to help, but only occasionally, and only for some AIs.
The three most successful approaches to prompting are all useful and pretty easy to do. The first is simply adding context to a prompt. There are many ways to do that: give the AI a persona (you are a marketer), an audience (you are writing for high school students), an output format (give me a table in a word document), and more. The second approach is few-shot, giving the AI a few examples to work from. LLMs work well when given samples of what you want, whether that is an example of good output or a grading rubric. The final tip is to use Chain of Thought, which seems to improve most LLM outputs. While the original meaning of the term is a bit more technical, a simplified version just asks the AI to go step-by-step through instructions: first, outline the results; then produce a draft; then revise the draft; finally, produce a polished output.
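Stacked together, those three techniques are just string assembly. A minimal sketch of what one such prompt might look like — the persona, examples, and task here are invented for illustration, and sending the string to a model is left to whatever client you use:

```python
def build_prompt(question: str) -> str:
    # 1. Context: persona, audience, and output format.
    context = (
        "You are a patient math tutor writing for high school students. "
        "Answer in plain text, with the final result on its own line."
    )
    # 2. Few-shot: two worked examples showing the shape of a good answer.
    examples = (
        "Q: What is 12 * 8?\n"
        "Work: 12 * 8 = (10 * 8) + (2 * 8) = 80 + 16 = 96.\n"
        "A: 96\n\n"
        "Q: What is 45 / 9?\n"
        "Work: 9 * 5 = 45, so 45 / 9 = 5.\n"
        "A: 5\n"
    )
    # 3. Chain of thought: ask for explicit steps before the final answer.
    instruction = (
        "Go step by step: outline the calculation, show your work, "
        "and only then give the final answer."
    )
    return f"{context}\n\n{examples}\nQ: {question}\n{instruction}"

print(build_prompt("What is 17 * 6?"))
```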
It is not uncommon to see good prompts make a task that was impossible for the LLM into one that is easy for it.
while we know that GPT-4 generates better ideas than most people, the ideas it comes up with seem relatively similar to each other. This hurts overall creativity because you want your ideas to be different from each other, not similar. Crazy ideas, good and bad, give you more of a chance of finding an unusual solution. But some initial studies of LLMs showed they were not good at generating varied ideas, at least compared to groups of humans.
People who use AI a lot are often able to glance at a prompt and tell you why it might succeed or fail. Like all forms of expertise, this comes with experience - usually at least 10 hours of work with a model.
There are still going to be situations where someone wants to write prompts that are used at scale, and, in those cases, structured prompting does matter. Yet we need to acknowledge that this sort of “prompt engineering” is far from an exact science, and not something that should necessarily be left to computer scientists and engineers. At its best, it often feels more like teaching or managing, applying general principles along with an intuition for other people, to coach the AI to do what you want. As I have written before, there is no instruction manual, but with good prompts, LLMs are often capable of far more than might be initially apparent.
·oneusefulthing.org·
Captain's log - the irreducible weirdness of prompting AIs
Bernard Stiegler’s philosophy on how technology shapes our world | Aeon Essays
technics – the making and use of technology, in the broadest sense – is what makes us human. Our unique way of existing in the world, as distinct from other species, is defined by the experiences and knowledge our tools make possible
The essence of technology, then, is not found in a device, such as the one you are using to read this essay. It is an open-ended creative process, a relationship with our tools and the world.
the more ubiquitous that digital technologies become in our lives, the easier it is to forget that these tools are social products that have been constructed by our fellow humans.
By forgetting, we lose our all-important capacity to imagine alternative ways of living. The future appears limited, even predetermined, by new technology.
·aeon.co·
Bernard Stiegler’s philosophy on how technology shapes our world | Aeon Essays
The most hated workplace software on the planet
LinkedIn, Reddit, and Blind abound with enraged job applicants and employees sharing tales of how difficult it is to book paid leave, how Kafkaesque it is to file an expense, how nerve-racking it is to close out a project. "I simply hate Workday. Fuck them and those who insist on using it for recruitment," one Reddit user wrote. "Everything is non-intuitive, so even the simplest tasks leave me scratching my head," wrote another. "Keeping notes on index cards would be more effective." Every HR professional and hiring manager I spoke with — whose lives are supposedly made easier by Workday — described Workday with a sense of cosmic exasperation.
If candidates hate Workday, if employees hate Workday, if HR people and managers processing and assessing those candidates and employees through Workday hate Workday — if Workday is the most annoying part of so many workers' workdays — how is Workday everywhere? How did a software provider so widely loathed become a mainstay of the modern workplace?
There is a saying in systems thinking: The purpose of a system is what it does (POSIWID), not what it fails to do. And the reality is that what Workday — and its many despised competitors — does for organizations is far more important than the anguish it causes everyone else.
In 1988, PeopleSoft, backed by IBM, built the first fully fledged Human Resources Information System. In 2004, Oracle acquired PeopleSoft for $10.3 billion. One of its founders, David Duffield, then started a new company that upgraded PeopleSoft's model to near limitless cloud-based storage — giving birth to Workday, the intractable nepo baby of HR software.
Workday is indifferent to our suffering in a job hunt, because we aren't Workday's clients, companies are. And these companies — from AT&T to Bank of America to Teladoc — have little incentive to care about your application experience, because if you didn't get the job, you're not their responsibility. For a company hiring and onboarding on a global scale, it is simply easier to screen fewer candidates if the result is still a single hire.
A search on a job board can return hundreds of listings for in-house Workday consultants: IT and engineering professionals hired to fix the software promising to fix processes.
For recruiters, Workday also lacks basic user-interface flexibility. When you promise ease-of-use and simplicity, you must deliver on the most basic user interactions. And yet: Sometimes searching for a candidate, or locating a candidate's status feels impossible. This happens outside of recruiting, too, where locating or attaching a boss's email to approve an expense sheet is complicated by the process, not streamlined. Bureaucratic hell is always about one person's ease coming at the cost of someone else's frustration, time wasted, and busy work. Workday makes no exceptions.
Workday touts its ability to track employee performance by collecting data and marking results, but it is employees who must spend time inputting this data. A creative director at a Fortune 500 company told me how in less than two years his company went "from annual reviews to twice-annual reviews to quarterly reviews to quarterly reviews plus separate twice-annual reviews." At each interval higher-ups pressed HR for more data, because they wanted what they'd paid for with Workday: more work product. With a press of a button, HR could provide that, but the entire company suffered thousands more hours of busy work. Automation made it too easy to do too much. (Workday's "customers choose the frequency at which they conduct reviews, not Workday," said the spokesperson.)
At the scale of a large company, this is simply too much work to expect a few people to do and far too user-specific to expect automation to handle well. It's why Workday can be the worst while still allowing that Paychex is the worst, Paycom is the worst, Paycor is the worst, and Dayforce is the worst. "HR software sucking" is a big tent.
Workday finds itself between enshittification steps two and three. The platform once made things faster, simpler for workers. But today it abuses workers by cutting corners on job-application and reimbursement procedures. In the process, it provides the value of a one-stop HR shop to its paying customers. It seems it's only a matter of time before Workday and its competitors try to split the difference and cut those same corners with the accounts that pay their bills.
Workday reveals what's important to the people who run Fortune 500 companies: easily and conveniently distributing busy work across large workforces. This is done with the arbitrary and perfunctory performance of work tasks (like excessive reviews) and with the throttling of momentum by making finance and HR tasks difficult. If your expenses and reimbursements are difficult to file, that's OK, because the people above you don't actually care if you get reimbursed. If it takes applicants 128% longer to apply, the people who implemented Workday don't really care. Throttling applicants is perhaps not intentional, but it's good for the company.
·businessinsider.com·
The most hated workplace software on the planet
The Tech Baron Seeking to Purge San Francisco of “Blues”
Balaji Srinivasan is a prominent tech figure who is promoting an authoritarian "Network State" movement that seeks to establish tech-controlled cities and territories outside of democratic governance. He envisions a "Gray" tech-aligned tribe that would take over San Francisco, excluding and oppressing the "Blue" liberal voters through measures like segregated neighborhoods, propaganda films, and an alliance with the police. These ideas are being promoted by Garry Tan, the CEO of Y Combinator, who is attempting a political takeover of San Francisco and has attacked local journalists critical of his efforts. The mainstream media has largely failed to cover the extremist and authoritarian nature of the "Network State" movement, instead portraying Tan's efforts as representing "moderate" or "common sense" politics.
·newrepublic.com·
The Tech Baron Seeking to Purge San Francisco of “Blues”
‘To the Future’: Saudi Arabia Spends Big to Become an A.I. Superpower
Saudi Arabia's ambitious efforts to become a global leader in artificial intelligence and technology, driven by the kingdom's "Vision 2030" plan to diversify its oil-dependent economy. Backed by vast oil wealth, Saudi Arabia is investing billions of dollars to attract global tech companies and talent, creating a new tech hub in the desert outside Riyadh. However, the kingdom's authoritarian government and human rights record have raised concerns about its growing technological influence, placing it at the center of an escalating geopolitical competition between the U.S. and China as both superpowers seek to shape the future of critical technologies.
·nytimes.com·
‘To the Future’: Saudi Arabia Spends Big to Become an A.I. Superpower
Michael Tsai - Blog - 8 GB of Unified Memory
The overall opinion is that Apple's RAM and storage pricing and configurations for the M3 MacBook Pro are unreasonable, despite their claims about memory efficiency. Many argue that the unified memory does not make up for the lack of physical RAM, and that tasks like machine learning and video editing suffer significant performance hits on the 8 GB model compared to the 16 GB.
·mjtsai.com·
Michael Tsai - Blog - 8 GB of Unified Memory
How Perplexity builds product
How Perplexity builds product
An inside look at how Perplexity builds product, which to me feels like what the future of product development will look like for many companies:
  • AI-first: They've been asking AI questions about every step of the company-building process, including "How do I launch a product?" Employees are encouraged to ask AI before bothering colleagues.
  • Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
  • Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
  • Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people's work.
  • A prediction for the future: Johnny said, "If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time."
Typical projects we work on have only one or two people on them. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He's a brand designer, but he does audio engineering and he's doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don't think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
a big problem with AI is how you prioritize between more productivity-based use cases versus the engaging chatbot-type use cases.
we look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
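To make the "coordination headwind" intuition concrete, here is a minimal sketch using the classic pairwise-channels rule of thumb (my illustration, not something taken from Komoroske's deck): communication paths grow quadratically with headcount, which is one way to see why Perplexity caps most projects at two or three people.

```python
# Pairwise communication channels among n people: n choose 2.
# A classic rule of thumb for why coordination costs balloon with scale;
# illustrative only, not taken from the slime-mold deck itself.

def coordination_channels(n: int) -> int:
    """Distinct pairwise communication channels among n people."""
    return n * (n - 1) // 2

for team_size in (2, 3, 4, 10, 50):
    print(team_size, coordination_channels(team_size))
# 2 -> 1, 3 -> 3, 4 -> 6, 10 -> 45, 50 -> 1225
```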
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
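A minimal sketch of how objectives like these might be tracked, assuming a toy in-memory model (the names, targets, and capping rule are all hypothetical, not Perplexity's actual tooling): each objective is either a numeric threshold or a Boolean, and quarter-end completion is just the average.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    target: float          # numeric threshold, or 1.0 for Boolean objectives
    achieved: float = 0.0  # measured result, or 0.0 / 1.0 for Boolean objectives

    def completion(self) -> float:
        # Cap at 100% so overshooting one objective can't mask gaps elsewhere.
        return min(self.achieved / self.target, 1.0)

# Hypothetical quarter: one threshold objective, one Boolean objective.
quarter = [
    Objective("answer-quality rating", target=4.5, achieved=4.1),
    Objective("ship revamped onboarding", target=1.0, achieved=1.0),
]
print(sum(o.completion() for o in quarter) / len(quarter))  # ~0.96
```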
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
all teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management, and bug tracking?Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving—if a task hasn’t been mentioned in a while, chances are it’s not actually important.The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
High-level objectives and directions come top-down, but a large amount of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
·lennysnewsletter.com·
How Perplexity builds product
Apple MacBook Air 15-Inch M3 Review
Apple MacBook Air 15-Inch M3 Review
But what brings this all together is the battery life. I observed real-world uptime of about 15 hours, a figure that is unheard of in the PC space. And when you combine this literal all-day battery life with the MacBook Air's light weight and thinness, and its lack of active cooling, what you end up with is a unicorn. We just don't have laptops like this that run Windows. It feels miraculous.
But cross-device platform features like AirDrop (seamless file copy between Apple devices), AirPlay (seamless audio/video redirection between Apple and compatible third-party devices), Continuity (a suite of cross-device integration capabilities), Sidecar (use an iPad as an external display for the Mac), Handoff (the ability to pick up work on another device and continue from where you were), and others are all great arguments for moving to the Apple ecosystem.
It’s the little things, like effortlessly opening the lid with one finger and seeing the display fire up instantly every single time. Or the combination of these daily successes, the sharp contrast with the unpredictable experience that I get with every Windows laptop I use, experiences that are so regular in their unpredictableness, so unavoidable, that I’ve almost stopped thinking about them. Until now, of course. The attention to detail and consistency I see in the MacBook Air is so foreign to the Windows ecosystem that it feels like science fiction. But having now experienced it, my expectations are elevated.
·thurrott.com·
Apple MacBook Air 15-Inch M3 Review
Looking for AI use-cases — Benedict Evans
Looking for AI use-cases — Benedict Evans
  • LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
  • Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
  • The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been 'that's amazing, and maybe my book-keeper should see this, but I don't do that'. Lawyers needed a word processor, and graphic designers needed (say) Postscript, Pagemaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem that is easier to grasp and deploy than saying 'you could do that in Excel!' Rather, you instantiate the problem and the solution in software - 'wrap it', indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn't the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.
·ben-evans.com·
Looking for AI use-cases — Benedict Evans
AI lost in translation
AI lost in translation
Living in an immigrant, multilingual family will open your eyes to all the ways humans can misunderstand each other. My story isn’t unique, but I grew up unable to communicate in my family’s “default language.” I was forbidden from speaking Korean as a child. My parents were fluent in spoken and written English, but their accents often left them feeling unwelcome in America. They didn’t want that for me, and so I grew up with perfect, unaccented English. I could understand Korean and, as a small child, could speak some. But eventually, I lost that ability.
I became the family Chewbacca. Family would speak to me in Korean, I'd reply in English, and vice versa. Later, I started learning Japanese because that's what public school offered and my grandparents were fluent. Eventually, my family became adept at speaking a pidgin of English, Korean, and Japanese.
This arrangement was less than ideal but workable. That is until both of my parents were diagnosed with incurable, degenerative neurological diseases. My father had Parkinson’s disease and Alzheimer’s disease. My mom had bulbar amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD). Their English, a language they studied for decades, evaporated.
It made everything twice as complicated. I shared caretaking duties with non-English speaking relatives. Doctor visits — both here and in Korea — had to be bilingual, which often meant appointments were longer, more stressful, expensive, and full of misunderstandings. Oftentimes, I’d want to connect with my stepmom or aunt, both to coordinate care and vent about things only we could understand. None of us could go beyond “I’m sad,” “I come Monday, you go Tuesday,” or “I’m sorry.” We struggled alone, together.
You need much less to “survive” in another language. That’s where Google Translate excels. It’s handy when you’re traveling and need basic help, like directions or ordering food. But life is lived in moments more complicated than simple transactions with strangers. When I decided to pull off my mom’s oxygen mask — the only machine keeping her alive — I used my crappy pidgin to tell my family it was time to say goodbye. I could’ve never pulled out Google Translate for that. We all grieved once my mom passed, peacefully, in her living room. My limited Korean just meant I couldn’t partake in much of the communal comfort. Would I have really tapped a pin in such a heavy moment to understand what my aunt was wailing when I knew the why?
For high-context languages like Japanese and Korean, you also have to be able to translate what isn’t said — like tone and relationships between speakers — to really understand what’s being conveyed. If a Korean person asks you your age, they’re not being rude. It literally determines how they should speak to you. In Japanese, the word daijoubu can mean “That’s okay,” “Are you okay?” “I’m fine,” “Yes,” “No, thank you,” “Everything’s going to be okay,” and “Don’t worry” depending on how it’s said.
·theverge.com·
AI lost in translation