Found 12 bookmarks
Gen Z and the End of Predictable Progress
- Gen Z faces a double disruption: AI-driven technological change and institutional instability
- Three distinct Gen Z cohorts have emerged, each with a different relationship to digital reality
- A version of the barbell strategy is splitting career paths between "safety seekers" and "digital gamblers"
- Our fiscal reality is quite stark right now, and that is shaping how young people see opportunities
When I talk to young people from New York or Louisiana or Tennessee or California or DC or Indiana or Massachusetts about their futures, they're not just worried about finding jobs; they're worried about whether the whole concept of a "career" as we know it will exist in five years.
When a main path to financial security comes through the algorithmic gods rather than institutional advancement (like when a single viral TikTok can generate more income than a year of professional work), it fundamentally changes how people view everything from education to social structures to the political systems that they’re a part of.
Gen Z 1.0: The Bridge Generation. This group watched the digital transformation happen in real time, experiencing both the analog and internet worlds during their formative years. They might view technology as a tool rather than an environment. They're young enough to navigate digital spaces fluently but old enough to remember alternatives. They (myself included) entered the workforce during Covid and might have severe workplace interaction gaps because they missed out on formative time early in their careers.

Gen Z 1.5: The Covid Cohort. This group hit major life milestones during a global pandemic. They entered college under Trump but graduated under Biden, and they have a particularly complex relationship with institutions. They watched traditional systems bend and break in real time during Covid, while simultaneously seeing how digital infrastructure kept society functioning.

Gen Z 2.0: The Digital Natives. This is the first group that will graduate into the new digital economy. They have never known a world without smartphones. To them, social media could be another layer of reality. Their understanding of economic opportunity is completely different from their older peers'.
Gen Z 2.0 doesn't just use digital tools differently, they understand reality through a digital-first lens. Their identity formation happens through and with technology.
Technology enables new forms of value exchange, which creates new economic possibilities so people build identities around these possibilities and these identities drive development of new technologies and the cycle continues.
different generations don’t just use different tools, they operate in different economic realities and form identity through fundamentally different processes. Technology is accelerating differentiation. Economic paths are becoming more extreme. Identity formation is becoming more fluid.
I wrote a very long piece about why Trump won that focused on uncertainty, structural affordability, and fear - and that’s what the younger Gen Zs are facing. Add AI into this mix, and the rocky path gets rockier. Traditional professional paths that once promised stability and maybe the ability to buy a house one day might not even exist in two years. Couple this with increased zero-sum thinking, a lack of trust in institutions and the subsequent institutional dismantling, and the whole attention economy thing, and you’ve got a group of young people trying to find their footing in a whole new world. Of course you vote for the person promising to dismantle it and save you.
·kyla.substack.com·
My Last Five Years of Work
Copywriting, tax preparation, customer service, and many other tasks are or will soon be heavily automated. I can see the beginnings in areas like software development and contract law. Generally, tasks that involve reading, analyzing, and synthesizing information, and then generating content based on it, seem ripe for replacement by language models.
Anyone who makes a living through delicate and varied movements guided by situation-specific know-how can expect to work for much longer than five more years. Thus, electricians, gardeners, plumbers, jewelry makers, hair stylists, as well as those who repair ironwork or make stained glass might find their handiwork contributing to our society for many more years to come.
Finally, I expect there to be jobs where humans are preferred to AIs even if the AIs can do the job equally well, or perhaps even if they can do it better. This will apply to jobs where something is gained from the very fact that a human is doing it—likely because it involves the consumer feeling like they have a relationship with the human worker as a human. Jobs that might fall into this category include counselors, doulas, caretakers for the elderly, babysitters, preschool teachers, priests and religious leaders, even sex workers—much has been made of AI girlfriends, but I still expect that a large percentage of buyers of in-person sexual services will have a strong preference for humans. Some have called these jobs “nostalgic jobs.”
It does seem that, overall, unemployment makes people sadder, sicker, and more anxious. But it isn’t clear if this is an inherent fact of unemployment, or a contingent one. It is difficult to isolate the pure psychological effects of being unemployed, because at present these are confounded with the financial effects—if you lose your job, you have less money—which produce stress that would not exist in the context of, say, universal basic income. It is also confounded with the “shame” aspect of being fired or laid off—of not working when you really feel you should be working—as opposed to the context where essentially all workers have been displaced.
One study that gets around the “shame” confounder of unemployment is “A Forced Vacation? The Stress of Being Temporarily Laid Off During a Pandemic” by Scott Schieman, Quan Mai, and Ryu Won Kang. This study looked at Canadian workers who were temporarily laid off several months into the COVID-19 pandemic. They first assumed that such a disruption would increase psychological distress, but instead found that the self-reported wellbeing was more in line with the “forced vacation hypothesis,” suggesting that temporarily laid-off workers might initially experience lower distress due to the unique circumstances of the pandemic.
By May 2020, the distress gap observed in April had vanished, indicating that being temporarily laid off was not associated with higher distress during these months. The interviews revealed that many workers viewed being left without work as a “forced vacation,” appreciating the break from work-related stress and valuing the time for self-care and family. The widespread nature of layoffs normalized the experience, reducing personal blame and fostering a sense of shared experience. Financial strain was mitigated by government support, personal savings, and reduced spending, which buffered against potential distress.
The study suggests that the context and available support systems can significantly alter the psychological outcomes of unemployment—which seems promising for AGI-induced unemployment.
From the studies on plant closures and pandemic layoffs, it seems that shame plays a role in making people unhappy after unemployment, which implies that they might be happier in full automation-induced unemployment, since it would be near-universal and not signify any personal failing.
A final piece that reveals a societal-psychological aspect to how much work is deemed necessary is that the amount has changed over time! The number of hours that people have worked has declined over the past 150 years. Work hours tend to decline as a country gets richer. It seems odd to assume that the current accepted amount of work of roughly 40 hours a week is the optimal amount. The 8-hour work day, weekends, time off—hard-fought and won by the labor movement!—seem to have been triumphs for human health and well-being. Why should we assume that stopping here is right? Why should we assume that less work was better in the past, but less work now would be worse?
Removing the shame that accompanies unemployment by removing the sense that one ought to be working seems one way to make people happier during unemployment. Another is what they do with their free time. Regardless of how one enters unemployment, one still confronts empty and often unstructured time.
One paper, titled “Having Too Little or Too Much Time Is Linked to Lower Subjective Well-Being” by Marissa A. Sharif, Cassie Mogilner, and Hal E. Hershfield tried to explore whether it was possible to have “too much” leisure time.
The paper concluded that it is possible to have too little discretionary time, but also possible to have too much, and that moderate amounts of discretionary time seemed best for subjective well-being. More time could be better, or at least not meaningfully worse, provided it was spent on “social” or “productive” leisure activities. This suggests that how people fare psychologically with their post-AGI unemployment will depend heavily on how they use their time, not how much of it there is.
Automation-induced unemployment could feel like retiring depending on how total it is. If essentially no one is working, and no one feels like they should be working, it might be more akin to retirement, in that it would lack the shameful element of feeling set apart from one’s peers.
Women provide another view on whether formal work is good for happiness. Women are, for the most part, relatively recent entrants to the formal labor market. In the U.S., 18% of women were in the formal labor force in 1890. In 2016, 57% were. Has labor force participation made them happier? By some accounts: no. A paper that looked at subjective well-being for U.S. women from the General Social Survey between the 1970s and 2000s—a time when labor force participation was climbing—found both relative and absolute declines in female happiness.
I think women’s work and AI is a relatively optimistic story. Women have been able to automate unpleasant tasks via technological advances, while the more meaningful aspects of their work seem less likely to be automated away. When not participating in the formal labor market, women overwhelmingly fill their time with childcare and housework. The time needed to do housework has declined over time due to tools like washing machines, dryers, and dishwashers. These tools might serve as early analogous examples of the future effects of AI: reducing unwanted and burdensome work to free up time for other tasks deemed more necessary or enjoyable.
it seems less likely that AIs will so thoroughly automate childcare and child-rearing because this “work” is so much more about the relationship between the parties involved. Like therapy, childcare and teaching seem likely to be among the forms of work where a preference for a human worker will persist the longest.
In the early modern era, landed gentry and similar were essentially unemployed. Perhaps they did some minor administration of their tenants, some dabbled in politics or were dragged into military projects, but compared to most formal workers they seem to have worked relatively few hours. They filled the remainder of their time with intricate social rituals like balls and parties, hobbies like hunting, studying literature, and philosophy, producing and consuming art, writing letters, and spending time with friends and family. We don’t have much real well-being survey data from this group, but, hedonically, they seem to have been fine. Perhaps they suffered from some ennui, but if we were informed that the great mass of humanity was going to enter their position, I don’t think people would be particularly worried.
I sometimes wonder if there is some implicit classism in people’s worries about unemployment: the rich will know how to use their time well, but the poor will need to be kept busy.
Although a trained therapist might be able to counsel my friends or family through their troubles better, I still do it, because there is value in me being the one to do so. We can think of this as the relational reason for doing something others can do better. I write because sometimes I enjoy it, and sometimes I think it betters me. I know others do so better, but I don’t care—at least not all the time. The reasons for this are part hedonic and part virtue or morality. A renowned AI researcher once told me that he is practicing for post-AGI by taking up activities that he is not particularly good at: jiu-jitsu, surfing, and so on, and savoring the doing even without excellence. This is how we can prepare for our future where we will have to do things from joy rather than need, where we will no longer be the best at them, but will still have to choose how to fill our days.
·palladiummag.com·
Malleable software in the age of LLMs
Historically, end-user programming efforts have been limited by the difficulty of turning informal user intent into executable code, but LLMs can help open up this programming bottleneck. However, user interfaces still matter, and while chatbots have their place, they are an essentially limited interaction mode. An intriguing way forward is to combine LLMs with open-ended, user-moldable computational media, where the AI acts as an assistant to help users directly manipulate and extend their tools over time.
LLMs will represent a step change in tool support for end-user programming: the ability of normal people to fully harness the general power of computers without resorting to the complexity of normal programming. Until now, that vision has been bottlenecked on turning fuzzy informal intent into formal, executable code; now that bottleneck is rapidly opening up thanks to LLMs.
If this hypothesis indeed comes true, we might start to see some surprising changes in the way people use software:
- One-off scripts: Normal computer users have their AI create and execute scripts dozens of times a day, to perform tasks like data analysis, video editing, or automating tedious tasks.
- One-off GUIs: People use AI to create entire GUI applications just for performing a single specific task—containing just the features they need, no bloat.
- Build don’t buy: Businesses develop more software in-house that meets their custom needs, rather than buying SaaS off the shelf, since it’s now cheaper to get software tailored to the use case.
- Modding/extensions: Consumers and businesses demand the ability to extend and mod their existing software, since it’s now easier to specify a new feature or a tweak to match a user’s workflow.
- Recombination: Take the best parts of the different applications you like best, and create a new hybrid that composes them together.
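A minimal sketch of what one of these throwaway, AI-generated one-off scripts might look like. The CSV data and column names here are invented for illustration; the point is only that the script is small, disposable, and tailored to a single question:

```python
import csv
import io

# A throwaway "one-off script": summarize spending per category from a
# CSV export. In the scenario above, an AI assistant would generate
# something like this on demand and discard it after one use.
CSV_EXPORT = """date,category,amount
2024-01-03,groceries,42.50
2024-01-05,transport,12.00
2024-01-09,groceries,18.25
"""

def spending_by_category(csv_text: str) -> dict:
    """Total the 'amount' column grouped by 'category'."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["category"]] = totals.get(row["category"], 0.0) + float(row["amount"])
    return totals

print(spending_by_category(CSV_EXPORT))
# {'groceries': 60.75, 'transport': 12.0}
```

Nothing here is beyond a non-programmer's reach once the writing of the script itself is delegated; the user only needs to state the question and glance at the answer.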
Chat will never feel like driving a car, no matter how good the bot is. In their 1986 book Understanding Computers and Cognition, Terry Winograd and Fernando Flores elaborate on this point: In driving a car, the control interaction is normally transparent. You do not think “How far should I turn the steering wheel to go around that curve?” In fact, you are not even aware (unless something intrudes) of using a steering wheel…The long evolution of the design of automobiles has led to this readiness-to-hand. It is not achieved by having a car communicate like a person, but by providing the right coupling between the driver and action in the relevant domain (motion down the road).
Think about how a spreadsheet works. If you have a financial model in a spreadsheet, you can try changing a number in a cell to assess a scenario—this is the inner loop of direct manipulation at work. But you can also edit the formulas! A spreadsheet isn’t just an “app” focused on a specific task; it’s closer to a general computational medium which lets you flexibly express many kinds of tasks. The “platform developers”—the creators of the spreadsheet—have given you a set of general primitives that can be used to make many tools. We might draw the double loop of the spreadsheet interaction like this: you can edit numbers in the spreadsheet, but you can also edit formulas, which edits the tool.
what if you had an LLM play the role of the local developer? That is, the user mainly drives the creation of the spreadsheet, but asks for technical help with some of the formulas when needed? The LLM wouldn’t just create an entire solution, it would also teach the user how to create the solution themselves next time.
This picture shows a world that I find pretty compelling. There’s an inner interaction loop that takes advantage of the full power of direct manipulation. There’s an outer loop where the user can also more deeply edit their tools within an open-ended medium. They can get AI support for making tool edits, and grow their own capacity to work in the medium. Over time, they can learn things like the basics of formulas, or how a VLOOKUP works. This structural knowledge helps the user think of possible use cases for the tool, and also helps them audit the output from the LLMs. In a ChatGPT world, the user is left entirely dependent on the AI, without any understanding of its inner mechanism. In a computational medium with AI as assistant, the user’s reliance on the AI gently decreases over time as they become more comfortable in the medium.
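The double loop described above can be sketched in a few lines. This toy model (my own illustration, not from the essay) treats cell values as the inner loop of direct manipulation and formula definitions as the outer, tool-editing loop that an LLM assistant would help with:

```python
# Inner loop: cells hold plain values the user edits directly.
cells = {"A1": 100.0, "A2": 250.0, "A3": 175.0}

# Outer loop: formulas define the tool itself. Editing this dict is
# editing the tool; this is where an LLM assistant could help a user
# write (and understand) the formula.
formulas = {"B1": lambda c: sum(c[k] for k in ("A1", "A2", "A3"))}

def evaluate(cell: str) -> float:
    """Return a formula result if the cell has one, else its raw value."""
    return formulas[cell](cells) if cell in formulas else cells[cell]

print(evaluate("B1"))  # 525.0

# Inner loop at work: change a number, re-evaluate, assess the scenario.
cells["A2"] = 300.0
print(evaluate("B1"))  # 575.0
```

The asymmetry matters: tweaking `cells` requires no technical knowledge, while editing `formulas` requires some, and that gap is exactly where the essay places the AI assistant.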
·geoffreylitt.com·
complete delegation
Linus shares his evolving perspective on chat interfaces and his experience building a fully autonomous chatbot agent. He argues that learning to trust and delegate to such systems without micromanaging the specifics is key to collaborating with autonomous AI agents in the future.
I've changed my mind quite a bit on the role and importance of chat interfaces. I used to think they were the primitive version of rich, creative, more intuitive interfaces that would come in the future; now I think conversational, anthropomorphic interfaces will coexist with more rich dexterous ones, and the two will both evolve over time to be more intuitive, capable, and powerful.
I kept checking the database manually after each interaction to see it was indeed updating the right records — but after a few hours of using it, I've basically learned to trust it. I ask it to do things, it tells me it did them, and I don't check anymore. Full delegation.
How can I trust it? High task success rate — I interact with it, and observe that it doesn't let me down, over and over again. The price for this degree of delegation is giving up control over exactly how the task is done. It often does things differently from the way I would, but that doesn't matter as long as outputs from the system are useful for me.
·stream.thesephist.com·
How Perplexity builds product
inside look at how Perplexity builds product—which to me feels like what the future of product development will look like for many companies:
- AI-first: They’ve been asking AI questions about every step of the company-building process, including “How do I launch a product?” Employees are encouraged to ask AI before bothering colleagues.
- Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
- Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
- Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people’s work.
- A prediction for the future: Johnny said, “If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time.”
Typical projects we work on only have one or two people on it. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He’s a brand designer, but he does audio engineering and he’s doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don’t think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
a big problem with AI is how you prioritize between more productivity-based use cases versus the engaging chatbot-type use cases.
we look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
all teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management and bug tracking?

Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving—if a task hasn’t been mentioned in a while, chances are it’s not actually important.

The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.

Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
High-level objectives and directions come top-down, but a large amount of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
·lennysnewsletter.com·
A camera for ideas
Instead of turning light into pictures, it turns ideas into pictures.
This new kind of camera replicates what your imagination does. It receives words and then synthesizes a picture from its experience seeing millions of other pictures. The output doesn’t have a name yet, but I’ll call it a synthograph (meaning synthetic drawing).
Photography can capture moments that happened, but synthography is not bound by the limitations of reality. Synthography can capture moments that did not happen and moments that could never happen.
Taking a great syntho is about stimulating the imagination of the camera. Synthography doesn’t require you to be anywhere or anywhen in particular.
Photography is an important medium of expression because it is so accessible and instantaneous. Synthography will even further reduce barriers to entry, and give everyone the power to convert ideas into pictures.
·stephango.com·
Photoshop for text
In the near future, transforming text will become as commonplace as filtering images. A new set of tools is emerging, like Photoshop for text. Up until now, text editors have been focused on input. The next evolution of text editors will make it easy to alter, summarize and lengthen text. You’ll be able to do this for entire documents, not just individual sentences or paragraphs. The filters will be instantaneous and as good as if you wrote the text yourself. You will also be able to do this with local files, on your device, without relying on remote servers.
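By analogy with image filters, a text filter is just a function from text to text, and filters compose into pipelines. A deterministic sketch of that interface (my illustration; the filters the piece envisions would be backed by a local language model, but the pipeline shape would be the same):

```python
from functools import reduce

# Each "filter" is a pure function text -> text, like an image filter
# is pixels -> pixels. These two are trivial deterministic stand-ins.

def uppercase_headings(text: str) -> str:
    """Shout any markdown-style '# ' heading lines."""
    return "\n".join(
        line.upper() if line.startswith("# ") else line
        for line in text.splitlines()
    )

def squeeze_blank_lines(text: str) -> str:
    """Collapse runs of blank lines down to a single blank line."""
    out, prev_blank = [], False
    for line in text.splitlines():
        blank = not line.strip()
        if not (blank and prev_blank):
            out.append(line)
        prev_blank = blank
    return "\n".join(out)

def apply_filters(text: str, filters) -> str:
    """Run a pipeline of text filters left to right."""
    return reduce(lambda t, f: f(t), filters, text)

doc = "# title\n\n\nbody text"
print(apply_filters(doc, [uppercase_headings, squeeze_blank_lines]))
```

Swapping a model-backed `summarize` or `lengthen` into the same pipeline is what would make this "Photoshop for text" rather than a find-and-replace toy.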
Initially, many of Photoshop’s capabilities were adaptations of analog effects. For example, “dodge” and “burn” are old darkroom techniques used to alter photographs. There are countless skeuomorphic names throughout digital image editing tools that refer to analog processes.
Text seems like it would be easier to manipulate than images. But languages have far more rules than images do. A reader expects writing to follow proper spelling and grammar, a consistent tone, and a logical sequence of sentences. Until now, solving this problem required building complex rule-based algorithms. Now we can solve this problem with AI models that can teach themselves to create readable text in any language.
·stephango.com·
Announcing iA Writer 7
New features in iA Writer discern authorship between human and AI writing, and encourage making human changes to writing pasted from AI
With iA Writer 7 you can manually mark ChatGPT’s contributions as AI text. AI text is greyed out. This allows you to separate and control what you borrow and what you type. By splitting what you type and what you pasted, you can make sure that you speak your mind with your voice, rhythm and tone.
As a dialog partner, AI makes you think more and write better. As a ghost writer, it takes over and you lose your voice. Yet sometimes it helps to paste its replies and notes, and if you want to use that information, you rewrite it to make it your own. So far, in traditional apps we are not able to easily see what we wrote and what we pasted from AI. iA Writer lets you discern your words from what you borrowed as you write on top of it. As you type over the AI-generated text you can see it becoming your own. We found that in most cases, and with the exception of some generic pronouns and common verbs like “to have” and “to be”, most texts profit from a full rewrite.
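One way to picture the mechanism such a feature implies (a guess for illustration, not iA Writer's actual implementation) is per-span authorship tagging, where rewriting an AI span converts it into a human one:

```python
from dataclasses import dataclass

@dataclass
class Span:
    """A run of text with a recorded origin: 'human' or 'ai'."""
    text: str
    origin: str

def paste_ai(doc: list, text: str) -> None:
    """Pasted AI output enters the document tagged (and greyed out)."""
    doc.append(Span(text, "ai"))

def retype(doc: list, index: int, new_text: str) -> None:
    """Typing over an AI span in your own words makes it yours."""
    doc[index] = Span(new_text, "human")

def ai_fraction(doc: list) -> float:
    """Share of characters still carrying the 'ai' tag."""
    total = sum(len(s.text) for s in doc)
    ai = sum(len(s.text) for s in doc if s.origin == "ai")
    return ai / total if total else 0.0

doc = [Span("My own intro. ", "human")]
paste_ai(doc, "AI-generated summary.")
retype(doc, 1, "A summary in my voice.")
print(ai_fraction(doc))  # 0.0 after the rewrite
```

Whatever the real data structure, the user-facing idea is the same: borrowed text stays visibly borrowed until you write over it.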
we believe that using AI for writing will likely become as common as using dishwashers, spellcheckers, and pocket calculators. The question is: How will it be used? Like spell checkers, dishwashers, chess computers and pocket calculators, writing with AI will be tied to varying rules in different settings.
We suggest using AI’s ability to replace thinking not for ourselves but for writing in dialogue. Don’t use it as a ghost writer. Because why should anyone bother to read what you didn’t write? Use it as a writing companion. It comes with a ChatUI, so ask it questions and let it ask you questions about what you write. Use it to think better, don’t become a vegetable.
·ia.net·
How OpenAI is building a path toward AI agents
Many of the most pressing concerns around AI safety will come with these features, whenever they arrive. The fear is that when you tell AI systems to do things on your behalf, they might accomplish them via harmful means. This is the fear embedded in the famous paperclip problem, and while that remains an outlandish worst-case scenario, other potential harms are much more plausible.

Once you start enabling agents like the ones OpenAI pointed toward today, you start building the path toward sophisticated algorithms manipulating the stock market; highly personalized and effective phishing attacks; discrimination and privacy violations based on automations connected to facial recognition; and all the unintended (and currently unimaginable) consequences of infinite AIs colliding on the internet.
That same Copy Editor I described above might be able in the future to automate the creation of a series of blogs, publish original columns on them every day, and promote them on social networks via an established daily budget, all working toward the overall goal of undermining support for Ukraine.
Which actions is OpenAI comfortable letting GPT-4 take on the internet today, and which does the company not want to touch?  Altman’s answer is that, at least for now, the company wants to keep it simple. Clear, direct actions are OK; anything that involves high-level planning isn’t.
For most of his keynote address, Altman avoided making lofty promises about the future of AI, instead focusing on the day-to-day utility of the updates that his company was announcing. In the final minutes of his talk, though, he outlined a loftier vision. “We believe that AI will be about individual empowerment and agency at a scale we've never seen before,” Altman said, “and that will elevate humanity to a scale that we've never seen before, either. We'll be able to do more, to create more, and to have more. As intelligence is integrated everywhere, we will all have superpowers on demand.”
·platformer.news·
Synthography – An Invitation to Reconsider the Rapidly Changing Toolkit of Digital Image Creation as a New Genre Beyond Photography
With the comprehensive application of Artificial Intelligence into the creation and post production of images, it seems questionable if the resulting visualisations can still be considered ‘photographs’ in a classical sense – drawing with light. Automation has been part of the popular strain of photography since its inception, but even the amateurs with only basic knowledge of the craft could understand themselves as author of their images. We state a legitimation crisis for the current usage of the term. This paper is an invitation to consider Synthography as a term for a new genre for image production based on AI, observing the current occurrence and implementation in consumer cameras and post-production.
·link.springer.com·