Personal toolkit: A framework for personal knowledge management tools
Tennis Explains Everything - The Atlantic
Tennis is an elegant and simple sport. Players stand on opposite sides of a rectangle, divided by a net that can’t be crossed. The gameplay is full of invisible geometry: Viewers might trace parabolas, angles, and lines depending on how the players move and where they hit the ball. It’s an ideal representation of conflict, a perfect stage for pitting one competitor against another, so it’s no wonder that the game comes to stand in for all sorts of different things off the court.
The “Battle of the Sexes” match in 1973, between Billie Jean King and then-retired Bobby Riggs, has since been mythologized as a turning point for women’s sports. If the social allegory of the Ashe-Graebner match was subtextual, the one in this spectacle—which ended in a decisive victory for King over the cartoonishly chauvinistic Riggs—was glaringly explicit. At a time when women’s liberation was becoming a force that threw all sorts of conventions into question, and plenty of people were for or against the gains of the movement, seeing the debate represented by a game of tennis surely had a comforting appeal. For those with more regressive beliefs, rooting for Bobby was certainly easier than really articulating a justification for maintaining massive pay disparities between men and women, both within and outside of professional tennis.
Within their love triangle, tension arises with the dawning recognition that in a one-on-one sport, there’s always another person who doesn’t have a place on the court. Save for the night they meet, when Tashi induces Art and Patrick to kiss each other for her entertainment, the three of them rarely engage with one another at the same time: Someone is always watching from the stands, whether literally or metaphorically.
During Patrick and Tashi’s brief romance, a post-coital conversation seamlessly transitions into a discussion about Patrick’s poor performance as a pro, and eventually becomes a referendum on why their relationship doesn’t work. Confused, and trying to make sense of it all as their banter swiftly changes definitions, Patrick asks: “Are we still talking about tennis?” “We’re always talking about tennis,” Tashi replies. Frustrated, Patrick tersely retorts: “Can we not?”
As the linguists George Lakoff and Mark Johnson argue in their 1980 book, Metaphors We Live By, “Our ordinary conceptual system, in terms of which we both think and act, is fundamentally metaphorical in nature.” In other words, we’re always talking about things in terms of other things—even if it’s not always as obvious as it is in Challengers. Metaphors are more than just a poetic device; they’re fundamental to the way language is structured.
No matter what issue is at stake, or how grand it may be, it can always be reduced to an individual’s performance on the court.
While Patrick is still dating Tashi, and Art is transparently trying to steal his best friend’s girl, Patrick playfully accuses Art of playing “percentage tennis”—a patient strategy of hitting low-risk shots and waiting for your opponent to mess up.
Art asks for Tashi’s permission to retire once the season is over. Art knows that this would be the end of their professional relationship—he would no longer be able to play dutiful pupil to Tashi. But it might also be the end of whatever spark animated their love in the first place, as you can’t play “good fucking tennis” in retirement. Tashi says she will leave Art if he doesn’t beat Patrick in the final. Tired of playing, but unable to escape the game, Art curls up in his wife’s lap and cries.
Build tools around workflows, not workflows around tools | thesephist.com
Building your own productivity tools that conform to your unique workflows and mental models is more effective than using mass-market tools and bending your workflows to fit them.
My biggest benefit from writing my own tool set is that I can build the tools that exactly conform to my workflows, rather than constructing my workflows around the tools available to me. This means the tools can truly be an extension of the way my brain thinks and organizes information about the world around me.
I think it’s easy to underestimate the extent to which our tools can constrain our thinking, if the way they work goes against the way we work. Conversely, great tools that parallel our minds can multiply our creativity and productivity, by removing the invisible friction of translating between our mental models and the models around which the tools are built.
I don’t think everyone needs to go out and build their own productivity tools from the ground up. But I do think that it’s important to think of the tools you use to organize your life as extensions of your mind and yourself, rather than trivial utilities to fill the gaps in your life.
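As a concrete (and entirely hypothetical) illustration of what a tool built around one person's workflow can look like, here is a minimal capture script in Python. The file location, the timestamp format, and even the choice of a single plain-text inbox are assumptions standing in for whatever actually fits your own habits; nothing here comes from the original post.

```python
# Hypothetical personal capture tool: appends timestamped notes to one
# plain-text inbox, because that happens to match one person's workflow.
import sys
from datetime import datetime
from pathlib import Path

INBOX = Path.home() / "notes" / "inbox.txt"  # assumed location; adjust to taste

def capture(text: str) -> None:
    """Append a timestamped one-line note to the inbox file."""
    INBOX.parent.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    with INBOX.open("a", encoding="utf-8") as f:
        f.write(f"{stamp}  {text}\n")

if __name__ == "__main__":
    capture(" ".join(sys.argv[1:]) or input("note> "))
```

The value isn't in the code itself; it's that every default in it encodes a decision you would otherwise be renegotiating with someone else's product.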
The rise of Generative AI-driven design patterns
One of the most impactful uses of LLM technology lies in content rewriting, which naturally capitalizes on these systems’ robust capabilities for generating and refining text. This application is a logical fit, helping users enhance their content while engaging with a service.
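As a rough sketch of how such a rewriting feature might be wired up (assuming the OpenAI Python SDK; the model name, the tone parameter, and the prompt wording are placeholders rather than anything the article specifies):

```python
# Minimal sketch of a "rewrite my draft" feature, assuming the OpenAI Python SDK.
# Model name, tone options, and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite(draft: str, tone: str = "concise and professional") -> str:
    """Return the draft rewritten in the requested tone, preserving its meaning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Rewrite the user's text to be {tone}. "
                        "Keep the meaning and all factual details unchanged."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(rewrite("hey just checking did u get the invoice i sent last week??"))
```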
Similar to summarization but incorporating an element of judgment, features like Microsoft Teams Copilot’s call transcript summaries distill extensive discussions into essential bullet points, spotlighting pivotal moments or insights.
The ability to ‘understand’ nuanced language through summarization extends naturally into advanced search functionalities. ServiceNow does this by enabling customer service agents to search tickets for recommended solutions and to dispel jargon used by different agents.
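The excerpt doesn't describe ServiceNow's implementation, but a generic sketch of embedding-based search over tickets might look like the following. The OpenAI embeddings endpoint and the tiny in-memory corpus are assumptions; a production system would use a vector database and richer ranking.

```python
# Generic sketch of semantic ticket search (not ServiceNow's actual method):
# embed the tickets once, embed the query, rank by cosine similarity.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

tickets = [
    "VPN drops every 30 minutes on macOS after the 14.2 update",
    "Password reset email never arrives for contractors",
    "Printer on floor 3 shows 'driver unavailable'",
]
ticket_vecs = embed(tickets)

def search(query: str, top_k: int = 2) -> list[str]:
    """Return the tickets most similar to the query, best match first."""
    q = embed([query])[0]
    sims = ticket_vecs @ q / (np.linalg.norm(ticket_vecs, axis=1) * np.linalg.norm(q))
    return [tickets[i] for i in np.argsort(-sims)[:top_k]]

print(search("user can't stay connected to the corporate network"))
```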
Rather than merely focusing on content creation or manipulation, emerging applications of these systems provide new perspectives and predict outcomes based on accumulated human experiences. The actual value of these applications lies not merely in enhancing efficiency but in augmenting effectiveness, enabling users to make more informed decisions.
How Perplexity builds product
inside look at how Perplexity builds product—which to me feels like what the future of product development will look like for many companies:
- AI-first: They’ve been asking AI questions about every step of the company-building process, including “How do I launch a product?” Employees are encouraged to ask AI before bothering colleagues.
- Organized like slime mold: They optimize for minimizing coordination costs by parallelizing as much of each project as possible.
- Small teams: Their typical team is two to three people. Their AI-generated (highly rated) podcast was built and is run by just one person.
- Few managers: They hire self-driven ICs and actively avoid hiring people who are strongest at guiding other people’s work.
- A prediction for the future: Johnny said, “If I had to guess, technical PMs or engineers with product taste will become the most valuable people at a company over time.”
Typical projects we work on only have one or two people on them. The hardest projects have three or four people, max. For example, our podcast is built by one person end to end. He’s a brand designer, but he does audio engineering and he’s doing all kinds of research to figure out how to build the most interactive and interesting podcast. I don’t think a PM has stepped into that process at any point.
We leverage product management most when there’s a really difficult decision that branches into many directions, and for more involved projects.
The hardest, and most important, part of the PM’s job is having taste around use cases. With AI, there are way too many possible use cases that you could work on. So the PM has to step in and make a branching qualitative decision based on the data, user research, and so on.
a big problem with AI is how you prioritize between more productivity-based use cases versus the engaging chatbot-type use cases.
we look foremost for flexibility and initiative. The ability to build constructively in a limited-resource environment (potentially having to wear several hats) is the most important to us.
We look for strong ICs with clear quantitative impacts on users rather than within their company. If I see the terms “Agile expert” or “scrum master” in the resume, it’s probably not going to be a great fit.
My goal is to structure teams around minimizing “coordination headwind,” as described by Alex Komoroske in this deck on seeing organizations as slime mold. The rough idea is that coordination costs (caused by uncertainty and disagreements) increase with scale, and adding managers doesn’t improve things. People’s incentives become misaligned. People tend to lie to their manager, who lies to their manager. And if you want to talk to someone in another part of the org, you have to go up two levels and down two levels, asking everyone along the way.
Instead, what you want to do is keep the overall goals aligned, and parallelize projects that point toward this goal by sharing reusable guides and processes.
Perplexity has existed for less than two years, and things are changing so quickly in AI that it’s hard to commit beyond that. We create quarterly plans. Within quarters, we try to keep plans stable within a product roadmap. The roadmap has a few large projects that everyone is aware of, along with small tasks that we shift around as priorities change.
Each week we have a kickoff meeting where everyone sets high-level expectations for their week. We have a culture of setting 75% weekly goals: everyone identifies their top priority for the week and tries to hit 75% of that by the end of the week. Just a few bullet points to make sure priorities are clear during the week.
All objectives are measurable, either in terms of quantifiable thresholds or Boolean “was X completed or not.” Our objectives are very aggressive, and often at the end of the quarter we only end up completing 70% in one direction or another. The remaining 30% helps identify gaps in prioritization and staffing.
At the beginning of each project, there is a quick kickoff for alignment, and afterward, iteration occurs in an asynchronous fashion, without constraints or review processes. When individuals feel ready for feedback on designs, implementation, or final product, they share it in Slack, and other members of the team give honest and constructive feedback. Iteration happens organically as needed, and the product doesn’t get launched until it gains internal traction via dogfooding.
all teams share common top-level metrics while A/B testing within their layer of the stack. Because the product can shift so quickly, we want to avoid political issues where anyone’s identity is bound to any given component of the product.
We’ve found that when teams don’t have a PM, team members take on the PM responsibilities, like adjusting scope, making user-facing decisions, and trusting their own taste.
What’s your primary tool for task management and bug tracking?
Linear. For AI products, the line between tasks, bugs, and projects becomes blurred, but we’ve found many concepts in Linear, like Leads, Triage, Sizing, etc., to be extremely important. A favorite feature of mine is auto-archiving—if a task hasn’t been mentioned in a while, chances are it’s not actually important.
The primary tool we use to store sources of truth like roadmaps and milestone planning is Notion. We use Notion during development for design docs and RFCs, and afterward for documentation, postmortems, and historical records. Putting thoughts on paper (documenting chain-of-thought) leads to much clearer decision-making, and makes it easier to align async and avoid meetings.
Unwrap.ai is a tool we’ve also recently introduced to consolidate, document, and quantify qualitative feedback. Because of the nature of AI, many issues are not always deterministic enough to classify as bugs. Unwrap groups individual pieces of feedback into more concrete themes and areas of improvement.
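The Unwrap point above (grouping free-form feedback into concrete themes) can be illustrated with a generic sketch; this is not Unwrap's actual method, and it assumes the OpenAI embeddings endpoint plus scikit-learn.

```python
# Illustrative sketch of grouping qualitative feedback into themes:
# embed each piece of feedback, then cluster the vectors.
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()

feedback = [
    "Answers take too long to stream on mobile",
    "Citations sometimes point to the wrong paragraph",
    "App feels sluggish on my phone",
    "Source links don't match the quoted text",
]

resp = client.embeddings.create(model="text-embedding-3-small", input=feedback)
vectors = np.array([d.embedding for d in resp.data])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for text, label in zip(feedback, labels):
        if label == cluster:
            print("  -", text)
```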
High-level objectives and directions come top-down, but a large number of new ideas are floated bottom-up. We believe strongly that engineering and design should have ownership over ideas and details, especially for an AI product where the constraints are not known until ideas are turned into code and mock-ups.
Big challenges today revolve around scaling from our current size to the next level, both on the hiring side and in execution and planning. We don’t want to lose our core identity of working in a very flat and collaborative environment. Even small decisions, like how to organize Slack and Linear, can be tough to scale. Trying to stay transparent and scale the number of channels and projects without causing notifications to explode is something we’re currently trying to figure out.
Flow state - Why fragmented thinking is worse than any interruption
Both arts and athletics involve a lot of deft physical movement, and I could see why professionals in those fields would benefit from learning to resist overthinking so they can “just do it.”
Almost every profession involves some need for focus, however, so you can see why, over time, the idea of a flow state breached its original limits. Now, “flow state” has all sorts of associations—some scientific, some folk, and some a mix of both. For many, the term has just become a dressed-up version of focusing.
A 2023 study found, for example, that there is a huge range of barriers to flow—many of which aren’t just interruptions from coworkers. They categorized these as situational barriers, such as interruptions and distractions; personal barriers, such as the work being too challenging or not challenging enough; and interpersonal barriers, such as poor management and poor team dynamics.
A 2018 study found, in addition, that the most disruptive interruptions aren’t external—they’re internal. 81% of the participants predicted external interruptions would be worse, but they were wrong. “Self-interruptions,” the researchers wrote, “make task switching and interruptions more disruptive by negatively impacting the length of the suspension period and the number of nested interruptions.”
But because no one literally interrupted your work, you might be unaware of the costs of that rote, mundane work. You might even castigate yourself over the day for not getting the work done: You fought for a distraction-free day, got it, and you have nothing to show for it. It can feel bad.
a seemingly individual problem, staying focused, is often downstream from an organizational problem.
Looking for AI use-cases — Benedict Evans
- LLMs have impressive capabilities, but many people struggle to find immediate use-cases that match their own needs and workflows.
- Realizing the potential of LLMs requires not just technical advancements, but also identifying specific problems that can be automated and building dedicated applications around them.
- The adoption of new technologies often follows a pattern of initially trying to fit them into existing workflows, before eventually changing workflows to better leverage the new tools.
if you had shown VisiCalc to a lawyer or a graphic designer, their response might well have been ‘that’s amazing, and maybe my book-keeper should see this, but I don’t do that’. Lawyers needed a word processor, and graphic designers needed (say) PostScript, PageMaker and Photoshop, and that took longer.
I’ve been thinking about this problem a lot in the last 18 months, as I’ve experimented with ChatGPT, Gemini, Claude and all the other chatbots that have sprouted up: ‘this is amazing, but I don’t have that use-case’.
A spreadsheet can’t do word processing or graphic design, and a PC can do all of those but someone needs to write those applications for you first, one use-case at a time.
no matter how good the tech is, you have to think of the use-case. You have to see it. You have to notice something you spend a lot of time doing and realise that it could be automated with a tool like this.
Some of this is about imagination, and familiarity. It reminds me a little of the early days of Google, when we were so used to hand-crafting our solutions to problems that it took time to realise that you could ‘just Google that’.
This is also, perhaps, matching a classic pattern for the adoption of new technology: you start by making it fit the things you already do, where it’s easy and obvious to see that this is a use-case, if you have one, and then later, over time, you change the way you work to fit the new tool.
The concept of product-market fit is that normally you have to iterate your idea of the product and your idea of the use-case and customer towards each other - and then you need sales.
Meanwhile, spreadsheets were both a use-case for a PC and a general-purpose substrate in their own right, just as email or SQL might be, and yet all of those have been unbundled. The typical big company today uses hundreds of different SaaS apps, all of them, so to speak, unbundling something out of Excel, Oracle or Outlook. All of them, at their core, are an idea for a problem and an idea for a workflow to solve that problem, that is easier to grasp and deploy than saying ‘you could do that in Excel!’ Rather, you instantiate the problem and the solution in software - ‘wrap it’, indeed - and sell that to a CIO. You sell them a problem.
there’s a ‘Cambrian Explosion’ of startups using OpenAI or Anthropic APIs to build single-purpose dedicated apps that aim at one problem and wrap it in hand-built UI, tooling and enterprise sales, much as a previous generation did with SQL.
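A hypothetical example of that wrapper pattern: one narrow problem, a fixed prompt, and an LLM call underneath. It assumes the OpenAI Python SDK; the use case and prompt wording are invented for illustration, not taken from the essay.

```python
# Hypothetical "single-purpose wrapper": turn a messy meeting transcript into
# action items. The entire product is a fixed workflow around a general model.
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Extract the action items from this meeting transcript. "
    "Return one bullet per item in the form '- owner: task (due date if stated)'."
)

def action_items(transcript: str) -> str:
    """Return a bulleted list of action items found in the transcript."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The hand-built UI, tooling, and enterprise sales that the essay describes all sit around a core roughly this small.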
Back in 1982, my father had one (1) electric drill, but since then tool companies have turned that into a whole constellation of battery-powered electric hole-makers. Once upon a time every startup had SQL inside, but that wasn’t the product, and now every startup will have LLMs inside.
people are still creating companies based on realising that X or Y is a problem, realising that it can be turned into pattern recognition, and then going out and selling that problem.
A GUI tells the users what they can do, but it also tells the computer everything we already know about the problem, and with a general-purpose, open-ended prompt, the user has to think of all of that themselves, every single time, or hope it’s already in the training data. So, can the GUI itself be generative? Or do we need another whole generation of Dan Bricklins to see the problem, and then turn it into apps, thousands of them, one at a time, each of them with some LLM somewhere under the hood?
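One speculative reading of a "generative GUI" (not something the essay commits to) is a model that emits a machine-readable description of the interface a task needs, which ordinary code then renders. A minimal sketch, assuming the OpenAI Python SDK and a made-up form schema:

```python
# Speculative sketch: ask the model for a JSON form specification, then let
# ordinary code render it. The schema below is invented for illustration.
import json
from openai import OpenAI

client = OpenAI()

def form_spec(task: str) -> list[dict]:
    """Return a list of {"label", "type"} fields the given task would need."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Reply with JSON like {"fields": [{"label": str, "type": '
                        '"text"|"number"|"date"}]} describing a form for the task.'},
            {"role": "user", "content": task},
        ],
    )
    return json.loads(response.choices[0].message.content)["fields"]

for field in form_spec("log a business expense for reimbursement"):
    print(f"[{field['type']:>6}] {field['label']}")
```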
The change would be that these new use-cases would be things that are still automated one-at-a-time, but that could not have been automated before, or that would have needed far more software (and capital) to automate. That would make LLMs the new SQL, not the new HAL9000.