Public Health & Medicine

Supercharge your research with Alec Nguyen, co-founder of Afforai
FEATURED TOOL Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Alec Nguyen is the co-founder of Afforai, an AI tool to perform research and interact with documents. He is now on a mission to make AI tools more accessible to everyone, whatever their background. In this interview, we talked about privacy and security in the AI industry, why multilingual support is paramount for inclusive AI, the importance of reliability for knowledge workers, and much more. Enjoy the read! Hi Alec, thanks so much for agreeing to this interview! Afforai started as a college project. Can you tell us about the early discovery journey? My cofounder, Hung, and I have known each other since our first year of college. We both have backgrounds in economics, data science and software engineering, which means we read a lot of research papers. But it started feeling overwhelming—so much to read and summarize. That’s when we thought about a faster way to read through papers, get the main points, and still do our work accurately. And this is how we came up with Afforai. During our final year at university, we developed Afforai for a startup competition. Our success as finalists, along with acceptance into two amazing startup accelerators, 1871 and gener8tor, marked significant milestones that led to us pursuing Afforai full-time. We are building Afforai to be the solution we wish heavy researchers like us had. Your ambition is to create Google for knowledge—what does that mean, exactly? With so much knowledge about anything and everything around the world, it’s really easy to drown in the sea of information. No one person can master everything anymore. There’s simply too much to know. Google managed to index websites and data from the entire internet, but it doesn’t understand the information it indexed. We’re working to build a platform that indexes knowledge, making infinite knowledge instantly accessible. Specifically, how does it work? Afforai helps you input all the information on any topic, from any discipline, in any language, and summarize the key findings relevant to your goals, providing you with an invaluable tool to research damn near anything. It can search for information, summarize reports, and translate between languages, answering and explaining in a different language than the original text. You can upload hundreds of documents and files like PDF, DOCX, and text, and even websites, and Afforai will be able to comprehend the entire body of knowledge you provided to give you the answer you’re looking for. Can you also tell us about the different modes you have created for the assistant? Afforai has three different modes, Fast, Powerful, and Google, which determine the way it gathers and comprehends knowledge to give you the answer you’re looking for. The “Fast mode” (default) is designed for tasks like information look-up and creating website chatbots. This mode uses regular Retrieval-Augmented Generation (RAG) to embed a collection of text into a vector database. This technology struggles with answering questions whose answers are not explicitly given in the database and has length limits for generating responses. The “Powerful mode” is designed to address the above limitations. This mode identifies information that may not be explicitly stated or that requires additional reasoning. It also combines information from different answers and filters out redundant or irrelevant information.
The resulting output is an information-dense answer with a reading coverage of 100,000 words, which is 10 times that of Fast mode. Powerful mode is recommended for tasks like document comprehension, reasoning tasks, writing reports, and research. The “Google mode” is pretty self-explanatory. Turning this mode on will allow Afforai to access the internet to supplement the answer when our AI determines that the provided documents don’t have enough information. This also adds an extra layer to guarantee up-to-date information for your answer. AI + Internet = Magical Answers. Something people often worry about with AI tools for research is the accuracy and reliability of the output. How does Afforai address this challenge? Of course, my goal is to build an AI tool that people can trust: accurate, fast, and reliable. You can upload hundreds of files like PDF, DOCX, and websites to Afforai. Afforai will use our Powerful mode algorithm combined with the Azure OpenAI model to understand and extrapolate information that is both explicitly and implicitly stated in the sources. To be even more helpful with your knowledge research work, you can view your files and documents side by side with our document viewer feature as you ask questions to Afforai, so you don’t have to switch tabs back and forth as you work. Expanding on the document viewer, you get accuracy and reliability for every answer given by the AI with the data citation feature. With every answer given by Afforai, you will get clickable citation links that highlight in the document viewer where Afforai got the answer from, from the page of the document down to the paragraph, every single time. You can also connect the Afforai chatbot with Google, giving you the ability to do real-time research with up-to-date information. Building on top of an overpowered data comprehension ability, Afforai also accesses the internet and fills in any information gaps to provide you with the most accurate answer. You’ve also worked hard to address the challenge of multilingual content. I came to the US as an international student, so English is not my first language. This gives me the understanding that knowledge exists not only in English. Supporting multilingual content was incredibly important to me personally, and I believe this gives my users in the US and around the world equal access to Afforai and equal access to knowledge, anywhere and everywhere, regardless of cultural and language barriers. With Afforai, you can upload files and research papers in foreign languages and get a response in English, and vice versa. To give an example, you can upload a pasta recipe from an Italian website and ask about the recipe in Japanese, and Afforai will answer you in Japanese about the Italian recipe. This is possible with over 100 languages. What about privacy and security? I’ve been a user of so many apps and services, so I understand firsthand how I’d like my data to be protected. So, having the opportunity to build my own startup, I always think about how I would feel if I were a user of Afforai. With that said, I do not ever use your data or sell it to any other companies. I don’t store your conversations with the AI, and the files you upload to the system are stored and encrypted in the cloud using Microsoft Azure and MongoDB with their standard security. LLM calls are made using Azure OpenAI with their security measures. What about you, how do you use Afforai?
There are many ways I’ve personally used Afforai in my own work. For instance, I use Afforai to scan through partnership agreements for my startup. I upload an entire 600-page macroeconomics book and use “Powerful mode” to learn, summarize chapters, and create knowledge check questions. I have also used Afforai to create an internal training chatbot for my team members. How do you recommend someone get started? To get started with Afforai, I recommend following these steps. First, you can sign up for Afforai for free and get 50 credits to try the platform. You will go through a quick welcome guide that will help you get started with using Afforai. Then, you will be able to upload many types of documents such as books, papers, reports, and even Ness Labs articles. You can upload multiple documents at once. And this is the aha moment you will experience: You can immediately start asking questions to Afforai just like you would ask a know-it-all friend. For example, you can ask for a summary of the Ness Labs blogs, extract common themes from the documents, or even draft an email based on a team report. Tasks that would have taken hours or even days to complete can now be done in just minutes with the help of Afforai. The reliability of the information provided by Afforai satisfies me as a knowledge worker. The AI provides detailed citations, giving me the confidence to use the knowledge it provides. Many users, including myself, have saved a lot of mental power and significant amounts of time, and boosted their productivity with Afforai. Students use Afforai to summarize their readings for their schoolwork. Sales teams have relied on Afforai to conduct customer research faster and get the most relevant info to send effective outreach emails. That sounds fantastic, and very easy to get started. And finally… What’s next for Afforai? In terms of development, I want to stay true to my vision, making Afforai your second brain, helping you make infinite knowledge instantly accessible and making you the smartest person in your field. I have sales teams, researchers, and knowledge workers starting to use Afforai in their work and I’m very happy to see the value I bring to these amazing people. We’re also expanding our team, with brilliant new minds joining Afforai to help make it even better. This enables my team and me to polish up the platform and develop new features that give our users a better experience. Because at the end of the day, the users are the people who bring value to Afforai. Thank you so much for your time, Alec! Where can people learn more about Afforai? If you read all the way here, I encourage you to sign up and try it here. You can message me directly on my LinkedIn or email me. I personally check this inbox every day. So these two methods are the fastest to get a direct hold of me. The post Supercharge your research with Alec Nguyen, co-founder of Afforai appeared first on Ness Labs.
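For readers curious about the retrieval-augmented generation (RAG) pattern Alec describes for Fast mode, here is a minimal, library-agnostic sketch of the general idea. This is my own illustration, not Afforai’s implementation: the embed and ask_llm functions below are placeholders for whatever embedding model and hosted LLM a real system would call.

```python
# Minimal sketch of retrieval-augmented generation (RAG): index document chunks
# as vectors, retrieve the most relevant ones for a question, and pass only
# those to the language model. Hypothetical illustration, not Afforai's code.
import zlib
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding. A real system would call an embedding model;
    this stand-in only produces vectors of the right shape, so similar texts
    are NOT actually mapped to similar vectors."""
    rng = np.random.default_rng(zlib.crc32(text.encode("utf-8")))
    vec = rng.normal(size=384)
    return vec / np.linalg.norm(vec)

def ask_llm(prompt: str) -> str:
    """Placeholder LLM call. A real system would send the prompt to a hosted model."""
    return f"[answer generated from a prompt of {len(prompt)} characters]"

# 1. Index: embed each document chunk into a small vector "database".
chunks = [
    "Afforai supports PDF, DOCX, and text files, as well as websites.",
    "Powerful mode combines and filters information across many documents.",
    "Google mode supplements answers with live web results.",
]
index = np.stack([embed(chunk) for chunk in chunks])

# 2. Retrieve: rank chunks by cosine similarity to the question (vectors are unit length).
question = "Which file types can I upload?"
scores = index @ embed(question)
top_chunks = [chunks[i] for i in np.argsort(scores)[::-1][:2]]

# 3. Generate: answer using only the retrieved context.
prompt = "Context:\n" + "\n".join(top_chunks) + f"\n\nQuestion: {question}\nAnswer:"
print(ask_llm(prompt))
```

In a production tool, the retrieval step would query a real vector database and the prompt would go to a model such as Azure OpenAI, as mentioned in the interview; the structure of index, retrieve, generate is the part this sketch is meant to show.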
Self-Anthropology: Become your own anthropologist with personal field notes
When was the last time you stopped to truly observe your own life? Turning an anthropological lens on yourself might feel strange, but it can lead to invaluable insights, allowing you to uncover patterns, gain self-knowledge, and imagine new possibilities. Anthropologists ask fundamental questions such as: What does it mean to live in our world as a human being? How can the study of humanity reveal new ways of being human and help us imagine our collective future? It’s a game of curiosity and patience, an exercise in humility and receptiveness. And it’s a game you can play to learn more about yourself and where you stand in the world. You simply need to turn into an anthropologist where the topic of study is your own life. In search of answers, anthropologists conduct fieldwork: they go into the field and write field notes. These notes could be written accounts of observations, or they may take the form of visual maps to chart relationships and uncover intriguing paths. In the same way that anthropologists take field notes to understand humanity, you can use this practice to learn more about who you are and how to improve your life. Keeping a personal field journal will allow you to create a trail of breadcrumbs to deconstruct patterns and imagine new directions. Let’s see how it works. The power of personal field notes I was first introduced to the idea of adding timestamps to my notes by Tony Stubblebine, the CEO of Medium, who notes the time and writes a few sentences in a journal every time he switches work projects. Because he journals in the interstice between projects, Stubblebine dubbed this practice interstitial journaling. Then, I started seeing such timestamped notes everywhere. Timestamped notes are ubiquitous in professions where important decisions must be made based on rapidly changing information. Doctors write patient charts, pilots keep flight logs, scientists track their research in lab notebooks, system engineers record events to the syslog, journalists have interview transcripts, and project managers often maintain work logs. Inspired by all these forms of timestamped notes, personal field notes offer a hybrid of journaling and note-taking specifically designed to audit your daily experiences. The basic idea is to write a few lines every time you take a break and track the exact time you take these notes. Unlike logs that focus on events at work or interstitial journaling, which is confined to workday transitions, personal field notes can be captured anytime and anywhere – whether at the office, home, commuting, or even mid-conversation when something piques your interest. (My friends sometimes make fun of me when I grab my phone saying “Wait, I need to write that down!” while we chat – it usually means it’s a good chat). Field notes are powerful for several reasons. By encouraging you to capture your thoughts while listening to podcasts, reading articles, or even during conversations, they help you become a more active observer of your own life. They take very little time; a few seconds whenever you observe something interesting. And because they are timestamped, they help make it easier to identify under what conditions you work, learn, and feel best. By taking notes in the present moment instead of waiting until a dedicated time to reflect, you are less likely to forget some important experiences; this includes fleeting moments of inspiration and ideas that often get lost in the bustle of the day. 
And when you collect lots of small data points, you create a “breadcrumbs trail” and are more likely to notice overarching trends than if you only focus on the most salient experiences. By recording your activities, thoughts, and emotions, these notes will serve as a rich source of observations you can then turn into insights to guide your next growth loop.  How to practice self-anthropology Practicing self-anthropology with field notes only takes three steps. This exercise in self-exploration requires no special skills but the willingness to slow down and take notes throughout the day. You will, however, need to approach this practice with the same receptive and inquisitive attitude of an anthropologist studying an unfamiliar culture. With a little curiosity and patience, your own fieldwork will reveal inspiration to create positive change. Let’s go over the steps to turning an anthropological lens on yourself: Step 1: Set up your field journal First, you need a simple, low-friction way to take notes. Where do you take quick notes when you’re in a rush? This is where your field notes should go. It could be in your phone or a notebook – wherever it feels most comfortable. Seriously, don’t overthink it: you can use Asana, Evernote, or any other notetaking app. Apple Notes or Google Keep is fine! Create a note on your phone or start a new page in your notebook. This will be your field journal (mine is synced between my phone and my laptop, so I have access to it on the go and I can open it in a tab when working). Step 2: Capture your field notes You need enough data to start noticing patterns, so aim to capture field notes for at least 24 hours. When feeling particularly lost, I do intense personal fieldwork for three to five days. Choose a day in the next week when you will start this exercise. Ideally, it should be a typical day with a mix of professional and personal activities. Don’t do it on your best friend’s wedding day or when management is due to announce the latest round of promotions. Keep your field journal with you (that’s why a note on your phone works great). Write the time and a couple of sentences whenever you take a break, switch tasks, or notice something interesting. That “something interesting” could be external such as an event, or internal such as a feeling – maybe uneasiness or excitement. If something made you stop for a second to wonder whether you should write it down, then it’s interesting enough. Embrace non-linearity: You have complete freedom to write in a stream-of-consciousness style to capture and connect your observations as they arise. Interactions, emotions, moments of curiosity, emerging interests… Did someone compliment you? Were you excited by a particular announcement? Were you faced with a surprising challenge? Did you find a piece of work particularly draining or stimulating? There are no limitations as to what you can include in your field notes, but here are some ideas to inspire you: Insights: Your moments of curiosity, random thoughts, new ideas, and questions that spark your interest. Encounters: Your social interactions or new connections made and any insights or feelings that arose from them. Mood: Your emotions during or after an experience, whether a meeting, a workout, a podcast, etc. Energy: Your shifts in energy levels throughout the day, as well as what gives you energy or drains your energy. Other: Anything else that doesn’t fit in the previous categories. It may seem like a lot, but remember that this is only for a few days at most. 
You are doing deep field work and want to ensure you don’t miss anything. Use your curiosity as a compass to decide what to write down. Step 3: Analyze your data After 24 hours (or a bit longer), you will have a treasure trove of field notes. It’s time to review them. If your field notes are on paper, you may want to grab some colored pens. If you captured them digitally, it can be easier to copy and paste them into a document to highlight and move text around. Spend time reading your notes and reflecting on the experiences you’ve documented. Look for recurring themes, interesting details, and general feelings that come up repeatedly. This is a very fluid process. You may want to create a category for “things that give me joy” and “things that drain me”, or “what I want more of” and “what I want less of”, or create big categories for important aspects of your life like learning, relationships, and health. Simply by grouping your breadcrumbs into larger piles, you will start to see some patterns emerge. Identify an observation that stands out to you. This could be a recurring theme, a persistent challenge, or a point of curiosity. For instance, you could notice that you have the “morning blues” every day when it’s time to go to work, that working on a specific type of task drains your energy, or that your moods tend to be higher when you work on group projects. You can then turn your observation into an actionable question. For example, if your observation is that you’re feeling energized when discussing certain topics, you might ask: “How can I incorporate more of this into my daily life?” This can be the seed of a little life experiment – something new you want to try to see if it improves your creativity, productivity, and wellbeing. And if you enjoyed the few days of taking field notes, you don’t have to stop! I personally take them all the time, albeit in a less intense way than I do when I’m feeling lost and need to recalibrate. Practicing self-anthropology opens up new possibilities. Taking field notes is like planting the seeds of insights that will eventually grow into greater self-knowledge. Equipped with a fresh perspective, you can rethink habits, relationships, and priorities. So create a new note, grab your journal, and explore the uncharted territory of your own life with an open mind; you never know what you could find. The post Self-Anthropology: Become your own anthropologist with personal field notes appeared first on Ness Labs.
September 2023 Updates
New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights some recent updates. One theme of this update is responding to feedback, which is always welcome. Thanks! Peer Review The article “What does peer review know?” surveyed some studies that compare peer review scores to long-run outcomes, both for grant proposals and journal submissions. It argues peer review scores do predict long-run outcomes, but only with a lot of noise. Misha Teplitskiy pointed me to some additional papers on this topic, which reinforced this point. The updated article now includes the following section. Gallo et al. (2014) obtain pretty similar results to those above for the peer review scores of the American Institutes of Biological Sciences, an organization that provides expert peer review services for clients. In the figure below, on the horizontal axis we see the peer review scores for 227 projects reviewed by American Institutes of Biological Sciences peer reviewers that were ultimately funded. These range from 1 (the best) to 5 (the worst) (note the figure stops at 4; no projects receiving a score worse than that were funded). On the vertical axis we have a normalized count of all the citations to publications that emerged from the grant. As with the NIH data, we again observe a noisy but pretty consistent relationship: the better the peer review score, the more citations eventually earned.1 (Figure from Gallo et al. 2014.) Clavería et al. (2000) obtain similar results in a review of 2,744 proposals funded by the Spanish Health Research Fund over 1988-1994. In this case, the peer review data available is pretty coarse: Clavería and coauthors just know if reviewers classified projects as “excellent/good”, “acceptable”, or “questionable/rejected.” However, a distinguishing feature of this study is that in 1996 the authors arranged for each of these proposals to be reviewed retrospectively by new reviewers. These reviewers looked at the original proposals, the annual and final reports, and published papers originating from the project, and assigned each of the now-completed proposals a score of 1-10 (higher is better) for its actual scientific performance. So, if we are concerned that quantitative indicators like citations or publication counts are inappropriate ways to evaluate science, this study gives us a more holistic/subjective assessment of research quality. The study again finds that peer review scores are noisily correlated with measures of quality. Spanish Health Research Fund proposals were reviewed by two commissions, one composed of experts with topical expertise, and one with experts from related fields. After controlling for research level, duration, budget, and year of project onset, projects that received an “excellent/good” review at the proposal stage from the related field commission were rated 0.3 points higher when the completed projects were reviewed (recall, on a ten point scale). An “excellent/good” review from the commission with more direct topical expertise was associated with a 0.7 point higher rating. (If you do not adjust for research level and the other factors, the association is a bit stronger.) Again - better peer review scores seem to be associated with better outcomes, but the association isn’t super strong (for context, the average rating for completed projects was 5.0/10).
The rest of the article turns to similar evidence from peer review reports to journal submissions. Read the whole article Screening for Statistical Significance? Turning to the effects of peer review and editor discretion on publication bias, the article “Publication bias without editors? The case of preprint servers” looks at the causes of publication bias. It could be that publication bias arises at the journal submission stage; maybe editors and peer reviewers screen out papers that find non-significant results? The article looks at preprint servers to see if that’s so, and argues such a process is not the main driver of publication bias. It is not merely the case that reviewers bounce all the papers that are submitted but obtain results that are not statistically significant. Instead, such papers do not seem to even be written up and submitted. A new paper by Brodeur et al. provides quite clear evidence of this dynamic by following submissions and publications at the Journal of Human Resources. I’ve incorporated discussion of that paper into a discussion of another (Brodeur, Cook, and Heyes 2020), already covered in the original version of the article. We pick up after describing how you can identify the statistical fingerprints of p-hacking by looking for a suspicious pileup of test-statistics that are just barely statistically significant (and hence, perceived to be publishable). Brodeur, Cook, and Heyes (2020) and Brodeur et al. (2023) look for [a] suspicious pileup right above the conventional thresholds for statistical significance. The set of four figures below plots the distribution of two kinds of test statistics found in various samples of economics papers. The top row, from Brodeur, Cook, and Heyes (2020), plots the distribution of something called a z-statistic, which divides a normalized version of the effect size by an estimate of its imprecision (the standard error). A big z-statistic is associated with a precisely estimated effect that is large - those are places where we can be most confident the true effect is not actually zero. A small z-statistic is a small and very imprecisely estimated effect size; those are places where we worry a lot that the true effect is actually zero and we’re just observing noise. The bottom row, from Brodeur et al. (2023), plots a closely related statistic, a p-value, which is (colloquially) the probability a given set of data would arise simply by chance, if there is no genuine effect out there. (Figures: top row from Brodeur, Cook, and Heyes 2020; bottom row from Brodeur et al. 2023.) There are two interesting things we can read off this figure. First, we look to see if there is a suspicious pileup right above (for z-statistics, so top row) or below (for p-values, so bottom row) important thresholds. Those thresholds are indicated by vertical lines and each distribution shows spikes of test statistics just barely in the statistically significant range. In other words, lots of papers just happen to be finding results that are barely statistically significant by conventional standards. The second interesting thing relates to the similarity of these patterns across the four figures. In the top-right, we have the distribution of test-statistics from papers published in top 25 economics journals in 2015 and 2018. In the top-left, Brodeur and coauthors go back and identify the earlier pre-print versions of these papers and do the same analysis.
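To build intuition for why a pileup just above the significance threshold is suspicious, here is a small, self-contained simulation. It is my own toy illustration, not the authors’ analysis: it mixes a smooth distribution of “honest” z-statistics with a batch of results nudged just past the conventional cutoff of 1.96, then compares the mass just below and just above the threshold.

```python
# Illustrative simulation of the "pileup" signature of p-hacking.
# My own toy example of the intuition, not the Brodeur et al. analysis itself.
import numpy as np

rng = np.random.default_rng(0)

# Honest studies: |z|-statistics drawn from a smooth distribution.
honest = np.abs(rng.normal(loc=1.0, scale=1.2, size=20_000))

# P-hacked studies: results that initially fell just short of significance are
# re-analyzed until they barely clear the conventional cutoff (|z| = 1.96).
hacked = rng.uniform(1.96, 2.10, size=2_000)

def window_ratio(z: np.ndarray) -> float:
    """Mass just above the 1.96 threshold divided by mass just below it."""
    below = np.mean((z > 1.76) & (z <= 1.96))
    above = np.mean((z > 1.96) & (z <= 2.16))
    return above / below

print(f"honest studies only:     {window_ratio(honest):.2f}")  # smooth: ratio a bit below 1
print(f"honest + p-hacked batch: {window_ratio(np.concatenate([honest, hacked])):.2f}")  # spike: ratio well above 1
```

In a smooth, unmanipulated distribution the two windows hold roughly similar mass; the excess mass just above the cutoff is the statistical fingerprint the figures described above display.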
For the purposes of the current discussion, the main point is that this anomalous distribution of test statistic results is already there in the working paper stage. If we interpret this as evidence of p-hacking, it’s telling us that researchers don’t do it when reviewers complain - they do it before they even submit to reviewers. A limitation of the top row is that we don’t actually see how peer review affects what gets published. We started with the set of published papers, and then looked back to see what those papers looked like when they were just working papers. But we don’t know if the stuff that wasn’t published was better or worse, in terms of evidence for p-hacking. That’s where the second row comes in. Although it’s a more limited sample, in the bottom left we now have a large sample of papers that were submitted to one particular journal. In the bottom right, we have the papers that ended up being published. Again, there’s not a large difference between the two. It’s not really the case that economists submit papers without much evidence of p-hacking but then peer reviewers only publish the stuff that exhibits signs of p-hacking. If it’s there, it’s there from the start. (Aside - Brodeur et al. 2023 actually finds some evidence that editors are a bit more likely to desk reject papers with results that are just barely statistically significant, while peer reviewers display the opposite tendency. The two effects seem to mostly wash out. For more on the relative merits of accountable individual decision-makers, such as editors, relative to peer review, see Can taste beat peer review?) Read the whole article Variation in Publication Bias The preceding argued that publication bias ultimately stems from researchers anticipating a better reception for papers that obtain statistically significant results. But as highlighted in “Why is publication bias worse in some disciplines than others?” an additional puzzle is why this problem seems to be worse in some fields. I’ve updated this article to incorporate discussion of Bartoš et al. (2022), which uses more sophisticated methods to assess the extent of publication bias across different fields. After discussing how different forms of publication bias can lead to unusual distributions of statistical test statistics, and how Bayesian model averaging can leverage those distortions to assess the likelihood of different forms of bias, the post continues: Bartoš et al. (2022) identify about 1,000 meta analyses across environmental sciences, psychology, and economics, covering more than one hundred thousand individual studies (the lion’s share in economics), and another 67,000 meta-analyses in medicine that cover nearly 600,000 individual studies in medicine. For each field, they see how likely it is that different sets of assumptions would generate data displaying these patterns, and then how likely it is that each of these models is “correct.” Lastly, once they have the probability all these different models are correct, they can “turn off” publ...
You Don't Need to Choose
“I’ve decided to take it easy at work this year and focus on myself.” I’ve recently been hearing variations of this sentence over and over again. Magazines are publishing stories about “the end of ambition” and how more people are taking extended sabbaticals. It seems like we need to make a constant choice between our personal and professional growth. If you want to achieve your entrepreneurial dreams or build a successful career, then your personal development will take a backseat. Or, if you want to get to know yourself better and expand your consciousness, you should disconnect from work. But is it truly a zero-sum game? The finite mental energy fallacy The American social psychologist Roy Baumeister and his colleagues proposed a model that compared self-control to a muscle that can become fatigued. The researchers hypothesized that the exertion of your willpower “muscle” would leave you exhausted and unable to muster the same level of effort in subsequent tasks—a phenomenon called ego depletion. According to this view, the brain is like a battery with a finite amount of mental energy per day. We believe that every challenge we navigate at work will drain this battery. And as we pour our energy into work, there’s an underlying worry that we’re using up this precious, limited energy that could have been allocated to personal pursuits and self-improvement. This perspective has significantly influenced the discourse around work-life balance. Just have a quick look online, and you’ll find a deluge of articles and courses aimed at helping people strike the right balance between their personal and professional lives. The premise of these resources is often the same: Since our mental energy is limited, we must find ways to ration it wisely. Strategies revolve around optimizing work productivity to ensure enough energy is left for personal pursuits. The narrative is clear: personal sacrifice is necessary to achieve professional success, and vice versa. But researchers are starting to challenge this idea. Recent studies suggest that after an initial burst of effort, people’s motivation shifts from control to reward. This indicates that we don’t necessarily experience a depletion of mental energy, but a change in focus. Let’s say you have a bucket of water and you’re using it to water the plants in a garden. The traditional view of ego depletion suggests that every time you use mental energy, it’s like drawing from the bucket to water the plants. Over time, the bucket will eventually be empty. The newest research offers a different perspective. Instead of a bucket, imagine that you have a hose. After using some water for watering the plants, you may use the hose for something more immediately gratifying, like filling a kiddie pool. The water source hasn’t run out; it’s just being channeled in a different direction based on changing priorities. It’s not about a loss of mental energy, but a decision—which can be conscious or subconscious—to redirect your efforts. So, what if our mental energy isn’t as limited as we’ve been told? Then, the strategies we’ve been employing to balance our professional and personal lives might need a complete overhaul. It opens the door to a paradigm shift where personal and professional growth aren’t at odds, but can actually complement and fuel each other.
Nurturing your mental energy Instead of seeing your mental energy as a limited resource you need to ration, you can break free from this scarcity mindset and create a virtuous circle where your day job and side projects both fuel your productivity and creativity, where your personal relationships provide inspiration to solve professional challenges, and where learning and growth permeate all areas of your life. Learning how to use a new tool at work could inspire you to start a new digital project when you get home. Researching your local archives to create historically accurate characters in a novel you are writing could provide insights into building a more engaging community at work. A conversation with a colleague can offer the exact perspective you needed to approach a thorny conversation with a friend. The key is not to treat what you learn in your professional life as separate from what you learn in your personal life. It’s to see them as porous, equally important parts of your life, full of opportunities to gain energy from. Here are three simple strategies you can apply to start breaking free from the ego depletion paradigm and nurture your mental energy: 1. To manage your energy, manage your focus. Challenge the belief that you’re “out” of energy after a day of hard work. Instead, you can expand your energy by directing your focus toward activities that feed your curiosity and creativity. 2. Reflect on how your energy flows between various areas of your life. We often struggle to manage our energy levels when we feel stretched between unrelated commitments. Take some time to look at all your personal and professional projects, and ask yourself: Where can I create synergies? For instance, is there a topic you’re personally curious about that could benefit your colleagues? Or, is there something you have to learn for work that could be useful for a personal project? 3. Surround yourself with energy expanders. Connect with people who also believe in nurturing and expanding their mental energy by seeking growth in both professional and personal parts of their lives. Not only will they inspire you not to place false limitations on yourself, but they can provide advice to create new synergies across all your areas of potential growth. Of course, we can be physically and psychologically exhausted for many reasons—lack of sleep, emotional upheavals, or even nutritional imbalances—but it doesn’t mean that our mental energy is inherently finite. By challenging this belief, creating growth loops across different areas of your life, and surrounding yourself with like-minded people, you can significantly expand your mental energy to achieve more without sacrificing your mental health. The post You Don’t Need to Choose appeared first on Ness Labs.
Interoception: The hidden sixth sense
“See, hear, smell, taste, touch… With our five senses, we can learn so much!” You’ve probably heard some variation of this nursery rhyme. Most languages have their own version, walking kids through each of their senses. But those songs paint an incomplete picture of our sensory system, for they only include our outward-facing senses, which scientists call exteroception (literally, “external perception”). We also have an internal sensory system that allows us to perceive and interpret signals originating from within the body — such as your heart, stomach, or lungs. For instance, you may feel hungry, sense your heartbeat increasing, or notice the air in your lungs. This process through which your nervous system maps your body’s internal landscape is called interoception. Interoception is how we understand our body’s inner sensations. It’s our brain’s ability to sense what’s happening inside the body and adjust accordingly. And recent research suggests that this sixth sense may play a key role in our well-being and even our sense of self. Making Sense of The Sixth Sense People usually think of the brain as an organ designed to respond to external stimuli. Let’s say you’re in the kitchen, heating a pan of oil to fry some food. When you drop a piece of food into the pan, the heated oil splatters. You feel a few hot droplets hitting your skin and reflexively pull your hand away to avoid further splashes. Now, imagine reading this in a cookbook: “When adding food to hot oil, especially those with high moisture content like fresh fish or certain vegetables, always do so cautiously to prevent dangerous splattering.” A weird thing may happen: simply reading this cautionary advice might make you experience the burning feeling of the hot oil droplets! You’ve probably experienced something similar when a friend tells you a story and you get goosebumps, when you wince as someone recounts an accident, or when you watch a movie where someone is on the edge of a tall building and your palms get sweaty or your stomach churns. That’s because our brains didn’t evolve to merely react to the world around us, but rather to try to predict what will happen to us next based on both external and internal signals. This predictive process is how your brain makes sense of the world and guides your actions. In addition to your five other senses, interoception is crucial to this predictive process. Interoception is how your brain integrates information about the body’s internal state. It helps the brain keep your body in homeostasis — continuously adjusting many variables such as your temperature and blood pressure to maintain the equilibrium that’s best for your survival. The Five Fundamentals of Interoception Interoception is an emerging topic of research that fascinates neuroscientists, including myself. Here are five things you need to know about how this sixth sense works: 1. Interoception can be conscious or subconscious. Interoception includes the processing of signals such as the rate of your heartbeat, your breathing, and whether you’re full or hungry, among many others. We perceive many of these sensations unconsciously, but some make their way into our conscious awareness. This conscious processing of our internal signals is known as interoceptive awareness. And this seems to be a useful skill, as the ability to regulate our emotions has been found to be associated with interoceptive awareness. 2.
Many factors shape our interoceptive abilities. Traumatic experiences can affect interoceptive awareness, either dulling or heightening your sensitivity to your internal experience. Our day-to-day environment, which includes factors like stress, dietary habits, and overall health, also has a significant impact on our capacity for interoceptive awareness. For instance, researchers explain that “we are currently exposed to an excess of exteroceptive stimuli for food consumption, marked by the high availability of a wide variety of ultra-processed and hyperpalatable foods, in addition to increasingly larger food portions that end up intensifying the reward responses and circumventing the homeostatic balance mechanisms.” As a result, it can markedly vary across the lifespan. 3. Interoception deeply influences both mental and physical health. Research suggests that a higher degree of interoceptive awareness has been linked to enhanced mental health, while lower interoceptive awareness is associated with several mental disorders. For instance, people suffering from depression may have a reduced ability to perceive bodily signals, which may contribute to emotional numbness. People with anxiety may be hypersensitive to cues from their own bodies, leading to exaggerated responses. This disconnection between what the body feels and how those signals are acted upon has also been found to be central to eating disorders like anorexia or bulimia. 4. Interoception can go awry. Being aware of our body’s internal signals is helpful, but we shouldn’t always use them to guide our decisions. For instance, in a study of decision-making and interoception, participants’ heart rates were monitored while they engaged in a gambling task. They were asked to identify profitable card decks. Interestingly, those with more accurate interoception aligned their choices with cardiac activity. But choosing a deck in response to an increased heart rate was a double-edged sword. If their heart rates surged when they picked a bad deck, these people fared worse than those with lower interoceptive awareness. 5. Interoception can be trained. Interventions focusing on enhancing interoceptive awareness are still in their early stages but show promise. One recent study looked at autistic adults, a demographic known to be at increased risk for anxiety. Participants were trained using heartbeat detection tasks, receiving feedback on their performance. The results were striking: those trained in interoceptive awareness exhibited a significant reduction in anxiety rates compared to the control group. Simply put, being able to tune into their inner states helped them de-catastrophize them more effectively. Now that you know the five fundamentals — and you are hopefully convinced of the usefulness of mastering your sixth sense — let’s move on to some more practical insights. How to Practice Conscious Interoception We have established that better interoceptive awareness is linked to better mental and physical health. But how the heck are you supposed to improve your interoceptive awareness? There are countless articles that have been published on the topic, but I’ll distill some of the most immediately applicable strategies you can start using right now. First, you want to know where you stand in terms of interoceptive awareness. One simple exercise is to count your heartbeat in your head for over a minute and then compare it with the actual reading. 
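To turn that heartbeat comparison into a number you can track over time, a simple accuracy ratio is commonly used. The formula below is a convention from heartbeat-counting studies in interoception research and is offered only as an illustration; the article itself does not prescribe a particular score.

```python
# Scoring the heartbeat-counting exercise: compare the number of beats you
# counted silently with the number actually recorded over the same interval.
# This ratio is a common convention in interoception research, shown here as
# an illustration; the article does not prescribe a formula.
def interoceptive_accuracy(counted: int, recorded: int) -> float:
    """Returns 1.0 for a perfect count; lower values mean a larger error."""
    return 1 - abs(recorded - counted) / recorded

print(interoceptive_accuracy(counted=62, recorded=74))  # ~0.84
```

A score close to 1 means your silent count closely matched the recorded number of beats; the next step is getting that recorded number.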
You may want to use a Heart Rate Variability (HRV) tracker — most smartwatches have one — or you could do it the old-school way by asking someone to count your heartbeats by placing their index and middle fingers on your wrist, at the base of your thumb. The second method is not as accurate but can be a good way to get started. You can also fill out the Body Perception Questionnaire, which has been translated into many languages. You can download a version in English here. This is not necessary, but knowing your baseline score will allow you to track your progress over time. Next, let’s have a look at three simple exercises that can help you improve your interoceptive awareness: Body scanning. This involves mentally scanning your body from head to toe. Just sit down in a quiet space, and spend a few minutes noting sensations, tensions, or discomforts in each part of your body. Is your throat itchy? Does your chest feel tight? Over time, this will help you become better at recognizing your bodily signals. Interoceptive journaling. Taking a few minutes daily to jot down internal sensations and emotions can help you create a habit of tuning in to the body’s signals. You can even incorporate this into your existing journaling practice. Here is a list of questions you can use for interoceptive journaling. Interoceptive exposure. This one consists of intentionally placing yourself in situations that elicit stronger physiological responses and practicing noticing and labeling the corresponding internal sensations. You can start with simple ones, like brief cold exposure or safe cardiovascular exercise, and ramp it up to more challenging situations, like public speaking. In addition to these three simple exercises, any mindfulness practice would probably help increase your interoceptive awareness (though of course not all have been thoroughly researched yet). That includes meditation, breathwork, and yoga. That’s it, folks! As with most tools that can help you live better, it takes time and dedication to unlock some of the most impactful benefits of better interoceptive awareness. But even if you only do one of those exercises for a little while, you’ll find that it helps you become more aware of and able to regulate your emotions. The post Interoception: The hidden ‘sixth sense’ appeared first on Ness Labs.
Interoceptive Journaling
Interoceptive journaling is a mindfulness practice that involves recording and reflecting upon one’s own bodily sensations. It’s an intentional way of tuning into the often subtle signals our bodies send us, ranging from hunger pangs and heartbeats to flutters of anxiety in the stomach or warm waves of contentment. By deliberately writing about these internal signals, you can improve your interoceptive awareness and strengthen the bond between mind and body. By helping you recognize and understand your bodily signals, this method can help you enhance your emotional regulation, self-awareness, and overall well-being. Interoceptive journaling can be practiced through free-flow writing. If you’d prefer to follow some guided prompts, I’ve read many articles about interoception and developed the following eight questions to get you started. How does my body feel right now, in this moment? Describe your bodily sensations in as much detail as possible. Are there any tingling, warmth, or cool sensations anywhere in your body? Where in my body do I feel the most tension or discomfort? Can I associate this feeling with a particular event or emotion from today? Do I feel any sensations of hunger or fullness? If so, where do I feel it, and how intense is it? How is my breath? Is it shallow or deep, fast or slow? Can I feel it more prominently in my chest, throat, or abdomen? Can I detect my heartbeat without touching my chest or wrist? If so, what does it feel like? Does my body feel heavy or light? Can I connect this feeling to something I’ve ingested or a particular activity? How did physical activity (or lack thereof) impact my bodily sensations today? Are there parts of my body that feel sore? Have I noticed any recurring bodily sensations throughout the day or week? If so, can I identify any patterns or triggers? Like many other mindfulness practices, interoceptive journaling is most effective when done regularly. When you do this practice every day, you get more out of it and can better understand and connect with your inner states.  Interoceptive journaling can be easily included as part of any mindfulness practice, whether you’re just starting out or have been practicing for some time. Just a few minutes a day focusing on your sixth sense can strengthen your awareness of and improve your response to your body’s signals, leading to greater overall well-being. This list of prompts is licensed under a Creative Commons Attribution ShareAlike 4.0 International License, which means you can feel free to copy and distribute this list of questions as long as you attribute it and link back to this page. The post Interoceptive Journaling appeared first on Ness Labs.
Big firms have different incentives
This post is a collaboration between me and Arnaud Dyèvre (@ArnaudDyevre), a PhD student at the London School of Economics working on growth and the economic returns to publicly funded R&D. Learn more about my collaboration policy here. This article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. In a previous post, we documented a puzzle: larger firms conduct R&D at the same rate as smaller firms, despite getting fewer (and more incremental) innovations per R&D dollar. Why wouldn’t firms decelerate their research spending as the return on R&D apparently declines? In this follow-up post, we look at one explanation: firms of different sizes face different incentives when it comes to innovation. In a later post, we’ll review another explanation, that large firms have different inventive and commercialization capabilities.1 Cost spreading and invisible innovations To start, let’s revisit our claim that the return to R&D seems to fall as firms get larger. Is this accurate? We can think of the returns to R&D as the “results” a firm gets out of R&D, divided by that firm’s R&D “effort.” Typically we measure those “results” by new patents, products, or streams of profit. It turns out some of these measures might understate innovation by large firms, because larger firms are more likely to generate process rather than product innovations. Process innovations are concerned with better ways of delivering a service or manufacturing a product, not creating a new business line. Process innovations will not show up directly in product-based measures of innovation.2 For example, some earlier posts have looked at the introduction of new consumer products or the attributes of car models as measures of the output of innovation. And while process innovations can be patented, they are probably less likely to be patented than new products. For example, a 1994 survey (Cohen, Nelson and Walsh 2000) asked 1500 R&D labs in the manufacturing sector to rank five different ways of capturing the value of new inventions. Among the 33 different sectors to which the firms belonged, just 1/33 thought patents the most effective way to protect process inventions, compared to 7/33 who thought them the most effective way to protect a new product invention. In contrast, 16/33 sectors thought patents the worst way to protect new process inventions, compared to 10/33 who thought them the worst way to protect product inventions. Another way to summarize the survey is to note that only 23% of respondents reported that patents were an effective means of appropriating process innovations, while 35% considered them effective for appropriating product innovations. If process innovations are less likely to find their way into the catalogues of new products or the patent portfolio of firms, then they are less likely to be picked up by conventional measures of innovation. If larger firms are disproportionately likely to engage in process innovation, that will make it seem as if larger firms get fewer results from their R&D. And we do have some evidence large firms are more process-innovation oriented. Liu, Sojli, and Tham (2022) use natural language processing to try and classify patents as protecting process or product innovations.
The main approach breaks the title of patents and their claims into multiple components, and then looks to see if these strings of words contain words like “process”, “method” or “use” (which indicate a process), or words like “product”, “apparatus” or “tool” (which indicate a product). When they ask patent examiners and an IP management firm to classify a random sample of hundreds of patents classified by their algorithm, they come up with the same answer around 90% of the time. They show that, over 1976-2020, US public firms with more active process patents than active product patents tend to be larger. We also have some non-patent evidence, though it’s based on pretty old surveys at this point. Akcigit & Kerr (2018) match Census data on U.S. firms to a comprehensive survey of R&D activities by the NSF (covering 1979-1989) and find a positive correlation between firm size (defined here as log employment) and the share of R&D dedicated to process innovation. So both the patent and survey-based evidence suggests larger firms do more process innovation than product innovation. And we also have pretty good theoretical reasons to expect this should be the case. As Matt has written elsewhere, when a particular kind of technology gets more profitable to invent, firms do more R&D on that kind of technology. To the extent the profitability of different kinds of R&D differs as firms scale, it’s not surprising that their R&D choices should differ. For example, larger firms typically have a wider portfolio of products and sell more products in each line, so it makes sense for them to find more efficient ways to produce and deliver these products because they can spread the costs of their process innovation over more products and product lines. If you expect to sell ten thousand cars, it’s worth $10,000 to invent a process that reduces the cost of manufacturing by $1 per car. If you expect to sell a million, you’ll pay $1 million to invent the same technology. This explanation has been referred to as the cost spreading advantage of larger firms in conducting R&D: the bigger the firm, the greater the level of output over which it can apply its process R&D. Cost spreading pushes bigger firms toward process innovation. So one reason we may observe fewer innovations per dollar among large firms is that their size incentivizes them to focus on harder-to-observe process improvements. More speculatively, it might be that a similar dynamic also affects our measurement of the inputs to R&D in a way that further biases our measures of the R&D productivity of firms. It has long3 been suggested smaller firms might underreport R&D expenditures, which would tend to inflate their measured R&D productivity (because they would seem to get more from less). One reason for that might be that, if firms can receive tax breaks for R&D expenditure, larger firms may invest more in sophisticated ways of claiming these breaks, either via more careful documentation or by pushing the boundary of what can be claimed as an expense. It’s kind of a cousin to cost-spreading; if there is a fixed cost of aggressively reporting R&D spending (for example, because you have to hire more tax lawyers), that cost might be more worth enduring for larger firms with more plausible R&D expenses. Boeing & Peters (2021), for example, provide evidence that R&D subsidies are often used for non-research purposes in China. And this isn’t the only possible reason small firms might under-report R&D.
Roper (1999) suggests it could also be because it’s harder to measure R&D spending in smaller firms that don’t have full time research staff or dedicated research labs (and so it’s harder to tell what’s R&D and what’s not). That said, while it seems plausible, I’m not aware of evidence that documents biased R&D reporting. Indeed, in Boeing and Peters (2021), they actually do not find any statistically significant correlation between the size of firms and their tendency to mis-report R&D. The Replacement Effect The cost spreading incentive pushes firms toward process innovation, which might be harder to observe but should still be considered a form of genuine innovation. Another incentive pushes them away from product innovation though: the replacement effect. If a better version of a product is invented, most people will buy the improved version rather than the older one. If you are an incumbent firm that was previously selling that older version, that’s a reason to be less excited about a new product: if you invent a new product, you are partially competing against yourself. If you’re an entrant though, you won’t care. Since incumbents will tend to be larger firms, this dynamic might also explain differences in how firms innovate as they grow larger. This is an old argument in economics, dating back to Kenneth Arrow (1962), which was later named the ‘replacement effect.’4 Incumbent firms’ reluctance to do R&D in domains that could threaten their core business is closely related to what is sometimes called the innovator’s dilemma in the business literature and is a core tenet of some endogenous growth models.5 The recent development of chatbots powered by large language models offers a possible illustration of this dynamic. Google seems to have underinvested in the type of AI technology powering OpenAI’s ChatGPT because it would be a direct siphon of the ads revenues generated by its own search engine. As a result, Google is finding itself having to make up for lost ground in the AI race it once dominated. Documenting the extent of the replacement effect at large is a bit tricky because you are looking for R&D that doesn’t happen. One way we could do this is if we came up with a bunch of good ideas for R&D projects and randomly gave the ideas to large and small firms. We could then see which firms ran with the ideas and which ones left them alone. The trouble is, it’s hard enough for firms to come up with good ideas for themselves, let alone innovation researchers to come up with good ideas for them. But there are two studies that are related to this thought experiment. Cunningham, Ederer, and Ma (2021), while not about innovation and the size of firms specifically, provides some excellent documentation of replacement effect style dynamics. Their context is the pharmaceutical sector, where it is quite common for large incumbent firms to source new R&D projects from small startups. The sector is also one where there is high quality data available on the different research projects (here, new drug compounds) that firms are working on. Cunningham, Ederer, and Ma...
Big firms have different incentives
Ness Labs Best Books of August 2023
Ness Labs Best Books of August 2023
At Ness Labs, we believe in the power of ideas and the profound impact of continuously feeding our minds with thoughtful content. Each month, we meticulously curate a selection of books that truly stand out in an ocean of books that can be overwhelming. This series aims to highlight the work that can serve as a compass to navigate life and work, so we can collectively learn, evolve, and thrive. This is your guide to discovering the most insightful, inspiring, and transformative books on mindful productivity, creative growth, holistic ambition, and developing a healthier relationship with work. The World Behind the World Dr. Erik Hoel offers a captivating exploration of the frontier of consciousness science. The World Behind the World deftly unpacks the historical dichotomy between extrinsic perspectives based on the principles of physics and mechanisms and intrinsic perspectives which revolve around feelings, ideas, and thoughts. In his book, Dr. Hoel chronicles the quest to reconcile these perspectives under the banner of consciousness science, where metaphysical concepts clash and often yield paradoxical outcomes. The book delves into fascinating topics such as physics and morality, the unexpected similarities between black holes and our brains, and AI consciousness. It’s an invitation to ponder the profound questions that emerge from the study of consciousness, including the implications for brain death, free will, and mathematics. This book is a must-read for anyone intrigued by what we can learn at the intersection of consciousness, neuroscience, and technology and the transformative impact of this field on the fabric of our society. Learn more The PARA Method Tiago Forte’s practical guide delves into the intricacies of managing the influx of information that defines modern life. Building upon the foundations of his best-selling book Building a Second Brain, Forte offers readers a pragmatic, four-step system to efficiently categorize and manage information, fostering productivity and helping individuals achieve their goals. With a straightforward approach to sorting the barrage of information in our lives, Forte introduces four categories: Projects, Areas, Resources, and Archives. Projects are short-term tasks with specific goals, such as completing a website; Areas refer to broader, ongoing areas of responsibility like health or finances; Resources include various content to support projects and areas; and Archives store inactive information for future reference. This system can be effortlessly implemented in mere seconds, but its impact on one’s work and life is immeasurable. By unlocking the power of digital organization, The PARA Method will help you transform information overload into creative possibility. Whether you are struggling to stay organized or looking to enhance your focus, this book will be a valuable companion in your journey toward a more organized digital life. Learn more Excellent Advice for Living This little book by Kevin Kelly is a treasure trove of wisdom garnered over a lifetime of experiences. Born from the desire to pass down knowledge to his children, the co-founder of Wired has crafted a compelling collection of insights that span a wide array of life’s facets. Excellent Advice for Living encompasses themes ranging from setting audacious goals to practicing generosity and fostering compassion, with advice on topics as varied as careers, relationships, parenting, finances, and even practical travel and troubleshooting tips. 
While the book is primarily intended for younger audiences, its universal truths will resonate with readers of all ages. The words shine with the authenticity of a well-lived life. As Seth Godin remarks, part of the book’s uniqueness lies in its nonlinear approach, which is “part of its magic.” With a timeless quality that sets it apart from the ephemeral, Kelly has distilled the essence of a life lived with curiosity, creativity, and generosity into a book that will serve as a trusted companion for readers seeking to traverse life with grace. This book is to be savored, revisited and shared with others — a testament to the profound impact of a well-lived life. Learn more Stolen Focus Johann Hari provides an eye-opening exploration of our declining ability to pay attention in the modern world. The bestselling author of Chasing the Scream and Lost Connections delves into the reasons behind this alarming trend and offers practical solutions for reclaiming our attention. In Stolen Focus, Hari shares his struggle with maintaining focus in a world filled with devices and distractions. Driven to uncover the true causes of our attention deficit, he embarked on a journey to interview leading experts on human attention. Through this work, he identified twelve deep-rooted causes of this crisis, ranging from the decline of mind-wandering to the rise of pollution, and provides actionable steps for individuals and society to reclaim their attention. Stolen Focus is a timely and vital book that challenges our understanding of attention and provides a roadmap for navigating an increasingly distracted world. This book not only sheds light on the causes of our attention crisis but also offers evidence-based solutions to regain control of our focus. In short, a book that’s worth our attention! Learn more Thinking in Pictures Michael Blastland offers a refreshingly candid take on smart thinking in an age where words often fall short. Rather than focusing solely on written explanations, Blastland uses an extensive range of illustrations to help bring ideas to life, providing a more vivid way to explore complex concepts. Deep and broad, insightful and wise, the book is equal parts guide and gallery. Blastland takes readers on a journey beyond the limitations of typical smart-thinking books, encouraging them to explore multiple perspectives, embrace uncertainty, and accept that there might not always be a clear answer. Thinking in Pictures is a breath of fresh air, a welcome departure from the conventional approach to problem-solving and decision-making, advocating for a more comprehensive and humble perspective on the world. This is a must-read for anyone seeking a more nuanced and thoughtful approach to understanding the world — a testament to the power of visual thinking and a celebration of the richness and complexity of the human experience. Learn more Do you have any books to recommend for the Ness Labs Best Books series? Please let us know via the contact form. We welcome self-recommendations. The post Ness Labs Best Books of August 2023 appeared first on Ness Labs.
Ness Labs Best Books of August 2023
Inverting the Internet with Davey Morse Founder of Plexus
Inverting the Internet with Davey Morse Founder of Plexus
FEATURED TOOL Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us make the most of our minds. Davey Morse is the founder of Plexus, a company building a radically inclusive online community, connecting people not through mutual friends but through mutual thoughts. Davey grew up in NYC, studied Symbolic Systems at Williams College, dropped out, coded on Apple’s Screen Time team, built a self-organizing notebook for students, and raised venture funding to start Plexus. In this interview, we talked about traditional social media, why we need spaces for people to authentically express their thoughts online, the importance of exploring raw and unresolved thoughts that weigh on your mind, the need for a shift from attention-based to intention-based interactions, and much more. Enjoy the read! Hi Davey, thanks for agreeing to this interview! First, why do you think it’s so important to have a space for unresolved thoughts? Hey, thanks for having me! And for making space for my unresolved thoughts, too. The question of whether there’s space for unresolved thoughts online is really a question of whether there’s space for most people. Most of us neither think in Tweet-sized bites nor find it natural to share thoughts with everyone we know. Yet, on the internet, posting usually means exactly that: sharing Tweet-sized bites with everyone you know. So most people don’t contribute: just 1% of people create the vast majority of content. At Plexus, we believe there’s much more to each of us than we know. But when most of us just consume online, our digital identities flatten. So as our lives trend digital, we trend toward depression. We also believe people can be extraordinary at solving problems together. But when we don’t have space to explore unresolved things online, problems persist. As species-level problems loom (e.g., a burning climate, rampant epidemics, uncontrollable AGI) we approach existential risk. We need an online space where we can express what we’re actually thinking and think together. Is that what inspired you to create Plexus? Plexus was born out of a connection—between tech I was working on in college and the lives of my friends. The tech was a self-organizing notebook. It was a tool that helped students find connections across their notes, automatically. (Like Roam Research, but easier.) I had friends who were dealing with very particular things: rare mental struggles, illnesses, relationship tensions, and interests. Most of them felt alone with those things. After way too much time, I saw them each get connected with the person across their extended community who knew exactly what they were going through. It’s a movie moment, a feeling I think everyone’s had: when you realize that whatever you’re dealing with… You’re not alone with it. So here I was, applying connection-making tech to supercharge writers’ notebooks—even getting some traction—but realizing: notes weren’t the right application. A connection between two notes could save a writer time; a connection between people could alter their lives. I started obsessing: what would it look like to help people connect, not just through mutual friends, but through mutual thoughts? And for those thoughts to be the messy, unresolved things that are actually on people’s minds, rather than polished thoughts that sound catchy but don’t represent what we’re dealing with? That question led me to Plexus. 
I dropped out of school, started a public benefit corporation, raised venture funding, and recruited Micah Corning-Myers as our Founding Engineer—another psychologist’s son and hacker with an intense love for people. We set out to enable broad participation online. That’s an ambitious mission. How does Plexus work? Plexus is a space for thinking together. Plexus is for all the thoughts you’d never share on Twitter—the raw and unresolved things that are actually on each of our minds. Testers call the experience “Walking.” We’ve seen our early community “Walk” to explore tense relationships, figure out what they should say in workplace conversations, reconcile disparate interests, and develop seeds of new ideas. You start a Walk by writing about something that feels interesting or off. Then, immediately, Plexus surfaces the community’s most related thoughts. If any thought hits a chord in you, you can “step through it” and retrace the steps of thinking who came before you. It’s a process of alternating between writing fresh thoughts and pulling others’ in. Walks end after you ride other folks’ wisdom into new terrain and find resolution around the unresolved thing you started with. The experience is somewhat hard to describe. Some of our testers have come up with a few interesting analogies, like “constructing my own feed,” “having a conversation with ChatGPT, but where ChatGPT is a community,” “thinking with other people’s thoughts.” You made the choice to have no followers, no broadcasting, no public profiles, and no likes. Can you tell us more? Followers, broadcasting, public profiles, and likes all represent off-putting real world interactions. Consider “Followers.” I can only think of two places where people are called “followers”: cults and social networks. Following is not a healthy relationship in the real world. It’s not healthy online either. Plexus has no “following” relationship. You’re never spammed with thoughts just because you happen to know the authors. You see other people’s thoughts only when they’re relevant to your current thinking. (We’re experimenting with a new kind of relationship in Plexus, called Walking Partners. These are people you meet through Plexus—people who are thinking along similar lines.) Now, when it comes to broadcasting… My followers on Twitter include basketball teammates from growing up, comedy friends from college, and AI friends from work. I never have a thought I want to share with all of them. But that’s what Tweeting is. It’s standing on a stage in front of everyone you know and shouting through a megaphone. So, most of us don’t Tweet. There’s no good online place to find connection around the things we think. But often, I really want to connect with people who get the thing that’s on my mind, whatever that thing is. And so, in Plexus, your thoughts get shared only with those people. They get routed not through the community’s social graph, but through the community’s thinking graph. On most social networks, your grandparents, your colleagues, and your recent hook-up can all see literally everything you’ve ever posted. Most of us feel like shells of ourselves online because there’s no way to feel comfortable being anything more. In Plexus, we’re experimenting with selective profiles, where you unlock different thoughts from a given person as you think about overlapping things. 
It’s meant to resemble the way a real relationship deepens through exploration and time, where you learn more about each other as you explore.

Finally, liking: if you’re with someone, you mention something that’s on your mind, and they just say “I like that” without following up… you might ask yourself whether they heard you at all. Plexus has a new lightweight interaction, called a “Walkthrough”. When someone values your thought enough to use it in their thought process, when they “walk through” your thought, you get notified. It feels better to receive a Walkthrough than a like.

Those sound like better ways to foster collective thinking. We had never seen people think in a social space online before seeing our early community Walk in Plexus. The closest phenomenon to Walking is maybe what happens in therapy, or brainstorming in front of a whiteboard, or sitting around a table imagining new possibilities with close friends. But, in contrast to Walking, those situations require that only one person talk at a time, that you know the people you want to think with, and that you do so synchronously.

We organize a monthly walk in NYC, where a couple dozen friends gather in the same physical room with their laptops and Walk together virtually. It’s pretty quiet—the only thing you hear is the collective clacking of keyboards and Julia Jacklin playing quietly in the background. But, make no mistake: an order of magnitude more interactions are occurring than if everyone were talking with each other; thoughts are shooting between everyone’s computers constantly as they wrestle with things others have wrestled with before. For each individual, Plexus turns their laptop into a kind of magic room: a room where the right people, thinking about the right things, come in exactly when those things are on your mind. Some rituals have evolved around Walking. We have synchronous Walks every Sunday. We send out Daily Walking Prompts too (“questions we’ve never been asked before”) sourced from the community. But most Walks just happen asynchronously, when people realize there’s something that feels off or interesting and they want to explore it with the power of the community’s thinking at their fingertips.

As the Internet has become increasingly public and geared towards attention-based metrics, many people have become hesitant to share their authentic thoughts online. How do you address this with Plexus? The internet has a couple of issues. On the surface, its interfaces make it uncomfortable to share what you’re actually thinking. But, a level deeper, there’s the funding structure: an advertising-based internet economy that prizes people’s attention above all. To make it possible for people to have real space for expressing themselves online, you need a new social interface, but more deeply, you need fundamentally new economics: you need a funding model that prioritizes people’s intention over their attention. I’ve shared about Plexus’ new interface. We’ve invented a more intimate sharing mechanism, where the things you think are only distributed to people who have similar ...
Inverting the Internet with Davey Morse Founder of Plexus
Mindful context switching: multitasking for humans
Mindful context switching: multitasking for humans
So many things to do, so little time. When you juggle work and personal projects while hoping to have any sort of social life, managing your time can feel like an impossible endeavor. There are many tips out there—the most common one being to focus on the most important task first—but few address the systemic complexities of managing your time and energy when you have a very long list of important and competing tasks as well as other people to take into account.

Option 1: You are focusing on a single task and ignoring all distractions and interruptions. You are getting a lot done, but your responsiveness suffers. People who are counting on you are stuck because they need your input.

Option 2: You make yourself as available as possible to other people and are extremely responsive when they need your input. They make faster progress with their work, but your own output suffers.

Both options are less than ideal. As a knowledge worker, you need to ensure you complete these important tasks while being responsive enough to support your collaborators in their work. The challenge is in finding that delicate balance between optimizing your own output and sharing your input to enable your collaborators to progress. So what do we do? We try to multitask.

A mythical activity

In computing, context switching refers to the process of storing the current state of one task, so that this task can be paused and another task resumed. It’s basically what allows computers to multitask (fun fact: the word “multitask” was invented by IBM in 1965 to describe a computer capability. It was only later that we started using it for humans). In the same way that context switching comes with a cost in performance for computers, multitasking has its cost for humans too. Research shows that constantly switching context between different tasks has a terrible effect on attention. We’re basically less focused and less performant when trying to do several things at the same time. Psychiatrist Edward M. Hallowell even described multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one.” But very few people can afford to stay focused on one single task until it’s done. Emails need to be answered, customers need to be helped. So how can you avoid the terrible impact multitasking can have on your performance?

The mindful way to multitask

What I call mindful context switching is a strategic approach to task management that emphasizes the importance of staying focused on a single task while maintaining an acceptable level of responsiveness. It involves defining your necessary level of responsiveness based on external demands, breaking tasks into achievable chunks that fit within these response intervals, and scheduling dedicated time slots for them. It was inspired by the work of Brian Christian and Tom Griffiths, authors of Algorithms to Live By, who wrote: “You should try to stay on a single task as long as possible without decreasing your responsiveness below a minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that.” The aim of mindful context switching is to boost your productivity and improve the quality of your output, all while maintaining healthy relationships at work and outside of work. Ready to give it a try?
It essentially boils down to five simple steps:

1. Define your responsiveness: If you have high-value customers who expect to hear back from you in less than an hour, that’s how responsive you need to be. If you sell a SaaS product that’s not business-critical, maybe responding to emails once a day is fine. There is no hard-and-fast rule here, but you need to figure out what level of responsiveness will work for your business.

2. Design manageable chunks of work: Now that you know how responsive you need to be, break down your tasks into manageable chunks that can be done between these response times. Each chunk needs to be realistic, with a beginning and an end. For example, if you need to write an article, one chunk could be to create the outline.

3. Schedule dedicated time: That’s it for this one. Just put these chunks into your calendar.

4. Communicate clearly: Let everyone you work with know that you won’t be able to respond during these deep work time slots. There are several ways to go about this. If you have a shared calendar, that’s fairly easy. When I was working at Google, I also saw people put it in their email signature or inside an email autoresponder if their response time was longer. Although it may feel weird at first, it’s usually best to overcommunicate.

5. Revisit regularly: Don’t simply duplicate your time slots from one week to another. Reflect on what worked and what didn’t. Were the chunks actually manageable? Was your responsiveness appropriate? You can even proactively ask your teammates for feedback. Play with different configurations until you find the one that works for you.

That’s it! The first time around will take a bit of work, but mindful context switching will help you do better work, faster, and without alienating the people around you. (For a concrete, if simplified, illustration of these steps, see the sketch below.) The post Mindful context switching: multitasking for humans appeared first on Ness Labs.
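As a toy illustration of the five steps above, here is a short Python sketch that alternates focus blocks with brief response windows, so you are never unreachable for longer than your chosen responsiveness interval. The function and variable names, the fixed ten-minute response window, and the sample chunks are illustrative assumptions, not anything prescribed by the article.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    name: str
    minutes: int  # each chunk should fit inside one focus block

def build_schedule(chunks, responsiveness_minutes=60, check_minutes=10, start="09:00"):
    """Alternate focus blocks with short response windows, so no one waits
    longer than `responsiveness_minutes` for a reply."""
    hh, mm = map(int, start.split(":"))
    t = hh * 60 + mm
    schedule = []
    for chunk in chunks:
        focus = min(chunk.minutes, responsiveness_minutes)
        schedule.append((f"{t // 60:02d}:{t % 60:02d}", f"Focus: {chunk.name} ({focus} min)"))
        t += focus
        schedule.append((f"{t // 60:02d}:{t % 60:02d}", f"Respond to messages ({check_minutes} min)"))
        t += check_minutes
    return schedule

# Example: one-hour responsiveness, an article broken into manageable chunks.
for time, item in build_schedule(
    [Chunk("Outline article", 50), Chunk("Draft introduction", 60), Chunk("Edit draft", 45)],
    responsiveness_minutes=60,
):
    print(time, item)
```

Running it prints a simple morning plan; in practice a shared calendar plays the same role, and the point is only to make the alternation between deep work and responsiveness explicit.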
Mindful context switching: multitasking for humans
Geography and What Gets Researched
Geography and What Gets Researched
This post was jointly written by me and Caroline Fry, assistant professor at the University of Hawai’i at Manoa! Learn more about my collaboration policy here. How do academic researchers decide what to work on? Part of it comes down to what you judge to be important and valuable; and that can come from exposure to problems in your local community. For example, one of us (Matt) did a PhD in Iowa, and ended up writing a paper on the innovation impact of ethanol-style policies (ethanol is a big business in Iowa). One of us (Caroline) was leaving Sierra Leone after two years there, just as the Ebola epidemic was starting. She became interested in understanding why science capacity is so low in some countries and not others, and what that means for the development of drugs and vaccines to combat local problems. (Indeed, we’ll talk about two of the papers that emerged from that research program in just a minute.)

Brief Pause for Some Announcements

The Institute for Replication is looking for researchers interested in replicating economics and political science articles. Research using non-public data (for example, Bell et al. 2019, discussed below) is a formidable barrier for reproducibility and replicability - so they are offering up to 5,000 USD and coauthorship on a meta-paper combining hundreds of replications. A list of eligible studies is available here, with payment info. Please contact instituteforreplication@gmail.com for more detail and indicate which study you would like to replicate. They are interested in 3 types of replications: (i) using new data, (ii) robustness checks and (iii) recoding from scratch.

Open Philanthropy’s Innovation Policy program is currently soliciting pre-proposals from individuals for financial support to write living literature reviews about policy-relevant topic areas. Interested individuals should have a PhD related to their proposed area and should contact matt.clancy@openphilanthropy.org for more information.

This article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Back to Geography and What Gets Researched!

Testing the relationship between location and research choice

Both of us made research decisions that were, in part, influenced by exposure to local problems. Are we atypical, or is this path from exposure to research choice a common one? The role of exposure to local problems in determining research choice is difficult to test. People might locate themselves in places precisely because they are interested in the problems in those places. The ideal way to test this would be to randomly assign researchers to different locations and see if they work on local problems that they are exposed to. However, randomly assigning researchers usually isn’t particularly feasible. Alternatively, we could randomly “assign” problems to different locations and see if local researchers begin working on those problems after exposure. One candidate for a problem that all but randomly arises in some locations but not others is a novel disease outbreak. So one way to assess the strength of the link from local problems to local research is to see how scientists respond to local disease outbreaks.
Fry (2022) takes this strategy and evaluates the impact of the 2014 West African Ebola epidemic on the publication output of endemic country scientists: did scientists working in areas hit harder by Ebola begin to disproportionately work on it? To find out, Fry starts with a dataset of 57 endemic country biomedical scientists (those affiliated with institutions in Sierra Leone, Guinea and Liberia, the three hardest hit countries, at the time of the epidemic). She then matches these endemic country scientists to 532 control scientists who are from non-endemic countries in West or Central Africa, but who are at similar points in their career, work in similar areas, publish at similar rates, have similar rates of international collaboration, and reside in countries with similar GDP per capita. She pulls out the publication record for each sample scientist for the four years before and six years after the epidemic from the Elsevier Scopus publication database, and creates counts of annual publications. Finally, she separates these counts into Ebola and non-Ebola publications through a keyword search of the title, abstract and keywords of the publications.

Fry compares the changes in publication output of endemic country scientists to that of the control scientists, adjusting for persistent differences between individual scientists, typical career age trends, and variation in publication trends over time for all scientists (a stylized version of this comparison is sketched below). As illustrated in the figure below, prior to 2014 none of the scientists in her sample really focused on Ebola. Beginning in 2014, endemic country scientists experience a large and fairly sustained increase in their output of Ebola-related publications, as compared to non-endemic country scientists. That implies exposure to a new problem in a researcher’s location can shift their attention towards that problem. (It could be about something besides exposure too – we’ll talk about that later.)

[Figure from Fry (2022)]

Location and research focus are correlated

We noted above that our ideal experiment would randomly allocate scientists to different locations. While we may not be able to do that, scientists do change locations of their own accord, and insofar as local problems drive research choice, we might expect to see similar patterns when they do. Fry (2023) tests exactly this. The working paper builds a dataset of 32,113 biomedical scientists affiliated with an African institution between 2000 and 2020, their publication output in different disease areas (by extracting keywords from the titles and abstracts of their publications), and uses the affiliation listed in these publications to infer their country affiliation in each year. She then compares the research choices of these African scientists (proxied by the number of publications on each disease) with the disease burden in their country of residence. The idea is to compare the disease focus of mobile researchers before and after their move to that of matched control researchers who don’t migrate. She finds, indeed, that researchers are more likely to publish papers on diseases that are more prevalent in their host country after they move there. This trend is particularly salient for researchers moving into Africa from outside the continent. And note, this is relative to matched scientists who did not move, but who prior to the move were publishing at similar rates, on the same diseases, as the scientists who move. We can see similar dynamics beyond the specific context of neglected tropical diseases.
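For readers who like to see this kind of matched before-and-after comparison written out, here is a stylized difference-in-differences specification, using the Ebola case for concreteness. The notation and functional form are ours and purely illustrative; this is not the exact model estimated in the papers above.

```latex
% Stylized difference-in-differences specification (illustrative only).
\[
\text{EbolaPubs}_{it}
  = \beta \,\big(\text{Endemic}_i \times \text{Post2014}_t\big)
  + \alpha_i + f(\text{CareerAge}_{it}) + \delta_t + \varepsilon_{it}
\]
```

Here α_i absorbs persistent differences between individual scientists, f(CareerAge_it) captures typical career-age trends, and δ_t captures publication trends common to all scientists in a given year. The coefficient of interest, β, on the interaction between being an endemic-country scientist and the post-2014 period, measures how much more endemic-country scientists shifted toward Ebola-related publications after the epidemic, relative to their matched controls.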
Moscona and Sastry (2022) provide some additional data from global agriculture, where there is substantial international variation in crop pests and pathogens. Moscona and Sastry search for the names of specific pests and pathogens in the titles, abstracts, and descriptions of agricultural patents across the world (using a dataset on international crop pests and pathogens from the Centre for Agriculture and Bioscience International). For example, there might be a patent for a pesticide to control a specific kind of pest, or a patent for a gene that confers resistance to some kind of pathogen. Since inventors list their country of residence on patents, Moscona and Sastry can see if inventors disproportionately invent technologies that mitigate pests and pathogens present in their country of residence.

That seems to be the case. In the figure below, they show that for any given crop pest or pathogen (which they call a CPP), the share of patents from inventors in countries where that pest or pathogen is found is much higher than the share from inventors in other countries. Moscona and Sastry also statistically estimate the relationship between a country’s patents on a given pest or pathogen and the presence of that pest or pathogen in the country, holding country and pest/pathogen differences fixed. That analysis also finds local presence is a strong predictor of local patenting related to a given pest or pathogen.

[Figure from Moscona and Sastry (2022)]

Why would location affect research choice?

Taking this cluster of papers as providing at least preliminary evidence that location influences research choice, the next question is: why? We’ve suggested it could be due to researchers being exposed to local problems, and that’s certainly one likely channel. It would be consistent, for example, with research finding that women scientists are more likely to work on issues that disproportionately affect women (suggesting that different researchers find different problems more salient and important to investigate). But a researcher’s location could influence their choice of topics in a number of other ways too. Researchers around the world might be equally interested in a topic, but local researchers could have an advantage in studying a particular topic because of better access to local data, for example, samples of viruses, pests, pathogens, or infected people. It may also be that local funders of research, rather than researchers themselves, are more likely to know and care about local problems. (That said, at least in the case of the 2014 Ebola epidemic, Fry 2022 finds no correlation between domestic funding for Ebola research and the shift towards it.) Beyond these direct effects of location on research choice, one secondary effect could be social contagion from other researchers: even if researchers are not initially motivated to study local problems, they may want to collaborate locally, and if local collaborators are more likely to be working on local problems, they are more likely to begin working on the topic too. We do have some evidence that res...
Geography and What Gets Researched
Stop looking for The One: The Inverted Pyramid of Life
Stop looking for The One: The Inverted Pyramid of Life
“What do you want to be when you grow up?” Adults often ask this question when chatting with a kid. Maybe it’s because the answer is often endearing (an astronaut!) or surprising (a YouTuber!), or because it’s a way to connect through a topic that speaks to us—work. We keep doing this to each other as adults too. “What do you do for a living?” and “Where do you work?” are some of the most common conversation starters when meeting someone for the first time.

When you’re a kid, the world is full of possibilities. Nothing seems to be impossible. No question or topic seems trivial enough not to wonder about it. It’s a wonderful exploratory phase. You may want to try a different sport every week. You have a new best friend every month. You’re into board games and then realize that painting is more your thing. For now. So why do we later insist on this fabricated idea of having one calling in life?

Go forth and specialize

Often, as soon as you start showing a sustained interest in a specific area, adults push you to practice and improve. To make it your thing. It comes from a good place, of course, but it stems from the idea that the more “defined” you are as a person, the better. Our education system works in a similar way. We are expected to specialize, going from a generalist curriculum covering everything from arts to maths and history to graduating with a degree in one specific area. Then, at work, we home in on what sets us apart and create an elevator pitch—a short description that gets our value proposition across in one key point or two. In friendship too. Research shows that the older you get, the fewer friends you have. Growing up is like trying to squeeze through a gradually shrinking funnel, making yourself smaller and smaller until you can describe yourself with as few words as possible. We become more focused in our interests, our work, and even our friendships. In the words of Rhiannon Lucy Cosslett, a journalist at The Guardian: “Part of growing up is accepting all those things you’ll never be, but which perhaps, in another system or universe, you could have been.” But does it have to be the case?

Inverting the pyramid of life

In the years since I founded Ness Labs, I’ve had countless conversations with talented, intelligent people who told me they felt lost. Either because they didn’t find joy in their day job anymore, because a project they had poured their heart and soul into didn’t work out, or because the next logical steps in their career were not particularly exciting. For most of them, it seemed hard to find alternative options because, after years of hard work and smart choices, they were sitting at the tip of the pyramid. Here is how the pyramid of life normally works: As a child, you explored. As a student, you specialized. Now, as an adult, you can easily define who you are to yourself and other people. This is the path I have followed for a long time. This is the narrowing path most people will follow. Not because that’s what they want but because that’s what is expected from them. For the rare few with a true calling, research suggests that it may work just fine. But what about the others? The same research shows that searching for a calling leaves us confused and uncomfortable. Now, you probably see where I am going with this: Why should we look for our one true calling in the first place? Why not invert the pyramid? Here is what the inverted pyramid of life looks like: As a child, we are full of potential. As a student, we can explore our affinities.
As an adult, we open up a world of opportunities. In this paradigm, the potential you have as a child is just the beginning—the tip of a cone of creativity that widens as you grow up. Because you’re optimizing for opportunities and not trying to define yourself through specific expertise, you can keep expanding your playground all your life. The inverted pyramid of life can apply to studies, work, but also friendships. Children have neighborhood friends, school friends, friends from a sports team or an art club. As an adult, we tend to only have a couple of friends outside of work. But we can significantly expand our circles and choose new friends consciously. What if you had a friend who loves hiking, another who enjoys nerding about technology and tools, and another who is always excited to try new foods together? What if you had friends all over the world, who you know you may only meet in years to come, if ever, but who share your interests? This ability to identify yourself across multiple domains and roles, which researchers call “self-complexity”, has been found to support emotional resilience by reducing the impact of failure or setback in any single domain. You may lose your job but still be a great friend. Your startup may fail, but you may run your first marathon with your partner. You may be rejected from your dream school but win a poetry prize. The self-complexity that arises when we invert our pyramid of life also encourages personal growth and self-discovery, as you can explore and evolve across various aspects of your identity, which means a richer, fuller life. When you stop trying to nail down your narrative and focus only on the most obvious relationships, life becomes a giant sandbox where we can learn anything, grow in any direction, and connect with anyone. Maybe then, instead of asking, “So, what do you do for a living?” we’ll start asking: “So, what makes you feel alive these days?” The post Stop looking for The One: The Inverted Pyramid of Life appeared first on Ness Labs.
Stop looking for The One: The Inverted Pyramid of Life
Ness Labs Best Books of July 2023
Ness Labs Best Books of July 2023
At Ness Labs, we believe in the power of ideas and the profound impact of continuously feeding our minds with thoughtful content. Each month, we meticulously curate a selection of books that truly stand out in an ocean of books that can be overwhelming. This series aims to highlight the work that can serve as a compass to navigate life and work, so we can collectively learn, evolve, and thrive. This is your guide to discovering the most insightful, inspiring, and transformative books on mindful productivity, creative growth, holistic ambition, and developing a healthier relationship with work. The Good Enough Job Simone Stolzoff’s The Good Enough Job offers a compelling critique of the prevailing culture that places our work and professional ambitions at the center of our identities. Through insightful reporting and interviews with individuals across diverse professions, Stolzoff lays bare the impacts of intertwining our sense of self with our jobs and the cost it exacts on our well-being and even professional success. The book prompts us to question the status quo, challenging the societal expectation of work as a calling, a dream to be chased relentlessly. For those striving to find a healthier relationship with work and ambition, The Good Enough Job provides a refreshing perspective. By exposing the myths that have chained us to our work desks and that underscore the overvaluation of our labor, Stolzoff inspires us to redefine what it means for a job to be good enough. Learn more The Order of Time Time has bemused us since the dawn of consciousness. With his unique combination of scientific insight, philosophical wisdom, and artistic flair, Rovelli takes us on a journey to demystify the enigma of time. He guides us from Einstein to loop quantum gravity, all the while challenging and reshaping our intuitive understanding of time’s very structure and compelling us to confront the startling realities of our universe, where time flows at varied speeds in different places. With his help, we understand that the distinctions between past, future, and present are far less rigid than we perceive. Rovelli’s work is not just an intellectual feast; it’s also a call to introspection. For those obsessively striving to master time management, this book serves as a reminder to reconsider our relationship with time. It urges us to reflect on the interconnectedness of our selfhood and our perception of time. With The Order of Time, Rovelli nudges us to view time not as a foe to be tamed but as an intrinsic part of our existence to be understood and appreciated. Learn more Hidden Genius The book Hidden Genius by Polina Marinova Pompliano is a treasure trove of insights from some of the world’s most intriguing individuals. After five years of studying these high performers through her work at The Profile, Pompliano offers readers a unique opportunity to understand the mental frameworks these individuals use to navigate complex problems, fuel their creativity, and perform exceptionally under pressure. Far from simple tricks or hacks, these frameworks offer profound shifts in perspective that can redefine one’s worldview. This book can be an invaluable resource to enhance your thinking skills or seek inspiration during trying times. The great thing about Polina’s book is that it goes beyond sharing successful people’s stories: it also provides a mental toolkit that you can use to tackle complex problems, navigate relationships, and foster creativity and resilience in the face of uncertainty. 
Learn more Saving Time Saving Time by Jenny Odell is a riveting investigation into our relationship with time, compelling us to question the societal structures that commodify it and push us towards relentless efficiency. Odell argues that the societal clock we live by was designed more for profit than for people, turning even our leisure into quantifiable, transactional moments. Her book highlights how our distorted perception of time is intricately tied to enduring the climate, social, and mental health crises. Yet, Odell’s book is not a despairing read; it’s a beacon of hope, presenting us with alternative ways to experience time. By saving time from its commodification, Odell suggests that time, in its most authentic and diverse forms, may also save us, offering a profound source of meaning beyond the constraints of the workplace or the dictates of a profit-oriented society. In short, her book is a thoughtful rebellion against reality as we know it. Learn more The Pathless Path The Pathless Path by Paul Millerd takes readers on a deeply personal journey of self-discovery and personal growth. From his beginnings as a small-town Connecticut kid to reaching what he thought at the time was the pinnacle of success at a prestigious consulting firm, Paul had it all by conventional standards. Yet, he chose to walk away, setting off on his life’s “real work”: identifying what truly mattered to him and daringly constructing a life around those values. This book is not a how-to manual filled with life hacks. Rather, The Pathless Path is an intimate account of Paul’s transition from a life focused on professional advancement to one centered on work that genuinely matters. This book should be an essential companion for those contemplating a departure from their current jobs, embarking on a new path, navigating the uncertainties of an unconventional trajectory, or seeking alternative ways to understand work in our rapidly evolving world. Learn more Other books to explore this month: Exhalation by Ted Chiang (this is fiction but relevant to the future of life and work) The Art & Business Of Ghostwriting by Nicolas Cole How We Learn by Stanislas Dehaene Do you have any books to recommend for the Ness Labs Best Books series? Please let us know via the contact form. We welcome self-recommendations. The post Ness Labs Best Books of July 2023 appeared first on Ness Labs.
Ness Labs Best Books of July 2023
Turning Fear of Failure into Increments of Curiosity
Turning Fear of Failure into Increments of Curiosity
When I was younger, I badly wanted to live in Japan. Japan is a country with very strict immigration laws, but my university had an exchange program where you could go spend a semester and study in another country. There was only one problem: the Japanese university they had a partnership with was one of the most selective in the country. I remember thinking: “There is no way I’ll get accepted.” I told my mom about my doubts. “It’s not your decision to make,” she said. And, as often, she was right. We constantly limit our options by deciding for others. All I had to do was apply, and it then became the university’s job to accept my application or not. You probably have seen this pattern countless times in yourself and others. It’s far easier not to fail when you haven’t tried. It’s far easier to not be wrong when you’re not putting yourself out there. But it’s also much harder to grow as a human being when we avoid getting out of our comfort zone. If this fear of failure is so bad for our personal and professional growth, why is it so common? We all want to be loved Fear of failure starts in early childhood. We are social animals and feel the need to be accepted by others, which begins with the acceptance and love of our parents. In a study looking at the relationship between young athletes and their parents, researchers found a correlation between the parents’ high expectations for achievement and the children’s fear of failure. The more the parents showed a negative reaction to what they perceived as a failure from their kid, the more the kid would fear the consequences of “failing.” In some people, this can turn into atychiphobia, an irrational and paralyzing fear of failure, often accompanied by an intense feeling of panic or anxiety, and physical symptoms such as difficulty breathing, an unusually fast heart rate, and sweating. For most people, though, fear of failure manifests itself in a much more subtle way, mainly self-doubt that prevents us from exploring uncertain paths: We put off doing things because we’re unsure how they will turn out. We avoid situations where we may have to try something new in front of other people. We avoid doing things we know will improve our lives because we don’t have all the necessary skills. We give ourselves the illusion of growth by reading, researching, watching videos… Anything but doing the thing and risking being judged by others. But the good news is that nobody is hoping for you to fail. Most people you know would be happy to see you succeed, and the ones who don’t know you don’t care. So how can you shift your perception and overcome your fear of failure? Your perception of possible When you start reading a novel, you rarely expect to finish it in one go. Instead, you will probably read a few chapters, then a few more, until you’re done with the book. Strangely, we’re not so pragmatic when it comes to personal goals. It’s common to look at a long-term goal and never get started because it seems too far out of reach. But we can reshape our perception of what’s possible by breaking our journey into smaller, more achievable chunks. Achievable, in this case, does not mean something where you are certain of succeeding, but rather something that you can put to the test in the short term, without being able to use any excuse to put it off. Let’s say you have a fear of public speaking and use the excuse that, in any case, nobody has ever invited you to speak at a conference. 
A small, achievable experiment would be to apply to five local meetups to give a talk. While speaking in public may sound terrifying, filling out an online form is perfectly doable. Similarly, you may be scared to be judged for the quality of your writing. While writing a book is a daunting task that is easy to hide behind (“I’d love to write a book, but I don’t have the time”), writing a blog post is much more manageable. Fail like a scientist If you see life as a giant experiment where your goal is to explore as much as you can to obtain answers to your questions, failure becomes an investment to get closer to these answers. In the words of Seth Godin: “The cost of being wrong is less than the cost of doing nothing.” Scientists often repeat experiments thousands of times to get a conclusive answer. And more often than not, the answer they get is that their initial hypothesis was wrong. Not performing the experiment would have allowed them to stay in a cozy limbo of being not wrong, but then we wouldn’t have any science. This is why approaching failure like a scientist is so powerful. By making decisions that will let you learn something new, you are guaranteed to be successful—where success is learning, evolving, and growing as a human being. Failing becomes a way to cultivate aliveness. Increments of curiosity Another way to approach your fear of failure is to think like a kid. Children tend to experiment just for the sake of it: What will happen if I press this button? How does it feel to touch this thing? Reconnecting with your inner child is a great way to overcome your fear of failure. For example: What will happen if I publish this post? How does it feel to speak my mind? Instead of imagining all the ways you may fail, turn your doubts into questions. Maybe nothing good will happen, but a child would not take the answer for granted. Start with something small, then move on to another iteration—a bigger growth loop. With time, your mind will become increasingly comfortable with trying new things and constantly expanding your horizons. Practically, here is how you can start applying this approach of deliberate experimentation right now: Pick something you’ve been putting off because of your fear of failure. Is it public speaking? Starting a blog? Producing a podcast? Launching your first product? Write it down. Define one small experiment you can design to explore this fear. It should be actionable. For example, apply to a few meetups to give a talk, produce one episode of a podcast, or write an article as a Google Doc and share it with a few friends. It should be simple enough that you can just do it in a few hours at most. Do it! Don’t plan anything. Don’t research the best way to go about it. Don’t announce it on Twitter. Just do it. Reflect on what happened. Any negative reactions? What about your emotions? What did you learn? Write all of these thoughts down. It’s a great way to practice metacognition. Rinse and repeat. Keep defining incremental steps in the form of experiments that fall out of your comfort zone but are not scary to the point of being paralyzing. Again, avoid overthinking it beforehand. Just do it, and reflect only after you have performed the experiment. You may feel some anxiety or discomfort along the way, but addressing your fears and trying new things you care about is the best way to avoid another feeling that’s much harder to manage: regret. The post Turning Fear of Failure into Increments of Curiosity appeared first on Ness Labs.
Turning Fear of Failure into Increments of Curiosity
How to Impede Technological Progress
How to Impede Technological Progress
“Everything that’s happening is coordinated by someone behind the scenes with one goal: to completely ruin scientific research.” – Da Shi, in The Three-Body Problem by Liu Cixin

Most of the time, we think of innovation policy as a problem of how to accelerate desirable forms of technological progress. Broadly speaking, economists tend to lump innovation policy options into two categories: push and pull policies. Push policies try to reduce the cost of conducting research, often by funding or subsidizing research. Pull policies try to increase the rewards of doing research, for example by offering patent protection or placing advance orders. These have been extensively studied, and while they’re not silver bullets, I think we have a good evidence base that they can be effective in accelerating particular streams of technology.

But there are other times when we may wish to actively slow technological progress. The AI pause letter is a recent example, but less controversial examples abound. A lot of energy policy acts as a brake on the rate of technological advance in conventional fossil fuel innovation. Geopolitical rivals often seek to impede the advance of each other’s military technology. Today I want to look at policy levers that actively slow technological advance, sometimes (but not always) as an explicit goal. I think we can broadly group these policies into two categories, analogously to push and pull policies:

Reverse push (drag?): Policies that raise the costs of conducting research. Examples we’ll look at include restrictions on federal R&D funding for stem cell research, and increased requirements for making sure chemical research is conducted safely.

Reverse pull (barrier?): Policies that reduce the profits of certain kinds of innovation. We’ll look (briefly) at carbon taxes, competition policy, liability, and bans on commercializing research.

The fact that conventional push and pull policies appear to work should lead us to believe that their reverses probably also work; and indeed, that’s what most studies seem to find. But there are some exceptions, as we’ll see.

Brief Pause for Some Announcements

If you’re a fan of what I’m doing here at New Things Under the Sun, and want to write something yourself, you may be interested in the following: Interested in collaborating with me on a post? Click here for details. The Roots of Progress Blog-Building Intensive is a new 8-week (free!) program for aspiring progress writers to start or grow a blog. Learn more or apply here. Open Philanthropy’s Innovation Policy program is currently soliciting pre-proposals from individuals for financial support to write living literature reviews about policy-relevant topic areas. Interested individuals should have a PhD related to their proposed area and should contact matt.clancy@openphilanthropy.org for more information. Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Back to the Article

Reverse Push Policies Sort of Working

Let’s start with two studies of policies that have the effect of making it more expensive (in terms of time or money) to do certain kinds of research. Both these studies are going to proceed by comparing certain fields of science that are impacted by a new policy to arguably similar fields that are not impacted by the policy.
By seeing how the fields change relative to each other both before and after the new policy, we can infer the policy’s impact. Let’s start with US restrictions on public funding for research involving human embryonic stem cells. The basic context is that in 1998, there was a scientific breakthrough that made it much easier to work with human embryonic stem cells. While this was immediately recognized as an important breakthrough for basic and applied research, a lot of people did not want this kind of research to proceed, at least if it was going to result in the termination (or murder, depending on your point of view) of human embryos. A few years later, George W. Bush (who was sympathetic to this view) won a closely fought US presidential election, and in August 2001 a new policy was announced that prohibited federal research funding for research on new cell lines. Research reliant on existing cell lines was still eligible for funding, but since most of the existing cell lines were not valuable for developing new therapies, this restriction was more significant than it might naively seem. No restrictions were placed on private, state, or local funding of human embryonic stem cell research, but anyone who received funds for this kind of work would need to establish a physically and organizationally separated lab to receive federal funding for permissible research on existing lines.

To see how this policy change affected subsequent research, Furman, Murray, and Stern (2012) identify a core set of papers about human embryonic stem cell research and about RNAi, another breakthrough from the same year, also originating in the US, that was unaffected by the policy but was perceived to be of similar scientific import. They then look at how citations to those core papers evolve over time, with the idea that a citation to one of these core papers is a (noisy) indication that someone is working on the topic. Because foreign scientists are unaffected by US policy, they also divide these citations into those coming from papers with US researchers and those without. They estimate a statistical model predicting how many US and foreign citations a core paper in either topic receives, in each year, as a function of its characteristics.

A key finding is illustrated in the following figure, which tracks the percentage change in citations from US-authored articles to human embryonic stem cell research, as compared to a baseline (which includes RNAi papers, and citations from foreign-authored articles). Prior to 2001, citations by US authors to papers on human embryonic stem cells were about 80% of the baseline, but the error bars were wide enough that we can’t rule out no difference from the baseline. Beginning in 2001 though (when the policy was announced), US citations to these papers dropped by a pretty noticeable amount - from roughly 80% of the baseline to 40%.

[Figure: How citations from US authors to human embryonic stem cell papers fare, compared to a baseline. From Furman, Murray, and Stern (2012)]

Note, though: just three years later, in 2004, things may have been back to their pre-2001 levels. But the restrictions on federal research weren’t relaxed in 2004. So what’s going on? We’ll return to this later. For now, let’s turn to another study that shows reverse push policies (of a sort) can exert a detectable influence on basic research. This time, we’ll look at a policy whose goal was not to reduce the amount of research, but instead to simply make sure it was done in a safer manner.
For now, let's turn to another study that shows reverse push policies (of a sort) can exert a detectable influence on basic research. This time, we'll look at a policy whose goal was not to reduce the amount of research, but instead to simply make sure it was done in a safer manner.

In 2008, Sheharbano (Sheri) Sangji died in a tragic UCLA chemistry lab accident involving flammable compounds. This incident and the subsequent criminal case for willful violation of safety regulations by the lab's principal investigator and the Regents of the University of California galvanized a significant ratcheting up of safety regulations across US chemistry labs. For example, at UCLA, participants in lab safety classes rose from about 6,000 in 2008, to 13,000 in 2009 and 22,000 in 2012, while the number of safety inspections of labs rose from 1,100 in 2008, to 2,000 in 2009 and 4,500 in 2012. This was accompanied by an increase in laboratory safety protocols and more stringent rules for the handling of dangerous chemicals.

To see what impact the increase in safety requirements had on chemistry research, Galasso, Luo, and Zhu (2023) gather data on the publications of labs in the UC system. They end up with data on the publications of 592 labs, published between 2004 and 2017 (note they exclude the lab where Sangji worked). To assess the impact of more stringent safety regulations, they cut the labs into two different pairs of sub-samples, with one half of each pair more impacted by the policy and the other half less impacted. First, they hire a team of chemistry PhD students to classify labs as "wet", which are equipped to handle biological specimens, chemicals, drugs, and other experimental materials, and "dry", which are not and might do computational or theoretical research (these comprise 14% of labs). We should expect safety requirements to not affect dry labs, but possibly to affect wet ones - but not if they rarely work with dangerous compounds. So, as a further test, Galasso and coauthors use data on the chemicals associated with lab publications to identify a small subset of labs that most frequently work with compounds classified as dangerous. Because they need a long time series prior to 2008 for this classification exercise, they can only apply this method to 42 labs, out of which they flag the 8 working most often with dangerous compounds.

Their main finding is that the impact of the increased safety requirements was pretty small. Indeed, comparing the publication output of wet labs and dry labs, there appears to be no detectable impact of the policy at all, even when trying to adjust for the quality of publications by adjusting for the number of citations received, or after taking into account potential changes in the sizes of labs. The effects were not totally zero though. When they zero in on labs using the most dangerous compounds, they find that after safety standards are ratcheted up, the most high-risk labs begin to publish about 1.2 fewer articles per year mentioning dangerous substances as compared to less dangerous wet labs (labs publish an average of 7.7 articles per year in the sample). The reduction is most pronounced for articles mentioning flammable substances, or dangerous compounds that haven't ...
How to Impede Technological Progress
Discover an immersive new approach to productivity with Nick Daniels Founder of Portal
Discover an immersive new approach to productivity with Nick Daniels Founder of Portal
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us make the most of our mind. Nick Daniels is the founder of Portal, an immersive productivity app designed to help you stay in the flow. Portal uses the latest technology to deliver the most natural reproductions of real-life locations. In this interview, we talked about how physical work spaces can influence wellbeing, productivity and creativity, the potential of immersive technologies, the psychology of how we respond to our environment, and much more. Enjoy the read! Hi Nick, thanks for agreeing to this interview! Most people believe that nature contributes to our wellbeing, but you believe that nature is at the heart of our health and happiness. How did you form that belief? Thanks so much for having me! The inspiration for Portal was sparked back in 2018 when my wife and I spent 6 weeks camping around New Zealand on our honeymoon. We’d spent the previous 10 years living in London pretty much working ourselves into the ground and I was only just starting to recover from a period of depression and burnout from pushing myself too hard on a previous startup, so we were both very much craving an opportunity to get away from it all. The experience itself was of course amazing — unzipping the tent in the morning to stunning views and experiencing the ever-changing sights and sounds of each location we were camping in was incredible. But the most surprising and unexpected thing was that we actually had some of the best sleep of our lives. When I’d camped before it’s always been for short periods of time, and I’d always found the fact that you can hear every sound and the light pouring in early in the morning less than ideal — and often meant you ended up with less sleep not more. But what we found living in the tent for an extended period of time was that over time we just seemed to naturally sync up with the rhythm of the world around us. We’d start getting tired as the sun went down, the temperature fell and the sounds of the birds got replaced with the sounds of the insects at night. And we’d wake up so fresh and energized in the morning as the opposite happened and the sun and temperatures rose and the birds began to sing outside. It’s a feeling that’s almost impossible to describe — when you feel in sync with everything around you, but this experience completely changed how I viewed the natural world. I no longer felt like it was a place separate from me — a place to visit or an attraction to enjoy. It was more like the feeling of finally being home after a long time away. And at the end of those six weeks we both felt the best we’d ever felt in our lives. The idea for Portal then came on the flight home — cramped, uncomfortable and returning to our hectic, stressful lives in London. I was struggling to sleep and started to think about how we’d slept so well and whether it was possible to “bottle up” and re-create that experience and these amazing surroundings back home in London. It was only once I was home and started to research further around what I’d experienced I realized that there was a growing mountain of scientific evidence drawing the link between nature, circadian rhythms, our surroundings and our mental health. It was then that I realized this might be able to help many others beyond myself and within a week I’d handed in my notice and started coding the first version of the app. Ha, inspiration literally hit you. 
So the initial version was mostly focused on sleep and relaxation? Yes, not many people know this, but the app actually started off life completely focused on being a sleep aid and natural alarm clock — recreating that camping experience in the bedroom using immersive sound, smart lighting and visuals. The big idea was to take an experience-led approach to designing an alarm clock that was inspired by our trip, so rather than the purely functional approaches to alarm clocks which basically use loud noises to scare you awake at a specific time — it would help you wind down at night using gentle transitions mimicking the natural world and then wake you up gently in the morning. It’s an approach that draws upon a lot of the principles behind Biophilic Design, a design approach traditionally used by architects and interior designers that looks to increase people’s connectivity to natural environments and the benefits this can bring. It’s still quite a niche approach but I’m convinced given the amount of research and the positive impact it can bring in our lives that you’ll see it becoming much more mainstream over the coming years. You’ve just launched the Mac version of the app, which is all about improving focus and productivity. How did this come about? In truth, it was a little bit Inception-like. I’d have the scenes playing a lot while coding and developing the app in the early days and came to realize that it was actually really helping me to concentrate and get into the flow. The thunderstorms especially were game-changing for me! As I dug a little deeper, I discovered a wealth of research has come out over recent years shining light on the attention-enhancing effects of nature exposure both digitally and in the real world, specifically research around Attention Restoration Theory (ART) in the field of environmental psychology. There’s also a lot more investment and research going into the architecture and design of physical work spaces and buildings and how they can influence wellbeing, productivity and creativity using the principles of Biophilic Design mentioned before. Apple Park is probably the best example of this that I’ve come across where they’ve spent billions of dollars creating a physical work environment that takes a very human-centric approach and really does aim to bring the natural outdoor environment indoors as much as physically possible. Another fascinating insight we found when speaking with existing customers who were using the iOS app to help them focus was that 40% of those who we interviewed were diagnosed with ADHD (which normally occurs in around 5% of the population). They reported that Portal had become an essential part of their toolkit in managing their ADHD and helping them pursue their studies, careers and passions. However, despite this, the biggest concern for our customers was actually having to use Portal on their phones as these had increasingly become the greatest source of distraction in their lives. They were a big driving force for us prioritizing bringing Portal to Mac. This is such an ambitious idea. How does Portal work, exactly? The app itself uses immersive technologies to instantly transform your workspace into an environment that’s designed to aid focus and creativity. Most of us are very aware of how different places make us feel — it’s not hard to imagine how different you’d feel if you were sitting on top of a mountain right now, or in the midst of a beautiful ancient woodland or a stunning tropical beach. 
But what’s often less obvious is that how we feel emotionally has a very direct impact on our thought process and how we actually think. In the words of one of our customers: “It has not only made me more productive, but more importantly, it has brought a sense of joy to my work day.” We essentially tap into the psychology of how we respond to our environment and draw on inspiration from some of the world’s most peaceful and awe-inspiring surroundings to create environments that are attuned to helping us get into the right state of mind to think, focus and create. The beauty of this approach is that it’s very passive — it doesn’t take active effort to enjoy the benefits. How does Portal work under the hood? Our ultimate goal is to re-create environments in the most true-to-life and authentic way possible, while also making it as practical and easy to use as possible. To do this, we’ve really had to really push the use of technologies that allow us to capture and reproduce visuals, sound and lighting as realistically as possible. Firstly we use the visuals of the location to create the feeling of a “window” to that place. Our aim has been to get as close to the feeling of a real window as possible and with Mac we’ve integrated these motion visuals directly onto the desktop. It may seem pretty counterintuitive that putting motion onto your desktop would actually help with concentration and make you less distracted, but when done right it can be really effective and it has a very similar effect to having a real window in your office. We’ve meticulously captured over 80 portals ourselves in some of the most beautiful and peaceful corners of the world. We’ve used 12K digital cinema cameras and an evidence-based approach to our content production to ensure we capture the feeling and detail of these incredible places in a way that can enhance productivity and inspire creativity without pulling your focus away. The second component is the sound. We’ve again put an enormous amount of focus on recreating the most true-to-life and immersive sound experience possible. To do this we not only use state-of-the-art spatial audio microphones but we’ve developed our own Spatial Audio solution from the ground up which is specifically designed for real-life ambiance. Rather than using Dolby Atmos which is the default technology on iOS and Mac we use a technology called Ambisonics which is most often used in VR and represents the soundfield as a sphere rather than the traditional speaker or channel-based sound formats.  Spatial audio better reflects how we actually hear our surroundings in real life, giving a much greater sense of space and delivering the closest experience to actually being there yourself. The effect can be quite subtle, but it’s incredible just how much our subconscious picks up on. We also go to great lengths to capture sound in the field that’s naturally free of noise pollution. It’s ama...
Discover an immersive new approach to productivity with Nick Daniels Founder of Portal
The false promise of the 10000 hour rule
The false promise of the 10000 hour rule
Our culture loves experts. Whether it's athletes, chefs, or musicians, some of the biggest celebrities are considered masters of their craft, and we admire the long hours they put into practicing the same skills over and over again so they could become second nature. In 2008, Malcolm Gladwell published his popular book Outliers, exploring why some seemingly extraordinary people achieve much more than others. The book mentioned a study of violin students at a German music academy. This is from the abstract: "Many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years." Malcolm Gladwell branded this the 10,000-hour rule. Study any topic for 10,000 hours, and you will master it.

Practice doesn't make perfect

First, the study wasn't about studying a topic for a specific amount of time. It was about deliberate practice. This is a type of practice that is systematic and purposeful, with the specific goal of improving performance, and it requires focused attention rather than mindless repetitions. More importantly, the lead researcher of the study himself doesn't even seem to agree with the magical 10,000-hour rule. "He misread that as every one of them had actually spent at least 10,000 hours [practicing], so somehow they passed this magical boundary (…) They were very good, promising students who were likely headed to the top of their field, but they still had a long way to go at the time of the study." Anders Ericsson, Psychologist & Researcher, Florida State University (source).

Finally, and maybe the biggest problem with the 10,000-hour rule, there is absolutely nothing in the study that suggests that anyone can become an expert in any given domain by putting in 10,000 hours of practice, even deliberate practice. To show this, the researchers would have had to take a random sample of people through 10,000 hours of practice and see if the results were statistically significant. All the study shows is that the "best" violinists had put in more hours of deliberate practice than the "good" violinists. Which is interesting, but by no means a promise of expertise. In fact, a research study from Princeton shows evidence that practice accounts for just a 12% difference on average in performance in various domains, specifically: 18% in sports, 21% in music, and 26% in games. As Frans Johansson explains in his book The Click Moment, deliberate and repeated practice works better in fields with stable structures, such as chess, classical music, or tennis, where the rules never change. But when it comes to entrepreneurship and other creative fields, the rules change all the time, making deliberate practice less useful. So if practice doesn't make perfect, how can we go about mastering new skills?

Range over mastery

The learning strategy that has traditionally been used in school to teach students consists in focusing on one skill before moving on to the next one and is called blocking. But there is a better way: interleaving, which consists in practicing multiple parallel skills at once. Research has shown that randomizing the information causes your brain to stay alert, helping to store information in your long-term memory. This means that the next time you want to study a new subject, you could benefit from switching things up. For example, a bit of coding mixed with a bit of UX design will work better than one long coding session.
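As a toy illustration only (not something from the article), here is what the difference between blocking and interleaving looks like when scheduling the same six hours of practice:

```python
# Blocking: practice one skill at a time, in long uninterrupted runs.
# Interleaving: mix the same hour-long blocks across skills.
skills = ["coding", "UX design", "copywriting"]
hours_per_skill = 2

blocked = [skill for skill in skills for _ in range(hours_per_skill)]

# A simple round-robin interleave; the research cited above suggests that some
# added randomness keeps the brain even more alert.
interleaved = [skill for _ in range(hours_per_skill) for skill in skills]

print("blocked:    ", blocked)
print("interleaved:", interleaved)
```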
Not only will you learn better and faster, but it may also make you more successful in the long run. In his book Range, David J. Epstein shows how generalists, rather than specialists, are more likely to succeed, especially in complex fields. The graph below is based on the Ancient Greek proverb: "The fox knows many things; the hedgehog one great thing."

Being too much of an expert can even be detrimental. In Expert Political Judgment, Philip E. Tetlock shares an experiment where political and economic experts were asked to make predictions. Turns out, 15% of outcomes that experts had considered impossible came to pass anyway, and a quarter of the outcomes they had considered virtually guaranteed never happened. The interesting part? The more experience and credentials these experts held, the further off the mark their predictions were. In contrast, the participants who had a wider range of knowledge areas and were not bound to a specific "expertise" domain fared better in their predictions. Being able to see new patterns and generate ideas across fields where people don't usually make connections is an incredibly valuable skill. This superpower rarely comes with deep expertise in one unique field at the expense of other areas of knowledge.

So, forget about the 10,000-hour rule. Forget about sticking to one area of expertise for many years. It may work for a very small subset of people, but there is no rule indicating that this is the best strategy. Next time you feel like studying something new that doesn't fit neatly into your current "frame of expertise", go ahead and just do it.

The post The false promise of the 10,000 hour rule appeared first on Ness Labs.
The false promise of the 10000 hour rule
Does Advanced AI Lead to 10x Faster Economic Growth?
Does Advanced AI Lead to 10x Faster Economic Growth?
Dear readers, I’m still writing the next New Things Under the Sun post, but in the interim, I hope you’ll find this debate I had with Tamay Besiroglu as fascinating as I did.1 It’s about the claim that, once we develop AI that can do anything (mental) a human worker can do, the economy will start to grow much, much, much faster. This claim is actually implied by some pretty mainstream models of economic growth! Tamay and I had this debate in slow motion, in a shared google doc, over a few months, and it was published in Asterisk Magazine on Friday.

In the debate, I’m the skeptic and Tamay the advocate. While I think it’s pretty likely sufficiently advanced AI would lead to (somewhat) faster economic growth, I think growth of 20% per year and up is pretty unlikely. In contrast, Tamay thinks 20% annual growth and faster is pretty likely, if we successfully develop AI that can do every kind of human mental work. If you’re unfamiliar with this debate, I think we cover the fundamentals well. But even if you are familiar, I think we also push past the basics and articulate some novel arguments. You can read the whole piece over at Asterisk right now. Read the Debate Now

If you prefer audio, Tamay and I also recorded a podcast version where we each perform our parts of the dialogue. That one should be ready in the next 24 hours - it’ll show up first at this link, and then on your local podcast app a bit later. Cheers, Matt

1 I actually covered some of Tamay’s work on New Things Under the Sun here!
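For readers wondering why mainstream growth models can deliver such dramatic numbers, here is a deliberately crude toy simulation; the functional form is a textbook-style semi-endogenous setup, and the parameter values are arbitrary assumptions rather than anything taken from the debate itself. The idea: ideas get harder to find, so a fixed pool of human researchers yields slowing growth, but if research effort itself scales with the level of technology, growth accelerates instead.

```python
# Toy semi-endogenous growth: dA = delta * R**lam * A**phi with phi < 1,
# i.e. ideas get harder to find. All parameter values are arbitrary.
delta, lam, phi, years = 0.02, 1.0, 0.5, 100

def simulate(research_input):
    """research_input(A) returns research effort at technology level A."""
    A, growth = 1.0, []
    for _ in range(years):
        dA = delta * research_input(A) ** lam * A ** phi
        growth.append(dA / A)
        A += dA
    return growth

fixed_humans = simulate(lambda A: 1.0)  # constant pool of human researchers
ai_scaled = simulate(lambda A: A)       # research effort grows with technology

print(f"growth in year {years}, fixed researchers: {fixed_humans[-1]:.1%}")
print(f"growth in year {years}, AI-scaled researchers: {ai_scaled[-1]:.1%}")
```

Whether research effort really would scale like that, and whether bottlenecks elsewhere bind first, is much of what the debate is about.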
Does Advanced AI Lead to 10x Faster Economic Growth?
Creative burnout: when the creativity tap runs dry
Creative burnout: when the creativity tap runs dry
You are probably all too familiar with the dreaded creative block: sitting in front of your computer, your mind as blank as the page you are staring at, hoping that some miraculous burst of inspiration will suddenly rush through your fingers so you can finally get back into the flow. You also know of the many techniques to deal with creative block. Find inspiration by changing your scenery—maybe going for a walk or packing your laptop to work from a cafe. Just write whatever crosses your mind, even if it’s unrelated to the work at hand, until your mind starts forming interesting connections. Talk to other creatives to brainstorm some ideas.

Experiencing a creative block is always inconvenient and stressful, but it is normally short-lived, and feeling occasionally stuck when working on a project is perfectly normal. Even if it may feel like an eternity, we soon end up finding a way to get our creative juices flowing. But sometimes, the problem runs much deeper. Creative burnout is a state of emotional, physical, and mental exhaustion around creative work. The symptoms can be hard to pinpoint, and the potential causes are many.

The 8 symptoms of creative burnout

Because it’s normal for creativity to fluctuate depending on factors such as sleep and stress levels, creative burnout can easily fly under the radar—masquerading as temporary procrastination, tiredness, or lack of motivation. For people who genuinely care about their work and for those who rely on creative output as an emotional outlet, the insidious nature of creative burnout can have a devastating impact on their mental health: when you can’t seem to produce any good creative work and you don’t know what’s wrong, you start blaming yourself. So I put together a list of eight signs of creative burnout. In isolation, most of these signs are harmless. However, if you have four symptoms or more, it may be time to shake things up.

Procrastination. Putting off work for a couple of days because you don’t feel like you have enough mental energy is nothing to worry about. However, if you procrastinate for long periods of time and ignore important deadlines, it may be a sign of creative burnout.

Struggle to do basic work. Is your to-do list getting longer and longer, but you can’t bring yourself to check some easy tasks off it? Are you burying your head in the sand and neglecting the growing mountain of little things you ought to get off your plate? This may be another symptom.

Constant exhaustion. Sometimes, we don’t get enough sleep and feel sluggish the day after. That’s completely fine. But if the physical exhaustion is sustained over a long period of time despite a decent amount of sleep, you may be burning out.

Inexplicable stress. Creative work can be stressful. Deadlines, complicated projects with many moving parts, a pushy client… These factors can cause stress within the Goldilocks curve and remain manageable. But creative burnout may make you feel persistently stressed without being able to pinpoint the exact cause.

Unhealthy comparisons. We are more connected than ever, and many creators follow the work of fellow creators online. Some creators are more productive than others, and this productivity usually ebbs and flows. If you look at their output and can’t help but compare their productivity to yours in a negative way, you may be experiencing a symptom of creative burnout.

Unbalanced content consumption. As a creator, it’s vital to balance your levels of creative input and creative output.
When we burn out, we often find ourselves scrolling endlessly and binging TV shows but not creating much work of our own.

Morning dread. Have you ever experienced that feeling of angst, a sense of doom where your mind is racing into the future, and everything seems bleak? Stressful times in our life can make us dread waking up. If this feeling persists, it may be a sign of creative burnout—or something even more serious.

Harmful habits. Eating unhealthy food or eating more than usual, abandoning your exercise routine, drinking more alcohol… If you are experiencing creative burnout, you may be coping through damaging mechanisms which will leave you feeling even worse.

Irritability. You may be feeling frustrated with your colleagues, annoyed with your spouse, or snappy at your kids. Being more temperamental than usual can be a symptom of creative burnout.

Self-doubt. Finally, you may also think that you will never be good enough, that your work is pointless, or that you lack the necessary imagination—despite having produced good creative work in the past and having received praise for it.

Please note that if you are experiencing many of these signs, or even just one of these for a long time, it may be more serious than creative burnout. Many of these signs are also found in mental health conditions such as depression, anxiety disorders, and seasonal affective disorder, or could be caused by sleep problems. If in doubt, it’s always worth talking to a professional.

How to bounce back from creative burnout

Creative burnout can make us feel powerless, as if there were nothing to be done about it. But we have agency and can use simple strategies to break the cycle. Of course, simple does not mean easy, but removing unnecessary complexity from our approach makes it more likely for us to succeed.

Get support. Because creative burnout impacts our work, our first instinct may be to hide our struggle from our colleagues. However, just grabbing someone and telling them: “I’ve been feeling burned out lately” can be immensely helpful. You will find that most people are more than happy to help, whether by giving you a hand with a project, brainstorming fresh ideas, or just lending an ear. Voicing your struggle is also a great first step in bouncing back from creative burnout.

Take a break. Not just a short walk, which may be helpful for a creative block but probably not enough to help with creative burnout. Take a proper break—a few days off, with your out-of-office autoresponder on, where nobody will expect any work from you. The anxiety of knowing you are supposed to work but can’t bring yourself to is a vicious cycle. Taking a break is a way to escape that cycle so you can start afresh. Use the time to do things that have nothing to do with work without feeling any guilt: spend time with your loved ones, read books, take naps, cook, watch movies, go on a weekend holiday in the countryside, take care of your plants… Or just do nothing, that’s perfectly fine.

Make space for self-reflection. Replace destructive existential angst with constructive self-reflection. It could take the form of journaling, discussing your struggle with a friend, reviewing your current environment and your schedule, running a motivation clinic, or even just talking to yourself out loud. Burnout can be hard to manage when we can’t define its exact source. Turn yourself into a self-experimenting scientist and try to uncover the roots of the problem.

Look at your past work.
Because creative burnout often comes with self-doubt, it’s easy to forget all our past accomplishments and focus on our present challenges instead. Go browse your past work, both the good and the bad. If it’s good, remember how it wasn’t easy to produce. If it’s bad, look at how much progress you have made. Channel the feelings you experience while reviewing your past work to overcome your self-doubt.

Start with the basics. Choose the smallest atomic unit of creative work you can do to get you started again. Are you trying to write a book? Just write one paragraph. Trying to design a new website? Just work on one wireframe. Instead of looking at the mountain of work in front of you and feeling paralyzed, take your first baby step.

Don’t forget to be kind to yourself. Creative burnout does not mean you don’t care about your work; it doesn’t mean you are lazy; it doesn’t mean you are not talented. Creative burnout can stem from perfectionism, external pressure, high expectations, or hypersensitivity. It’s a temporary state, not a permanent condition.

Prevention is better than cure

Creativity is fragile. It needs to be fed, but not too much, for consuming an excessive amount of information may destroy its delicate balance. It needs space to grow, but should not be forced, for mechanical work may lead to lifeless output. Despite all our care, sometimes, it seems to be gone: the creativity tap has run dry. We experience the dreaded creative burnout. While there are simple strategies to manage creative burnout, the best way to deal with it is to avoid burning out in the first place. Because of all the different causes of creative burnout, it may not always be possible, but creating a mental scaffolding to support your health and creativity can go a long way.

Metacognition. Don’t wait until things are bad to start reflecting on how you feel, your progress, your goals, and your motivations. Metacognition means “thinking about thinking”—it’s being aware of your own awareness so you can determine the best strategies for learning and problem-solving, as well as when to apply them. It consists in planning, monitoring, and evaluating your creative work on an ongoing basis, so you can catch any early signs of creative burnout.

Mindful productivity. Mindfulness and productivity may seem antithetical, but borrowing principles from mindfulness when you pursue creative work will help you build a sustainable work environment for yourself. Mindful productivity can be defined as being consciously present in the work you’re doing while you’re doing it. It’s not about meditation; it’s about calmly acknowledging and accepting your feelings and thoughts while engaged in work or creative activities.

Habits, routines, rituals. Ensure you have the basics covered in terms of mental and physical health. Habits, routines, and rituals all have different levels of intentionality, and all help you feel balanced and healt...
Creative burnout: when the creativity tap runs dry
And now for something completely different
And now for something completely different
This short post is to announce the launch of a new living literature review, on a topic almost the opposite of New Things Under the Sun: Existential Crunch, by Florian Jehn!

Existential Crunch: Thoughts about existential risk, history, climate, food security, and other large-scale topics. By Florian U. Jehn

Existential Crunch is about societal collapse, and what academic research has to say about it. The first post takes a tour of the major schools of thought on this topic: Gibbon, Malthus, Tainter, Turchin and more. As the post says in its closing:

My main takeaway is that this field still has a long way to go. This is troubling, because in our society today we can see signs that could be interpreted as indications of a nearing collapse. There are voices warning that our global society has become decadent (writers like Ross Douthat), that we are pushing against environmental limits (for example, Extinction Rebellion), that we are having a decreasing return on investment for our energy system (for example, work by David Murphy) and that there has been an overproduction of elites in the last decades (writers like Noah Smith). This means we have warning signs that fit all major viewpoints on collapse. Moreover, new technological capabilities pose novel dangers that require us to extrapolate beyond the domain of historical experience. All this means that understanding how collapse really happens is rather urgent.

If we want innovation and progress to continue (and I certainly do!), understanding how it dies seems, uh, important! Check it out, and sign up for the Substack here.

Why am I telling you about this? Well, one of the reasons I was excited to join Open Philanthropy was for the opportunity to support more living literature reviews, on a diverse array of topics. This is the first such review we’ve supported, but we’re interested in financially supporting more via the newly launched innovation policy program. We’re especially interested in people who want to write reviews on policy-relevant topics. For us, a living literature review is an online collection of short, accessible articles that synthesize academic research, updated as the literature evolves, and written by a single qualified individual (for example, Florian has published related academic work). If you’re interested, go here for more info. And if you know of people who you think would be a good fit for this kind of thing, please let them know about this opportunity.
And now for something completely different
The Size of Firms and the Nature of Innovation
The Size of Firms and the Nature of Innovation
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Special note: Up until now, everything on New Things Under the Sun has been written by me. This post is the first-ever collaboration! My coauthor is Arnaud Dyèvre (@ArnaudDyevre), a PhD student at the London School of Economics working on growth and the economic returns to publicly funded R&D. I think this turned out great and so I wanted to extend an invitation to the rest of you - if you want to coauthor a New Things post with me, go here to learn more about what I’m looking for and what the process would be like. One last thing: I want to assure readers that, although this is a collaboration, I’ve read all the major papers discussed in the post. I view part of my job as making connections between papers, and I think that works best if all the papers covered in this newsletter are bouncing around in my brain, rather than split across different heads. On to the post!

We are used to thinking about income inequality between individuals, but inequality between firms is vastly larger. In the US, the richest 1% of individuals earned about 20% of all income in 2018.1 In contrast, the top 1% of US firms by sales earned about 80% of all sales in 2018. The economy is populated by a few “superfirms” and a multitude of small- to medium-size businesses. And this disparity is getting more extreme over time.2 Does this huge disparity in firm size matter for innovation and technological progress? Do big firms differ in the type of R&D they do, and if so, why?

The academic literature on the empirical link between firm size and innovation is an old one, dating back to the 1960s at least,3 and we do not have space to do it full justice here. Instead, in this post we’ll focus on work using a variety of approaches to document that there are important differences in how innovation varies across firm sizes. In a followup post, we’ll examine some explanations for why. One quick point before digging in: when economists talk about firm size, they typically refer to a firm’s total sales or (more rarely) its employment count. Defined in this way, firm size is often used as an imperfect proxy for the number of business units of a firm (i.e. the number of product lines it has).

Fact 1: Firm size and R&D rise proportionally

The first important fact about firm heterogeneity and innovation is that corporate R&D expenditures scale up proportionately with sales. In other words, when sales double, money spent on R&D doubles too. This doesn’t have to be the case: for example, it has been shown that other inputs in production such as labor4 and capital5 do not scale proportionately with firm sales (less than proportionately for labor, more than proportionately for capital). This proportional relationship has been shown time and again, at least for firms above a certain size who do at least some R&D.6 To illustrate this point, the figure below shows the relationship between firm sales and R&D expenses among publicly traded firms who report doing some R&D. The data is from Compustat (a database of publicly listed firms) and each dot represents 750 firm-by-year observations. In this graph, we control for year and fine sector (SIC4) so that the variation we isolate is across firms, within a year and within a sector.7 The slope is strikingly close to 1 on a log-log plot, meaning that the typical publicly listed firm increases its R&D expenditures by 10% when its size increases by 10%.
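For readers who want to see the mechanics, here is a hedged sketch of that kind of log-log regression. The file and column names are illustrative assumptions (gvkey is Compustat’s firm identifier), not the authors’ actual code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative firm-year panel: sales and rd in deflated dollars, sic4 the
# 4-digit industry code, year the fiscal year, gvkey the firm identifier.
df = pd.read_csv("compustat_panel.csv")             # illustrative file name
df = df[(df["rd"] > 0) & (df["sales"] > 0)].copy()  # keep firms reporting some R&D

df["log_rd"] = np.log(df["rd"])
df["log_sales"] = np.log(df["sales"])

# Log-log regression with year and industry (SIC4) fixed effects, so the slope
# is identified from variation across firms within a year and within a sector.
fit = smf.ols("log_rd ~ log_sales + C(year) + C(sic4)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["gvkey"]}
)

# A coefficient on log_sales near 1 means R&D scales roughly proportionally
# with sales: a 10% larger firm spends about 10% more on R&D.
print(fit.params["log_sales"])
```

With many year and SIC4 dummies this brute-force approach can be slow; dedicated fixed-effects estimators do the same thing more efficiently, but the specification is the point here.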
Firm R&D expenditures by firm sales (log plot). Notes: Graph generated by Arnaud Dyèvre, with data on US publicly listed firms from Compustat. The sample only includes firms that report some R&D expenditures in a year. Sales and R&D are deflated using the Bureau of Labor Statistics CPI.

This finding was first observed in the 1960s and has been reproduced across many studies since. In the figure below, from a seminal 1982 study by Bound, Cummins, Griliches, Hall and Jaffe, the authors have plotted the log R&D expenditures of a panel of 2,600 manufacturing firms, as a function of their log sales, in 1976. The same proportional relationship is observed.

Firm R&D expenditures by firm sales (log plot). Data from Bound, Cummins, Griliches, Hall and Jaffe (1982).

The 1-to-1 proportionality of R&D to sales may lead one to conclude that the immense heterogeneity in firm sizes does not matter for the aggregate level of innovation. After all, if R&D scales proportionately with firm size, then an economy consisting of 10 firms with $1 billion in sales each will spend as much on R&D as an economy consisting of one firm with $10 billion in sales. But as we’ll see, this conclusion would be erroneous.

Fact 2: Larger firms get fewer inventions per R&D dollar

A variety of different lines of evidence show that firms get fewer inventions per R&D dollar as they grow. Let’s start with patents (we’ll talk about non-patent evidence in a minute). The 1982 study by Bound, Cummins, Griliches, Hall and Jaffe mentioned earlier found that firms with larger R&D programs get fewer patents per dollar of R&D. Their result is summarized in Figure 3 (panel A) below; it shows an exponential decrease in the number of patents per R&D dollar as one moves up a firm’s log R&D expenditure. In a more recent and more comprehensive exploration of this relationship, Akcigit & Kerr (2018) use the universe of firms in the US matched to patents to document that patents per employee also decrease exponentially as a function of log employment (panel B). The relationships shown in the figures are very similar and suggest that bigger firms are getting fewer patents per productive unit—employment or R&D dollar.

Left: Patents per dollar of R&D as a function of total R&D expenditures (x-scale in log). From Bound, Cummins, Griliches, Hall & Jaffe (1982). Right: Patents per employee as a function of total employee count (x-scale in log). From Akcigit & Kerr (2018).

Patents are not synonymous with invention though. It could, for example, be that as firms grow larger they create just as many inventions per R&D dollar, but they become less likely to use patents to protect their work. But in fact, the opposite seems to be true. Mezzanotti and Simcoe (2022) report on the Business R&D and Innovation survey, which was conducted between 2008 and 2015 by the US Census Bureau and the National Science Foundation. This survey asked more than 40,000 US firms, from a nationally representative sample, about their use of intellectual property. They find larger firms are much more likely to rate patents as important.
For example, 69% of firms with more than $1bn in annual sales rate patents as somewhat or very important, compared to just 24% of firms with annual sales below $10mn. This relationship also holds when you compare responses across firms belonging to the same sector, in the same year. In other words, if we had a perfect measure of innovation that is not affected by selection the way patenting is, we would find an even stronger negative relationship between firm size and patents per R&D dollar or per employee. Small firms have more patents per employee or R&D dollar, in spite of being less likely to file patents than big firms.

Other empirical studies of innovation have relied on different measures of innovative output and have reached a similar conclusion. In a creative 2006 study of the financial service industry, Josh Lerner uses news articles from the Wall Street Journal to identify new products and services introduced by financial institutions. For example, if a story about a new security or the first online banking platform is written in the WSJ, Lerner counts it as an innovation and attributes it to a bank in the Compustat database. Consistent with papers using patent data, he finds that innovation intensity scales less than proportionately with firm size. (Note that Lerner measures size as the log of assets here rather than log sales, due to the nature of the industry studied.)

You can also look for the introduction of innovations in other places. In 1982, the US Small Business Administration created a database of new products, processes or services in 100 technology, engineering or trade journals, and linked these inventions to firms. In their 1987 paper using this data, Acs & Audretsch also find that larger firms have fewer innovations per employee and fewer innovations per dollar of sales than small firms. (Though they emphasize that this isn’t universal; in some industries, large firms produce more innovations per dollar than small firms - but this isn’t typical.)

Finally, Argente et al. (2023) use product-scanner data in the consumer goods sector over 2006-2015 to obtain details on every product sold in a large sample of grocery, drug, and general-merchandise stores, including the associated firm that markets the product. Here, they identify innovation as the introduction of a new product; as the figure below illustrates, bigger firms consistently introduce fewer new products, relative to the number of products they already sell (gray line below).

From Argente et al. (2023)

Of course, not all new products are equally innovative. To deal with this issue, Argente and coauthors use data on the attributes of each product. Since they know the price and sales of each product, they can run statistical models to estimate the dollar value consumers put on different product attributes. They can then “quality adjust” new product introductions, giving more weight to products that introduce new attributes, with attributes weighted more heavily if they are associated with higher prices (or sales). This more sophisticated approach yields the same result: when you adjust for quality, you still find that larger firms are less innovative (relative to their size) than small...
The Size of Firms and the Nature of Innovation
Discover the productivity wearable with Ben Wisbey Co-Founder of Pylot
Discover the productivity wearable with Ben Wisbey Co-Founder of Pylot
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Ben Wisbey is the co-founder of Pylot, the very first wearable to track your productivity, so you can know when you’re ready for deep work, shallow tasks, or need a break. In this interview, we talked about the fallacy of time management, how to quantify work quality, the key questions to achieve deep work, the science of cognitive performance, how to manage mental fatigue, and much more. Enjoy the read! Hi Ben, thanks for agreeing to this interview! Most people associate productivity with time management, but you think differently. Can you tell us more? This is a great question. The age-old quest to optimize our time and make the most of our day often leads us to neglect a crucial factor: not all hours are created equal. In fact, an hour of deep, focused work can yield far better results than multiple hours of grinding through tasks when we’re not at our best. After years of managing my time, I learnt that it was more important to manage my energy. If I could do my priority tasks when my mental energy was high, I was producing better work in less time.  As a performance scientist with a background in monitoring professional and Olympic athletes, I’ve always been passionate about helping people perform at their peak. This interest eventually merged with my obsession for productivity, and I embarked on an ambitious project to quantify energy management. What started as a few months of work quickly turned into a two-and-a-half-year research journey, during which the Pylot team and I monitored brain wave activity and physiological responses during work and other mentally challenging tasks like video gaming. Our research led us to quantify the mental aspects that impact performance. The most significant determinant of mental performance is flow. Flow, often referred to as “being in the zone,” is a state of relaxed concentration where you’re fully immersed in your work and not easily distracted. Scientifically speaking, this state is associated with specific brain wave frequencies. Another critical factor we identified was mental fatigue. When mental fatigue is high, it’s difficult to maintain a flow state, and achieving another quality work block within the same day becomes highly unlikely. While quantifying work quality is no easy task, our research demonstrated that serious gamers playing competitive online games had significantly higher win rates when they were in high flow states and managed to avoid fatigue. So, the next time you find yourself striving for maximum productivity, remember that managing your energy and tapping into your flow state may just be the key to unlocking your true potential. And this is what inspired you to build Pylot. After the acquisition of my previous business, I found myself working remotely for a large organization. My mornings were filled with back-to-back meetings, and when I finally sat down to tackle my “real work” in the afternoons, I hit a wall. I couldn’t seem to get into the groove, and I wondered if I was just being lazy or if my endless meetings had left me mentally drained. Even worse, I’d reach the end of each day feeling unsatisfied, questioning what I had truly accomplished. As a self-proclaimed productivity nerd, I decided to dive deep into the data I had been tracking for years on RescueTime. 
While apps like Rize and RescueTime are fantastic at providing insights into computer activity and behavior, they couldn’t quite answer why I struggled to engage in deep work. That’s when a few colleagues and I embarked on a mission to unravel this mystery by using sensors to measure what was actually going on, so we could answer three crucial questions: What time of day is best for my deep work? How long should these deep work sessions be? When do I need a break?

How does it work under the hood?

Pylot utilizes a lightweight and comfortable headband to gather EEG and HRV data. EEG tracks brain wave activity, while HRV measures variations in heart rate. By capturing this information, Pylot can assess mental fatigue and flow—two key elements of optimal cognitive performance. The collected data is then sent to the Pylot app, available on Mac and Windows devices, where it offers real-time feedback along with recommendations for engaging in deep work, tackling shallow tasks, or taking a break. As you continue to use the app, it learns about your unique patterns and can suggest the best times of day for your deep work sessions and their ideal duration. Although the concept may sound straightforward, developing the algorithms that power this process took us three years. Behind the scenes, there’s a lot of heavy lifting happening on the data side to transform raw sensor information into valuable feedback.

They say hardware is hard. Building the first wearable for productivity must have come with many challenges—what were some of the design challenges you had to resolve?

Developing hardware is no easy feat, especially when it comes to creating devices that accurately collect scientific data. Fortunately, our founding team brought invaluable experience from working on various wearable devices. We knew that our product had to be comfortable, lightweight, visually appealing, energy-efficient, and provide accurate data—a challenging combination to achieve. As a pioneering wearable in its field, we faced our fair share of trial and error. Some of our early prototypes were uncomfortable to wear and not aesthetically pleasing. We also had to ensure compatibility with glasses and headsets. After exploring multiple form factors and sensor placements, we’ve arrived at a design that is even better than we had hoped. The end product is incredibly lightweight and flexible to the point you forget you’re wearing it. Moreover, it delivers high-quality data, boasts a ten-hour battery life, and maintains compatibility with glasses and headsets. We couldn’t be more thrilled with the final product and are eager to share it with the world.

This is such a thoughtful approach to hardware design. So, what does the user experience look like?

The terms “productivity wearable,” “EEG,” and “HRV” might seem complex and scientific, but we’ve made sure that our product is user-friendly and straightforward. All you need to do is turn the band on and wear it. The apps will automatically record your work session and offer live feedback. There is an overlay, or widget, on Windows/Mac so you can see live feedback without interrupting your work. The app then provides a summary of each work session and each day, while also allowing you to see trends over time. The experience of wearing the band is similar to using headphones. You might be aware of them for a few minutes after putting them on, but soon after, you’ll forget they’re even there.

What kind of people do you think would most benefit from using Pylot?
Pylot is designed for individuals seeking to maximize the quality of their workday. Rather than focusing on doing more work, it emphasizes doing one’s best work. To accomplish this, users need some control over their work schedules, allowing them to adapt their work hours based on what suits them best. This flexibility may apply to remote workers or those with adjustable schedules, making it particularly relevant for founders, developers, designers, writers, and many other knowledge workers. We’ve been testing Pylot with some of these users and encountered intriguing results. One memorable example involves a founder who used Pylot to adjust their daily routine based on the app’s recommendations. During an unusually busy week, they pushed through a demanding day despite experiencing mental fatigue on Thursday. Come Friday, and their mental fatigue was high all day, making it difficult to perform at their best. However, they adapted their work plan according to Pylot’s feedback and shifted to a day focused on administrative and procedural tasks. What about you, how do you use Pylot? I use Pylot daily, being a productivity enthusiast myself. I was already aware that I worked best in the mornings, but Pylot has helped me refine my schedule further by identifying my optimal time for deep work as 7am to midday, working in 90-120 minute blocks. However, with these early starts comes a decline in the afternoon. My flow diminishes significantly after 2pm, so I focus on shallow tasks and try to schedule meetings and emails during this period. Although this structure generally works well for me, not every day is identical. So, I monitor my fatigue levels to incorporate more breaks when necessary. We have already begun examining the influence of sleep and exercise on cognitive performance. In the future, you might see a feature where the app integrates with data from devices like Apple Watch. How do you recommend someone get started? Right from the start, Pylot offers instant feedback on your flow and fatigue levels. However, its accuracy improves over time as it learns what’s normal for you. We recommend using Pylot for two weeks to receive suggestions on your ideal deep work hours and session lengths. By continuing to wear Pylot, you can monitor how these factors change over time and receive live recommendations on when to switch to shallow work or take a break. Since no two days are the same, this real-time feedback proves invaluable in adjusting your work schedule to achieve the best outcomes. And finally… What’s next for Pylot? Our mission is to help people design their day for success. We aim to assist users in making the most of their time by engaging in the right tasks at the right moments. This approach not only leads to improved work outcomes but also ensures there’s time for other important activities in life. This principle applies not only to work but also to any activity where cognitive performance is crucial, including s...
Discover the productivity wearable with Ben Wisbey Co-Founder of Pylot
The two sides of stress: distress and eustress
The two sides of stress: distress and eustress
Picture this: You’re at work with a big deadline coming up. Unfortunately, someone made a mistake, and part of the project needs to be completely redone in a rush. As the pressure mounts, you can feel the tension gripping your mind and body, causing your patience to wear thin. In those stressful situations, it’s not uncommon to experience automatic negative responses that arise from the complex interplay between our thoughts and emotions. We may find ourselves snapping at a colleague or retreating into quiet as we try to cope with the crushing weight of anxiety. So it’s not surprising that we tend to perceive stress as a negative phenomenon that should be minimized at all costs.

In fact, a common misconception is that stress is inherently bad. But stress is just your body and your mind’s response to external challenges. Depending on the particular stressors and your reaction, stress can be detrimental (distress) or beneficial (eustress). The prefix dis- in “distress” has the same root as words like disconnect, dissatisfaction, and disingenuous. In contrast, “eustress” literally means “good stress.” It was coined by endocrinologist Hans Selye in 1975 to describe a positive cognitive response to stress.

Distress versus eustress

Distress can have a terrible impact on productivity, creativity, and mental health. On the other hand, eustress has been found to enhance performance and overall well-being, especially in the workplace. When you experience eustress, you’re pushed to do your best. In short, distress results in anxiety; eustress is exciting. Distress leads to procrastination, while eustress is a source of motivation. Overall, distress has a negative impact on performance. On the other hand, eustress acts as a performance enhancer. Here is an overview of the key differences between distress and eustress: distress feels unpleasant and anxiety-inducing, saps motivation, and degrades performance, while eustress feels exciting, fuels motivation, and enhances performance. Experiences that lead to eustress are usually perceived as challenging but still within our coping abilities, leading to heightened focus and motivation. That delicate balance is where the secret to eustress lies.

A balancing game

You need just the right amount of pressure to unlock the benefits of eustress. This is known as the Yerkes–Dodson law, originally developed by psychologists Robert M. Yerkes and John Dillingham Dodson in 1908, which states that performance increases with mental or physiological arousal—but only up to a limit. But if you manage to strike that balance, eustress offers many benefits, especially for ambitious people who enjoy an interesting challenge. Some of the benefits of eustress include:

Flow. Researchers described flow as the “ultimate eustress experience—the epitome of eustress.” When in flow, we are focused on the challenge and fully present. We become so fully absorbed in what we are doing, we lose track of time and can effortlessly ignore external distractions.

Resilience. Because eustress is based on perception, cultivating eustress can help in reacting more positively to challenging situations, resulting in higher emotional agility. It can help us build better coping skills and boost confidence by reframing stressors as valuable learning opportunities.

Self-efficacy. Your judgment of how well you can carry out a required task or take on a specific role is a measure of your level of self-efficacy. Experiences of eustress allow you to accumulate evidence of your abilities and competence, and in turn, encourage you to explore more ambitious ideas.
The good news is: Though not all stress can be reframed as a positive experience, you can proactively manage many external stressors, so they result in productive eustress instead of paralyzing distress.

How to foster eustress

As eustress is a positive reaction to stress based on perception rather than objective stressors, the potential sources of eustress vary greatly between people. These are examples of stressors that are commonly perceived as positive:

Learning a new skill. Working hard to learn something new is, for many, a safe source of eustress, creating the right amount of challenge while staying in control of the learning experience.

Starting a new job. Because it’s a combination of using existing skills and learning new ones, while quickly forming relationships in a new environment, starting a new job can be challenging in the best ways, resulting in eustress. Similarly, receiving a promotion or moving teams can create good stress.

Going on a holiday. Traveling to a distant place with a different culture can create eustress by forcing us to leave our comfort zone. Although travel can bring about distress—canceled flights, stolen items—many people view it as a fulfilling challenge.

Starting a family. Whether getting married or having a child, starting a family can be a source of eustress by offering a novel challenge and many opportunities for personal growth.

Moving. Finally, moving houses implies leaving the comfort of a familiar place behind to start a new life. The process is a source of negative stress for many people but can lead to eustress because of its inherently adventurous nature.

There are many other potential sources of eustress, such as playing competitive sports, some challenging video games, participating in a tournament, or having a complex but constructive debate with someone. In order to find your own sources of eustress, the key is to experiment with positive stressors and to practice metacognitive strategies to reflect on their impact on your stress levels. A simple method to keep track of your stressors—whether they result in distress or eustress—is the Plus Minus Next method. If you only remember one thing: Not all stress is bad and it can be a healthy source of motivation as long as you find your own positive stressors.

The post The two sides of stress: distress and eustress appeared first on Ness Labs.
When Technology Goes Bad
When Technology Goes Bad
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. Innovation has, historically, been pretty good for humanity. Economists view long-run progress in material living standards as primarily resulting from improving technology, which, in turn, emerges from the processes of innovation. Material living standards aren’t everything, but I think you can make a pretty good case that they tend to enable human flourishing better than feasible alternatives (this post from Jason Crawford reflects my views pretty well). In general, the return on R&D has been very good, and most of the attention on this website is viewed through a lens of how to get more of it. But technology is just a tool, and tools can be used for good or evil purposes. So far, technology has skewed towards “good” rather than evil, but there are some reasons to worry things may differ in the future. Why is technology good for us, on average? I think technological progress has skewed good through most of history for a few reasons. First, invention takes work, and people don’t do work unless they expect to benefit. The primary ways you can benefit from invention are either directly, by using your new invention yourself, or indirectly, by trading the technology for something else. To benefit from trade, you need to find technologies that others want, and so generally people invent technologies they think will benefit people (themselves or others), rather than harm them. Second, invention is a lot of work, and that makes it harder to develop technology whose primary purpose is to harm others. Frontier technological and scientific research is conducted by ever larger teams of specialists, and overall pushing the scientific or technological envelope seems to be getting harder. The upshot of all this is that technological progress increasingly requires the cooperation of many highly skilled individuals. This makes it hard for people who want to invent technologies that harm others (even while benefitting themselves). While people who are trying to invent technologies to benefit mankind can openly seek collaborators and communicate what they are working on, those working on technologies to harm or oppress must do so clandestinely or be stopped. Third and finally, the technological capabilities of the people trying to stop bad technology from being developed grow with the march of technological progress. Think of surveillance technology in all its forms: wiretaps, satellite surveillance, wastewater monitoring for novel pathogens, and so on. Since it’s easier to develop technologies for beneficial use when you can be open about your work, that will tend to boost the powers of those empowered to represent the common interest. In a democracy, that process will tend to hand more powerful tools to the people trying to stop the development of harmful technologies. Now - these tendencies have never been strong enough to guarantee technology is always good. Far from it. Sometimes technologies have unappreciated negative effects: think carbon-emitting fossil fuels. Other times, large organizations successfully collaborate in secret to develop harmful technology: think military research. In other cases, authoritarian organizations use technological power to oppress. But on the whole, I think these biases have mitigated much of the worst that technology could do to us.
But I worry a new technology - artificial intelligence - risks upending these dynamics. Most stories about the risks of AI revolve around AIs developing goals that are not aligned with human flourishing; such a technology might have no hesitation creating technologies that hurt us. But I don’t think we even need to posit the existence of AIs with unaligned goals of their own to be a bit concerned. Simply imagine a smart, moderately wealthy, but highly disturbed individual teaming up with a large language model trained on the entire scientific corpus, working together to develop potent bioweapons. More generally, artificial intelligence could make frontier science and technology much easier, making it accessible to small groups, or even individuals without highly specialized skills. That would mean the historic skew of new science and technology being used for good rather than evil would be weakened.1 What does science and technology policy look like in a world where we can no longer assume that more innovation generally leads to more human flourishing? It’s hard to say too much about such an abstract question, but a number of economic growth models have grappled with this idea. Don’t Stop Till You Get Enough Jones (2016) and Jones (2023) both consider the question of the desirability of technological progress in a world where progress can sometimes get you killed. In each paper, Jones sets up a simple model where people enjoy two different things: having stuff and being alive. Throughout this post, you can think of “stuff” as meaning all the goods and services we produce for each other: socks and shoes, but also prestige television and poetry. So let’s assume we have a choice: innovate or not. If we innovate, we increase our pile of stuff by some constant proportion (for example, GDP per capita tends to go up by about 2% per year), but we face some small probability we invent something that kills us. What do we do? As Jones shows, it all depends on the tradeoff between stuff and being alive. As is common in economics, he assumes there is some kind of “all-things-considered” measure of human preferences called “utility,” which you can think of as comprising happiness, meaning, satisfaction, flourishing, etc. - all the stuff that ultimately makes life worth living. Most models of human decision-making assume that our utility increases by less-and-less as we get more-and-more stuff. If this effect is very strong, so that we very quickly get tired of having more stuff, then Jones (2016) shows we eventually hit a point where the innovation-safety tradeoff is no longer worth it. At some point we get rich enough that we choose to shut down growth, rather than risk losing everything we have on a little bit more. On the other hand, if the tendency for more stuff to increase utility by less-and-less is weak, then we may always choose to roll the dice for a little bit more. As a concrete illustration (not meant to be a forecast), Jones (2023) imagines a scenario where using artificial intelligence can increase GDP per capita growth from 2% per year to 10% per year, but with an annual 1% risk that it kills us all. Jones considers two different models of human preferences. In one of them, increasing our stuff by a given proportion (say, doubling it) always increases our utility by the same amount. If that is how humans balance the tradeoff between stuff and being alive, it implies we would actually take big gambles with our lives for more stuff.
Jones’ model implies we would let AI run for 40 years, which would increase our income more than 50-fold, but the AI would kill us all with 1/3 probability! On the other hand, he also considers a model where there is some maximum feasible utility for humans; with more-and-more stuff, we get closer-and-closer to this theoretical maximum, but can never quite reach it. That implies increasing our pile of stuff by a constant proportion increases utility by less and less. If that is how humans balance the tradeoff between having stuff and being alive, we’re much more cautious. Jones’ model implies that in this setting we would let AI operate for just 4-5 years. That would increase our income by about 50%, and the AI would kill us all with “just” 4% probability. But after our income grows by 50%, we would be in a position where a 10% increase in our stuff wouldn’t be worth a 1% chance that we lose it all. Different Kinds of Progress The common result is that, as we get sufficiently rich, we are increasingly willing to sacrifice economic growth in exchange for reduced risks to our lives. That’s a good place to start, but it’s a bit too blunt an instrument: we actually have more options available than merely “full steam ahead” and “stop!” A variety of papers - including Jones (2016) - take a more nuanced approach and imagine there are two kinds of technology. The first is as described above: it increases our stuff, but doesn’t help (and may hurt) our health. The second is a “safety” technology: it doesn’t increase our stuff, but it does increase our probability of survival. “Safety” technology is a big category. Plausible technologies in this category could include: life-saving medical technology, seatbelts and parachutes, renewable energy, carbon capture and removal technology, crimefighting technology, organizational innovations that reduce the prospects of inadvertent nuclear first strikes, AI alignment research, and many others. The common denominator is that safety technologies reduce dangers to us as individuals, or as a species, but generate less economic growth than normal technologies. In addition to the model discussed above, Jones (2016) builds a second model where scientists face a choice about what kind of technologies to work on. The model starts with a standard model of economic growth, where technological progress does not tend to increase your risk of dying (whew!). But we still do die in this model, and Jones assumes people can reduce their probability of dying by purchasing safety technologies. Scientists and inventors, in turn, can choose to work on “normal” technology that makes people richer, or safety technology, which makes them live longer. There’s a market for each. This gives you a result similar in spirit to the one discussed above: as people get richer, the tradeoff between stuff and survival starts to tilt increasingly towards survival. If peopl...
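As a rough sanity check on the arithmetic in Jones’ illustrative scenario above, here is a minimal sketch in Python. It is my own back-of-envelope compounding, not Jones’ actual model: 10% annual income growth and an independent 1% annual catastrophe risk, compounded over the two horizons mentioned in the text. Straight compounding gives roughly a 45-fold income gain and about a one-in-three cumulative risk over 40 years, versus roughly a 50% gain and about a 4% cumulative risk over 4 years; the precise figures quoted above come from Jones’ richer model, so small differences are expected.

```python
# Back-of-envelope check of the AI growth/risk scenario described above.
# A simple compounding sketch, not Jones' utility model; the growth rate,
# annual risk, and horizons come from the scenario in the text.

def compound_growth(rate: float, years: int) -> float:
    """Factor by which income grows at a constant annual rate."""
    return (1 + rate) ** years

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one catastrophe over the horizon,
    assuming an independent risk each year."""
    return 1 - (1 - annual_risk) ** years

for years in (4, 40):
    growth = compound_growth(0.10, years)   # 10% annual growth with AI
    risk = cumulative_risk(0.01, years)     # 1% annual existential risk
    print(f"{years:>2} years: income x{growth:.1f}, cumulative risk {risk:.0%}")

# Output (approximately):
#  4 years: income x1.5, cumulative risk 4%
# 40 years: income x45.3, cumulative risk 33%
```

The contrast between the two horizons is the point of Jones’ exercise: how long we would be willing to "let it run" depends almost entirely on how quickly extra stuff stops adding utility.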
Unlock your best work with Jim Kleban Head of AI at Supernormal
Unlock your best work with Jim Kleban Head of AI at Supernormal
FEATURED TOOL Welcome to this edition of our Tools for Thought series, where we talk to founders on a mission to help us think better and work smarter. Jim Kleban is the Head of AI at Supernormal, an AI-powered app that helps you create amazing meeting notes without lifting a finger, saving ten minutes every meeting. In this interview, we discussed the underappreciated value of taking notes, the importance of building memory over the knowledge contained in meeting discussions, the critical relationship between note-taking and decision-making, how AI will shape the future of work, and much more. Enjoy the read! Hi Jim, thanks for agreeing to this interview! Most people know the value of taking meeting notes, and yet in most cases, notes are sent around and never used again. Why is that?  Thanks for having me! I’m excited to share what we’re building at Supernormal and how I think these tools are going to change how we work. Supernormal automatically provides detailed meeting notes that you can tailor to the type of meeting you’re having. This frees people from the mental effort of having to write out notes so we can be fully present in our meetings. Meetings are a critical part of how work gets done, but how the world approaches meetings hasn’t really evolved much. Meetings are still conducted similarly to the pre-remote work era, and we may even have gotten sloppier. In many cases, meetings don’t have an agenda, nobody takes notes, and action items are forgotten. The lack of rigor is making meetings less productive, and workers are feeling it. A recent survey from Zippia found that organizations spend ~15% of their time on meetings, with surveys showing that 71% of those meetings are considered unproductive. When people do take notes for meetings, it is true that they are often sent and then effectively lost. I think this is due to a lack of structure for building an organizational memory for what has been discussed over time. How does Supernormal address those challenges?  The tools we’re building at Supernormal are aimed at making meetings a much more valuable use of time. We consider how we can help people before, during, and after their meetings. Ahead of time, we make it easy to add the Supernormal notetaker to meetings by syncing it with your calendar. It’s simple to turn the notetaker on or off for a meeting, either beforehand or while the meeting is happening. During the meeting, Supernormal will automatically transcribe the conversation and create meeting notes. The product works today with Google Meet, Zoom, and Microsoft Teams, so it covers your meetings whichever of those remote meeting platforms you use. The set of meeting notes people receive includes a short summary that we call “the gist”, a longer summary with all the details, and a list of action items. And for specific types of meetings like customer discovery calls, interviews, and business pitches, we provide custom notes that are tailored to what people most want to learn for that type of meeting. When the meeting ends, the transcript and notes can be automatically shared with meeting participants. They can also be viewed, edited, and shared from inside the Supernormal web app. Here is where Supernormal helps teams build memory over the knowledge contained in their meeting discussions. On Supernormal, we organize past meetings for easy reference and make it easy to find meetings by searching over transcripts and notes. We even help people make progress on the action items they’ve been assigned in their meetings.
Can you give us an example of how that would work, let’s say with a customer discovery call?  Sure! Let’s say you’re a product manager and you’re trying to validate your product idea. To conduct a customer discovery call using Supernormal, you would first identify potential customers who fit your target market. You then send the usual meeting invite to have them participate in a call and include the Supernormal notetaker. On the call, you would ask the customers a series of questions that help you better understand whether your product idea addresses their needs. With Supernormal in the meeting, you can stay fully focused on what the customer is telling you and not be distracted by the need to write notes. For this type of call, Supernormal will generate custom note sections based on who is speaking that summarize the customer’s needs and pain points. Afterward, anyone you share the meeting with can access the transcript and notes. You can highlight customer insights from this call to compare with other customer discovery calls, and you can easily share out the notes via email or message. People who weren’t present on the call can quickly learn by reading the notes and diving into the relevant parts of the transcript.  Overall, the Supernormal app helps you conduct customer discovery calls more efficiently and effectively by automatically providing real-time analysis and insights from the conversation, making notes easy to share, and centralizing notes in a single place. That sounds great. I guess a common problem with meeting notes is that we often just forget to take them.  Yes, and this is the first problem Supernormal is designed to solve. We automatically take detailed meeting notes for you so it’s no longer an annoying task or tradeoff.  I also want to mention that what sets Supernormal apart is that we have invested heavily in improving the Supernormal AI. The notes are designed to be accurate, concise, and not miss any of the important discussion points. And to improve on this, we have built user feedback and quality controls. The AI learns to provide better notes for people the more they use our tool. Unlike other transcription tools, Supernormal accurately summarizes the meeting so you don’t have to comb through a transcript. The transcript and notes work for meetings in languages other than English, too. And in some ways, because the AI is a neutral observer, the notes generated may capture or remind participants of important points or tones from the meeting that might otherwise have been missed. So, it’s practically impossible to forget about taking notes during meetings. What about sharing those meeting notes?  Sharing notes really is a key behavior. Notes can be much more than a record of what has been discussed; for instance, they are often a way teams formalize key decisions. Supernormal gives meeting participants the ability to take the output notes from the AI as a starting point and then refine them as they see fit, by editing them or applying custom templates to get notes for specific kinds of meetings. And I’ve mentioned we make it easy to automatically share with participants, copy the notes and send in an email, or just share a link to the meeting.  All of your meetings are securely stored and discoverable on Supernormal, so you never have to spend time searching docs or flipping through calendar invites to find them.
Supernormal also integrates with Slack, Hubspot, and Pipedrive so you can save and share meeting notes in the tools you already use.  What kind of people use Supernormal to capture and share meeting notes?  The world has shifted to remote or hybrid work since the pandemic, and even though we started building Supernormal before COVID-19, the changes in how we work have opened up the possibilities for tools like ours. People are also excited about what is possible with AI and the ChatGPT explosion, and they want to find tools like Supernormal where the AI helps but does not replace the human in our work. This was also personally very important to me as I considered the type of AI product I want to be contributing to the world in my own work. So the people who use Supernormal are often remote-work oriented. They feel they are gaining a superpower at work from the tool. Their teams often have important external meetings that not everyone attends, so the notes and transcripts are critically valuable.  As an example, there’s a product manager from a startup in the Pacific Northwest. Her team is working on a years-long project with multiple customer discovery calls that she can’t always attend. But she uses Supernormal to review each of those calls, and finds it helpful to get the insights from the notes and then is able to read the direct user quotes from the transcripts. For meetings within her team, she is using Supernormal as a way to make a record for the entire team to access. This streamlines team-wide communications so everyone always knows what’s going on and nothing happens behind closed doors.  What about you, how do you use Supernormal?  At Supernormal we’re pretty serious about meeting notes. I spent more than a decade of my career as a product manager, and most of my workday has been spent in meetings. I always wanted a tool to help make the pain of follow-ups and sending notes less toilsome. As our company has been growing, you can also imagine the number of meetings I have now as Head of AI is increasing.  We dogfood our own product at Supernormal and typically use it to capture and share all our meetings. One of the features we really love is tracking the action items assigned to each of us as tasks. It even feels fun to check out the new tasks that have automatically appeared for us to do after a day of meetings. These are helpful reminders of the things we said we’d do in our meetings, and I’d imagine we’d forget at least some of them otherwise. The other key part of how we use Supernormal is that it frees people on the AI team from feeling like they have to attend every meeting that gets scheduled. Everyone has access to every meeting, so they won’t lose context when they skip attending a meeting. They can focus on completing their engineering work instead. This has greatly reduced meeting bloat and opened up work time for our AI team. How do you recommend someone get started?  Getting started is...
Time is not a measure of productivity
Time is not a measure of productivity
It seems obvious that the amount of time you spend on a task is a terrible indicator of how productive you are. And yet, a lot of our work culture is fixated on time. We often feel pressure to prove our productivity by working long hours or responding to emails outside of regular work hours. Using principles from hourly work to define productivity in knowledge work has resulted in inefficient and often unhappy work conditions for many teams. Faster individuals are frustrated, useless meetings are filling time, and instead of taking mindful breaks, people stay sitting at their desks at home or in the office even when there is no meaningful work to do. The pandemic has forced many companies to switch to remote work, and many of them intend to keep it this way in the future. As working remotely is becoming the norm for many knowledge workers, our practices need to change. We need to abandon time as a measure of productivity. The dangers of passive face time In a famous study conducted by researchers from the University of California and the University of North Carolina, 39 corporate managers were asked about their perception of their employees. During the interviews with those managers, the researchers explored two topics in particular: Expected face time. Being seen at work during normal business hours. Extracurricular face time. Being seen at work outside of normal business hours. These are two forms of passive face time—“passive” because there is no real work interaction; the manager simply observes the amount of time their employee spends at work. What the team member is actually doing and how well they are doing it does not matter. The researchers found that these two forms of passive face time resulted in better perceptions from corporate managers. People who would spend more time at their desks or work during the weekends were seen as more “committed”, “trustworthy”, “dependable”, “hard-working” and “dedicated”. Here are some quotes from the interviews so you can judge for yourself: “I know I can depend on someone that I see all the time at their desk.” “This one guy, he’s in the room at every meeting. Lots of times, he doesn’t say anything, but he’s there on time, and people notice that. He definitely is seen as a hardworking and dependable guy.” “Arriving early and staying late in the office makes a good impression. I think of those workers as more dedicated than most.” “Working on the weekends makes a very good impression. It sends a signal that you’re contributing to your team and that you’re putting in that extra commitment to get the work done.” “If I see you there all the time, okay, good. You’re hard-working, a hard-working, dependable individual.” “I would bump into my supervisor at 7 o’clock in the evening. She knows I’m there working. In those cases, I get extra points just for being there late. I’m seen as having an extra level of commitment.” These comments were not surprising in 2010 when the study was conducted. But peeking over the shoulder of an employee to check whether they are working, bumping into a supervisor at 7 pm to get extra points, being perceived as hard-working just by sitting in front of your desk — these do not make sense anymore, especially in a distributed company where it’s physically impossible, except with some regrettably popular tracking software. However, cultural remnants from the industrial age mean that to this day, many managers still rely on presence — whether online or in-person — to measure performance. 
This is despite the fact that time is a terrible incentive for productive work: On one hand, someone who manages to finish their work faster may get penalized compared to a slower employee who will be perceived as more zealous. On the other hand, some people keep busy in order to project an image of productivity. Beyond time measurement Instead of the hours worked, we should focus on the results. Instead of passive face time, we should strive for mindful productivity. Whether you are a manager, an employee, a freelancer, or an entrepreneur, these five strategies can help you stop using time as a measure of productivity: Avoid unnecessary meetings. Always ask yourself: “What’s the goal of this meeting? Could the goal be achieved in a more efficient manner?” You will often realize that a meeting does not have a clear goal. Out of insecurity or habit, people organize meetings to publicly show that they are working—that they are “dependable” and “dedicated”. If the meeting doesn’t have a clear goal, ask for clarification or ask to cancel it. If the meeting has a clear goal, consider whether sending a memo around or having everyone share a quick update over email could achieve it without wasting everyone’s time. Define purposeful goals. Human beings like to keep busy. When we don’t have clearly defined goals, it’s easy to fill our time with ill-fitted tasks to maintain the illusion of productivity. For short-term goals based on predictable outcomes, you can use the SMART goals framework. For long-term personal growth goals which are more flexible, use the PACT framework instead, which stands for Purposeful, Actionable, Continuous, and Trackable. Having clearly defined goals will ensure the focus is on achieving these goals rather than passive face time. Reduce repetitive tasks. We waste a lot of time repeating the same tasks at work, which can keep us unnecessarily busy and fill up our time without progressing toward our goals. Review such tasks and consider whether you can automate, simplify, or outsource some of them. For instance, tools like Zapier can help you build workflows and connect all your apps together. Or you could hire someone to take care of repetitive tasks on one of the many freelancing platforms out there. Focus on the 20%. The 80/20 rule, also called the Pareto Principle after economist Vilfredo Pareto, states that 80% of consequences come from 20% of the causes. At work, 80% of your success will come from 20% of your efforts. Identify these key efforts, try to eliminate as much of the noise in the 80% as possible, and focus on the 20% that really matters. Be protective of your time. While passive face time encourages people to participate in meetings and sit at their desks longer, mindful time blocking ensures you have time to focus on the 20% that matters and achieve your goals. Whether you share your calendar with a team or work independently, add blocks to your calendar for important tasks. Just make sure not to go overboard, as time blocking starts losing its meaning when everything is blocked in your calendar! And, most importantly: if you finish a task ahead of a deadline, give yourself a pat on the back and take a break! You deserved it. Sitting in front of a desk should never be seen as a sign of hard work and commitment. Focusing on results rather than hours has always made sense. In today’s distributed world, it has become inevitable. Hopefully, managers will embrace the change. The post Time is not a measure of productivity appeared first on Ness Labs.
The neurochemicals of productivity and procrastination
The neurochemicals of productivity and procrastination
We all have goals. They can be big or small; professional or personal. But obstacles get in the way. External obligations such as social events, unforeseen additional work, and demanding customers can drain our energy, so there’s little energy left to focus on what really matters to us. If only that were the only issue. To make things worse, we’re also constantly fighting an internal battle against our brain, whose background mechanisms we’re unconscious of. You don’t feel anything every time a neuron fires, and you have little control over the activity inside your brain. But those processes have a huge impact on how you manage your goals and how it feels to work toward your goals. Understanding these mechanisms won’t magically allow you to achieve your goals, but it will help you be kinder to yourself when things don’t seem to go as planned, and you struggle to focus on your goals. Your three frenemies Three main neurochemicals have been identified in people experiencing a state of flow: dopamine, noradrenaline, and acetylcholine. As you’ll see, these are akin to little tricksters that can sometimes help you and other times work against you. Dopamine is a neurotransmitter that plays an important role in the reward system. Releasing dopamine is one of the ways your brain makes you feel good and encourages you to do more of whatever you’re doing. Research has found that behaviors such as sex, eating, and playing video games tend to increase dopamine levels in the central nervous system. When it comes to productivity, dopamine is a double-edged sword. It can increase or decrease your productivity depending on what exactly triggers the reward system. Let’s say you check how many words you wrote in the last hour, or finally get a new feature to work in your app. Boom, you get a hit of dopamine. But let’s say you get a notification on your phone and see someone liked your latest Tweet. Boom, you also get a hit of dopamine. In order to make the most of that nice feeling you’ll get from increased levels of dopamine, you need to ensure you trigger your reward system in a way that’s aligned with your goals. This means putting your phone away, focusing on the task at hand, and designing ways to reward yourself for a well-done job. We’ll look at practical strategies to achieve this later in this article, but first, let’s look at the two other neurochemicals involved in productivity and procrastination. The second neurochemical is noradrenaline, also known as norepinephrine in the United States. It’s a neurotransmitter that makes you feel “ready for action” — it’s involved in the fight-or-flight response and makes you more alert and vigilant. Again, there is a tricky balance to find with noradrenaline. The right amount of pressure can be beneficial in order to increase your productivity — this is why many procrastinators report performing better when a deadline is approaching. But if you keep on waiting until the last minute to complete your tasks, the resulting chronic stress can be damaging. Finally, acetylcholine is the third neurochemical of productivity and procrastination. It was the first neurotransmitter ever discovered and is abundant in the nervous system. Besides being involved in the autonomic nervous system — all of the involuntary and unconscious activity in your body, such as heart rate, digestion, or respiration — it also plays an important role in focus, learning, and memory. Studies found that increased acetylcholine levels have a positive impact on performance.
On the flip side, an acetylcholine deficiency often means that you’ll have trouble focusing your attention and remembering things, and damage to the cholinergic system — the system in the brain that produces acetylcholine — has been found to be associated with the memory deficits observed in Alzheimer’s disease. That’s a lot to remember, so how can you make the most of this knowledge in a practical way in order to achieve your goals without sacrificing your mental health? A practical neuroproductivity framework Dr. Friederike Fabritius created a handy framework to remember the three neurochemicals of productivity and procrastination based on the general areas of cognition they affect: fun, fear, and focus. Fun. That’s dopamine. As mentioned earlier, it’s a tricky one. It’s all about finding the right balance: having fun without getting distracted. The best strategy is to ensure there’s some reward in the process of working on your project. Sometimes, the reward is intrinsic: you genuinely enjoy what you’re working on. But sometimes, you need to work on something you don’t find as interesting. It’s a good idea in these cases to create extrinsic rewards you genuinely care about. For example, promise yourself to go see a movie you’re excited about after you’re done with the project. It also helps to design an environment that doesn’t include distracting rewards, for example, by leaving your phone in another room so you don’t see it every time someone likes your latest tweet. Fear. Living in constant fear is not good for you, but just the right amount of uncertainty will increase your levels of noradrenaline and, thus, your productivity. Instead of waiting until the last minute to start working on a project, create positive pressure by getting out of your comfort zone, for instance, by working on something new. Or, if you’re working on documentation or something tedious, tell the team that you will present your work to them at your next stand-up meeting. This will trick your mind into feeling just the right amount of positive pressure and help you avoid procrastination. Focus. Finally, make sure to give your brain everything it needs to increase your levels of acetylcholine and, thus, your focus. Some ways to increase your levels of acetylcholine include eating foods rich in choline — which is needed to synthesize acetylcholine — such as lean meats, fatty fish, milk, yogurt, kidney beans, green beans, peas, and broccoli. You can also gently exercise before working, such as going for a walk. But don’t overdo it: research suggests that lengthy exercise sessions, such as marathon training, reduce your acetylcholine levels. Combined, fun, fear, and focus will help you get in the flow. And if you really can’t seem to be productive, consider taking a break. Staying busy for the sake of staying busy can give you the illusion of productivity and lead to anxiety. Prolonged procrastination is not your enemy — it’s a signal sent by your brain that something is not quite working well. The post The neurochemicals of productivity and procrastination appeared first on Ness Labs.
Can taste beat peer review?
Can taste beat peer review?
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. Note: Have an idea for a research project about how to improve our scientific institutions? Consider applying for a grant of up to $10,000 from the Metascience Challenge on experiment.com, led by Paul Niehaus, Caleb Watney, and Heidi Williams. From their call for proposals: We're open to a broad set of proposals to improve science -- for example, experimental designs, surveys, qualitative interviews with scientists, pilot programs for new mechanisms, scientific talent development strategies, and other research outputs that may be relevant for scientific research funders. The deadline to apply is April 30. On to our regularly scheduled programming! Scientific peer review is widely used as a way to distribute scarce resources in academic science, whether those are scarce research dollars or scarce journal pages.1 Peer review is, on average, predictive of the eventual scientific impact of research proposals and journal articles, though not super strongly. In some sense, that’s quite unsurprising; most of our measures of scientific impact are, to some degree, about how the scientific community perceives the merit of your work: do they want to let it into a journal? Do they want to cite it? It’s not surprising that polling a few people from a given community is mildly predictive of that community’s views. At the same time, peer review has several potential shortcomings: multiple people reading and commenting on the same document costs more than having just one person do it; current peer review practices provide little incentive to do a great job at peer review; and peer review may lead to biases against riskier proposals. One alternative is to empower individuals to make decisions about how to allocate scientific resources. Indeed, we do this with journal editors and grant makers, though generally in consultation with peer review. Under what conditions might we expect individuals empowered to exercise independent judgement to outperform peer review? To begin, while peer review does seem to add value, it doesn’t seem to add a ton of value; at the NIH, top-scoring proposals aren’t that much better than average, in terms of their eventual probability of leading to a hit (see this for more discussion). Maybe individuals selected for their scientific taste can do better, in the same way some people seem to have an unusual knack for forecasting. Second, peer reviewers are only really accountable for their recommendations insofar as they affect their professional reputations. And often they are anonymous, except to a journal editor or program manager. That doesn’t lead to strong incentives to try and really pin down the likely scientific contribution of a proposal or article. To the extent it is possible to make better judgments by exerting more effort, we might expect better decision-making from people who have more of their professional reputation on the line, such as editors and grant-makers. Third, the very process of peer review may lead to risk aversion. Individual judgment, relying on a different process, may be able to avoid these pitfalls, at least if taking risks is aligned with professional incentives. Alternatively, it could be that a tolerance for risk is a rare trait in individuals, so that most peer reviewers are risk averse.
If so, a grant-maker or journal that wants to encourage risk could do so by seeking out (rare) risk-loving individuals, and putting them in decision-making roles. Lastly, another feature of peer review is that most proposals or papers are evaluated independently of each other. But it may make sense for a grant-maker or journal to adopt a broader, portfolio-based strategy for selecting science, sometimes elevating projects with lower scores if they fit into a broader strategy. For example, maybe a grant-maker would want to support in parallel a variety of distinct approaches to a problem, to maximize the chances at least one will succeed. Or maybe they will want to fund mutually synergistic scientific projects. We have a bit of evidence that empowered individual decision-makers can indeed offer some of these advantages (often in consultation with peer review). Picking Winners Before Research To start, Wagner and Alexander (2013) is an evaluation of the NSF’s Small Grants for Exploratory Research programme. This program, which ran from 1990 to 2006, allowed NSF programme managers to bypass peer review and award small short-term grants (up to $200,000 over 2 years).2 Proposals were short (just a few pages), made in consultation with the programme manager (but not other external review), and processed fast. The idea was to provide a way for programme managers to fund risky and speculative projects that might not have made it through normal peer review. Over its 16 years, the SGER (or “sugar”) program disbursed $284mn via nearly 5,000 awards. Wagner and Alexander argue the SGER program was a big success. By the time of their study, about two thirds of SGER recipients had used their results to apply for larger grant funding from the conventional NSF programs, and of those that applied, 80% were successful (at least, among those who had received a decision). They also specifically identify a number of “spectacular” successes, where SGER provided seed funding for highly transformative research (judged as such from a survey of SGER awardees and programme managers, coupled with citation analysis). Indeed, Wagner and Alexander’s main critique of the programme is that it was insufficiently used. Up to 5% of agency funds could be allocated to the program, but a 2001 study found only 0.6% of the budget actually was. Wagner and Alexander also argue that, by their criteria, around 10% of funded projects were associated with transformational research, whereas a 2007 report by the NSF suggests research should be transformational about 3% of the time. That suggests perhaps program managers were not taking enough risks with the program. Moreover, in a survey of awardees, 25% said an ‘extremely important’ reason for pursuing an SGER grant was that their proposed research idea would be seen as either too high-risk, too novel, too controversial, or too opposed to the status quo for a peer review panel. That’s a large fraction, but it’s not a majority (the paper doesn’t report the share who rate these factors as important but not extremely important though). Again, maybe the high-risk programme is not taking enough risks! In general though, the SGER programme’s experience seems to support the idea that individual decision-makers can do a decent job supporting less conventional research. Goldstein and Kearney (2018) is another look at how well discretion compares to peer review, this time in the context of the Advanced Research Projects Agency - Energy (ARPA-E).
ARPA-E does not function like a traditional scientific grant-maker, where most of the money is handed out to scientists who independently propose projects for broadly defined research priorities. Instead, ARPA-E is composed of program managers who are goal-oriented, seeking to fund research projects in the service of overcoming specific technological challenges. Proposals are solicited and scored by peer reviewers along several criteria, on a five-point scale. But program managers are very autonomous and do not simply defer to peer review; instead, they decide what to fund in terms of how proposals fit into their overall vision. Indeed, in interviews conducted by Goldstein and Kearney, program managers report that they explicitly think of their funded proposals as constituting a portfolio, and will often fund diverse projects (to better ensure at least one approach succeeds), rather than merely the highest scoring proposals. From Goldstein and Kearney (2018) Goldstein and Kearney have data on 1,216 proposals made up through the end of 2015. They want to see what kinds of projects program managers select, and in particular, how they use their peer review feedback. Overall, they find proposals with higher average peer review scores are more likely to get funded, but the effects are pretty weak, explaining about 13% of the variation in what gets funded. The figure above shows the average peer review scores for 74 different proposals to the “Batteries for Electrical Energy Storage in Transportation” program: filled-in circles were funded. As you can see, program managers picked many projects outside the top. From Goldstein and Kearney (2018) What do ARPA-E program managers look at, besides the average peer review score? Goldstein and Kearney argue that they are very open to proposals with highly divergent scores, so long as at least one of the peer review reports is very good. Above, we have the same proposals to the Batteries program, but instead of ordering them by their average peer review score, now we’re ordering them by their maximum peer review score. Now we’re seeing more proposals getting funded that are clustered around the highest score. This is true beyond the battery program: across all 1,216 project proposals, for a given average score, the probability of being funded is higher if the proposal receives a wider range of peer review scores. Goldstein and Kearney also find proposals are more likely to be funded if they are described as “creative” by peer reviewers, even after taking into account the average peer review score. ARPA-E was first funded in 2009, and this study took place in 2018, using proposals made up through 2015. So there hasn’t been a ton of time to assess how well the program has worked. But Goldstein and Kearney do an initial analysis to see how well projects turn out when program managers use their discretion to override peer review. To do this, they divid...
Think and learn visually with Dom Zijlstra founder of Traverse
Think and learn visually with Dom Zijlstra founder of Traverse
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Dom Zijlstra is the founder of Traverse, the only tool with mind mapping, note-taking and spaced repetition flashcards in one place. Traverse uses science-based features to help you deeply grasp complex topics so you can remember them for life. In this interview, we discussed how cognitive science can help us learn better, the different types of effective mind maps for learning, using spaced repetition as a powerful learning technique, the best way to create, connect, and consolidate knowledge, and much more. Enjoy the read! Hi Dom, thanks for agreeing to this interview! Combining mind mapping, note-taking and spaced repetition flashcards in one place is an ambitious endeavor. What inspired you to start building Traverse? Thank you for this interview opportunity! I’m thrilled to share my story and the inspiration behind Traverse, a science-based learning tool that combines mind mapping, note-taking, and spaced repetition flashcards in one place. It all started around six years ago, when I faced a learning challenge bigger than I could handle. I always thought of myself as a pretty smart guy, having studied physics and worked as a spacecraft engineer. But when I met my Chinese wife and tried to learn Mandarin, I realized that my learning method wasn’t up to the test. At the time, I had just completed my studies in Germany, having learned German and Portuguese. I had traveled to Sweden on an exchange program and later moved to Brazil for a while. I had always been excited about new challenges and learning new things. But learning Mandarin turned out to be a whole new level of difficulty. I spent countless hours using different tools and ineffective methods to learn the language, wasting precious time and energy. At some point, I realized that if I wanted to succeed, I needed a method based on how humans actually learn. This led me to dive into learning science and put together the method that later became Traverse, a research-based learning tool that can help anyone master complex topics. Using Traverse myself, I was finally able to get fluent in Mandarin, live in China, and chat with my wife’s family and friends. The app has since helped tens of thousands of learners, and I’m grateful for the opportunity to build the best learning tool for complex topics together with our users. When I look back at my life, there’s a thread that connects my experiences and the inspiration that led to Traverse. As a child, I was fascinated by books, nature, games, movies, and programming. Throughout my life, I’ve enjoyed learning new things and adapting to new environments. Even before going to college myself, I taught college students math and engineering, and developed a programming course for them. I loved thinking about how to teach and help others learn. Today, as the founder of Traverse, I aim to be kind, helpful, knowledgeable, and inspiring. I want to be a go-to person for those who seek to learn and grow. The possibility of financial freedom and inspiring others to join me on this mission has been a driving force behind Traverse. My vision is to be at the forefront of a revolution in education, helping people from all over the world become “superlearners” and create deep connections with others that bring happiness and fulfillment. 
In conclusion, my journey in creating Traverse has been fueled by my own experiences, challenges, and the desire to help others learn and connect. The app’s foundation is built on cognitive science, my passion for learning, and the experiences I’ve gathered throughout my life. Traverse is not just an app, it’s a manifestation of my life’s mission to empower people to learn anything, anywhere, and share the joy of learning with others. How would you describe Traverse to someone who has never used it? Traverse can be described as a powerful fusion of Notion, Miro, and Anki, but with a focused approach on deep learning, understanding, and memory. It is not a to-do list tool like Notion, nor is it a personal knowledge management tool. Traverse is a learning tool, specifically designed to enhance your brain. It is especially useful for those determined to learn something defined. Traverse is an all-in-one app that combines the best features of mind mapping, note-taking, and spaced repetition flashcards, offering an integrated learning experience. Unlike other tools, it is not designed for merely gathering thoughts from books and articles. Traverse is built on a solid foundation of cognitive science and is tailored for those who are serious about learning and mastering complex topics. By integrating the best of flashcard apps like Anki, note-taking apps like Notion, and mind mapping apps like Miro, Traverse provides a comprehensive and efficient learning experience. It offers user-friendly spaced repetition flashcards, note-taking features, and a visually organized mind map that allows learners to express their thoughts and knowledge in a vibrant and colorful manner. Let’s start with mind mapping. How does it work in Traverse? Mind mapping is a visual learning technique that helps individuals organize and represent information in a structured and interconnected manner. Traverse is a mind mapping application that goes beyond the traditional tree-like structures offered by many other tools, providing a comprehensive set of features for deep learning of complex topics. Traverse employs a science-backed approach called GRINDE, which has been borrowed from Dr Justin Sung, and stands for Grouped, Reflective, Interconnected, Non-verbal, Directional, and Emphasized. This method guides users in creating effective mind maps for learning: Grouped: Traverse encourages users to organize information into several boxes, forming larger concepts that offer more flexibility, similar to tree branches that can be rearranged. Reflective: The app promotes a reflection of what’s going on inside the user’s mind, as opposed to linear note-taking, which doesn’t effectively represent one’s thought process. Interconnected: Traverse allows users to form a big picture by connecting related ideas and concepts. Non-verbal: The app encourages the use of arrows, sketches, and other visual elements instead of text-heavy notes, fostering creativity and reducing time spent on note-taking. Directional: Traverse helps users give order and flow to their mind maps, creating cause-and-effect relationships and a logical framework for deeper learning. Emphasized: The app supports the use of thicker lines and larger fonts for main points, reducing cognitive load and making it easier to identify important connections at a glance. Traverse features an infinite canvas where notes can be grouped, linked, and freely arranged. Users can create customized links and use freehand drawing to express ideas visually. 
The app avoids auto-linking to prevent messy and overwhelming mind maps, promoting deliberate connections instead. With Traverse, users can see the big picture, stay organized, dive deeper without losing context, and experience the joy of learning and discovery. The app incorporates key principles such as visual encoding, cognitive load optimization, spaced revisions, and spatial memory to enhance the learning process and promote long-term retention. Traverse also allows you to take notes. Why should users take their notes in Traverse? Traverse offers a unique and powerful approach to note-taking by integrating notes within visually organized mind maps. This combination effectively bridges the gap between traditional note-taking and mind mapping, allowing users to take advantage of the benefits of both techniques. Using Traverse for note-taking provides several advantages: Visual organization: Notes in Traverse live within a mind map, similar to sketchnoting, but with the ability to add more information, sources, and references. This visual organization makes it easier to understand and remember the relationships between various concepts. Markdown-based and powerful embeds: Like Notion, Traverse supports markdown formatting, which makes it easy to create well-structured and visually appealing notes. Additionally, it offers powerful embeds such as YouTube videos, LaTeX math equations, and code blocks with syntax highlighting, enriching the learning experience. Visual Zettelkasten: Traverse functions as a visual Zettelkasten, a note-taking system popularized by German sociologist Niklas Luhmann. By incorporating bidirectional links and visually organizing notes, Traverse enables users to connect ideas, fostering a deeper understanding and generating new insights. All knowledge in one place: With Traverse, users can store all their notes and mind maps in a single, unified platform. This eliminates the need to switch between multiple applications and allows users to manage and consolidate their knowledge more efficiently. Bridging mind maps and retrieval practice: Traverse combines the power of mind maps with the benefits of retrieval practice, a proven learning technique that involves actively recalling information from memory. By integrating notes within mind maps, Traverse supports both the organization of knowledge and the active retrieval of information, leading to better comprehension and long-term retention. In summary, Traverse provides a versatile and effective note-taking solution by combining the best aspects of mind mapping and traditional note-taking. By using Traverse for note-taking, users can enjoy a visually organized learning experience, a powerful feature set, and the benefits of having all their knowledge in one place. Something exciting is that you can quickly create flashcards from any note. Can you tell us more about spaced-repetition in Traverse? Spaced repetition is an incredibly powerful learning technique when implemented correctly, and Trav...
Growth Loops: From linear growth to circular growth
Growth Loops: From linear growth to circular growth
It’s common to see progress as linear. When thinking about success, many people imagine a ladder or stairs going up. To progress, you need to climb each step one by one and get closer to the top. But that’s not the only model you can apply to visualize personal growth. Linear model: A then B then C then D. Circular model: A feeds B feeds C, which in turn feeds A. In a linear model of personal growth, you can only go up or down. By design, there are people below and above you. This model can be falsely reassuring, as it seems to offer a clear path to success. It’s used by many organisations as a way to manage their employees’ careers. In a circular model of growth, nobody is more advanced than anyone else. There is no “up” or “down.” People are at a particular point of their own, unique growth loop. Everyone only competes against themselves. The circular model can be more daunting, as there is no predefined direction — you need to design your own personal growth process — but it can also be infinitely more rewarding. Designing growth loops The circular model of personal growth is not too dissimilar from the concept of a circular economy, where the goal is to make the most of resources and to create self-sustaining loops. It forces people to learn how to learn by designing feedback mechanisms that will allow them to continuously improve. Here is an example of the circular model of personal growth applied to learning: learn something new; write about it and share it; connect with new people… and learn something new from them. As you can see, there is no clear “winning” end goal. When using the circular model of growth, you need to fall in love with the process. Success becomes a by-product of your learning journey, and it’s all about celebrating the small wins rather than chasing a big final victory. From single loops to double loops Growth loops are not intrinsically good if we keep approaching the same problem with no variation of method and without ever questioning the overarching goal. This is called “single-loop learning.” A better approach is “double-loop learning”, which is easily understood using the thermostat analogy from Teaching Smart People How To Learn: “A thermostat that automatically turns on the heat whenever the temperature in a room drops below 68°F is a good example of single-loop learning. A thermostat that could ask, “why am I set to 68°F?” and then explore whether or not some other temperature might more economically achieve the goal of heating the room would be engaged in double-loop learning.” Chris Argyris, Business Theorist and Professor at Harvard Business School. Unlike single-loop learning, which is simple and static, double-loop learning is more complex and dynamic, taking into account external factors and the changes in your environment, and adjusting the mental models on which a decision depends. Double-loop learning is a model that encourages people and organisations to continuously challenge their assumptions and goals instead of blindly repeating the same loop. While the idea seems simple, it can be hard to implement double-loop learning because of a natural need for control, a fear of failure, or an overall resistance to change. Mental models are hard to change, which is why double-loop learning is more challenging to implement at first, but also more rewarding.
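To make the thermostat analogy concrete, here is a minimal illustrative sketch in Python. It is my own toy example, not from the article or from Argyris; the class names, temperatures, and the energy-price threshold are all made up for illustration. The single-loop controller only corrects deviations from a fixed setpoint, while the double-loop controller also questions whether the setpoint itself still serves the underlying goal.

```python
# Toy illustration of single-loop vs double-loop learning, using the
# thermostat analogy quoted above. All names and numbers are illustrative.

class SingleLoopThermostat:
    """Corrects deviations from a fixed setpoint; never questions the setpoint."""

    def __init__(self, setpoint_f: float = 68.0):
        self.setpoint_f = setpoint_f

    def act(self, room_temp_f: float) -> str:
        # First loop: compare the observation to the target and correct the error.
        return "heat on" if room_temp_f < self.setpoint_f else "heat off"


class DoubleLoopThermostat(SingleLoopThermostat):
    """Also revisits the setpoint when the underlying goal
    (comfort at a reasonable cost) would be better served by changing it."""

    def review_setpoint(self, occupants_home: bool, energy_price: float) -> None:
        # Second loop: question the goal behind the setting, not just the error.
        if not occupants_home:
            self.setpoint_f = 60.0   # nobody benefits from 68°F right now
        elif energy_price > 0.30:    # illustrative price threshold in $/kWh
            self.setpoint_f = 66.0   # slightly cooler still meets the goal


thermostat = DoubleLoopThermostat()
thermostat.review_setpoint(occupants_home=False, energy_price=0.35)
print(thermostat.setpoint_f, thermostat.act(room_temp_f=64.0))  # 60.0 heat off
```

In growth-loop terms, the first loop executes the current process, while the second loop periodically asks whether the process, and the goal behind it, are still the right ones.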
If you’re struggling to get out of linear learning or single loop learning, try to understand the true nature of your resistance and to implement double loop learning in a small area of your life where you already feel quite comfortable. The post Growth Loops: From linear growth to circular growth appeared first on Ness Labs.