Digital Gems

Inverting the Internet with Davey Morse, Founder of Plexus
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us make the most of our minds. Davey Morse is the founder of Plexus, a company building a radically inclusive online community, connecting people not through mutual friends but through mutual thoughts. Davey grew up in NYC, studied Symbolic Systems at Williams College, dropped out, coded on Apple’s Screen Time team, built a self-organizing notebook for students, and raised venture funding to start Plexus. In this interview, we talked about traditional social media, why we need spaces for people to authentically express their thoughts online, the importance of exploring raw and unresolved thoughts that weigh on your mind, the need for a shift from attention-based to intention-based interactions, and much more. Enjoy the read!

Hi Davey, thanks for agreeing to this interview! First, why do you think it’s so important to have a space for unresolved thoughts?

Hey, thanks for having me! And for making space for my unresolved thoughts, too. The question of whether there’s space for unresolved thoughts online is really a question of whether there’s space for most people. Most of us neither think in Tweet-sized bites nor find it natural to share thoughts with everyone we know. Yet, on the internet, posting usually means exactly that: sharing Tweet-sized bites with everyone you know. So most people don’t contribute: just 1% of people create the vast majority of content.

At Plexus, we believe there’s much more to each of us than we know. But when most of us just consume online, our digital identities flatten. So as our lives trend digital, we trend toward depression. We also believe people can be extraordinary at solving problems together. But when we don’t have space to explore unresolved things online, problems persist. As species-level problems loom (e.g., a burning climate, rampant epidemics, uncontrollable AGI), we approach existential risk. We need an online space where we can express what we’re actually thinking and think together.

Is that what inspired you to create Plexus?

Plexus was born out of a connection—between tech I was working on in college and the lives of my friends. The tech was a self-organizing notebook. It was a tool that helped students find connections across their notes, automatically. (Like Roam Research, but easier.) I had friends who were dealing with very particular things: rare mental struggles, illnesses, relationship tensions, and interests. Most of them felt alone with those things. After way too much time, I saw them each get connected with the person across their extended community who knew exactly what they were going through. It’s a movie moment, a feeling I think everyone’s had: when you realize that whatever you’re dealing with… You’re not alone with it.

So here I was, applying connection-making tech to supercharge writers’ notebooks—even getting some traction—but realizing: notes weren’t the right application. A connection between two notes could save a writer time; a connection between people could alter their lives. I started obsessing: what would it look like to help people connect, not just through mutual friends, but through mutual thoughts? And for those thoughts to be the messy, unresolved things that are actually on people’s minds, rather than polished thoughts that sound catchy but don’t represent what we’re dealing with? That question led me to Plexus.
I dropped out of school, started a public benefit corporation, raised venture funding, and recruited Micah Corning-Myers as our Founding Engineer—another psychologist’s son and hacker with an intense love for people. We set out to enable broad participation online.

That’s an ambitious mission. How does Plexus work?

Plexus is a space for thinking together. Plexus is for all the thoughts you’d never share on Twitter—the raw and unresolved things that are actually on each of our minds. Testers call the experience “Walking.” We’ve seen our early community “Walk” to explore tense relationships, figure out what they should say in workplace conversations, reconcile disparate interests, and develop seeds of new ideas.

You start a Walk by writing about something that feels interesting or off. Then, immediately, Plexus surfaces the community’s most related thoughts. If any thought strikes a chord in you, you can “step through it” and retrace the steps of the thinkers who came before you. It’s a process of alternating between writing fresh thoughts and pulling others’ in. Walks end after you ride other folks’ wisdom into new terrain and find resolution around the unresolved thing you started with.

The experience is somewhat hard to describe. Some of our testers have come up with a few interesting analogies, like “constructing my own feed,” “having a conversation with ChatGPT, but where ChatGPT is a community,” and “thinking with other people’s thoughts.”

You made the choice to have no followers, no broadcasting, no public profiles, and no likes. Can you tell us more?

Followers, broadcasting, public profiles, and likes all represent off-putting real-world interactions. Consider “Followers.” I can only think of two places where people are called “followers”: cults and social networks. Following is not a healthy relationship in the real world. It’s not healthy online either. Plexus has no “following” relationship. You’re never spammed with thoughts just because you happen to know the authors. You see other people’s thoughts only when they’re relevant to your current thinking. (We’re experimenting with a new kind of relationship in Plexus, called Walking Partners. These are people you meet through Plexus—people who are thinking along similar lines.)

Now, when it comes to broadcasting… My followers on Twitter include basketball teammates from growing up, comedy friends from college, and AI friends from work. I never have a thought I want to share with all of them. But that’s what Tweeting is. It’s standing on a stage in front of everyone you know and shouting through a megaphone. So, most of us don’t Tweet. There’s no good online place to find connection around the things we think. But often, I really want to connect with people who get the thing that’s on my mind, whatever that thing is. And so, in Plexus, your thoughts get shared only with those people. They get routed not through the community’s social graph, but through the community’s thinking graph.

On most social networks, your grandparents, your colleagues, and your recent hook-up can all see literally everything you’ve ever posted. Most of us feel like shells of ourselves online because there’s no way to feel comfortable being anything more. In Plexus, we’re experimenting with selective profiles, where you unlock different thoughts from a given person as you think about overlapping things.
It’s meant to resemble the way a real relationship deepens through exploration and time, where you learn more about each other as you explore.

Finally, liking: If you’re with someone, you mention something that’s on your mind, and they just say “I like that” without following up… you might ask yourself whether they heard you at all. Plexus has a new lightweight interaction, called a “Walkthrough”. When someone values your thought enough to use it in their thought process, when they “walk through” your thought, you get notified. It feels better to receive a Walkthrough than a like.

Those sound like better ways to foster collective thinking.

We had never seen people think in a social space online before seeing our early community Walk in Plexus. The closest phenomenon to Walking is maybe what happens in therapy, or in front of a whiteboard brainstorming, or sitting around a table imagining new possibilities with close friends. But, in contrast to Walking, those situations require that only one person talk at a time, that you know the people you want to think with, and that you do so synchronously.

We organize a monthly walk in NYC, where a couple dozen friends gather in the same physical room with their laptops and Walk together virtually. It’s pretty quiet—the only thing you hear is the collective clacking of keyboards and Julia Jacklin playing quietly in the background. But, make no mistake: an order of magnitude more interactions are occurring than if everyone were talking with each other; thoughts are shooting between everyone’s computers constantly as they wrestle with things others have wrestled with before. For each individual, Plexus turns their laptop into a kind of magic room: a room where the right people, thinking about the right things, come in exactly when those things are on your mind.

Some rituals have evolved around Walking. We have synchronous Walks every Sunday. We send out Daily Walking Prompts too (“questions we’ve never been asked before”) sourced from the community. But most Walks just happen asynchronously, when people realize there’s something that feels off or interesting and they want to explore it with the power of the community’s thinking at their fingertips.

As the Internet has become increasingly public and geared towards attention-based metrics, many people have become hesitant to share their authentic thoughts online. How do you address this with Plexus?

The internet has a couple of issues. On the surface, its interfaces make it uncomfortable to share what you’re actually thinking. But, a level deeper, there’s the funding structure: an advertising-based internet economy that prizes people’s attention above all. To make it possible for people to have real space for expressing themselves online, you need a new social interface, but more deeply, you need fundamentally new economics: you need a funding model that prioritizes people’s intention over their attention. I’ve shared about Plexus’ new interface. We’ve invented a more intimate sharing mechanism, where the things you think are only distributed to people who have similar ...
Mindful context switching: multitasking for humans
So many things to do, so little time. When you juggle work, personal projects, and the hope of having any sort of social life, managing your time can feel like an impossible endeavor. There are many tips out there—the most common one being to focus on the most important task first—but few address the systemic complexities of managing your time and energy when you have a very long list of important and competing tasks as well as other people to take into account.

Option 1: You focus on a single task and ignore all distractions and interruptions. You get a lot done, but your responsiveness suffers. People who are counting on you are stuck because they need your input.

Option 2: You make yourself as available as possible to other people and are extremely responsive when they need your input. They make faster progress with their work, but your own output suffers.

Both options are less than ideal. As a knowledge worker, you need to ensure you complete your important tasks while being responsive enough to support your collaborators in their work. The challenge is in finding that delicate balance between optimizing your own output and sharing your input to enable your collaborators to progress. So what do we do? We try to multitask.

A mythical activity

In computing, context switching refers to the process of storing the current state of one task so that it can be paused and another task resumed. It’s basically what allows computers to multitask (fun fact: the word “multitask” was invented by IBM in 1965 to describe a computer capability; it was only later that we started using it for humans). In the same way that context switching comes with a performance cost for computers, multitasking has its cost for humans too. Research shows that constantly switching context between different tasks has a terrible effect on attention. We’re basically less focused and less effective when trying to do several things at the same time. Psychiatrist Edward M. Hallowell even described multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one.”

But very few people can afford to stay focused on one single task until it’s done. Emails need to be answered, customers need to be helped. So how can you avoid the terrible impact multitasking can have on your performance?

The mindful way to multitask

What I call mindful context switching is a strategic approach to task management that emphasizes the importance of staying focused on a single task while maintaining an acceptable level of responsiveness. It involves defining your necessary level of responsiveness based on external demands, breaking tasks into achievable chunks that fit within these response intervals, and scheduling dedicated time slots for them. It was inspired by the work of Brian Christian and Tom Griffiths, authors of Algorithms to Live By, who wrote: “You should try to stay on a single task as long as possible without decreasing your responsiveness below a minimum acceptable limit. Decide how responsive you need to be—and then, if you want to get things done, be no more responsive than that.”

The aim of mindful context switching is to boost your productivity and improve the quality of your output, all while maintaining healthy relationships at work and outside of work. Ready to give it a try?
It essentially boils down to five simple steps:

1. Define your responsiveness: If you have high-value customers who expect to hear back from you in less than an hour, that’s how responsive you need to be. If you sell a SaaS product that’s not business-critical, maybe responding to emails once a day is fine. There is no hard-and-fast rule here, but you need to figure out what level of responsiveness will work for your business.

2. Design manageable chunks of work: Now that you know how responsive you need to be, break down your tasks into manageable chunks that can be done between these response times. Each chunk needs to be realistic, with a beginning and an end. For example, if you need to write an article, one chunk could be to create the outline.

3. Schedule dedicated time: That’s it for this one. Just put these chunks into your calendar.

4. Communicate clearly: Let everyone you work with know that you won’t be able to respond during these deep work time slots. There are several ways to go about this. If you have a shared calendar, that’s fairly easy. When I was working at Google, I also saw people put it in their email signature or inside an email autoresponder if their response time was longer. Although it may feel weird at first, it’s usually best to overcommunicate.

5. Revisit regularly: Don’t simply duplicate your time slots from one week to another. Reflect on what worked and what didn’t. Were the chunks actually manageable? Was your responsiveness appropriate? You can even proactively ask your teammates for feedback. Play with different configurations until you find the one that works for you.

That’s it! The first time around will take a bit of work, but mindful context switching will help you do better work, faster, and without alienating the people around you.
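To make the mechanics concrete, here is a minimal sketch in Python of the scheduling logic behind steps 1 through 3. It is an illustration under assumptions, not a real tool: the responsiveness window, the inbox buffer, and the task names are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical parameters: adjust to your own situation (step 1).
RESPONSIVENESS_MINUTES = 60  # collaborators can wait up to an hour
CHECK_INBOX_MINUTES = 10     # buffer reserved for responding between chunks

@dataclass
class Chunk:
    task: str
    description: str
    minutes: int

def focus_window() -> int:
    """Deep-work minutes available between two response points."""
    return RESPONSIVENESS_MINUTES - CHECK_INBOX_MINUTES

def plan_day(chunks: list[Chunk]) -> None:
    """Flag chunks that fit the window (step 3) or need splitting (step 2)."""
    window = focus_window()
    for chunk in chunks:
        if chunk.minutes > window:
            print(f"SPLIT  {chunk.task} / {chunk.description}: "
                  f"{chunk.minutes} min exceeds the {window} min window")
        else:
            print(f"FOCUS  {chunk.task} / {chunk.description}: "
                  f"{chunk.minutes} min, then {CHECK_INBOX_MINUTES} min to respond")

plan_day([
    Chunk("Article", "create the outline", 45),
    Chunk("Article", "draft the introduction", 40),
    Chunk("Q3 report", "full first draft", 120),  # too big: break it down further
])
```

The only real logic here is the invariant: no focus block may outlast your responsiveness window minus the time you need to actually respond.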
Geography and What Gets Researched
This post was jointly written by me and Caroline Fry, assistant professor at the University of Hawai’i at Manoa! Learn more about my collaboration policy here.

How do academic researchers decide what to work on? Part of it comes down to what you judge to be important and valuable, and that can come from exposure to problems in your local community. For example, one of us (Matt) did a PhD in Iowa, and ended up writing a paper on the innovation impact of ethanol-style policies (ethanol is a big business in Iowa). One of us (Caroline) was leaving Sierra Leone after two years there, just as the Ebola epidemic was starting. She became interested in understanding why science capacity is so low in some countries and not others, and what that means for the development of drugs and vaccines to combat local problems. (Indeed, we’ll talk about two of the papers that emerged from that research program in just a minute.)

Brief Pause for Some Announcements

The Institute for Replication is looking for researchers interested in replicating economics and political science articles. Research using non-public data (for example, Bell et al. 2019, discussed below) is a formidable barrier to reproducibility and replicability, so they are offering up to 5,000 USD and coauthorship on a meta-paper combining hundreds of replications. A list of eligible studies is available here, with payment info. Please contact instituteforreplication@gmail.com for more detail and indicate which study you would like to replicate. They are interested in 3 types of replications: (i) using new data, (ii) robustness checks, and (iii) recoding from scratch.

Open Philanthropy’s Innovation Policy program is currently soliciting pre-proposals from individuals for financial support to write living literature reviews about policy-relevant topic areas. Interested individuals should have a PhD related to their proposed area and should contact matt.clancy@openphilanthropy.org for more information.

This article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Back to Geography and What Gets Researched!

Testing the relationship between location and research choice

Both of us made research decisions that were, in part, influenced by exposure to local problems. Are we atypical, or is this path from exposure to research choice a common one? The role of exposure to local problems in determining research choice is difficult to test. People might locate themselves in places precisely because they are interested in the problems in those places. The ideal way to test this would be to randomly assign researchers to different locations and see if they work on local problems that they are exposed to. However, randomly assigning researchers usually isn’t particularly feasible. Alternatively, we could randomly “assign” problems to different locations and see if local researchers begin working on those problems after exposure. One candidate for a problem that all but randomly arises in some locations but not others is a novel disease outbreak. So one way to assess how strong the link is from local problems to local research is to see how scientists respond to local disease outbreaks.
Fry (2022) takes this strategy and evaluates the impact of the 2014 West African Ebola epidemic on the publication output of endemic-country scientists: did scientists working in areas hit harder by Ebola begin to disproportionately work on it? To see, Fry starts with a dataset of 57 endemic-country biomedical scientists (those affiliated with institutions in Sierra Leone, Guinea, and Liberia, the three hardest-hit countries, at the time of the epidemic). She then matches these endemic-country scientists to 532 control scientists who are from non-endemic countries in West or Central Africa, but who are at similar points in their careers, work in similar areas, publish at similar rates, have similar rates of international collaboration, and reside in countries with similar GDP per capita. She pulls out the publication record for each sample scientist for the four years before and six years after the epidemic from the Elsevier Scopus publication database, and creates counts of annual publications. Finally, she separates these counts into Ebola and non-Ebola publications through a keyword search of the title, abstract, and keywords of the publications.

Fry compares the changes in publication output of endemic-country scientists to that of the control scientists, adjusting for persistent differences between individual scientists, typical career age trends, and variation in publication trends over time for all scientists. As illustrated in the figure below, prior to 2014 none of the scientists in her sample really focused on Ebola. Beginning in 2014, endemic-country scientists experience a large and fairly sustained increase in their output of Ebola-related publications, as compared to non-endemic country scientists. That implies exposure to a new problem in a researcher’s location can shift their attention towards that problem. (It could be about something besides exposure too – we’ll talk about that later.)

[Figure: Ebola-related publication output of endemic-country scientists relative to controls, before and after 2014. From Fry (2022)]

Location and research focus are correlated

We noted above that our ideal experiment would randomly allocate scientists to different locations. While we may not be able to do that, scientists do change locations of their own accord, and insofar as local problems drive research choice, we might expect to see similar patterns when they do. Fry (2023) tests exactly this. The working paper builds a dataset of 32,113 biomedical scientists affiliated with an African institution between 2000 and 2020, their publication output in different disease areas (by extracting keywords from the titles and abstracts of their publications), and uses the affiliation listed in these publications to infer their country affiliation in each year. She then compares the research choices of these African scientists (proxied by the number of publications on each disease) with the disease burden in their country of residence. The idea is to compare the disease focus of mobile researchers before and after their move to that of matched control researchers who don’t migrate. She finds, indeed, that researchers are more likely to publish papers on diseases that are more prevalent in their host country after they move there. This trend is particularly salient for researchers moving into Africa from outside the continent. And note, this is relative to matched scientists who did not move, but who prior to the move were publishing at similar rates, on the same diseases, as the scientists who move.

We can see similar dynamics beyond the specific context of neglected tropical diseases.
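Before turning to that evidence, it may help to see the bare-bones shape of Fry’s design in code. The following is a rough sketch, not her actual pipeline: the toy table and the keyword list are invented, and her regression (with scientist fixed effects, career-age trends, and year effects) is stripped down to a simple difference-in-differences of group means.

```python
import pandas as pd

# Toy data: one row per publication, with an endemic-country flag.
pubs = pd.DataFrame({
    "scientist": ["a", "a", "b", "b"],
    "endemic":   [True, True, False, False],
    "year":      [2012, 2015, 2012, 2015],
    "text": [
        "malaria vector control study",
        "ebola virus disease surveillance after the outbreak",
        "malaria prevalence survey",
        "tuberculosis drug resistance",
    ],
})

# Step 1: classify publications as Ebola-related via keyword search
# (hypothetical keyword list; Fry searches titles, abstracts, and keywords).
KEYWORDS = ["ebola", "evd"]
pubs["ebola"] = pubs["text"].str.lower().str.contains("|".join(KEYWORDS))

# Step 2: count Ebola-related publications per scientist per year.
counts = (pubs.groupby(["scientist", "endemic", "year"])["ebola"]
              .sum().reset_index(name="ebola_pubs"))
counts["post"] = counts["year"] >= 2014  # the epidemic begins

# Step 3: difference-in-differences of group means:
# (endemic post - endemic pre) - (control post - control pre).
m = counts.groupby(["endemic", "post"])["ebola_pubs"].mean()
did = (m[True, True] - m[True, False]) - (m[False, True] - m[False, False])
print(f"DiD estimate: {did:+.2f} Ebola publications per scientist-year")
```

On real data, the same three steps would feed a count-data regression rather than a comparison of raw means.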
Moscona and Sastry (2022) provide some additional data from global agriculture, where there is substantial international variation in crop pests and pathogens. Moscona and Sastry search for the names of specific pests and pathogens in the titles, abstracts, and descriptions of agricultural patents across the world (using a dataset on international crop pests and pathogens from the Centre for Agriculture and Bioscience International). For example, there might be a patent for a pesticide to control a specific kind of pest, or a patent for a gene that confers resistance to some kind of pathogen. Since inventors list their country of residence on patents, Moscona and Sastry can see if inventors disproportionately invent technologies that mitigate pests and pathogens present in their country of residence.

That seems to be the case. In the figure below, they show that for any given crop pest or pathogen (which they call a CPP), the share of patents by inventors in the same country where those pests and pathogens are found is much higher than the share of patents by inventors from other countries. Moscona and Sastry also statistically estimate the relationship between patents on a given pest or pathogen by a country’s inventors and the presence of those pests/pathogens in that country, holding country and pest/pathogen differences fixed. That analysis also finds local presence is a strong predictor of local patenting related to a given pest or pathogen.

[Figure: patenting on a given crop pest or pathogen by inventors in countries where it is present versus absent. From Moscona and Sastry (2022)]

Why would location affect research choice?

Taking this cluster of papers as providing at least preliminary evidence that location influences research choice, the next question is: why? We’ve suggested it could be due to researchers being exposed to local problems, and that’s certainly one likely channel. It would be consistent, for example, with research finding that women scientists are more likely to work on issues that disproportionately affect women (suggesting that different researchers find different problems more salient and important to investigate). But a researcher’s location could influence their choice of topics in a number of other ways too. Researchers around the world might be equally interested in a topic, but local researchers could have an advantage in studying a particular topic because of better access to local data, for example, samples of viruses, pests, pathogens, or infected people. It may also be that local funders of research, rather than researchers themselves, are more likely to know and care about local problems. (That said, at least in the case of the 2014 Ebola epidemic, Fry 2022 finds no correlation between domestic funding for Ebola research and the shift towards it.)

Beyond these direct effects of location on research choice, one secondary effect could be social contagion from other researchers: even if researchers are not initially motivated to study local problems, they may want to collaborate locally, and if local collaborators are more likely to be working on local problems, they are more likely to begin working on the topic too. We do have some evidence that res...
Stop looking for The One: The Inverted Pyramid of Life
“What do you want to be when you grow up?” Adults often ask this question when chatting with a kid. Maybe it’s because the answer is often endearing (an astronaut!) or surprising (a YouTuber!), or because it’s a way to connect through a topic that speaks to us—work. We keep doing this to each other as adults too. “What do you do for a living?” and “Where do you work?” are some of the most common conversation starters when meeting someone for the first time.

When you’re a kid, the world is full of possibilities. Nothing seems to be impossible. No question or topic seems too trivial to wonder about. It’s a wonderful exploratory phase. You may want to try a different sport every week. You have a new best friend every month. You’re into board games and then realize that painting is more your thing. For now. So why do we later insist on this fabricated idea of having one calling in life?

Go forth and specialize

Often, as soon as you start showing a sustained interest in a specific area, adults push you to practice and improve. To make it your thing. It comes from a good place, of course, but it stems from the idea that the more “defined” you are as a person, the better. Our education system works in a similar way. We are expected to specialize, going from a generalist curriculum covering everything from arts to maths and history to graduating with a degree in one specific area. Then, at work, we home in on what sets us apart and create an elevator pitch—a short description that gets our value proposition across in one key point or two. In friendship too: research shows that the older you get, the fewer friends you have.

Growing up is like trying to squeeze through a gradually shrinking funnel, making yourself smaller and smaller until you can describe yourself with as few words as possible. We become more focused in our interests, our work, and even our friendships. In the words of Rhiannon Lucy Cosslett, a journalist at The Guardian: “Part of growing up is accepting all those things you’ll never be, but which perhaps, in another system or universe, you could have been.” But does it have to be the case?

Inverting the pyramid of life

In the years since I founded Ness Labs, I’ve had countless conversations with talented, intelligent people who told me they felt lost. Either because they didn’t find joy in their day job anymore, because a project they had poured their heart and soul into didn’t work out, or because the next logical steps in their career were not particularly exciting. For most of them, it seemed hard to find alternative options because, after years of hard work and smart choices, they were sitting at the tip of the pyramid.

Here is how the pyramid of life normally works: As a child, you explored. As a student, you specialized. Now, as an adult, you can easily define who you are to yourself and other people. This is the path I have followed for a long time. This is the narrowing path most people will follow. Not because that’s what they want but because that’s what is expected of them. For the rare few with a true calling, research suggests that it may work just fine. But what about the others? The same research shows that searching for a calling leaves us confused and uncomfortable. Now, you probably see where I am going with this: Why should we look for our one true calling in the first place? Why not invert the pyramid?

Here is what the inverted pyramid of life looks like: As a child, we are full of potential. As a student, we can explore our affinities.
As an adult, we open up a world of opportunities. In this paradigm, the potential you have as a child is just the beginning—the tip of a cone of creativity that widens as you grow up. Because you’re optimizing for opportunities and not trying to define yourself through specific expertise, you can keep expanding your playground all your life.

The inverted pyramid of life applies not only to studies and work, but also to friendships. Children have neighborhood friends, school friends, friends from a sports team or an art club. As adults, we tend to only have a couple of friends outside of work. But we can significantly expand our circles and choose new friends consciously. What if you had a friend who loves hiking, another who enjoys nerding out about technology and tools, and another who is always excited to try new foods together? What if you had friends all over the world, who you know you may only meet in years to come, if ever, but who share your interests?

This ability to identify yourself across multiple domains and roles, which researchers call “self-complexity”, has been found to support emotional resilience by reducing the impact of failure or setbacks in any single domain. You may lose your job but still be a great friend. Your startup may fail, but you may run your first marathon with your partner. You may be rejected from your dream school but win a poetry prize. The self-complexity that arises when we invert our pyramid of life also encourages personal growth and self-discovery, as you can explore and evolve across various aspects of your identity, which means a richer, fuller life.

When you stop trying to nail down your narrative and stop focusing only on the most obvious relationships, life becomes a giant sandbox where we can learn anything, grow in any direction, and connect with anyone. Maybe then, instead of asking, “So, what do you do for a living?” we’ll start asking: “So, what makes you feel alive these days?”
Ness Labs Best Books of July 2023
At Ness Labs, we believe in the power of ideas and the profound impact of continuously feeding our minds with thoughtful content. Each month, we meticulously curate a selection of books that truly stand out in an otherwise overwhelming ocean of titles. This series aims to highlight the work that can serve as a compass to navigate life and work, so we can collectively learn, evolve, and thrive. This is your guide to discovering the most insightful, inspiring, and transformative books on mindful productivity, creative growth, holistic ambition, and developing a healthier relationship with work.

The Good Enough Job

Simone Stolzoff’s The Good Enough Job offers a compelling critique of the prevailing culture that places our work and professional ambitions at the center of our identities. Through insightful reporting and interviews with individuals across diverse professions, Stolzoff lays bare the impacts of intertwining our sense of self with our jobs and the cost it exacts on our well-being and even professional success. The book prompts us to question the status quo, challenging the societal expectation of work as a calling, a dream to be chased relentlessly. For those striving to find a healthier relationship with work and ambition, The Good Enough Job provides a refreshing perspective. By exposing the myths that have chained us to our work desks and that underscore the overvaluation of our labor, Stolzoff inspires us to redefine what it means for a job to be good enough. Learn more

The Order of Time

Time has bemused us since the dawn of consciousness. With his unique combination of scientific insight, philosophical wisdom, and artistic flair, physicist Carlo Rovelli takes us on a journey to demystify the enigma of time. He guides us from Einstein to loop quantum gravity, all the while challenging and reshaping our intuitive understanding of time’s very structure and compelling us to confront the startling realities of our universe, where time flows at varied speeds in different places. With his help, we understand that the distinctions between past, future, and present are far less rigid than we perceive. Rovelli’s work is not just an intellectual feast; it’s also a call to introspection. For those obsessively striving to master time management, this book serves as a reminder to reconsider our relationship with time. It urges us to reflect on the interconnectedness of our selfhood and our perception of time. With The Order of Time, Rovelli nudges us to view time not as a foe to be tamed but as an intrinsic part of our existence to be understood and appreciated. Learn more

Hidden Genius

The book Hidden Genius by Polina Marinova Pompliano is a treasure trove of insights from some of the world’s most intriguing individuals. After five years of studying these high performers through her work at The Profile, Pompliano offers readers a unique opportunity to understand the mental frameworks these individuals use to navigate complex problems, fuel their creativity, and perform exceptionally under pressure. Far from simple tricks or hacks, these frameworks offer profound shifts in perspective that can redefine one’s worldview. This book can be an invaluable resource to enhance your thinking skills or seek inspiration during trying times. The great thing about Polina’s book is that it goes beyond sharing successful people’s stories: it also provides a mental toolkit that you can use to tackle complex problems, navigate relationships, and foster creativity and resilience in the face of uncertainty.
Learn more

Saving Time

Saving Time by Jenny Odell is a riveting investigation into our relationship with time, compelling us to question the societal structures that commodify it and push us towards relentless efficiency. Odell argues that the societal clock we live by was designed more for profit than for people, turning even our leisure into quantifiable, transactional moments. Her book highlights how our distorted perception of time is intricately tied to enduring the climate, social, and mental health crises. Yet, Odell’s book is not a despairing read; it’s a beacon of hope, presenting us with alternative ways to experience time. By saving time from its commodification, Odell suggests that time, in its most authentic and diverse forms, may also save us, offering a profound source of meaning beyond the constraints of the workplace or the dictates of a profit-oriented society. In short, her book is a thoughtful rebellion against reality as we know it. Learn more

The Pathless Path

The Pathless Path by Paul Millerd takes readers on a deeply personal journey of self-discovery and personal growth. From his beginnings as a small-town Connecticut kid to reaching what he thought at the time was the pinnacle of success at a prestigious consulting firm, Paul had it all by conventional standards. Yet, he chose to walk away, setting off on his life’s “real work”: identifying what truly mattered to him and daringly constructing a life around those values. This book is not a how-to manual filled with life hacks. Rather, The Pathless Path is an intimate account of Paul’s transition from a life focused on professional advancement to one centered on work that genuinely matters. This book should be an essential companion for those contemplating a departure from their current jobs, embarking on a new path, navigating the uncertainties of an unconventional trajectory, or seeking alternative ways to understand work in our rapidly evolving world. Learn more

Other books to explore this month:

Exhalation by Ted Chiang (this is fiction but relevant to the future of life and work)
The Art & Business Of Ghostwriting by Nicolas Cole
How We Learn by Stanislas Dehaene

Do you have any books to recommend for the Ness Labs Best Books series? Please let us know via the contact form. We welcome self-recommendations.
Turning Fear of Failure into Increments of Curiosity
When I was younger, I badly wanted to live in Japan. Japan is a country with very strict immigration laws, but my university had an exchange program where you could go spend a semester and study in another country. There was only one problem: the Japanese university they had a partnership with was one of the most selective in the country. I remember thinking: “There is no way I’ll get accepted.” I told my mom about my doubts. “It’s not your decision to make,” she said. And, as so often, she was right. We constantly limit our options by deciding for others. All I had to do was apply, and it then became the university’s job to accept my application or not.

You have probably seen this pattern countless times in yourself and others. It’s far easier not to fail when you haven’t tried. It’s far easier to not be wrong when you’re not putting yourself out there. But it’s also much harder to grow as a human being when we avoid getting out of our comfort zone. If this fear of failure is so bad for our personal and professional growth, why is it so common?

We all want to be loved

Fear of failure starts in early childhood. We are social animals and feel the need to be accepted by others, which begins with the acceptance and love of our parents. In a study looking at the relationship between young athletes and their parents, researchers found a correlation between the parents’ high expectations for achievement and the children’s fear of failure. The more the parents showed a negative reaction to what they perceived as a failure from their kid, the more the kid would fear the consequences of “failing.”

In some people, this can turn into atychiphobia, an irrational and paralyzing fear of failure, often accompanied by an intense feeling of panic or anxiety, and physical symptoms such as difficulty breathing, an unusually fast heart rate, and sweating. For most people, though, fear of failure manifests itself in a much more subtle way, mainly self-doubt that prevents us from exploring uncertain paths: We put off doing things because we’re unsure how they will turn out. We avoid situations where we may have to try something new in front of other people. We avoid doing things we know will improve our lives because we don’t have all the necessary skills. We give ourselves the illusion of growth by reading, researching, watching videos… Anything but doing the thing and risking being judged by others.

But the good news is that nobody is hoping for you to fail. Most people you know would be happy to see you succeed, and the ones who don’t know you don’t care. So how can you shift your perception and overcome your fear of failure?

Your perception of possible

When you start reading a novel, you rarely expect to finish it in one go. Instead, you will probably read a few chapters, then a few more, until you’re done with the book. Strangely, we’re not so pragmatic when it comes to personal goals. It’s common to look at a long-term goal and never get started because it seems too far out of reach. But we can reshape our perception of what’s possible by breaking our journey into smaller, more achievable chunks. Achievable, in this case, does not mean something where you are certain of succeeding, but rather something that you can put to the test in the short term, without being able to use any excuse to put it off. Let’s say you have a fear of public speaking and use the excuse that, in any case, nobody has ever invited you to speak at a conference.
A small, achievable experiment would be to apply to five local meetups to give a talk. While speaking in public may sound terrifying, filling out an online form is perfectly doable. Similarly, you may be scared of being judged for the quality of your writing. While writing a book is a daunting task that is easy to hide behind (“I’d love to write a book, but I don’t have the time”), writing a blog post is much more manageable.

Fail like a scientist

If you see life as a giant experiment where your goal is to explore as much as you can to obtain answers to your questions, failure becomes an investment to get closer to these answers. In the words of Seth Godin: “The cost of being wrong is less than the cost of doing nothing.” Scientists often repeat experiments thousands of times to get a conclusive answer. And more often than not, the answer they get is that their initial hypothesis was wrong. Not performing the experiment would have allowed them to stay in a cozy limbo of being not wrong, but then we wouldn’t have any science. This is why approaching failure like a scientist is so powerful. By making decisions that will let you learn something new, you are guaranteed to be successful—where success is learning, evolving, and growing as a human being. Failing becomes a way to cultivate aliveness.

Increments of curiosity

Another way to approach your fear of failure is to think like a kid. Children tend to experiment just for the sake of it: What will happen if I press this button? How does it feel to touch this thing? Reconnecting with your inner child is a great way to overcome your fear of failure. For example: What will happen if I publish this post? How does it feel to speak my mind? Instead of imagining all the ways you may fail, turn your doubts into questions. Maybe nothing good will happen, but a child would not take the answer for granted. Start with something small, then move on to another iteration—a bigger growth loop. With time, your mind will become increasingly comfortable with trying new things and constantly expanding your horizons.

Practically, here is how you can start applying this approach of deliberate experimentation right now:

1. Pick something you’ve been putting off because of your fear of failure. Is it public speaking? Starting a blog? Producing a podcast? Launching your first product? Write it down.

2. Define one small experiment you can design to explore this fear. It should be actionable. For example, apply to a few meetups to give a talk, produce one episode of a podcast, or write an article as a Google Doc and share it with a few friends. It should be simple enough that you can do it in a few hours at most.

3. Do it! Don’t plan anything. Don’t research the best way to go about it. Don’t announce it on Twitter. Just do it.

4. Reflect on what happened. Any negative reactions? What about your emotions? What did you learn? Write all of these thoughts down. It’s a great way to practice metacognition.

5. Rinse and repeat. Keep defining incremental steps in the form of experiments that fall outside your comfort zone but are not scary to the point of being paralyzing. Again, avoid overthinking it beforehand. Just do it, and reflect only after you have performed the experiment.

You may feel some anxiety or discomfort along the way, but addressing your fears and trying new things you care about is the best way to avoid another feeling that’s much harder to manage: regret.
How to Impede Technological Progress
“Everything that’s happening is coordinated by someone behind the scenes with one goal: to completely ruin scientific research.” – Da Shi, in The Three-Body Problem by Liu Cixin

Most of the time, we think of innovation policy as a problem of how to accelerate desirable forms of technological progress. Broadly speaking, economists tend to lump innovation policy options into two categories: push and pull policies. Push policies try to reduce the cost of conducting research, often by funding or subsidizing research. Pull policies try to increase the rewards of doing research, for example by offering patent protection or placing advance orders. These have been extensively studied, and while they’re not silver bullets, I think we have a good evidence base that they can be effective in accelerating particular streams of technology.

But there are other times when we may wish to actively slow technological progress. The AI pause letter is a recent example, but less controversial examples abound. A lot of energy policy acts as a brake on the rate of technological advance in conventional fossil fuel innovation. Geopolitical rivals often seek to impede the advance of rivals’ military technology. Today I want to look at policy levers that actively slow technological advance, sometimes (but not always) as an explicit goal. I think we can broadly group these policies into two categories, analogously to push and pull policies:

Reverse push (drag?): Policies that raise the costs of conducting research. Examples we’ll look at include restrictions on federal R&D funding for stem cell research, and increased requirements for making sure chemical research is conducted safely.

Reverse pull (barrier?): Policies that reduce the profits of certain kinds of innovation. We’ll look (briefly) at carbon taxes, competition policy, liability, and bans on commercializing research.

The fact that conventional push and pull policies appear to work should lead us to believe that their reverses probably also work; and indeed, that’s what most studies seem to find. But there are some exceptions, as we’ll see.

Brief Pause for Some Announcements

If you’re a fan of what I’m doing here at New Things Under the Sun, and want to write something yourself, you may be interested in the following:

Interested in collaborating with me on a post? Click here for details.

The Roots of Progress Blog-Building Intensive is a new 8-week (free!) program for aspiring progress writers to start or grow a blog. Learn more or apply here.

Open Philanthropy’s Innovation Policy program is currently soliciting pre-proposals from individuals for financial support to write living literature reviews about policy-relevant topic areas. Interested individuals should have a PhD related to their proposed area and should contact matt.clancy@openphilanthropy.org for more information.

Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Back to the Article

Reverse Push Policies Sort of Working

Let’s start with two studies of policies that have the effect of making it more expensive (in terms of time or money) to do certain kinds of research. Both these studies are going to proceed by comparing certain fields of science that are impacted by a new policy to arguably similar fields that are not impacted by the policy.
By seeing how the fields change relative to each other both before and after the new policy, we can infer the policy’s impact.

Let’s start with US restrictions on public funding for research involving human embryonic stem cells. The basic context is that in 1998, there was a scientific breakthrough that made it much easier to work with human embryonic stem cells. While this was immediately recognized as an important breakthrough for basic and applied research, a lot of people did not want this kind of research to proceed, at least if it was going to result in the termination (or murder, depending on your point of view) of human embryos. A few years later, George W. Bush (who was sympathetic to this view) won a closely fought US presidential election, and in August 2001, a new policy was announced that prohibited federal research funding for research on new cell lines. Research reliant on existing cell lines was still eligible for funding, but since most of the existing cell lines were not valuable for developing new therapies, this restriction was more significant than it might naively seem. No restrictions were placed on private, state, or local funding of human embryonic stem cell research, but anyone who received funds for this kind of work would need to establish a physically and organizationally separated lab to receive federal funding for permissible research on existing lines.

To see how this policy change affected subsequent research, Furman, Murray, and Stern (2012) identify a core set of papers about human embryonic stem cell research and RNAi, another breakthrough from the same year, also originating in the US, that was unaffected by the policy but was perceived to be of similar scientific import. They then look at how citations to those core papers evolve over time, with the idea that a citation to one of these core papers is a (noisy) indication that someone is working on the topic. Because foreign scientists are unaffected by US policy, they also divide these citations into those coming from papers with US researchers and those without. They estimate a statistical model predicting how many US and foreign citations a core paper in either topic receives, in each year, as a function of its characteristics.

A key finding is illustrated in the following figure, which tracks the percentage change in citations from US-authored articles to human embryonic stem cell research, as compared to a baseline (which includes RNAi papers, and citations from foreign-authored articles). Prior to 2001, citations by US authors to papers on human embryonic stem cells were about 80% of baseline, but the error bars were wide enough that we can’t rule out no difference from baseline. Beginning in 2001 though (when the policy was announced), US citations to these papers dropped by a pretty noticeable amount: from roughly 80% of baseline to 40%.

[Figure: how citations from US authors to human embryonic stem cell papers fare, compared to a baseline. From Furman, Murray, and Stern (2012)]

Note, though: just three years later, in 2004, things may have been back to their pre-2001 levels. But the restrictions on federal research weren’t relaxed in 2004. So what’s going on? We’ll return to this later. For now, let’s turn to another study that shows reverse push policies (of a sort) can exert a detectable influence on basic research. This time, we’ll look at a policy whose goal was not to reduce the amount of research, but instead to simply make sure it was done in a safer manner.
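Before looking at that second study, here is the triple-comparison logic behind the figure above, reduced to a few lines of Python. The numbers are invented for illustration; the paper estimates this with a regression on real citation counts, not raw ratios.

```python
import pandas as pd

# Invented yearly citation counts to core papers in the two fields.
cites = pd.DataFrame({
    "year":    [1999, 2000, 2001, 2002, 2003] * 2,
    "field":   ["hESC"] * 5 + ["RNAi"] * 5,
    "us":      [40, 42, 25, 20, 22,  50, 52, 55, 58, 60],
    "foreign": [50, 52, 54, 55, 57,  60, 63, 66, 70, 73],
})

wide = cites.pivot(index="year", columns="field")

# US-to-foreign citation ratio within each field: foreign scientists are
# unaffected by the US policy, so they anchor each field's baseline.
us_ratio_hesc = wide[("us", "hESC")] / wide[("foreign", "hESC")]
us_ratio_rnai = wide[("us", "RNAi")] / wide[("foreign", "RNAi")]

# Relative rate: hESC's US ratio against the unaffected RNAi field.
# A drop after the August 2001 funding restriction is the policy effect.
relative = (us_ratio_hesc / us_ratio_rnai).round(2)
print(relative)
```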
In 2008, Sheharbano (Sheri) Sangji died in a tragic UCLA chemistry lab accident involving flammable compounds. This incident and the subsequent criminal case for willful violation of safety regulations by the lab’s principal investigator and the Regents of the University of California galvanized a significant ratcheting up of safety regulations across US chemistry labs. For example, at UCLA, participants in lab safety classes rose from about 6,000 in 2008, to 13,000 in 2009 and 22,000 in 2012, while the number of safety inspections of labs rose from 1,100 in 2008, to 2,000 in 2009 and 4,500 in 2012. This was accompanied by an increase in laboratory safety protocols and more stringent rules for the handling of dangerous chemicals.

To see what impact the increase in safety requirements had on chemistry research, Galasso, Luo, and Zhu (2023) gather data on the publications of labs in the UC system. They end up with data on the publications of 592 labs, published between 2004 and 2017 (note they exclude the lab where Sangji worked). To assess the impact of more stringent safety regulations, they cut the labs into two different pairs of sub-samples, with one half of each pair more impacted by the policy and the other half less impacted. First, they hire a team of chemistry PhD students to classify labs as “wet”, which are equipped to handle biological specimens, chemicals, drugs, and other experimental materials, and “dry”, which are not and might do computational or theoretical research (these comprise 14% of labs). We should expect safety requirements not to affect dry labs, but possibly to affect wet ones, though not if they rarely work with dangerous compounds. So, as a further test, Galasso and coauthors use data on the chemicals associated with lab publications to identify a small subset of labs that most frequently work with compounds classified as dangerous. Because they need a long time series prior to 2008 for this classification exercise, they can only apply this method to 42 labs, out of which they flag the 8 working most often with dangerous compounds.

Their main finding is that the impact of the increased safety requirements was pretty small. Indeed, comparing the publication output of wet labs and dry labs, there appears to be no detectable impact of the policy at all, even when trying to adjust for the quality of publications by adjusting for the number of citations received, or after taking into account potential changes in the sizes of labs. The effects were not totally zero though. When they zero in on labs using the most dangerous compounds, they find that after safety standards are ratcheted up, the most high-risk labs begin to publish about 1.2 fewer articles per year mentioning dangerous substances as compared to less dangerous wet labs (labs publish an average of 7.7 articles per year in the sample). The reduction is most pronounced for articles mentioning flammable substances, or dangerous compounds that haven’t ...
Discover an immersive new approach to productivity with Nick Daniels, Founder of Portal
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us make the most of our minds. Nick Daniels is the founder of Portal, an immersive productivity app designed to help you stay in the flow. Portal uses the latest technology to deliver the most natural reproductions of real-life locations. In this interview, we talked about how physical workspaces can influence wellbeing, productivity and creativity, the potential of immersive technologies, the psychology of how we respond to our environment, and much more. Enjoy the read!

Hi Nick, thanks for agreeing to this interview! Most people believe that nature contributes to our wellbeing, but you believe that nature is at the heart of our health and happiness. How did you form that belief?

Thanks so much for having me! The inspiration for Portal was sparked back in 2018 when my wife and I spent 6 weeks camping around New Zealand on our honeymoon. We’d spent the previous 10 years living in London pretty much working ourselves into the ground, and I was only just starting to recover from a period of depression and burnout from pushing myself too hard on a previous startup, so we were both very much craving an opportunity to get away from it all.

The experience itself was of course amazing — unzipping the tent in the morning to stunning views and experiencing the ever-changing sights and sounds of each location we were camping in was incredible. But the most surprising and unexpected thing was that we actually had some of the best sleep of our lives. When I’d camped before, it had always been for short periods of time, and I’d always found the fact that you can hear every sound, and the light pouring in early in the morning, less than ideal — it often meant you ended up with less sleep, not more. But what we found living in the tent for an extended period of time was that, over time, we just seemed to naturally sync up with the rhythm of the world around us. We’d start getting tired as the sun went down, the temperature fell, and the sounds of the birds got replaced with the sounds of the insects at night. And we’d wake up so fresh and energized in the morning as the opposite happened and the sun and temperatures rose and the birds began to sing outside.

It’s a feeling that’s almost impossible to describe — when you feel in sync with everything around you. This experience completely changed how I viewed the natural world. I no longer felt like it was a place separate from me — a place to visit or an attraction to enjoy. It was more like the feeling of finally being home after a long time away. And at the end of those six weeks, we both felt the best we’d ever felt in our lives.

The idea for Portal then came on the flight home — cramped, uncomfortable, and returning to our hectic, stressful lives in London. I was struggling to sleep and started to think about how we’d slept so well, and whether it was possible to “bottle up” and re-create that experience and those amazing surroundings back home in London. It was only once I was home and started to research further around what I’d experienced that I realized there was a growing mountain of scientific evidence drawing the link between nature, circadian rhythms, our surroundings, and our mental health. It was then that I realized this might be able to help many others beyond myself, and within a week I’d handed in my notice and started coding the first version of the app.

Ha, inspiration literally hit you.
So the initial version was mostly focused on sleep and relaxation? Yes, not many people know this, but the app actually started off life completely focused on being a sleep aid and natural alarm clock — recreating that camping experience in the bedroom using immersive sound, smart lighting and visuals. The big idea was to take an experience-led approach to designing an alarm clock inspired by our trip: rather than the purely functional approaches to alarm clocks, which basically use loud noises to scare you awake at a specific time, it would help you wind down at night using gentle transitions mimicking the natural world and then wake you up gently in the morning. It’s an approach that draws upon a lot of the principles behind Biophilic Design, an approach traditionally used by architects and interior designers to increase people’s connectivity to natural environments and the benefits this can bring. It’s still quite a niche approach, but I’m convinced, given the amount of research and the positive impact it can bring to our lives, that you’ll see it becoming much more mainstream over the coming years. You’ve just launched the Mac version of the app, which is all about improving focus and productivity. How did this come about? In truth, it was a little bit Inception-like. I’d have the scenes playing a lot while coding and developing the app in the early days and came to realize that it was actually really helping me to concentrate and get into the flow. The thunderstorms especially were game-changing for me! As I dug a little deeper, I discovered a wealth of research that has come out over recent years shedding light on the attention-enhancing effects of nature exposure both digitally and in the real world, specifically research around Attention Restoration Theory (ART) in the field of environmental psychology. There’s also a lot more investment and research going into the architecture and design of physical workspaces and buildings, and how they can influence wellbeing, productivity and creativity using the principles of Biophilic Design mentioned before. Apple Park is probably the best example of this that I’ve come across: they’ve spent billions of dollars creating a physical work environment that takes a very human-centric approach and really does aim to bring the natural outdoor environment indoors as much as physically possible. Another fascinating insight we found when speaking with existing customers who were using the iOS app to help them focus was that 40% of those we interviewed had been diagnosed with ADHD (which normally occurs in around 5% of the population). They reported that Portal had become an essential part of their toolkit in managing their ADHD and helping them pursue their studies, careers and passions. However, despite this, the biggest concern for these customers was actually having to use Portal on their phones, as phones had increasingly become the greatest source of distraction in their lives. These customers were a big driving force behind our decision to prioritize bringing Portal to Mac. This is such an ambitious idea. How does Portal work, exactly? The app itself uses immersive technologies to instantly transform your workspace into an environment that’s designed to aid focus and creativity. Most of us are very aware of how different places make us feel — it’s not hard to imagine how different you’d feel if you were sitting on top of a mountain right now, or in the midst of a beautiful ancient woodland or a stunning tropical beach.
But what’s often less obvious is that how we feel emotionally has a very direct impact on our thought process and how we actually think. In the words of one of our customers: “It has not only made me more productive, but more importantly, it has brought a sense of joy to my work day.” We essentially tap into the psychology of how we respond to our environment and draw inspiration from some of the world’s most peaceful and awe-inspiring surroundings to create environments that are attuned to helping us get into the right state of mind to think, focus and create. The beauty of this approach is that it’s very passive — it doesn’t take active effort to enjoy the benefits. How does Portal work under the hood? Our ultimate goal is to re-create environments in the most true-to-life and authentic way possible, while also making the app as practical and easy to use as possible. To do this, we’ve really had to push the use of technologies that allow us to capture and reproduce visuals, sound and lighting as realistically as possible. Firstly, we use the visuals of the location to create the feeling of a “window” to that place. Our aim has been to get as close to the feeling of a real window as possible, and with Mac we’ve integrated these motion visuals directly onto the desktop. It may seem pretty counterintuitive that putting motion onto your desktop would actually help with concentration and make you less distracted, but when done right it can be really effective, with an effect very similar to having a real window in your office. We’ve meticulously captured over 80 portals ourselves in some of the most beautiful and peaceful corners of the world. We’ve used 12K digital cinema cameras and an evidence-based approach to our content production to ensure we capture the feeling and detail of these incredible places in a way that can enhance productivity and inspire creativity without pulling your focus away. The second component is the sound. We’ve again put an enormous amount of focus on recreating the most true-to-life and immersive sound experience possible. To do this, we not only use state-of-the-art spatial audio microphones but have also developed our own spatial audio solution from the ground up, specifically designed for real-life ambiance. Rather than using Dolby Atmos, the default technology on iOS and Mac, we use a technology called Ambisonics, which is most often used in VR and represents the soundfield as a sphere rather than as the traditional speaker- or channel-based sound formats. Spatial audio better reflects how we actually hear our surroundings in real life, giving a much greater sense of space and delivering the closest experience to actually being there yourself. The effect can be quite subtle, but it’s incredible just how much our subconscious picks up on. We also go to great lengths to capture sound in the field that’s naturally free of noise pollution. It’s ama...
The false promise of the 10,000 hour rule
Our culture loves experts. Whether it’s athletes, chefs, or musicians, some of the biggest celebrities are considered masters of their craft, and we admire the long hours they put into practicing the same skills over and over again so they could become second nature. In 2008, Malcolm Gladwell published his popular book Outliers, exploring why some seemingly extraordinary people achieve much more than others. The book mentioned a study of violin students at a German music academy. This is from the abstract: “Many characteristics once believed to reflect innate talent are actually the result of intense practice extended for a minimum of 10 years.” Malcolm Gladwell branded this the 10,000-hour rule. Study any topic for 10,000 hours, and you will master it. Practice doesn’t make perfect First, the study wasn’t about studying a topic for a specific amount of time. It was about deliberate practice. This is a type of practice that is systematic and purposeful, with the specific goal of improving performance; it requires focused attention rather than mindless repetition. More importantly, the lead researcher of the study himself doesn’t even seem to agree with the magical 10,000-hour rule. “He misread that as every one of them had actually spent at least 10,000 hours [practicing], so somehow they passed this magical boundary (…) They were very good, promising students who were likely headed to the top of their field, but they still had a long way to go at the time of the study.” Anders Ericsson, Psychologist & Researcher, Florida State University (source). Finally, and maybe the biggest problem with the 10,000-hour rule, there is absolutely nothing in the study that suggests that anyone can become an expert in any given domain by putting in 10,000 hours of practice, even deliberate practice. To show this, the researchers would have had to take a random sample of people through 10,000 hours of practice and see if the results were statistically significant. All the study shows is that the “best” violinists had put in more hours of deliberate practice than the “good” violinists. Which is interesting, but by no means a promise of expertise. In fact, a research study from Princeton shows evidence that practice accounts for just 12% of the difference in performance on average across domains, specifically: 18% in sports, 21% in music, and 26% in games. As Frans Johansson explains in his book The Click Moment, deliberate and repeated practice works better in fields with stable structures, such as chess, classical music, or tennis, where the rules never change. But when it comes to entrepreneurship and other creative fields, the rules change all the time, making deliberate practice less useful. So if practice doesn’t make perfect, how can we go about mastering new skills? Range over mastery The learning strategy traditionally used in schools, called blocking, consists of focusing on one skill before moving on to the next one. But there is a better way: interleaving, which consists of practicing multiple parallel skills at once. Research has shown that randomizing the information causes your brain to stay alert, helping to store information in your long-term memory. This means that the next time you want to study a new subject, you could benefit from switching things up. For example, a bit of coding mixed with a bit of UX design will work better than one long coding session.
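To make the contrast concrete, here is a toy sketch of a blocked versus an interleaved practice schedule (the skill names and session count are hypothetical, not from the original article):

```python
# A toy illustration of blocked vs. interleaved practice schedules
# (hypothetical skill names; the session count is arbitrary).
import itertools

skills = ["coding", "ux_design", "writing"]
sessions = 9

# Blocking: exhaust one skill before moving to the next.
blocked = [s for s in skills for _ in range(sessions // len(skills))]

# Interleaving: rotate through the skills session by session.
interleaved = list(itertools.islice(itertools.cycle(skills), sessions))

print(blocked)      # ['coding', 'coding', 'coding', 'ux_design', ...]
print(interleaved)  # ['coding', 'ux_design', 'writing', 'coding', ...]
```

The content practiced is identical; only the ordering differs, and it is the frequent switching that the research above credits with keeping the brain alert.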
Not only will you learn better and faster, but it may also make you more successful in the long run. In his book Range, David J. Epstein shows how generalists, rather than specialists, are more likely to succeed, especially in complex fields. The graph below is based on the Ancient Greek proverb: “The fox knows many things; the hedgehog one great thing.” Being too much of an expert can even be detrimental. In Expert Political Judgment, Philip E. Tetlock shares an experiment where political and economic experts were asked to make predictions. It turns out that 15% of the outcomes experts had considered impossible happened anyway, and a quarter of the outcomes they had considered virtually guaranteed never came to pass. The interesting part? The more experience and credentials these experts held, the further off the mark their predictions were. In contrast, the participants who had a wider range of knowledge areas and were not bound to a specific “expertise” domain fared better in their predictions. Being able to see new patterns and generate ideas across fields where people don’t usually make connections is an incredibly valuable skill. This superpower rarely comes with deep expertise in one single field at the expense of other areas of knowledge. So, forget about the 10,000-hour rule. Forget about sticking to one area of expertise for many years. It may work for a very small subset of people, but there is no rule indicating that this is the best strategy. Next time you feel like studying something new that doesn’t fit neatly into your current “frame of expertise”, go ahead and just do it. The post The false promise of the 10,000 hour rule appeared first on Ness Labs.
Does Advanced AI Lead to 10x Faster Economic Growth?
Dear readers, I’m still writing the next New Things Under the Sun post, but in the interim, I hope you’ll find this debate I had with Tamay Besiroglu as fascinating as I did.1 It’s about the claim that, once we develop AI that can do anything (mental) a human worker can do, the economy will start to grow much, much, much faster. This claim is actually implied by some pretty mainstream models of economic growth! Tamay and I had this debate in slow motion, in a shared Google Doc, over a few months, and it was published in Asterisk Magazine on Friday. In the debate, I’m the skeptic and Tamay the advocate. While I think it’s pretty likely sufficiently advanced AI would lead to (somewhat) faster economic growth, I think growth of 20% per year and up is pretty unlikely. In contrast, Tamay thinks 20% annual growth and faster is pretty likely, if we successfully develop AI that can do every kind of human mental work. If you’re unfamiliar with this debate, I think we cover the fundamentals well. But even if you are familiar, I think we also push past the basics and articulate some novel arguments. You can read the whole piece over at Asterisk right now. Read the Debate Now If you prefer audio, Tamay and I also recorded a podcast version where we each perform our parts of the dialogue. That one should be ready in the next 24 hours - it’ll show up first at this link, and then on your local podcast app a bit later. Cheers, Matt 1 I actually covered some of Tamay’s work on New Things Under the Sun here!
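For a rough sense of the scale at stake in the debate, here is a quick compound-growth sketch (illustrative arithmetic only, not figures from the piece):

```python
# Rough arithmetic on what "10x faster growth" compounds to
# (illustrative numbers only, not taken from the debate).
import math

def doubling_time(rate: float) -> float:
    """Years for income to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

print(f"2% growth doubles income in ~{doubling_time(0.02):.0f} years")   # ~35
print(f"20% growth doubles income in ~{doubling_time(0.20):.1f} years")  # ~3.8
```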
Creative burnout: when the creativity tap runs dry
You are probably all too familiar with the dreaded creative block: sitting in front of your computer, your mind as blank as the page you are staring at, hoping that some miraculous burst of inspiration will suddenly rush through your fingers so you can finally get back into the flow. You also know of the many techniques to deal with creative block. Find inspiration by changing your scenery—maybe going for a walk or packing your laptop to work from a cafe. Write whatever crosses your mind, even if it’s unrelated to the work at hand, until your mind starts forming interesting connections. Talk to other creatives to brainstorm some ideas. Experiencing a creative block is always inconvenient and stressful, but it is normally short-lived, and feeling occasionally stuck when working on a project is perfectly normal. Even if it may feel like an eternity, we soon end up finding a way to get our creative juices flowing. But sometimes, the problem runs much deeper. Creative burnout is a state of emotional, physical, and mental exhaustion around creative work. The symptoms can be hard to pinpoint, and the potential causes are many. The 10 symptoms of creative burnout Because it’s normal for creativity to fluctuate depending on factors such as sleep and stress levels, creative burnout can easily fly under the radar—masquerading as temporary procrastination, tiredness, or lack of motivation. For people who genuinely care about their work and for those who rely on creative output as an emotional outlet, the insidious nature of creative burnout can have a devastating impact on their mental health: when you can’t seem to produce any good creative work and you don’t know what’s wrong, you start blaming yourself. So I put together a list of ten signs of creative burnout. In isolation, most of these signs are harmless. However, if you have four symptoms or more, it may be time to shake things up. Procrastination. Putting off work for a couple of days because you don’t feel like you have enough mental energy is nothing to worry about. However, if you procrastinate for long periods of time and ignore important deadlines, it may be a sign of creative burnout. Struggle to do basic work. Is your to-do list getting longer and longer, but you can’t bring yourself to check some easy tasks off it? Are you burying your head in the sand and neglecting the growing mountain of little things you ought to get off your plate? This may be another symptom. Constant exhaustion. Sometimes, we don’t get enough sleep and feel sluggish the day after. That’s completely fine. But if the physical exhaustion is sustained over a long period of time despite a decent amount of sleep, you may be burning out. Inexplicable stress. Creative work can be stressful. Deadlines, complicated projects with many moving parts, a pushy client… These factors can cause stress within the Goldilocks curve and remain manageable. But creative burnout may make you feel persistently stressed without being able to pinpoint the exact cause. Unhealthy comparisons. We are more connected than ever, and many creators follow the work of fellow creators online. Some creators are more productive than others, and this productivity usually ebbs and flows. If you look at their output and can’t help but compare their productivity to yours in a negative way, you may be experiencing a symptom of creative burnout. Unbalanced content consumption. As a creator, it’s vital to balance your levels of creative input and creative output.
When we burn out, we often find ourselves scrolling endlessly and binging TV shows but not creating much work of our own. Morning dread. Have you ever experienced that feeling of angst, a sense of doom where your mind is racing into the future, and everything seems bleak? Stressful times in our life can make us dread waking up. If this feeling persists, it may be a sign of creative burnout—or something even more serious. Harmful habits. Eating unhealthy food or eating more than usual, abandoning your exercise routine, drinking more alcohol… If you are experiencing creative burnout, you may be coping through damaging mechanisms which will leave you feeling even worse. Irritability. You may be feeling frustrated with your colleagues, annoyed with your spouse, or snappy at your kids. Being more temperamental than usual can be a symptom of creative burnout. Self-doubt. Finally, you may also think that you will never be good enough, that your work is pointless, or that you lack the necessary imagination—despite having produced good creative work in the past and having received praise for it. Please note that if you are experiencing many of these signs, or even just one of them for a long time, it may be more serious than creative burnout. Many of these signs are also found in mental health conditions such as depression, anxiety disorders, and seasonal affective disorder, or could be caused by sleep problems. If in doubt, it’s always worth talking to a professional. How to bounce back from creative burnout Creative burnout can make us feel powerless, as if there were nothing to be done about it. But we have agency and can use simple strategies to break the cycle. Of course, simple does not mean easy, but removing unnecessary complexity from our approach makes it more likely for us to succeed. Get support. Because creative burnout impacts our work, our first instinct may be to hide our struggle from our colleagues. However, just grabbing someone and telling them: “I’ve been feeling burned out lately” can be immensely helpful. You will find that most people are more than happy to help, whether by giving you a hand with a project, brainstorming fresh ideas, or just lending an ear. Voicing your struggle is also a great first step in bouncing back from creative burnout. Take a break. Not just a short walk, which may be helpful for a creative block but probably not enough to help with creative burnout. Take a proper break—a few days off, with your out-of-office autoresponder on, where nobody will expect any work from you. The anxiety of knowing you are supposed to work but can’t bring yourself to is a vicious cycle. Taking a break is a way to escape that cycle so you can start afresh. Use the time to do things that have nothing to do with work without feeling any guilt: spend time with your loved ones, read books, take naps, cook, watch movies, go on a weekend holiday in the countryside, take care of your plants… Or just do nothing, that’s perfectly fine. Make space for self-reflection. Replace destructive existential angst with constructive self-reflection. It could take the form of journaling, discussing your struggle with a friend, reviewing your current environment and your schedule, running a motivation clinic, or even just talking to yourself out loud. Burnout can be hard to manage when we can’t define its exact source. Turn yourself into a self-experimenting scientist and try to uncover the roots of the problem. Look at your past work.
Because creative burnout often comes with self-doubt, it’s easy to forget all our past accomplishments and focus on our present challenges instead. Go browse your past work, both the good and the bad. If it’s good, remember how it wasn’t easy to produce. If it’s bad, look at how much progress you have made. Channel the feelings you experience while reviewing your past work to overcome your self-doubt. Start with the basics. Choose the smallest atomic unit of creative work you can do to get you started again. Are you trying to write a book? Just write one paragraph. Trying to design a new website? Just work on one wireframe. Instead of looking at the mountain of work in front of you and feeling paralyzed, take your first baby step. Don’t forget to be kind to yourself. Creative burnout does not mean you don’t care about your work; it doesn’t mean you are lazy; it doesn’t mean you are not talented. Creative burnout can stem from perfectionism, external pressure, high expectations, or hypersensitivity. It’s a temporary state, not a permanent condition. Prevention is better than cure Creativity is fragile. It needs to be fed, but not too much, for consuming an excessive amount of information may destroy its delicate balance. It needs space to grow, but should not be forced, for mechanical work may lead to lifeless output. Despite all our care, sometimes, it seems to be gone: the creativity tap has run dry. We experience the dreaded creative burnout. While there are simple strategies to manage creative burnout, the best way to deal with it is to avoid burning out in the first place. Because of all the different causes of creative burnout, it may not always be possible, but creating a mental scaffolding to support your health and creativity can go a long way. Metacognition. Don’t wait until things are bad to start reflecting on how you feel, your progress, your goals, and your motivations. Metacognition means “thinking about thinking”—it’s being aware of your own awareness so you can determine the best strategies for learning and problem-solving, as well as when to apply them. It consists of planning, monitoring, and evaluating your creative work on an ongoing basis, so you can catch any early signs of creative burnout. Mindful productivity. Mindfulness and productivity may seem antithetical, but borrowing principles from mindfulness when you pursue creative work will help you build a sustainable work environment for yourself. Mindful productivity can be defined as being consciously present in the work you’re doing while you’re doing it. It’s not about meditation; it’s about calmly acknowledging and accepting your feelings and thoughts while engaged in work or creative activities. Habits, routines, rituals. Ensure you have the basics covered in terms of mental and physical health. Habits, routines, and rituals all have different levels of intentionality, and all help you feel balanced and healt...
And now for something completely different
This short post is to announce the launch of a new living literature review, on a topic almost the opposite of New Things Under the Sun: Existential Crunch, by Florian Jehn! Existential Crunch Thoughts about existential risk, history, climate, food security and other large scale topics. By Florian U. Jehn Existential Crunch is about societal collapse, and what academic research has to say about it. The first post takes a tour of the major schools of thought on this topic: Gibbon, Malthus, Tainter, Turchin and more. As the post says in its closing: My main takeaway is that this field still has a long way to go. This is troubling, because in our society today we can see signs that could be interpreted as indications of a nearing collapse. There are voices warning that our global society has become decadent (writers like Ross Douthat), that we are pushing against environmental limits (for example, Extinction Rebellion), that we are seeing a decreasing return on investment for our energy system (for example, work by David Murphy) and that there has been an overproduction of elites in the last decades (writers like Noah Smith). This means we have warning signs that fit all major viewpoints on collapse. Moreover, new technological capabilities pose novel dangers that require us to extrapolate beyond the domain of historical experience. All this means that understanding how collapse really happens is rather urgent. If we want innovation and progress to continue (and I certainly do!), understanding how it dies seems, uh, important! Check it out, and sign up for the Substack here. Why am I telling you about this? Well, one of the reasons I was excited to join Open Philanthropy was the opportunity to support more living literature reviews, on a diverse array of topics. This is the first such review we’ve supported, but we’re interested in financially supporting more via the newly launched innovation policy program. We’re especially keen on people interested in writing reviews on policy-relevant topics. For us, a living literature review is an online collection of short, accessible articles that synthesize academic research, updated as the literature evolves, and written by a single qualified individual (for example, Florian has published related academic work). If you’re interested, go here for more info. And if you know of people who you think would be a good fit for this kind of thing, please let them know about this opportunity.
The Size of Firms and the Nature of Innovation
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. Special note: Up until now, everything on New Things Under the Sun has been written by me. This post is the first-ever collaboration! My coauthor is Arnaud Dyèvre (@ArnaudDyevre), a PhD student at the London School of Economics working on growth and the economic returns to publicly funded R&D. I think this turned out great, and so I wanted to extend an invitation to the rest of you - if you want to coauthor a New Things post with me, go here to learn more about what I’m looking for and what the process would be like. One last thing: I want to assure readers that, although this is a collaboration, I’ve read all the major papers discussed in the post. I view part of my job as making connections between papers, and I think that works best if all the papers covered in this newsletter are bouncing around in my brain, rather than split across different heads. On to the post! We are used to thinking about income inequality between individuals, but inequality between firms is vastly larger. In the US, the richest 1% of individuals earned about 20% of all income in 2018. In contrast, the top 1% of US firms by sales earned about 80% of all sales in 2018. The economy is populated by a few “superfirms” and a multitude of small- to medium-size businesses. And this disparity is getting more extreme over time. Does this huge disparity in firm size matter for innovation and technological progress? Do big firms differ in the type of R&D they do, and if so, why? The academic literature on the empirical link between firm size and innovation is an old one, dating back to the 1960s at least, and we do not have space to do it full justice here. Instead, in this post we’ll focus on work using a variety of approaches to document that there are important differences in how innovation varies across firm sizes. In a followup post, we’ll examine some explanations for why. One quick point before digging in: when economists talk about firm size, they typically refer to a firm’s total sales or (more rarely) its employment count. Defined in this way, firm size is often used as an imperfect proxy for the number of business units of a firm (i.e. the number of product lines it has). Fact 1: Firm size and R&D rise proportionally The first important fact about firm heterogeneity and innovation is that corporate R&D expenditures scale up proportionately with sales. In other words, when sales double, money spent on R&D doubles too. This doesn’t have to be the case: for example, it has been shown that other inputs to production, such as labor and capital, do not scale proportionately with firm sales (less than proportionately for labor, more than proportionately for capital). This proportional relationship has been shown time and again, at least for firms above a certain size that do at least some R&D. To illustrate this point, the figure below shows the relationship between firm sales and R&D expenses among publicly traded firms who report doing some R&D. The data is from Compustat (a database of publicly listed firms) and each dot represents 750 firm-by-year observations.
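As a sketch of the kind of regression behind this figure (log R&D on log sales with year and sector fixed effects), here is a minimal version with hypothetical column names, assuming a pandas DataFrame of Compustat-style firm-years:

```python
# Minimal sketch of the log-log R&D regression described in the text
# (hypothetical column names; assumes a pandas DataFrame of firm-years
# with positive sales and R&D).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def rd_sales_elasticity(df: pd.DataFrame) -> float:
    """Regress log R&D on log sales with year and SIC4 fixed effects."""
    df = df[(df["sales"] > 0) & (df["rd_expense"] > 0)].copy()
    df["log_rd"] = np.log(df["rd_expense"])
    df["log_sales"] = np.log(df["sales"])
    # C(year) and C(sic4) absorb year and fine-sector effects, so the
    # slope on log_sales is identified within year and within sector.
    model = smf.ols("log_rd ~ log_sales + C(year) + C(sic4)", data=df).fit()
    return model.params["log_sales"]
```

A coefficient near 1.0 on log_sales is what “R&D rises proportionally with size” means in practice.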
In this graph, we control for year and fine sector (SIC4) so that the variation we isolate is across firms, within a year and within a sector. The slope is strikingly close to 1 on a log-log plot, meaning that the typical publicly listed firm increases its R&D expenditures by 10% when its size increases by 10%. Firm R&D expenditures by firm sales (log plot) Notes: Graph generated by Arnaud Dyèvre, with data on US publicly listed firms from Compustat. The sample only includes firms that report some R&D expenditures in a year. Sales and R&D are deflated using the Bureau of Labor Statistics CPI. This finding was first observed in the 1960s and has been reproduced across many studies since. In the figure below, from a seminal 1982 study by Bound, Cummins, Griliches, Hall and Jaffe, the authors have plotted the log R&D expenditures of a panel of 2,600 manufacturing firms as a function of their log sales in 1976. The same proportional relationship is observed. Firm R&D expenditures by firm sales (log plot). Data from Bound, Cummins, Griliches, Hall and Jaffe (1982) The 1-to-1 proportionality of R&D to sales may lead one to conclude that the immense heterogeneity in firm sizes does not matter for the aggregate level of innovation. After all, if R&D scales proportionately with firm size, then an economy consisting of 10 firms with $1 billion in sales each will spend as much on R&D as an economy consisting of one firm with $10 billion in sales. But as we’ll see, this conclusion would be erroneous. Fact 2: Larger firms get fewer inventions per R&D dollar A variety of different lines of evidence show that firms get fewer inventions per R&D dollar as they grow. Let’s start with patents (we’ll talk about non-patent evidence in a minute). The 1982 study by Bound, Cummins, Griliches, Hall and Jaffe mentioned earlier found that firms with larger R&D programs get fewer patents per dollar of R&D. Their result is summarized in Figure 3 (panel A) below; it shows an exponential decrease in the number of patents per R&D dollar as one moves up firms’ log R&D expenditures. In a more recent and more comprehensive exploration of this relationship, Akcigit & Kerr (2018) use the universe of US firms matched to patents to document that patents per employee also decrease exponentially as a function of log employment (panel B). The relationships shown in the figures are very similar and suggest that bigger firms are getting fewer patents per productive unit—employment or R&D dollar. Left: Patents per dollar of R&D as a function of total R&D expenditures (x-scale in log). From Bound, Cummins, Griliches, Hall & Jaffe (1982). Right: Patents per employee as a function of total employee count (x-scale in log). From Akcigit & Kerr (2018). Patents are not synonymous with invention, though. It could, for example, be that as firms grow larger they create just as many inventions per R&D dollar, but they become less likely to use patents to protect their work. But in fact, the opposite seems to be true. Mezzanotti and Simcoe (2022) report on the Business R&D and Innovation Survey, which was conducted between 2008 and 2015 by the US Census Bureau and the National Science Foundation. This survey asked more than 40,000 US firms, from a nationally representative sample, about their use of intellectual property. They find larger firms are much more likely to rate patents as important.
For example, 69% of firms with more than $1bn in annual sales rate patents as somewhat or very important, compared to just 24% of firms with annual sales below $10mn. This relationship also holds when you compare responses across firms belonging to the same sector, in the same year. In other words, if we had a perfect measure of innovation that is not affected by selection the way patenting is, we would find an even stronger negative relationship between firm size and patents per R&D dollar or per employee. Small firms have more patents per employee or R&D dollar, in spite of being less likely to file patents than big firms. Other empirical studies of innovation have relied on different measures of innovative output and have reached a similar conclusion. In a creative 2006 study of the financial service industry, Josh Lerner uses news articles from the Wall Street Journal to identify new products and services introduced by financial institutions. For example, if a story about a new security or the first online banking platform is written up in the WSJ, Lerner counts it as an innovation and attributes it to a bank in the Compustat database. Consistent with papers using patent data, he finds that innovation intensity scales less than proportionately with firm size. (Note that Lerner measures size as the log of assets here rather than log sales, due to the nature of the industry studied.) You can also look for the introduction of innovations in other places. In 1982, the US Small Business Administration created a database of new products, processes or services in 100 technology, engineering or trade journals, and linked these inventions to firms. In their 1987 paper using this data, Acs & Audretsch also find that larger firms have fewer innovations per employee and fewer innovations per dollar of sales than small firms. (Though they emphasize that this isn’t universal; in some industries, large firms produce more innovations per dollar than small firms - but this isn’t typical.) Finally, Argente et al. (2023) use product-scanner data in the consumer goods sector over 2006-2015 to obtain details on every product sold in a large sample of grocery, drug, and general-merchandise stores, including the associated firm that markets the product. Here, they identify innovation as the introduction of a new product; as the figure below illustrates, bigger firms consistently introduce fewer new products, relative to the number of products they already sell (gray line below). From Argente et al. (2023) Of course, not all new products are equally innovative. To deal with this issue, Argente and coauthors use data on the attributes of each product. Since they know the price and sales of each product, they can run statistical models to estimate the dollar value consumers put on different product attributes. They can then “quality adjust” new product introductions by the introduction of products that include new attributes, where attributes are given more weight if associated with higher prices (or sales). This more sophisticated approach yields the same result: when you adjust for quality, you still find that larger firms are less innovative (relative to their size) than small...
Discover the productivity wearable with Ben Wisbey Co-Founder of Pylot
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Ben Wisbey is the co-founder of Pylot, the very first wearable to track your productivity, so you can know when you’re ready for deep work or shallow tasks, or when you need a break. In this interview, we talked about the fallacy of time management, how to quantify work quality, the key questions to achieve deep work, the science of cognitive performance, how to manage mental fatigue, and much more. Enjoy the read! Hi Ben, thanks for agreeing to this interview! Most people associate productivity with time management, but you think differently. Can you tell us more? This is a great question. The age-old quest to optimize our time and make the most of our day often leads us to neglect a crucial factor: not all hours are created equal. In fact, an hour of deep, focused work can yield far better results than multiple hours of grinding through tasks when we’re not at our best. After years of managing my time, I learnt that it was more important to manage my energy. If I could do my priority tasks when my mental energy was high, I was producing better work in less time. As a performance scientist with a background in monitoring professional and Olympic athletes, I’ve always been passionate about helping people perform at their peak. This interest eventually merged with my obsession with productivity, and I embarked on an ambitious project to quantify energy management. What started as a few months of work quickly turned into a two-and-a-half-year research journey, during which the Pylot team and I monitored brain wave activity and physiological responses during work and other mentally challenging tasks like video gaming. Our research led us to quantify the mental aspects that impact performance. The most significant determinant of mental performance is flow. Flow, often referred to as “being in the zone,” is a state of relaxed concentration where you’re fully immersed in your work and not easily distracted. Scientifically speaking, this state is associated with specific brain wave frequencies. Another critical factor we identified was mental fatigue. When mental fatigue is high, it’s difficult to maintain a flow state, and achieving another quality work block within the same day becomes highly unlikely. While quantifying work quality is no easy task, our research demonstrated that serious gamers playing competitive online games had significantly higher win rates when they were in high flow states and managed to avoid fatigue. So, the next time you find yourself striving for maximum productivity, remember that managing your energy and tapping into your flow state may just be the key to unlocking your true potential. And this is what inspired you to build Pylot. After the acquisition of my previous business, I found myself working remotely for a large organization. My mornings were filled with back-to-back meetings, and when I finally sat down to tackle my “real work” in the afternoons, I hit a wall. I couldn’t seem to get into the groove, and I wondered if I was just being lazy or if my endless meetings had left me mentally drained. Even worse, I’d reach the end of each day feeling unsatisfied, questioning what I had truly accomplished. As a self-proclaimed productivity nerd, I decided to dive deep into the data I had been tracking for years on RescueTime.
While apps like Rize and RescueTime are fantastic at providing insights into computer activity and behavior, they couldn’t quite answer why I struggled to engage in deep work. That’s when a few colleagues and I embarked on a mission to unravel this mystery by using sensors to measure what was actually going on, so we could answer three crucial questions: What time of day is best for my deep work? How long should these deep work sessions be? When do I need a break? How does it work under the hood? Pylot utilizes a lightweight and comfortable headband to gather EEG and HRV data. EEG tracks brain wave activity, while HRV measures variations in heart rate. By capturing this information, Pylot can assess mental fatigue and flow—two key elements of optimal cognitive performance. The collected data is then sent to the Pylot app, available on Mac and Windows devices, where it offers real-time feedback along with recommendations for engaging in deep work, tackling shallow tasks, or taking a break. As you continue to use the app, it learns about your unique patterns and can suggest the best times of day for your deep work sessions and their ideal duration. Although the concept may sound straightforward, developing the algorithms that power this process took us three years. Behind the scenes, there’s a lot of heavy lifting happening on the data side to transform raw sensor information into valuable feedback. They say hardware is hard. Building the first wearable for productivity must have come with many challenges—what were some of the design challenges you had to resolve? Developing hardware is no easy feat, especially when it comes to creating devices that accurately collect scientific data. Fortunately, our founding team brought invaluable experience from working on various wearable devices. We knew that our product had to be comfortable, lightweight, visually appealing, energy-efficient, and provide accurate data—a challenging combination to achieve. As a pioneering wearable in its field, we faced our fair share of trial and error. Some of our early prototypes were uncomfortable to wear and not aesthetically pleasing. We also had to ensure compatibility with glasses and headsets. After exploring multiple form factors and sensor placements, we’ve arrived at a design that is even better than we had hoped. The end product is incredibly lightweight and flexible, to the point that you forget you’re wearing it. Moreover, it delivers high-quality data, boasts a ten-hour battery life, and maintains compatibility with glasses and headsets. We couldn’t be more thrilled with the final product and are eager to share it with the world. This is such a thoughtful approach to hardware design. So, what does the user experience look like? The terms “productivity wearable,” “EEG,” and “HRV” might seem complex and scientific, but we’ve made sure that our product is user-friendly and straightforward. All you need to do is turn the band on and wear it. The apps will automatically record your work session and offer live feedback. There is an overlay, or widget, on Windows/Mac so you can see live feedback without interrupting your work. The app then provides a summary of each work session and each day, while also allowing you to see trends over time. The experience of wearing the band is similar to using headphones. You might be aware of them for a few minutes after putting them on, but soon after, you’ll forget they’re even there. What kind of people do you think would most benefit from using Pylot?
Pylot is designed for individuals seeking to maximize the quality of their workday. Rather than focusing on doing more work, it emphasizes doing one’s best work. To accomplish this, users need some control over their work schedules, allowing them to adapt their work hours based on what suits them best. This flexibility may apply to remote workers or those with adjustable schedules, making it particularly relevant for founders, developers, designers, writers, and many other knowledge workers. We’ve been testing Pylot with some of these users and encountered intriguing results. One memorable example involves a founder who used Pylot to adjust their daily routine based on the app’s recommendations. During an unusually busy week, they pushed through a demanding day despite experiencing mental fatigue on Thursday. Come Friday, their mental fatigue was high all day, making it difficult to perform at their best. However, they adapted their work plan according to Pylot’s feedback and shifted to a day focused on administrative and procedural tasks. What about you, how do you use Pylot? I use Pylot daily, being a productivity enthusiast myself. I was already aware that I worked best in the mornings, but Pylot has helped me refine my schedule further by identifying my optimal time for deep work as 7am to midday, working in 90-120 minute blocks. However, with these early starts comes a decline in the afternoon. My flow diminishes significantly after 2pm, so I focus on shallow tasks and try to schedule meetings and emails during this period. Although this structure generally works well for me, not every day is identical. So, I monitor my fatigue levels to incorporate more breaks when necessary. We have already begun examining the influence of sleep and exercise on cognitive performance. In the future, you might see a feature where the app integrates with data from devices like the Apple Watch. How do you recommend someone get started? Right from the start, Pylot offers instant feedback on your flow and fatigue levels. However, its accuracy improves over time as it learns what’s normal for you. We recommend using Pylot for two weeks to receive suggestions on your ideal deep work hours and session lengths. By continuing to wear Pylot, you can monitor how these factors change over time and receive live recommendations on when to switch to shallow work or take a break. Since no two days are the same, this real-time feedback proves invaluable in adjusting your work schedule to achieve the best outcomes. And finally… What’s next for Pylot? Our mission is to help people design their day for success. We aim to assist users in making the most of their time by engaging in the right tasks at the right moments. This approach not only leads to improved work outcomes but also ensures there’s time for other important activities in life. This principle applies not only to work but also to any activity where cognitive performance is crucial, including s...
The two sides of stress: distress and eustress
Picture this: You’re at work with a big deadline coming up. Unfortunately, someone made a mistake, and part of the project needs to be completely redone in a rush. As the pressure mounts, you can feel the tension gripping your mind and body, causing your patience to wear thin. In those stressful situations, it’s not uncommon to experience automatic negative responses that arise from the complex interplay between our thoughts and emotions. We may find ourselves snapping at a colleague or retreating into quiet as we try to cope with the crushing weight of anxiety. So it’s not surprising that we tend to perceive stress as a negative phenomenon that should be minimized at all costs. In fact, a common misconception is that stress is inherently bad. But stress is just your body and your mind’s response to external challenges. Depending on the particular stressors and your reaction, stress can be detrimental (distress) or beneficial (eustress). The prefix dis- in “distress” has the same root as words like disconnect, dissatisfaction, and disingenuous. In contrast, “eustress” literally means “good stress.” It was coined by endocrinologist Hans Selye in 1975 to describe a positive cognitive response to stress. Distress versus eustress Distress can have a terrible impact on productivity, creativity, and mental health. On the other hand, eustress has been found to enhance performance and overall well-being, especially in the workplace. When you experience eustress, you’re pushed to do your best. In short, distress results in anxiety; eustress is exciting. Distress leads to procrastination, while eustress is a source of motivation. Overall, distress has a negative impact on performance, while eustress acts as a performance enhancer. Experiences that lead to eustress are usually perceived as challenging but still within our coping abilities, leading to heightened focus and motivation. That delicate balance is where the secret to eustress lies. A balancing game You need just the right amount of pressure to unlock the benefits of eustress. This is known as the Yerkes–Dodson law, originally developed by psychologists Robert M. Yerkes and John Dillingham Dodson in 1908, which states that performance increases with mental or physiological arousal—but only up to a limit. But if you manage to strike that balance, eustress offers many benefits, especially for ambitious people who enjoy an interesting challenge. Some of the benefits of eustress include: Flow. Researchers described flow as the “ultimate eustress experience—the epitome of eustress.” When in flow, we are focused on the challenge and fully present. We become so fully absorbed in what we are doing, we lose track of time and can effortlessly ignore external distractions. Resilience. Because eustress is based on perception, cultivating eustress can help in reacting more positively to challenging situations, resulting in higher emotional agility. It can help us build better coping skills and boost confidence by reframing stressors as valuable learning opportunities. Self-efficacy. Your judgment of how well you can carry out a required task or take on a specific role is a measure of your level of self-efficacy. Experiences of eustress allow you to accumulate evidence of your abilities and competence, and in turn, encourage you to explore more ambitious ideas.
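To make the Yerkes–Dodson law mentioned above concrete, here is a toy inverted-U model (the Gaussian shape and its parameters are illustrative assumptions, not from the original research):

```python
# Toy inverted-U ("Yerkes-Dodson") curve: performance peaks at a
# moderate arousal level and falls off on either side. Illustrative
# only; the optimum and width are made-up parameters.
import math

def performance(arousal: float, optimum: float = 0.5, width: float = 0.2) -> float:
    """Gaussian-shaped performance as a function of arousal in [0, 1]."""
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
# Both low and high arousal underperform the moderate middle.
```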
The good news is: Though not all stress can be reframed as a positive experience, you can proactively manage many external stressors, so they result in productive eustress instead of paralyzing distress. How to foster eustress As eustress is a positive reaction to stress based on perception rather than objective stressors, the potential sources of eustress vary greatly between people. These are examples of stressors that are commonly perceived as positive: Learning a new skill. Working hard to learn something new is, for many, a safe source of eustress, creating the right amount of challenge while staying in control of the learning experience. Starting a new job. Because it’s a combination of using existing skills and learning new ones, while quickly forming relationships in a new environment, starting a new job can be challenging in the best ways, resulting in eustress. Similarly, receiving a promotion or moving teams can create good stress. Going on a holiday. Traveling to a distant place with a different culture can create eustress by forcing us to leave our comfort zone. Although travel can bring about distress—canceled flights, stolen items—many people view it as a fulfilling challenge. Starting a family. Whether getting married or having a child, starting a family can be a source of eustress by offering a novel challenge and many opportunities for personal growth. Moving. Finally, moving houses implies leaving the comfort of a familiar place behind to start a new life. The process is a source of negative stress for many people but can lead to eustress because of its inherently adventurous nature. There are many other potential sources of eustress, such as playing competitive sports, some challenging video games, participating in a tournament, or having a complex but constructive debate with someone. In order to find your own sources of eustress, the key is to experiment with positive stressors and to practice metacognitive strategies to reflect on their impact on your stress levels. A simple method to keep track of your stressors—whether they result in distress or eustress—is the Plus Minus Next method. If you only remember one thing: Not all stress is bad and it can be a healthy source of motivation as long as you find your own positive stressors. The post The two sides of stress: distress and eustress appeared first on Ness Labs.
When Technology Goes Bad
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. Innovation has, historically, been pretty good for humanity. Economists view long-run progress in material living standards as primarily resulting from improving technology, which, in turn, emerges from the processes of innovation. Material living standards aren’t everything, but I think you can make a pretty good case that they tend to enable human flourishing better than feasible alternatives (this post from Jason Crawford reflects my views pretty well). In general, the return on R&D has been very good, and most of the attention on this website is viewed through a lens of how to get more of it. But technology is just a tool, and tools can be used for good or evil purposes. So far, technology has skewed towards “good” rather than evil, but there are some reasons to worry things may differ in the future. Why is technology good for us, on average? I think technological progress has skewed good through most of history for a few reasons. First, invention takes work, and people don’t do work unless they expect to benefit. The primary ways you can benefit from invention are either directly, by using your new invention yourself, or indirectly, by trading the technology for something else. To benefit from trade, you need to find technologies that others want, and so generally people invent technologies they think will benefit people (themselves or others), rather than harm them. Second, invention is a lot of work, and that makes it harder to develop technology whose primary purpose is to harm others. Frontier technological and scientific research is conducted by ever larger teams of specialists, and overall pushing the scientific or technological envelope seems to be getting harder. The upshot of all this is that technological progress increasingly requires the cooperation of many highly skilled individuals. This makes it hard for people who want to invent technologies that harm others (even while benefitting themselves). While people who are trying to invent technologies to benefit mankind can openly seek collaborators and communicate what they are working on, those working on technologies to harm or oppress must do so clandestinely or be stopped. Third and finally, the technological capabilities of the people trying to stop bad technology from being developed grow with the march of technological progress. Think of surveillance technology in all its forms: wiretaps, satellite surveillance, wastewater monitoring for novel pathogens, and so on. Since it’s easier to develop technologies for beneficial use when you can be open about your work, technological progress will tend to boost the powers of those empowered to represent the common interest. In a democracy, that process will tend to hand more powerful tools to the people trying to stop the development of harmful technologies. Now - these tendencies have never been strong enough to guarantee technology is always good. Far from it. Sometimes technologies have unappreciated negative effects: think carbon-emitting fossil fuels. Other times, large organizations successfully collaborate in secret to develop harmful technology: think military research. In other cases, authoritarian organizations use technological power to oppress. But on the whole, I think these biases have mitigated much of the worst that technology could do to us.
But I worry a new technology - artificial intelligence - risks upending these dynamics. Most stories about the risks of AI revolve around AIs developing goals that are not aligned with human flourishing; such a technology might have no hesitation creating technologies that hurt us. But I don’t think we even need to posit the existence of AIs with unaligned goals of their own to be a bit concerned. Simply imagine a smart, moderately wealthy, but highly disturbed individual teaming up with a large language model trained on the entire scientific corpus, working together to develop potent bioweapons. More generally, artificial intelligence could make frontier science and technology much easier, making it accessible to small groups, or even individuals without highly specialized skills. That would mean the historic skew of new science and technology being used for good rather than evil would be weakened. What does science and technology policy look like in a world where we can no longer assume that more innovation generally leads to more human flourishing? It’s hard to say too much about such an abstract question, but a number of economic growth models have grappled with this idea. Don’t Stop Till You Get Enough Jones (2016) and Jones (2023) both consider the question of the desirability of technological progress in a world where progress can sometimes get you killed. In each paper, Jones sets up a simple model where people enjoy two different things: having stuff and being alive. Throughout this post, you can think of “stuff” as meaning all the goods and services we produce for each other; socks and shoes, but also prestige television and poetry. So let’s assume we have a choice: innovate or not. If we innovate, we increase our pile of stuff by some constant proportion (for example, GDP per capita tends to go up by about 2% per year), but we face some small probability we invent something that kills us. What do we do? As Jones shows, it all depends on the tradeoff between stuff and being alive. As is common in economics, he assumes there is some kind of “all-things-considered” measure of human preferences called “utility”, which you can think of as comprising happiness, meaning, satisfaction, flourishing, etc. - all the stuff that ultimately makes life worth living. Most models of human decision-making assume that our utility increases by less-and-less as we get more-and-more stuff. If this effect is very strong, so that we very quickly get tired of having more stuff, then Jones (2016) shows we eventually hit a point where the innovation-safety tradeoff is no longer worth it. At some point we get rich enough that we choose to shut down growth, rather than risk losing everything we have on a little bit more. On the other hand, if the tendency for more stuff to increase utility by less-and-less is weak, then we may always choose to roll the dice for a little bit more. As a concrete illustration (not meant to be a forecast), Jones (2023) imagines a scenario where using artificial intelligence can increase annual GDP per capita growth from 2% per year to 10% per year, but with an annual 1% risk that it kills us all. Jones considers two different models of human preferences. In one of them, increasing our stuff by a given proportion (say, doubling it) always increases our utility by the same amount. If that is how humans balance the tradeoff between stuff and being alive, it implies we would actually take big gambles with our lives for more stuff.
Jones' model implies we would let AI run for 40 years, which would increase our income more than 50-fold, but the AI would kill us all with 1/3 probability! On the other hand, he also considers a model where there is some maximum feasible utility for humans; with more and more stuff, we get closer and closer to this theoretical maximum, but can never quite reach it. That implies increasing our pile of stuff by a constant proportion increases utility by less and less. If that is how humans balance the tradeoff between having stuff and being alive, we're much more cautious. Jones' model implies in this setting we would let AI operate for just 4-5 years. That would increase our income by about 50%, and the AI would kill us all with "just" 4% probability. But after our income grows by 50%, we would be in a position where a 10% increase in our stuff wouldn't be worth a 1% chance that we lose it all.

Different Kinds of Progress

The common result is that, as we get sufficiently rich, we are increasingly willing to sacrifice economic growth in exchange for reduced risks to our lives. That's a good place to start, but it's a bit too blunt an instrument: we actually have more options available than merely "full steam ahead" and "stop!" A variety of papers - including Jones (2016) - take a more nuanced approach and imagine there are two kinds of technology. The first is as described above: it increases our stuff, but doesn't help (and may hurt) our health. The second is a "safety" technology: it doesn't increase our stuff, but it does increase our probability of survival.

"Safety" technology is a big category. Plausible technologies in this category could include:

- Life-saving medical technology
- Seatbelts and parachutes
- Renewable energy
- Carbon capture and removal technology
- Crimefighting technology
- Organizational innovations that reduce the prospects of inadvertent nuclear first strikes
- AI alignment research

And many others. The common denominator is that safety technologies reduce dangers to us as individuals, or as a species, but generate less economic growth than normal technologies.

In addition to the model discussed above, Jones (2016) builds a second model where scientists face a choice about what kind of technologies to work on. The model starts with a standard model of economic growth, where technological progress does not tend to increase your risk of dying (whew!). But we still do die in this model, and Jones assumes people can reduce their probability of dying by purchasing safety technologies. Scientists and inventors, in turn, can choose to work on "normal" technology that makes people richer, or safety technology, which makes them live longer. There's a market for each. This gives you a result similar in spirit to the one discussed above: as people get richer, the tradeoff between stuff and survival starts to tilt increasingly towards survival. If peopl...
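As a back-of-the-envelope check on the Jones (2023) illustration above, here is a quick sketch. It assumes continuous compounding of the 10% growth rate and a constant 1% annual extinction hazard; those functional forms are my assumption for illustration, not taken directly from the paper.

```python
import math

g = 0.10       # annual GDP per capita growth with AI
hazard = 0.01  # assumed annual probability that AI kills us all

# Years of AI use implied by each preference model discussed in the post:
# ~40 under log-style preferences, ~4-5 under bounded utility.
for label, years in [("log utility", 40), ("bounded utility", 4.5)]:
    income_multiple = math.exp(g * years)         # continuous compounding
    p_extinction = 1 - math.exp(-hazard * years)  # constant hazard rate
    print(f"{label}: {years} years of AI -> income x{income_multiple:.1f}, "
          f"extinction probability {p_extinction:.0%}")

# log utility: 40 years of AI -> income x54.6, extinction probability 33%
# bounded utility: 4.5 years of AI -> income x1.6, extinction probability 4%
```

Under these assumptions the arithmetic reproduces the post's figures: a more-than-50-fold income gain with roughly a one-in-three chance of catastrophe under log utility, versus roughly a 50% income gain and about a 4% risk under bounded utility.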
Unlock your best work with Jim Kleban Head of AI at Supernormal
FEATURED TOOL Welcome to this edition of our Tools for Thought series, where we talk to founders on a mission to help us think better and work smarter. Jim Kleban is the Head of AI at Supernormal, an AI-powered app that helps you create amazing meeting notes without lifting a finger, saving ten minutes every meeting. In this interview, we discussed the underrated value of taking notes, the importance of building memory over the knowledge contained in meeting discussions, the critical relationship between note-taking and decision-making, how AI will shape the future of work, and much more. Enjoy the read!

Hi Jim, thanks for agreeing to this interview! Most people know the value of taking meeting notes, and yet in most cases, notes are sent around and never used again. Why is that?

Thanks for having me! I'm excited to share what we're building at Supernormal and how I think these tools are going to change how we work. Supernormal automatically provides detailed meeting notes that you can tailor to the type of meeting you're having. This frees people from the mental effort of having to write out notes so we can be fully present in our meetings.

Meetings are a critical part of how work gets done, but how the world approaches meetings hasn't really evolved much. Meetings are still conducted similarly to the pre-remote-work era, and we may even have gotten sloppier. In many cases meetings don't have an agenda, nobody takes notes, and action items are forgotten. The lack of rigor is making meetings less productive, and workers are feeling it. A recent survey from Zippia found that organizations spend ~15% of their time on meetings, with surveys showing that 71% of those meetings are considered unproductive. When people do take notes for meetings, it is true that they are often sent and then effectively lost. I think this is due to a lack of structure for building an organizational memory of what has been discussed over time.

How does Supernormal address those challenges?

The tools we're building at Supernormal are aimed at making meetings a much more valuable use of time. We consider how we can help people before, during, and after their meetings. Ahead of time, we make it easy to add the Supernormal notetaker to meetings by syncing it with your calendar. It's simple to turn the notetaker on or off for a meeting, either beforehand or while the meeting is happening. During the meeting, Supernormal automatically transcribes the conversation and creates meeting notes. The product works today with Google Meet, Zoom, and Microsoft Teams, so it covers your meetings whichever of the major remote meeting platforms you use. The set of meeting notes people receive includes a short summary that we call "the gist", a longer summary with all the details, and a list of action items. And for specific types of meetings like customer discovery calls, interviews, and business pitches, we provide custom notes that are tailored to what people most want to learn from that type of meeting.

When the meeting ends, the transcript and notes can be automatically shared with meeting participants. They can also be viewed, edited, and shared from inside the Supernormal web app. Here is where Supernormal helps teams build memory over the knowledge contained in their meeting discussions. On Supernormal, we organize past meetings for easy reference and make it easy to find meetings by searching over transcripts and notes. We even help people make progress on the action items they've been assigned in their meetings.
Can you give us an example of how that would work, let's say with a customer discovery call?

Sure! Let's say you're a product manager and you're trying to validate your product idea. To conduct a customer discovery call using Supernormal, you would first identify potential customers who fit your target market. You then send the usual meeting invite to have them participate in a call and include the Supernormal notetaker. On the call, you would ask the customers a series of questions that help you better understand whether your product idea addresses their needs. With Supernormal in the meeting, you can stay fully focused on what the customer is telling you and not be distracted by the need to write notes. For this type of call, Supernormal will generate custom note sections, based on who is speaking, that summarize the customer's needs and pain points.

Afterward, anyone you share the meeting with can access the transcript and notes. You can highlight customer insights from this call to compare with other customer discovery calls, and you can easily share the notes via email or message. People who weren't present on the call can quickly catch up by reading the notes and diving into the relevant parts of the transcript. Overall, the Supernormal app helps you conduct customer discovery calls more efficiently and effectively by automatically providing real-time analysis and insights from the conversation, making notes easy to share, and centralizing notes in a single place.

That sounds great. I guess a common problem with meeting notes is that we often just forget to take them.

Yes, and this is the first problem Supernormal is designed to solve. We automatically take detailed meeting notes for you, so it's no longer an annoying task or tradeoff. I also want to mention that what sets Supernormal apart is that we have invested heavily in improving the Supernormal AI. The notes are designed to be accurate, concise, and not miss any of the important discussion points. To keep improving on this, we have built in user feedback and quality controls: the AI learns to provide better notes for people the more they use our tool. Unlike other transcription tools, Supernormal accurately summarizes the meeting so you don't have to comb through a transcript. The transcript and notes work for meetings in languages other than English, too. And because the AI is a neutral observer, the generated notes can surface or remind participants of important points or tones from the meeting that might otherwise have been missed. So, it's practically impossible to forget about taking notes during meetings.

What about sharing those meeting notes?

Sharing notes really is a key behavior. Notes can be much more than a record of what has been discussed; for instance, they are often a way teams formalize key decisions. Supernormal gives meeting participants the ability to take the output notes from the AI as a starting point and then refine them as they see fit, by editing them or applying custom templates to get notes for specific kinds of meetings. And as I've mentioned, we make it easy to automatically share with participants, copy the notes and send them in an email, or just share a link to the meeting. All of your meetings are securely stored and discoverable on Supernormal, so you never have to spend time searching docs or flipping through calendar invites to find them.
Supernormal also integrates with Slack, Hubspot, and Pipedrive so you can save and share meeting notes in the tools you already use.

What kind of people use Supernormal to capture and share meeting notes?

The world has shifted to remote or hybrid work since the pandemic, and even though we started building Supernormal before COVID-19, the changes in how we work have opened up the possibilities for tools like ours. People are also excited about what is possible with AI after the ChatGPT explosion, and they want tools like Supernormal where the AI helps but does not replace the human in our work. This was also personally very important to me as I considered the type of AI product I want to be contributing to the world in my own work.

So the people who use Supernormal are often remote-work oriented. They feel they are gaining a superpower at work from the tool. Their teams often have important external meetings that not everyone attends, so the notes and transcripts are critically valuable. As an example, there's a product manager from a startup in the Pacific Northwest. Her team is working on a years-long project with multiple customer discovery calls that she can't always attend. But she uses Supernormal to review each of those calls, and finds it helpful to get the insights from the notes and then read the direct user quotes from the transcripts. For meetings within her team, she uses Supernormal to create a record for the entire team to access. This streamlines team-wide communications so everyone always knows what's going on and nothing happens behind closed doors.

What about you, how do you use Supernormal?

At Supernormal we're pretty serious about meeting notes. I spent more than a decade of my career as a product manager, and most of my workday has been spent in meetings. I always wanted a tool to make the pain of follow-ups and sending notes less toilsome. As our company has been growing, you can also imagine that the number of meetings I have now as Head of AI is increasing. We dogfood our own product at Supernormal and typically use it to capture and share all our meetings. One of the features we really love is tracking the action items assigned to each of us as tasks. It even feels fun to check out the new tasks that have automatically appeared for us to do after a day of meetings. These are helpful reminders of the things we said we'd do in our meetings, and I'd imagine we'd forget at least some of them otherwise.

The other key part of how we use Supernormal is that it frees people on the AI team from feeling like they have to attend every meeting that gets scheduled. Everyone has access to every meeting, so they won't lose context when they skip one. They can focus on completing their engineering work instead. This has greatly reduced meeting bloat and opened up work time for our AI team.

How do you recommend someone get started?

Getting started is...
Time is not a measure of productivity
It seems obvious that the amount of time you spend on a task is a terrible indicator of how productive you are. And yet, a lot of our work culture is fixated on time. We often feel pressure to prove our productivity by working long hours or responding to emails outside of regular work hours. Using principles from hourly work to define productivity in knowledge work has resulted in inefficient and often unhappy work conditions for many teams. Faster individuals are frustrated, useless meetings are filling time, and instead of taking mindful breaks, people stay sitting at their desks at home or in the office even when there is no meaningful work to do.

The pandemic has forced many companies to switch to remote work, and many of them intend to keep it this way in the future. As working remotely is becoming the norm for many knowledge workers, our practices need to change. We need to abandon time as a measure of productivity.

The dangers of passive face time

In a famous study conducted by researchers from the University of California and the University of North Carolina, 39 corporate managers were asked about their perception of their employees. During the interviews with those managers, the researchers explored two topics in particular:

- Expected face time. Being seen at work during normal business hours.
- Extracurricular face time. Being seen at work outside of normal business hours.

These are two forms of passive face time—"passive" because there is no real work interaction; the manager simply observes the amount of time their employee spends at work. What the team member is actually doing and how well they are doing it does not matter. The researchers found that these two forms of passive face time resulted in better perceptions from corporate managers. People who would spend more time at their desks or work during the weekends were seen as more "committed", "trustworthy", "dependable", "hard-working" and "dedicated". Here are some quotes from the interviews so you can judge for yourself:

"I know I can depend on someone that I see all the time at their desk."

"This one guy, he's in the room at every meeting. Lots of times, he doesn't say anything, but he's there on time, and people notice that. He definitely is seen as a hardworking and dependable guy."

"Arriving early and staying late in the office makes a good impression. I think of those workers as more dedicated than most."

"Working on the weekends makes a very good impression. It sends a signal that you're contributing to your team and that you're putting in that extra commitment to get the work done."

"If I see you there all the time, okay, good. You're hard-working, a hard-working, dependable individual."

"I would bump into my supervisor at 7 o'clock in the evening. She knows I'm there working. In those cases, I get extra points just for being there late. I'm seen as having an extra level of commitment."

These comments were not surprising in 2010 when the study was conducted. But peeking over the shoulder of an employee to check whether they are working, bumping into a supervisor at 7 pm to get extra points, being perceived as hard-working just by sitting in front of your desk — these do not make sense anymore, especially in a distributed company where it's physically impossible, except with some regrettably popular tracking software. However, cultural remnants from the industrial age mean that to this day, many managers still rely on presence — whether online or in-person — to measure performance.
This is despite the fact that time is a terrible incentive for productive work: on one hand, someone who manages to finish their work faster may get penalized compared to a slower employee who will be perceived as more zealous. On the other hand, some people keep busy in order to project an image of productivity.

Beyond time measurement

Instead of the hours of work, we should focus on the results. Instead of passive face time, we should strive for mindful productivity. Whether you are a manager, an employee, a freelancer, or an entrepreneur, these five strategies can help you stop using time as a measure of productivity:

- Avoid unnecessary meetings. Always ask yourself: "What's the goal of this meeting? Could the goal be achieved in a more efficient manner?" You will often realize that a meeting does not have a clear goal. Out of insecurity or habit, people organize meetings to show they are working publicly—that they are "dependable" and "dedicated". If the meeting doesn't have a clear goal, ask for clarification or ask to cancel it. If the meeting has a clear goal, consider whether circulating a memo or having everyone send a quick update over email could achieve it without wasting everyone's time.

- Define purposeful goals. Human beings like to keep busy. When we don't have clearly defined goals, it's easy to fill our time with ill-fitted tasks to maintain the illusion of productivity. For short-term goals based on predictable outcomes, you can use the SMART goals framework. For long-term personal growth goals, which are more flexible, use the PACT framework instead, which stands for Purposeful, Actionable, Continuous, and Trackable. Having clearly defined goals will ensure the focus is on achieving these goals rather than on passive face time.

- Reduce repetitive tasks. We waste a lot of time repeating the same tasks at work, which can keep us unnecessarily busy and fill up our time without progressing toward our goals. Review such tasks and consider whether you can automate, simplify, or outsource some of them. For instance, tools like Zapier can help you build workflows and connect all your apps together. Or you could hire someone to take care of repetitive tasks on one of the many freelancing platforms out there.

- Focus on the 20%. The 80/20 rule, also called the Pareto Principle after economist Vilfredo Pareto, states that 80% of consequences come from 20% of the causes. At work, 80% of your success will come from 20% of your efforts. Identify these key efforts, eliminate as much as you can of the noise in the other 80%, and focus on the 20% that really matters.

- Be protective of your time. While passive face time encourages people to participate in meetings and sit at their desks longer, mindful time blocking ensures you have time to focus on the 20% that matters and achieve your goals. Whether you share your calendar with a team or work independently, add blocks to your calendar for important tasks. Just make sure not to go overboard, as time blocking starts losing its meaning when everything is blocked in your calendar!

And, most importantly: if you finish a task ahead of a deadline, give yourself a pat on the back and take a break! You deserve it. Sitting in front of a desk should never be seen as a sign of hard work and commitment. Focusing on results rather than hours has always made sense. In today's distributed world, it has become inevitable. Hopefully, managers will embrace the change.
The neurochemicals of productivity and procrastination
We all have goals. They can be big or small, professional or personal. But obstacles get in the way. External obligations such as social events, unforeseen additional work, and demanding customers can drain our energy, so there's little left to focus on what really matters to us. If only that were the only issue. To make things worse, we're also constantly fighting an internal battle against our brain, whose background mechanisms we're unconscious of. You don't feel anything when a neuron fires, and you have little control over the activity inside your brain. But those processes have a huge impact on how you manage your goals and how it feels to work toward them. Understanding these mechanisms won't magically allow you to achieve your goals, but it will help you be kinder to yourself when things don't seem to go as planned and you struggle to focus on your goals.

Your three frenemies

Three main neurochemicals have been identified in people experiencing a state of flow: dopamine, noradrenaline, and acetylcholine. As you'll see, these are akin to little tricksters that can sometimes help you and other times work against you.

Dopamine is a neurotransmitter that plays an important role in the reward system. Releasing dopamine is one of the ways your brain makes you feel good and encourages you to do more of whatever you're doing. Research has found that behaviors such as sex, eating, and playing video games tend to increase dopamine levels in the central nervous system. When it comes to productivity, dopamine is a double-edged sword. It can increase or decrease your productivity depending on what exactly triggers the reward system. Let's say you check how many words you wrote in the last hour, or finally get a new feature to work in your app. Boom, you get a hit of dopamine. But let's say you get a notification on your phone and see someone liked your latest tweet. Boom, you also get a hit of dopamine. To make the most of that nice feeling you get from increased levels of dopamine, you need to ensure you trigger your reward system in a way that's aligned with your goals. This means putting your phone away, focusing on the task at hand, and designing ways to reward yourself for a job well done. We'll look at practical strategies to achieve this later in this article, but first, let's look at the two other neurochemicals involved in productivity and procrastination.

The second neurochemical is noradrenaline, also known as norepinephrine in the United States. It's a neurotransmitter that makes you feel "ready for action" — it's involved in the fight-or-flight response and makes you more alert and vigilant. Again, there is a tricky balance to find with noradrenaline. The right amount of pressure can be beneficial and increase your productivity — this is why many procrastinators report performing better when a deadline is approaching. But if you keep waiting until the last minute to complete your tasks, the resulting chronic stress can be damaging.

Finally, acetylcholine is the third neurochemical of productivity and procrastination. It was the first neurotransmitter ever discovered and is abundant in the nervous system. Besides being involved in the autonomic nervous system — all of the involuntary and unconscious activity in your body, such as heart rate, digestion, or respiration — it also plays an important role in focus, learning, and memory. Studies found that increased acetylcholine levels have a positive impact on performance.
On the flip side, an acetylcholine deficiency often means that you'll have trouble focusing your attention and remembering things, and damage to the cholinergic system — the system in the brain that produces acetylcholine — has been found to be associated with the memory deficits observed in Alzheimer's disease. That's a lot to remember, so how can you make the most of this knowledge in a practical way in order to achieve your goals without sacrificing your mental health?

A practical neuroproductivity framework

Dr. Friederike Fabritius created a handy framework to remember the three neurochemicals of productivity and procrastination based on the general areas of cognition they affect: fun, fear, and focus.

- Fun. That's dopamine. As mentioned earlier, it's a tricky one. It's all about finding the right balance between having fun and not getting distracted. The best strategy is to ensure there's some reward in the process of working on your project. Sometimes, the reward is intrinsic: you genuinely enjoy what you're working on. But sometimes, you need to work on something you don't find as interesting. In these cases, it's a good idea to create extrinsic rewards you genuinely care about. For example, promise yourself to go see a movie you're excited about after you're done with the project. It also helps to design an environment that doesn't include distracting rewards, for example by leaving your phone in another room so you don't see it every time someone likes your latest tweet.

- Fear. Living in constant fear is not good for you, but just the right amount of uncertainty will increase your levels of noradrenaline and, thus, your productivity. Instead of waiting until the last minute to start working on a project, create positive pressure by getting out of your comfort zone, for instance by working on something new. Or, if you're working on documentation or something tedious, tell the team that you will present your work at your next stand-up meeting. This will trick your mind into feeling just the right amount of positive pressure and help you avoid procrastination.

- Focus. Finally, make sure to give your brain everything it needs to increase your levels of acetylcholine and, thus, your focus. Some ways to increase your levels of acetylcholine include eating foods rich in choline — which is needed to synthesize acetylcholine — such as lean meats, fatty fish, milk, yogurt, kidney beans, green beans, peas, and broccoli. You can also gently exercise before working, such as going for a walk. But don't overdo it: research suggests that lengthy exercise sessions, such as marathon training, reduce your acetylcholine levels.

Combined, fun, fear, and focus will help you get in the flow. And if you really can't seem to be productive, consider taking a break. Staying busy for the sake of staying busy can give you the illusion of productivity and lead to anxiety. Prolonged procrastination is not your enemy — it's a signal sent by your brain that something is not quite working well.
Can taste beat peer review?
Note: Have an idea for a research project about how to improve our scientific institutions? Consider applying for a grant of up to $10,000 from the Metascience Challenge on experiment.com, led by Paul Niehaus, Caleb Watney, and Heidi Williams. From their call for proposals: "We're open to a broad set of proposals to improve science -- for example, experimental designs, surveys, qualitative interviews with scientists, pilot programs for new mechanisms, scientific talent development strategies, and other research outputs that may be relevant for scientific research funders." The deadline to apply is April 30. On to our regularly scheduled programming!

Scientific peer review is widely used as a way to distribute scarce resources in academic science, whether those are scarce research dollars or scarce journal pages. Peer review is, on average, predictive of the eventual scientific impact of research proposals and journal articles, though not super strongly. In some sense, that's quite unsurprising; most of our measures of scientific impact are, to some degree, about how the scientific community perceives the merit of your work: do they want to let it into a journal? Do they want to cite it? It's not surprising that polling a few people from a given community is mildly predictive of that community's views.

At the same time, peer review has several potential shortcomings:

- Multiple people reading and commenting on the same document costs more than having just one person do it
- Current peer review practices provide little incentive to do a great job at peer review
- Peer review may lead to biases against riskier proposals

One alternative is to empower individuals to make decisions about how to allocate scientific resources. Indeed, we do this with journal editors and grant makers, though generally in consultation with peer review. Under what conditions might we expect individuals empowered to exercise independent judgment to outperform peer review?

To begin, while peer review does seem to add value, it doesn't seem to add a ton of value; at the NIH, top-scoring proposals aren't that much better than average, in terms of their eventual probability of leading to a hit (see this for more discussion). Maybe individuals selected for their scientific taste can do better, in the same way some people seem to have an unusual knack for forecasting.

Second, peer reviewers are only really accountable for their recommendations insofar as they affect their professional reputations. And often they are anonymous, except to a journal editor or program manager. That doesn't lead to strong incentives to try to really pin down the likely scientific contribution of a proposal or article. To the extent it is possible to make better judgments by exerting more effort, we might expect better decision-making from people who have more of their professional reputation on the line, such as editors and grant-makers.

Third, the very process of peer review may lead to risk aversion. Individual judgment, relying on a different process, may be able to avoid these pitfalls, at least if taking risks is aligned with professional incentives. Alternatively, it could be that a tolerance for risk is a rare trait in individuals, so that most peer reviewers are risk averse.
If so, a grant-maker or journal that wants to encourage risk could do so by seeking out (rare) risk-loving individuals and putting them in decision-making roles.

Lastly, another feature of peer review is that most proposals or papers are evaluated independently of each other. But it may make sense for a grant-maker or journal to adopt a broader, portfolio-based strategy for selecting science, sometimes elevating projects with lower scores if they fit into a broader strategy. For example, maybe a grant-maker would want to support in parallel a variety of distinct approaches to a problem, to maximize the chances at least one will succeed. Or maybe they will want to fund mutually synergistic scientific projects.

We have a bit of evidence that empowered individual decision-makers can indeed offer some of these advantages (often in consultation with peer review).

Picking Winners Before Research

To start, Wagner and Alexander (2013) is an evaluation of the NSF's Small Grants for Exploratory Research (SGER) program. This program, which ran from 1990 to 2006, allowed NSF program managers to bypass peer review and award small short-term grants (up to $200,000 over 2 years). Proposals were short (just a few pages), made in consultation with the program manager (but with no other external review), and processed fast. The idea was to provide a way for program managers to fund risky and speculative projects that might not have made it through normal peer review. Over its 16 years, the SGER (or "sugar") program disbursed $284mn via nearly 5,000 awards.

Wagner and Alexander argue the SGER program was a big success. By the time of their study, about two thirds of SGER recipients had used their results to apply for larger grant funding from the conventional NSF programs, and of those that applied, 80% were successful (at least, among those who had received a decision). They also specifically identify a number of "spectacular" successes, where SGER provided seed funding for highly transformative research (judged as such from a survey of SGER awardees and program managers, coupled with citation analysis).

Indeed, Wagner and Alexander's main critique of the program is that it was insufficiently used. Up to 5% of agency funds could be allocated to the program, but a 2001 study found only 0.6% of the budget actually was. Wagner and Alexander also argue that, by their criteria, around 10% of funded projects were associated with transformational research, whereas a 2007 report by the NSF suggests research should be transformational about 3% of the time. That suggests perhaps program managers were not taking enough risks with the program. Moreover, in a survey of awardees, 25% said an 'extremely important' reason for pursuing an SGER grant was that their proposed research idea would be seen as either too high-risk, too novel, too controversial, or too opposed to the status quo for a peer review panel. That's a large fraction, but it's not a majority (though the paper doesn't report the share who rated these factors as important but not extremely important). Again, maybe the high-risk program was not taking enough risks! In general though, the SGER program's experience seems to support the idea that individual decision-makers can do a decent job supporting less conventional research.

Goldstein and Kearney (2018) is another look at how well discretion compares to peer review, this time in the context of the Advanced Research Projects Agency - Energy (ARPA-E).
ARPA-E does not function like a traditional scientific grant-maker, where most of the money is handed out to scientists who independently propose projects for broadly defined research priorities. Instead, ARPA-E is composed of program managers who are goal-oriented, seeking to fund research projects in the service of overcoming specific technological challenges. Proposals are solicited and scored by peer reviewers along several criteria, on a five-point scale. But program managers are very autonomous and do not simply defer to peer review; instead, they decide what to fund in terms of how proposals fit into their overall vision. Indeed, in interviews conducted by Goldstein and Kearney, program managers report that they explicitly think of their funded proposals as constituting a portfolio, and will often fund diverse projects (to better ensure at least one approach succeeds), rather than merely the highest scoring proposals.

Goldstein and Kearney have data on 1,216 proposals submitted through the end of 2015. They want to see what kinds of projects program managers select, and in particular, how they use their peer review feedback. Overall, they find proposals with higher average peer review scores are more likely to get funded, but the effect is pretty weak, explaining about 13% of the variation in what gets funded. One figure in the paper shows the average peer review scores for 74 different proposals to the "Batteries for Electrical Energy Storage in Transportation" program, with funded proposals marked: program managers picked many projects outside the top (figure in Goldstein and Kearney, 2018).

What do ARPA-E program managers look at, besides the average peer review score? Goldstein and Kearney argue that they are very open to proposals with highly divergent scores, so long as at least one of the peer review reports is very good. When the same battery-program proposals are ordered by their maximum peer review score rather than their average, more of the funded proposals cluster around the highest scores (figure in Goldstein and Kearney, 2018). This is true beyond the battery program: across all 1,216 project proposals, for a given average score, the probability of being funded is higher if the proposal receives a wider range of peer review scores. Goldstein and Kearney also find proposals are more likely to be funded if they are described as "creative" by peer reviewers, even after taking into account the average peer review score.

ARPA-E was first funded in 2009, and this study took place in 2018, using proposals made up through 2015. So there hasn't been a ton of time to assess how well the program has worked. But Goldstein and Kearney do an initial analysis to see how well projects turn out when program managers use their discretion to override peer review. To do this, they divid...
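The selection behavior described above (rewarding a high maximum score and tolerating reviewer disagreement, rather than funding strictly by the average) can be made concrete with a toy scoring rule. The sketch below is purely illustrative: the weights and threshold are invented, not estimated from Goldstein and Kearney's data.

```python
from statistics import mean, stdev

def fund(scores, w_max=0.5, w_mean=0.3, w_spread=0.2, threshold=3.5):
    """Toy ARPA-E-style rule on five-point peer review scores: weight the
    best report and the disagreement among reviewers, not just the average.
    Weights and threshold are invented for illustration."""
    composite = (w_max * max(scores)
                 + w_mean * mean(scores)
                 + w_spread * stdev(scores))
    return composite >= threshold, round(composite, 2)

consensus = [3.4, 3.5, 3.6]  # decent average, reviewers agree
divisive  = [2.0, 3.0, 5.0]  # lower average, one very enthusiastic review

print(fund(consensus))  # (False, 2.87) -> passed over despite a solid average
print(fund(divisive))   # (True, 3.81)  -> funded on the strength of its best review
```

Under this made-up rule, the divisive proposal beats the consensus one, mirroring the pattern in the data: for a given average score, more disagreement raises the chance of funding.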
Think and learn visually with Dom Zijlstra founder of Traverse
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Dom Zijlstra is the founder of Traverse, the only tool with mind mapping, note-taking and spaced repetition flashcards in one place. Traverse uses science-based features to help you deeply grasp complex topics so you can remember them for life. In this interview, we discussed how cognitive science can help us learn better, the different types of effective mind maps for learning, using spaced repetition as a powerful learning technique, the best way to create, connect, and consolidate knowledge, and much more. Enjoy the read!

Hi Dom, thanks for agreeing to this interview! Combining mind mapping, note-taking and spaced repetition flashcards in one place is an ambitious endeavor. What inspired you to start building Traverse?

Thank you for this interview opportunity! I'm thrilled to share my story and the inspiration behind Traverse, a science-based learning tool that combines mind mapping, note-taking, and spaced repetition flashcards in one place.

It all started around six years ago, when I faced a learning challenge bigger than I could handle. I always thought of myself as a pretty smart guy, having studied physics and worked as a spacecraft engineer. But when I met my Chinese wife and tried to learn Mandarin, I realized that my learning method wasn't up to the task. At the time, I had just completed my studies in Germany, having learned German and Portuguese. I had traveled to Sweden on an exchange program and later moved to Brazil for a while. I had always been excited about new challenges and learning new things. But learning Mandarin turned out to be a whole new level of difficulty. I spent countless hours using different tools and ineffective methods to learn the language, wasting precious time and energy. At some point, I realized that if I wanted to succeed, I needed a method based on how humans actually learn. This led me to dive into learning science and put together the method that later became Traverse, a research-based learning tool that can help anyone master complex topics. Using Traverse myself, I was finally able to get fluent in Mandarin, live in China, and chat with my wife's family and friends. The app has since helped tens of thousands of learners, and I'm grateful for the opportunity to build the best learning tool for complex topics together with our users.

When I look back at my life, there's a thread that connects my experiences and the inspiration that led to Traverse. As a child, I was fascinated by books, nature, games, movies, and programming. Throughout my life, I've enjoyed learning new things and adapting to new environments. Even before going to college myself, I taught college students math and engineering, and developed a programming course for them. I loved thinking about how to teach and help others learn. Today, as the founder of Traverse, I aim to be kind, helpful, knowledgeable, and inspiring. I want to be a go-to person for those who seek to learn and grow. The possibility of financial freedom and inspiring others to join me on this mission has been a driving force behind Traverse. My vision is to be at the forefront of a revolution in education, helping people from all over the world become "superlearners" and create deep connections with others that bring happiness and fulfillment.
In conclusion, my journey in creating Traverse has been fueled by my own experiences, challenges, and the desire to help others learn and connect. The app's foundation is built on cognitive science, my passion for learning, and the experiences I've gathered throughout my life. Traverse is not just an app; it's a manifestation of my life's mission to empower people to learn anything, anywhere, and share the joy of learning with others.

How would you describe Traverse to someone who has never used it?

Traverse can be described as a powerful fusion of Notion, Miro, and Anki, but with a focused approach on deep learning, understanding, and memory. It is not a to-do list tool like Notion, nor is it a personal knowledge management tool. Traverse is a learning tool, specifically designed to enhance your brain. It is especially useful for those determined to learn something specific. Traverse is an all-in-one app that combines the best features of mind mapping, note-taking, and spaced repetition flashcards, offering an integrated learning experience. Unlike other tools, it is not designed for merely gathering thoughts from books and articles. Traverse is built on a solid foundation of cognitive science and is tailored for those who are serious about learning and mastering complex topics. By integrating the best of flashcard apps like Anki, note-taking apps like Notion, and mind mapping apps like Miro, Traverse provides a comprehensive and efficient learning experience. It offers user-friendly spaced repetition flashcards, note-taking features, and a visually organized mind map that allows learners to express their thoughts and knowledge in a vibrant and colorful manner.

Let's start with mind mapping. How does it work in Traverse?

Mind mapping is a visual learning technique that helps individuals organize and represent information in a structured and interconnected manner. Traverse is a mind mapping application that goes beyond the traditional tree-like structures offered by many other tools, providing a comprehensive set of features for deep learning of complex topics. Traverse employs a science-backed approach called GRINDE, borrowed from Dr. Justin Sung, which stands for Grouped, Reflective, Interconnected, Non-verbal, Directional, and Emphasized. This method guides users in creating effective mind maps for learning:

- Grouped: Traverse encourages users to organize information into several boxes, forming larger concepts that offer more flexibility, similar to tree branches that can be rearranged.
- Reflective: The app promotes a reflection of what's going on inside the user's mind, as opposed to linear note-taking, which doesn't effectively represent one's thought process.
- Interconnected: Traverse allows users to form a big picture by connecting related ideas and concepts.
- Non-verbal: The app encourages the use of arrows, sketches, and other visual elements instead of text-heavy notes, fostering creativity and reducing time spent on note-taking.
- Directional: Traverse helps users give order and flow to their mind maps, creating cause-and-effect relationships and a logical framework for deeper learning.
- Emphasized: The app supports the use of thicker lines and larger fonts for main points, reducing cognitive load and making it easier to identify important connections at a glance.

Traverse features an infinite canvas where notes can be grouped, linked, and freely arranged. Users can create customized links and use freehand drawing to express ideas visually.
The app avoids auto-linking to prevent messy and overwhelming mind maps, promoting deliberate connections instead. With Traverse, users can see the big picture, stay organized, dive deeper without losing context, and experience the joy of learning and discovery. The app incorporates key principles such as visual encoding, cognitive load optimization, spaced revisions, and spatial memory to enhance the learning process and promote long-term retention.

Traverse also allows you to take notes. Why should users take their notes in Traverse?

Traverse offers a unique and powerful approach to note-taking by integrating notes within visually organized mind maps. This combination effectively bridges the gap between traditional note-taking and mind mapping, allowing users to take advantage of the benefits of both techniques. Using Traverse for note-taking provides several advantages:

- Visual organization: Notes in Traverse live within a mind map, similar to sketchnoting, but with the ability to add more information, sources, and references. This visual organization makes it easier to understand and remember the relationships between various concepts.
- Markdown-based and powerful embeds: Like Notion, Traverse supports markdown formatting, which makes it easy to create well-structured and visually appealing notes. Additionally, it offers powerful embeds such as YouTube videos, LaTeX math equations, and code blocks with syntax highlighting, enriching the learning experience.
- Visual Zettelkasten: Traverse functions as a visual Zettelkasten, a note-taking system popularized by German sociologist Niklas Luhmann. By incorporating bidirectional links and visually organizing notes, Traverse enables users to connect ideas, fostering a deeper understanding and generating new insights.
- All knowledge in one place: With Traverse, users can store all their notes and mind maps in a single, unified platform. This eliminates the need to switch between multiple applications and allows users to manage and consolidate their knowledge more efficiently.
- Bridging mind maps and retrieval practice: Traverse combines the power of mind maps with the benefits of retrieval practice, a proven learning technique that involves actively recalling information from memory. By integrating notes within mind maps, Traverse supports both the organization of knowledge and the active retrieval of information, leading to better comprehension and long-term retention.

In summary, Traverse provides a versatile and effective note-taking solution by combining the best aspects of mind mapping and traditional note-taking. By using Traverse for note-taking, users can enjoy a visually organized learning experience, a powerful feature set, and the benefits of having all their knowledge in one place.

Something exciting is that you can quickly create flashcards from any note. Can you tell us more about spaced-repetition in Traverse?

Spaced repetition is an incredibly powerful learning technique when implemented correctly, and Trav...
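As general background on how spaced-repetition schedulers work: most flashcard tools in this family build on some variant of the classic SM-2 algorithm from SuperMemo, which Anki's scheduler is also based on. The sketch below shows that textbook algorithm as an illustration of the technique; it is not Traverse's actual scheduler, which is not specified here.

```python
def sm2_step(interval_days, ease, quality):
    """One review under the classic SM-2 spaced-repetition algorithm.
    quality: self-rated recall from 0 (blackout) to 5 (perfect).
    Illustrative only; not Traverse's own scheduler."""
    if quality < 3:
        return 1, ease  # failed recall: see the card again tomorrow
    # Ease factor drifts up on easy recalls, down on hard ones (floor 1.3).
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days == 0:
        return 1, ease   # first successful review
    if interval_days == 1:
        return 6, ease   # second successful review
    return round(interval_days * ease), ease  # then grow geometrically

# A card recalled successfully three times: intervals go 1 -> 6 -> 16 days.
interval, ease = 0, 2.5
for quality in (5, 4, 4):
    interval, ease = sm2_step(interval, ease, quality)
    print(interval, round(ease, 2))
```

The key idea, shared by all such schedulers, is that each successful recall multiplies the next review interval, so well-known cards quickly fall back to rare check-ins while difficult ones keep resurfacing.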
What does peer review know?
People rag on peer review a lot (including, occasionally, New Things Under the Sun). Yet it remains one of the most common ways to allocate scientific resources, whether those be R&D dollars or slots in journals. Is this all a mistake? Or does peer review help in its purported goal of identifying the science most likely to have an impact and hence, perhaps, most deserving of some of those limited scientific resources? A simple way to check is to compare peer review scores to other metrics of subsequent scientific impact; does peer review predict eventual impact? A number of studies find it does.

Peer Review at the NIH

Let's start with peer review at the stage of reviewing research proposals. Li and Agha (2015) looks at more than 100,000 research projects funded by the NIH over 1980-2008, comparing the percentile rank of the applications' peer review scores to the outcomes of these research projects down the road. For each grant, they look for publications (and patents) that acknowledge the grant's support. Besides counting the number of publications and patents each grant results in, they can also see how often the publications are cited. Note, they are only looking at projects that actually were funded by the NIH, so we don't need to worry that their results are just picking up differences between funded and unfunded projects.

The upshot is, better peer review scores are correlated with more impact, whether you want to measure that as the number of resulting journal articles, patents, or citations. A scatter plot of the raw data, comparing peer review percentile ranks (lower is better) to citations and publications, shows lots of noise, but among funded projects, if people think your proposal is stronger, you're more likely to get publications and citations (figure in Li and Agha, 2015).

Li and Agha also look at the correlation between peer review scores and impact measures after controlling for other potentially relevant factors, such as the year or field of the grant, or the PI's publication history, institution, and career characteristics. The results are moderated a bit, but basically still stand - compare two grants in the same year, in the same study section, from PIs who look pretty similar on paper, and the grant with higher peer review scores will tend to produce more papers and patents, receive more citations, and produce more very highly cited papers. Among funded proposals, the predictive power of peer review seems to be highest at the top; the difference in citations, for example, between a top-scoring proposal and one at the 20th percentile tends to be much larger than the difference in citations between one at the 20th and 40th percentile.

Moreover, even at the top, the correlation between peer review scores and outcomes isn't great. If you compare proposals that score at the top to proposals at the 10th percentile (of grants that were ultimately still funded), the top proposal is twice as likely to result in a one-in-a-thousand top-cited paper. I think that's not actually that high - since a 10th percentile proposal isn't that far off from the average, if peer review were really accurate, you might have expected the top proposal to be something like ten times as likely to produce a hit paper as an average proposal.
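One way to build intuition for why "twice as likely" is a weak signal: model each proposal as having a latent quality, with the review score equal to quality plus noise, and watch how the hit rate among top-scored proposals shrinks as the noise grows. A toy Monte Carlo sketch; all the distributions here are my assumptions for illustration, not anything estimated from NIH data.

```python
import random

random.seed(0)
N = 100_000

def top_decile_hit_multiple(noise_sd):
    """Latent quality ~ N(0,1); review score = quality + N(0, noise_sd).
    A 'hit' is a top-1%-quality proposal, so a random proposal has a 1%
    hit rate by construction. Returns how many times likelier a
    top-decile-scored proposal is to be a hit than an average one."""
    quality = [random.gauss(0, 1) for _ in range(N)]
    hit_cutoff = sorted(quality)[int(0.99 * N)]
    scored = sorted(((q + random.gauss(0, noise_sd), q) for q in quality),
                    reverse=True)
    top_decile = [q for _, q in scored[: N // 10]]
    hit_rate = sum(q >= hit_cutoff for q in top_decile) / len(top_decile)
    return hit_rate / 0.01

for sd in (0.25, 1.0, 3.0):
    print(f"review noise sd={sd}: top-scored proposals are "
          f"{top_decile_hit_multiple(sd):.1f}x as likely as average to be hits")
```

With nearly noiseless reviews the multiple approaches ten (essentially all top-1% proposals land in the top decile of scores); as the noise grows it falls toward the modest twofold edge observed in the NIH data.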
Park, Lee, and Kim (2015) exploits a peculiar moment in NIH history to provide further evidence that the NIH peer review process, on average, picks projects with higher scientific impact. In 2009, the US government passed the American Recovery and Reinvestment Act, a stimulus bill meant to fight the economic headwinds of the 2008 financial crisis. The bill authorized $831bn in new spending, of which a tiny corner, $1.7bn, was used by the NIH to fund research projects that would not normally have been funded. This provides a rare opportunity to see how projects that would otherwise have been rejected by the NIH (which relies heavily on peer review to select projects) fare when they unexpectedly receive funding.

When Park, Lee, and Kim (2015) compare stimulus-funded proposals (which got lower peer review scores) to normally funded proposals, they find the stimulus-funded proposals tend to lead to fewer publications and that these publications tended to receive fewer citations. On average, a research proposal with peer review scores high enough to be funded under the NIH's normal budget produces 13% more publications than a stimulus-funded project. If we focus on a proposal's highest-impact publication (in terms of citations), Park and coauthors find proposals funded only because of the stimulus got 7% fewer citations. Lastly, we can look at the 5% of publications funded by these NIH grants that received the highest number of citations. A normally funded research proposal had a 7% chance of producing one of these "highest impact" papers; a stimulus-funded proposal had a 4% chance of producing one.

I think these results are pretty consistent with Li and Agha (2015) in a few ways. They replicate the general finding that in the NIH, higher peer review scores are associated with more research impact (as measured with imperfect quantitative methods). But they also find peer review doesn't have super forecasting acumen. Note that Park, Lee, and Kim are not comparing proposals that just barely clear the NIH's normal funding threshold to proposals that just barely miss it - they don't have the data needed for that. Instead, they are comparing the entire batch of proposals rated above the NIH's normal funding threshold to a batch of proposals that fall uniformly below it. The batch of normally funded proposals includes the ones that were rated very highly by peer review, which Li and Agha's work suggests is where peer review tends to work best. Even so, the differences Park, Lee, and Kim find aren't enormous.

Peer Review at Journals

We have some similar results about the correlation between peer review scores and citations at the publication stage too. As discussed in more detail in "Do academic citations measure the impact of new ideas?", Card and DellaVigna (2020) have data on about 30,000 submissions to four top economics journals, including data on their peer review scores over (roughly) 2004-2013. Because, in economics, it is quite common for draft versions of papers to be posted in advance of publication, Card and DellaVigna can see what happens to papers that are accepted or rejected from these journals, including how many citations they go on to receive (both as drafts and published versions). As with Li and Agha (2015), they find there is indeed a positive correlation between the recommendation of reviewers and the probability a paper is among the top 2% most highly cited in the journal.
Nor is this simply because high peer review scores lead to publication in top economics journals (though that's also true). Card and DellaVigna also track the fate of rejected articles and find that even among rejects from these journals, those that got higher peer review scores still go on to receive more citations (figure in Card and DellaVigna, 2020).

Siler, Lee, and Bero (2014) obtain similar results using a smaller sample of submissions to the Annals of Internal Medicine, the British Medical Journal, and The Lancet over 2003 and 2004. For a sample of 139 submissions that received at least two peer review scores, they can track down the eventual fate of the submission (either published in one of these three journals or another). Among the 89 peer-reviewed submissions that were ultimately rejected, the peer review scores (from the first, initial review) were positively correlated with the number of citations the submissions eventually received, though the correlation was pretty weak. For the 40 submissions that were reviewed and accepted, positive (initial) peer review reports were again positively correlated with the number of citations eventually received. In this latter case, the correlation was too weak to be confident it's not just noise (possibly because the sample was so small). Siler, Lee, and Bero also emphasize that the three journals actually rejected the 14 papers that would go on to receive the most citations (though they did manage to get the 15th!).

Perhaps more reassuring is the fact that, generally speaking, papers that went on to be highly cited tended to be identified as publishable by other journals pretty quickly. One figure in their paper compares the eventual number of citations received to the time elapsed between submission to one of the three journals under study and eventual publication somewhere else: no highly cited papers took longer than 500 days (not great, but better than 2000!) to find a home (figure in Siler, Lee, and Bero, 2014). That could be because peer review at one of the next journals the paper was submitted to was quick to recognize the quality of these articles, or possibly the authors rapidly resubmitted after getting favorable feedback from initial peer reviewers. But this evidence is pretty indirect and other explanations are also possible (for example, maybe the authors believed in the papers' merit and submitted them more frequently for review, or they were more frequently desk-rejected and so could be resubmitted fast).

That said, we also have one more study looking at peer review reports and eventual impact, this time in the American Sociological Review. Teplitskiy and Bakanic (2016) have data on 167 articles published in the American Sociological Review in the 1970s, as well as their peer review scores. Among this set of published articles, they find no statistically significant relationship between peer review scores and the number of citations papers go on to earn. After a...
What does peer review know?
Growth Loops: From linear growth to circular growth
It’s common to see progress as linear. When thinking about success, many people imagine a ladder or stairs going up. To progress, you need to climb each step one by one and get closer to the top. But that’s not the only model you can apply to visualize personal growth.

Linear model: A then B then C then D.
Circular model: A feeds B feeds C, which in turn feeds A.

In a linear model of personal growth, you can only go up or down. By design, there are people below and above you. This model can be falsely reassuring, as it seems to offer a clear path to success. It’s used by many organisations as a way to manage their employees’ careers. In a circular model of growth, nobody is more advanced than anyone else. There is no “up” or “down.” People are at a particular point of their own, unique growth loop. Everyone competes only against themselves. The circular model can be more daunting, as there is no predefined direction — you need to design your own personal growth process — but it can also be infinitely more rewarding.

Designing growth loops

The circular model of personal growth is not too dissimilar from the concept of the circular economy, where the goal is to make the most of resources and to create self-sustaining loops. It forces people to learn how to learn by designing feedback mechanisms that will allow them to continuously improve. Here is an example of the circular model of personal growth applied to learning:

1. Learn something new
2. Write about it and share it
3. Connect with new people
4. … and learn something new from them.

As you can see, there is no clear “winning” end goal. When using the circular model of growth, you need to fall in love with the process. Success becomes a by-product of your learning journey, and it’s all about celebrating the small wins rather than chasing a big final victory.

From single loops to double loops

Growth loops are not intrinsically good if we keep approaching the same problem with no variation in method and without ever questioning the overarching goal. This is called “single-loop learning.” A better approach is “double-loop learning”, which is easily understood using the thermostat analogy from Teaching Smart People How To Learn:

“A thermostat that automatically turns on the heat whenever the temperature in a room drops below 68°F is a good example of single-loop learning. A thermostat that could ask, “why am I set to 68°F?” and then explore whether or not some other temperature might more economically achieve the goal of heating the room would be engaged in double-loop learning.” Chris Argyris, Business Theorist and Professor at Harvard Business School.

Unlike single-loop learning, which is simple and static, double-loop learning is more complex and dynamic: it takes into account external factors and changes in your environment, and adjusts the mental models on which a decision depends. Double-loop learning is a model that encourages people and organisations to continuously challenge their assumptions and goals instead of blindly repeating the same loop. While the idea seems simple, double-loop learning can be hard to implement because of a natural need for control, a fear of failure, or an overall resistance to change. Mental models are hard to change, which is why double-loop learning is more challenging to implement at first, but also more rewarding.
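To make the thermostat analogy concrete, here is a minimal sketch in Python. The temperatures, setpoint, and comfort threshold are all hypothetical and illustrative only: the single-loop controller blindly corrects deviations from its setpoint, while the double-loop version first questions whether the setpoint itself still serves the underlying goal.

```python
# A minimal sketch of single-loop vs. double-loop learning,
# using the thermostat analogy (all numbers are hypothetical).

def single_loop(temperature, setpoint=68):
    # Single-loop learning: correct deviations from the goal,
    # but never question the goal itself.
    return "heat on" if temperature < setpoint else "heat off"

def double_loop(temperature, setpoint=68, lowest_comfortable=65):
    # Double-loop learning: first ask "why am I set to 68°F?".
    # If a lower setpoint still achieves the underlying goal
    # (a comfortable room), adopt it and heat more economically.
    if setpoint > lowest_comfortable:
        setpoint = lowest_comfortable
    return single_loop(temperature, setpoint)

print(single_loop(66))  # "heat on": blindly chases 68°F
print(double_loop(66))  # "heat off": 66°F is already comfortable enough
```

The same shape applies to personal growth loops: the single loop optimizes within an existing routine (learn, write, connect), while the double loop periodically asks whether that routine still serves what you actually want to learn.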
If you’re struggling to get out of linear learning or single-loop learning, try to understand the true nature of your resistance, and start by implementing double-loop learning in a small area of your life where you already feel quite comfortable.
Better discover and understand scientific articles with Josh Nicholson co-founder of Scite
FEATURED TOOL Welcome to this edition of our Tools for Thought series, where we interview founders on a journey to help us think better and work smarter. Josh Nicholson is the co-founder of Scite, an award-winning platform for discovering and evaluating scientific articles. Scite allows users to see how a publication has been cited by providing the context of the citation and a classification describing whether it provides supporting or contrasting evidence for the cited claim. In this interview, we talked about the nature of research, the research lifecycle, the problem of trustworthiness and reproducibility in science, how to navigate retractions, the importance of discoverability, and much more. Enjoy the read!

Hi Josh, thanks for agreeing to this interview! Let’s start with a big question. What makes scientific research so challenging to work with in the first place?

Thanks for letting me chime in! Scientific research is complex by nature. When I describe my work on aneuploidy and chromosome mis-segregation, almost immediately 99% of the population tunes out or can no longer understand me. The terms used in scientific research are often specialized and, while necessary to communicate accurately, can leave a lot of people lost. With that said, scientific research is amazing and affects everyone in some way. There is research on how video games affect spatial reasoning, how Peppa Pig influences children learning English, and how SPG20 on chromosome 13 affects cytokinesis. Research touches all of our lives, mostly in a positive way. I got into cancer research to try to better understand the etiology of cancer so that we as a research community could improve outcomes for cancer patients. My work now focuses on making all of research more understandable, accessible, and trustworthy so that people, whether they are researchers or not, can use research to make better decisions in their lives and work.

Peppa Pig and chromosomes — you’ve got a point. It’s an age-old problem. So, why do you think now is the right time to tackle that challenge?

With COVID-19 upending the world, we all fully understand how scientific research can impact our lives. Now, with the rise of ChatGPT and other large language models, we all fully understand the need to be able to verify information online. Is that COVID-19 study trustworthy? Is that ChatGPT output factual? Scite addresses these problems head-on through the development of Smart Citations — citations that make it easy to see how any research paper has been cited, how any topic has been cited, and basically how anything is cited!

While Scite was born out of the frustration of researchers trying to determine whether a study was reproducible or not, the use cases have been more than we could have imagined. One of the more exciting applications of our Smart Citations is validating the output that ChatGPT and other AI-based tools are generating. The timing of what we’re building couldn’t be better. People have been trying to build something like Scite since the 1960s, but failed because the technology just wasn’t there yet. And given the rise of ChatGPT as well as the general explosion in research volume in recent years, there’s a compelling need for more streamlined, efficient solutions to engage with the scholarly literature.

Agreed, the time is now. Next, can you explain how Scite actually works?

In one sense, you can think of Scite as a Rotten Tomatoes for research: take a paper, topic, author, etc., and easily read what the research says about any of them.
Could those findings be replicated by others? Did someone discuss this piece of research in the Introduction section to give background to their own work, did they mention it in the Methods section because they used similar methods, or did they cite it in the Results section to compare their findings? Without Scite, all this information is really hard to get because it requires you to read through hundreds, if not thousands, of papers. With Scite, you can easily see at a glance what the research says about any topic.

We accomplished this by partnering with most major academic publishers, who give us access to the full text of research articles. Our system, which we’ve published the details of, is able to extract, link, and classify the citation statements (the textual context created when citations are made in the text) from articles and make that information available to our users. Of course, as we’ve developed the product, we’ve discovered other ways to leverage our unique data of Citation Statements to fulfill other needs — from our unique Citation Statement search experience to verifying claims made by ChatGPT.

Scite also allows users to look up any research topic directly. Can you tell us more?

One of the pivotal moments in our product journey was the realization that our database of Citation Statements could be searched directly. Typically, when we index those statements, we take the sentence where the reference was made and also include the sentences before and after. The resulting statement is long enough that it offers a good contextual overview. So, we designed a search experience around it. It started by letting anyone query keywords against our database of statements. As a personal example, I live in Brooklyn and often think about the rising rents in New York. Well, it turns out you can query “Rising rents in Williamsburg” in Scite and we have a few Citation Statements that cover that exact topic! One of our colleagues is a physician and travels for Doctors Without Borders. Part of his fieldwork involved the Rohingya people in Myanmar, and he was curious what the rate of hypertension was in that demographic. It turns out Scite had answers.

There are a few things worth pointing out here. First, we’re not restricted to the life sciences but have good coverage across fields, including the social sciences. While the statements and sections are useful when deciding which papers to read, we enable you to chain ideas at the level of claims instead of papers. Each citation statement from a paper makes one or more claims and has a number of other in-text references that we link by DOI. So when you’re reading a statement, you’ll see that the original authors cited, say, six papers in-text which are likely related to the claims made in that statement. You can click on each of those in-text references to see more information about those papers and quickly trace ideas. Chaining ideas like this creates a natural filter for relevance in the papers you read, because they’re about the specific claims you’re interested in.

In addition to listening to users and developing a tool that helps meet their needs, we are also very focused on meeting our users where they are. We know how fragmented the ecosystem can be, so we have a free browser extension and Zotero plugin that researchers can add that shows our badge wherever they read and manage research. They can always click through to dig into it through Scite, but it’s often a nice integrity check that offers a little more information than a simple citation count.
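To give a flavor of what extracting and classifying citation statements involves, here is a toy sketch in Python. This is not Scite’s actual pipeline: the bracketed citation marker, the three-sentence window, and the keyword heuristic are illustrative stand-ins for the trained models a real system would use.

```python
import re

def split_sentences(text):
    # Crude sentence splitter; a real system would use a trained model.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def citation_statements(text, marker=r"\[\d+\]"):
    # A "citation statement": the sentence containing an in-text
    # citation, plus the sentences immediately before and after it.
    sentences = split_sentences(text)
    for i, sentence in enumerate(sentences):
        if re.search(marker, sentence):
            yield " ".join(sentences[max(0, i - 1):i + 2])

def classify(statement):
    # Naive keyword heuristic standing in for a trained classifier.
    # Contrast cues are checked first so mixed statements lean contrasting.
    s = statement.lower()
    if any(w in s for w in ("contradicts", "failed to replicate", "unlike")):
        return "contrasting"
    if any(w in s for w in ("consistent with", "confirms", "replicates")):
        return "supporting"
    return "mentioning"

paper = (
    "Prior work measured the effect in mice [1]. "
    "Our results are consistent with those findings. "
    "However, a later study failed to replicate the effect in humans [2]."
)

for statement in citation_statements(paper):
    print(classify(statement), "->", statement)
```

Run on the toy paragraph above, the first statement is labeled supporting and the second contrasting, which is exactly the kind of signal a plain citation count throws away.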
What about evaluating that research? It can be incredibly time-consuming to compare and contrast the literature.

Yeah, properly evaluating research is time-consuming; you have to get the queries right to make sure you’re filtering for a relevant list of papers, then go through citation lists, abstracts, and even the full texts (ideally), and track which ones are relevant and reliable for your review. Sometimes you have access to proxies of quality like citation counts, social media mentions, and so on, but they’re not always the best measure of the most fundamental thing we’re worried about when reading a set of papers: how reliable are these claims, and can I base my ideas on them? And thinking of all the tabs and notes involved is nightmarish!

Scite is designed to streamline these tasks. I mentioned earlier that you can chain citations at the level of claims, and I think regardless of whether you start on our search or on a report page, this is a really special way of finding papers that are worth evaluating. Even better is the fact that this workflow places more of an emphasis on the actual claims rather than things like citation counts, which improves the discoverability of lesser-known authors or publications, gives you confidence that you’re actually being thorough, and offers a voice to more underrepresented groups in the field.

It doesn’t stop there, though. Often we do a literature review project and have to come back to update that information, maybe in a few months or years. In that time, more research has undoubtedly been published and we’ve been juggling a bunch of other projects. Scite can reduce the cost of this context-switching through features like Custom Dashboards and Alerts. A very typical use case is for researchers to sync their Zotero library into Scite (essentially a list of relevant DOIs) and set an alert to be notified when new citation statements are published about any of them. This brings new qualitative information — the statements — directly into your inbox, so you can search for them or be notified when something relevant is published. This is pretty commonly used for pharmacovigilance monitoring in pharmaceutical companies, or even by individuals looking to be notified about new therapies or advances in a field that’s personal to them (think diabetes management).

A big challenge in research is how difficult it is to track retracted papers. How does Scite address this challenge?

Scite’s mission is to improve how researchers evaluate the reliability of research — whether it’s a reference in their manuscript or a paper or topic they come across. Besides reading any contrasting statements we’ve indexed about a paper, another quick check is to ensure it hasn’t received any retractions or other concerning editorial notices. We have our own system for detecting these notices and s...
The psychology of happiness
Most people want to be happy. In other words, the majority of human beings are engaged — consciously or unconsciously — in actions designed to improve their levels of happiness. Despite our best efforts, these actions can sometimes have the opposite effect. For example, chasing a promotion at work only to realize we have become burned out in the process. Other times, our actions can make us happy in the short term but unhappy in the long term. For example, earning a large sum of money only to realize later we have over-indexed on financial success at the expense of our relationships.

These complexities are partly why there are many definitions of happiness, and why the concept has changed so much over the centuries. Happiness can in fact describe very different things depending on the time scale you consider:

Short-term: your current feelings and emotions, such as pleasure, joy, or sadness. This is what you experience here and now.
Medium-term: your subjective life satisfaction. In a study about how happiness differs across cultures, it was described as the “overall appreciation of one’s life as-a-whole.”
Long-term: your conscious approach to thriving as a human being. Aristotle called it a life of “virtuous activity in accordance with reason.”

The first two are probably very familiar to you, so it’s the third vision of happiness — the long-term one — that we will explore in this article. Aristotle called it eudaimonia in Greek, which is sometimes translated as “human flourishing”. Aristotle’s philosophy was that, because reason (logos in Greek) is unique to human beings, the ideal goal of human life is the fullest exercise of one’s reason. According to Aristotle, it’s not enough to be skilled or talented in order to live a good life. To achieve happiness, we must be engaged in activities that are intellectually stimulating and that drive us to excellence.

But Aristotle did not dismiss other important dimensions of one’s life, such as friends, wealth, and power. In fact, he doubted that we could achieve eudaimonia if we were completely missing one of these crucial aspects. For example, he found it hard to imagine a happy life if you were missing “good birth, good children, and beauty.” In more modern terms, it’s hard to conceive of being happy if you’re without money and without friends. And this is exactly what one of the best-known theories of happiness in psychology, Maslow’s pyramid, is all about. It’s an elegant theory, but Maslow’s Hierarchy of Needs has been heavily contested. While research does seem to validate the existence of universal human needs, their ranking seems to vary wildly from one culture to another, and even from one individual to another. So what are some alternative theories of happiness that better capture the diversity and complexity of the human psyche?

Theories of happiness in psychology

Measuring happiness is hard. First, is happiness objective or subjective? Is it about how you feel right now, or in general? Is it rational, or emotional? Psychologists are still debating these questions. To highlight how important this field of research is, there is even a dedicated Journal of Happiness Studies. But there are three main theories towards which many researchers are gravitating:

Freedom of Choice Theory: according to research by Ronald Inglehart, a professor and scientist, the extent to which a society allows free choice has a major impact on people’s happiness.
When their basic needs are met, people’s degree of happiness depends on how much free choice they have in how they live their lives.
Self-Determination Theory: evidence suggests that the ability to make choices without external influence and interference is also an important factor in living a happy life. Intrinsic motivation and the willingness to grow — basically being self-motivated — can determine how happy you are.
Positive Psychology Theory: finally, positive psychology considers that instead of trying to fix things when they get broken, we should spend more time improving our mental wellbeing in a more positive and proactive way. This theory is backed by solid research showing the beneficial impact of self-help interventions. I’ll talk a bit more about it later in this article.

While these theories offer solid guiding principles, it’s also worth noting that seeking happiness at all costs can have adverse effects. For example, scientists found that failure to meet overly high expectations can leave you depressed. And research shows that happiness is valued far less in Eastern cultures than in Western ones. For example, harmony is ranked higher in many non-Western cultures when it comes to the most important goals to pursue in life. This makes it worth asking ourselves: shouldn’t we accept and fully experience the whole range of our emotions, both positive and negative? Could we seek happiness in a more balanced way?

A balanced approach to happiness

Sometimes, life objectively sucks. And sometimes, things are fine, but for some reason we still don’t feel quite happy. This is why there’s more to happiness than comfort, and why managing our levels of happiness is an art in itself. I’m saying “art” and not “science” because neuroscience has not made a lot of progress so far when it comes to understanding the biology of happiness. A great paper published a few years ago gives an overview of the current state of affairs in the neuroscience of happiness. In short, we have made lots of discoveries around the hedonic aspects of happiness—what brings us pleasure. We know what parts of the brain are activated when we feel pleasure, but the research trying to understand what happens in our brain when we’re happy, and why, is still highly speculative. So, for now, it’s psychologists who are leading the dance.

Dr Carol Diane Ryff, an American academic and psychologist, has been studying psychological well-being and psychological resilience for decades. Based on her research, she created the Six-factor Model of Psychological Well-being, a theory that outlines the key factors of our happiness:

Self-acceptance: this is about acknowledging and accepting all aspects of yourself, the good and the bad. It’s being aware of your strengths and weaknesses, and trying to be realistic in the way you assess your own skills and talents. It’s the daily work of loving yourself despite your mistakes and imperfections.
Autonomy: being independent in the way you think, and having confidence in your opinions despite social pressures. It indicates that you are able to make your own choices.
Environmental mastery: this means you are feeling in charge. You are able to use opportunities as they arise to address your personal needs. You can manage external factors and activities in your day-to-day life. It comes with a feeling of being in control of the situation in which you live.
Personal growth: this is the conscious effort to keep improving yourself through new experiences, constantly trying to become a better version of yourself.
Positive relations with others: friends, family, colleagues—in order to be happy, it’s important to have meaningful relationships with others that include reciprocal empathy, affection, and various levels of intimacy.
Purpose in life: finally, and this one is a grander factor, finding meaning is about pursuing goals you deeply care about, and creating significance and value in your life. For some people, this is achieved through religion, but you can also find your purpose in life through meaningful work, philosophy, or even human connections.

This model was developed into a psychological well-being questionnaire used to measure how happy people are by asking them to rate statements on a scale from 1 to 6. For example, “I think it is important to have new experiences that challenge how you think about yourself and the world” for personal growth, or “I like most aspects of my personality” for self-acceptance. If you would like to take the test, I have uploaded a PDF of the questions and scoring instructions here.

This is all well and good if all you want is to measure your happiness, but what about improving it — being happier? Can it be learned?

Teaching and learning happiness

Twenty years ago, Dr Martin Seligman, one of the founders of positive psychology, decided to try to answer this question: can happiness be taught? In an essay which I strongly recommend reading, he explains how the field of psychology mostly focuses on treating conditions such as depression. How would one go about helping people nurture their positive emotions instead? He started running a seminar where he would review the scientific research in positive psychology, and also give students a bit of homework that was quite different from what they were used to.

“When one teaches a traditional seminar on helplessness or depression there is no experiential homework to assign; students can’t very well be told to be depressed or alcoholic for a week. But in Positive Psychology, students can be assigned to make a gratitude visit, or to transform a boring task by using a signature strength, or to give the gift of time to someone they care for.” Dr Martin Seligman, Psychologist & Author.

His conclusion was that, while happiness itself cannot be taught, we can master the skills that make us happier. In his seminar, he teaches the skill of disputing unrealistic catastrophic thoughts, the skill of savoring and taking mental photographs, the skill of contemplation, the skill of getting into flow, and the skill of figuring out your key strengths. “Gratitude is a skill, too little practiced, that amplifies satisfaction about the past,” he says. He gives students exercises to teach them how to connect to things larger than their own successes and failures. The students learn to mentor younger students. They read Man’s Search for Meaning. He also notes that school curriculums are not currently designed ...
Transform your writing with Chad Thiele Founder of Chibi AI
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and work smarter. Chad Thiele is the founder of Chibi AI, a creative and versatile AI writing tool for bloggers, marketers, and storytellers. Chibi AI seamlessly weaves in prompts and automatically analyzes your text to offer suggestions for improvement, including fact-checking, so you can focus on crafting great content. In this interview, we talked about the role of AI as a creative copilot, how prompts can help elevate content, using AI to overcome writer’s block, transforming the task of editing into a breeze, the future of AI writing assistance, and much more. Enjoy the read!

Hi Chad, thanks for agreeing to this interview! Let’s talk about the elephant in the room. There are lots of conversations going around about how AI will replace writers. What do you think?

Can I say first, thank you so much for this opportunity. I’m a big fan of Ness Labs and the work you do. That’s an important question, and I understand the concerns surrounding AI’s impact on the writing profession. However, I firmly believe that AI will not replace writers but augment their abilities and enhance their creative process. AI is not here to replace writers, but to be their copilot towards greater heights of creativity. I envision a future where AI works in tandem with writers, acting as a copilot in their creative journey. Rather than taking over the writing process, AI is a valuable tool that supports and empowers writers, helping them overcome common challenges, such as writer’s block or time-consuming editing tasks.

I understand the concern though. It’s a valid worry, especially with companies out there willing to create AI tools promising to deliver entire works “in a click.” This will lead to an overcompensation in the market where low-cost content is relegated to AI (or to a VA using AI to generate lots of mediocre content). But as this sort of content starts to fail, the need for great writers will surface. It’ll be a symbiotic relationship where writers bring their unique voice, perspective, and creativity to the work (some might say nuance), while AI tools like Chibi provide valuable support, guidance, and inspiration to empower the writer.

With AI’s help, writers can quickly generate a first draft. You could think of this quick draft as a sculptor’s block of clay, waiting to be shaped and molded by the writer’s chisel. And just as a sculptor’s true artistry comes from the intricate details and unique flourishes they add to their work, a writer’s artistry comes from their imagination and creativity. It is the writer who brings the work to life with their unique voice and perspective. Together, they create something truly remarkable.

So you’ve designed Chibi to be that perfect creative copilot.

Indeed, I’ve designed Chibi to be the perfect creative copilot for writers. I aimed to create an AI-powered writing app that actually complements the writer. Writer’s block is obviously a thing of the past. Chibi has decimated that (as most decent AI writers have). But there’s a lot more to the writing process. Ideation, brainstorming, editing, reviewing, summarizing… all things Chibi helps with. I’m sure most are familiar with ChatGPT by now. The difficulty of writing with ChatGPT is that it is a dialogue with a chatbot. You get the content, then you must copy it out piece by piece. And it can be a challenge to get the best results from a chat dialogue.
Everyone is talking about finding the best prompts to use in ChatGPT. Prompting is exceptional in Chibi. Writers can use prompts anywhere within their document. Need to insert content after a certain paragraph? Easy. Not quite happy with how a particular sentence is worded? Rework it inline. Chibi is designed with the classic writer’s experience in mind, offering a suite of tools and features that support various aspects of the writing process—all in a familiar document format. It can adapt to the writer’s style and preferences, ensuring the generated suggestions and assistance align with their creative vision. Ultimately, my goal with Chibi is to help writers create content they can be proud to publish and share with their readers. At the end of the day, it is the writer who knows their audience. Chibi is just there to help the writer make their impact.

That does sound like the perfect copilot for writers. Specifically, how does Chibi work?

I couldn’t continue with features without first stating that the intuitive, clean interface is a key feature in itself. The focus on simplicity and ease of use, allowing writers to concentrate on their writing without being distracted by the tool… it cannot be understated how powerful that is. The real star is the ChibiNLP engine. We combine large language models like GPT-4, AI21’s Jurassic 2, and others with our custom-built natural language processing engine. This fusion overcomes the limitations of individual models and provides writers with an enhanced copilot for their creative process. And this all works in the background while the writer writes.

Then our unique memory feature is a special area for writers to enter information they want Chibi to remember. This helps keep Chibi’s output consistent and relevant to the story, making writing easier and faster. Smart menus offer just the right tools at the right time. No need to overwhelm writers with a ton of options if they’re not necessary. For example, when you select a passage of text, you’ll see a menu where you can review or summarize it. Or you can choose the powerful rework tool. This tool is literally magic. You can tell Chibi to do whatever you want to the selected passage, such as ‘rewrite in the first person’, ‘exchange ramen for sushi’, or ‘introduce a huge but lovable grizzly bear to the story’.

Check this out. Here’s a passage from one of my articles about sushi in Tokyo:

“Sushi Iwa, located in the heart of Ginza, is a must-visit for any budget-conscious traveler looking for an authentic sushi experience. This hidden gem is known for its traditional Edomae-style sushi, made with fresh fish caught in Tokyo Bay. The chef, Hisayoshi Iwa, has over 20 years of experience and takes great pride in his craft. The menu changes daily based on what is available at the market, ensuring that each visit is a unique experience. Despite its high quality, Sushi Iwa offers affordable prices.”

After I select it, choose rework, and tell Chibi “Do something completely random. Be creative”, this is what he changed it to:

“Sushi Iwa, located on the moon, is a must-visit for any space-conscious traveler looking for an authentic sushi experience. This hidden crater is known for its traditional Lunar-style sushi, made with fresh fish caught in the Sea of Tranquility. The chef, Hisayoshi Iwa, has over 20 years of experience and takes great pride in his craft. The menu changes daily based on what is available from the intergalactic market, ensuring that each visit is a unique experience.
Despite its high quality and astronomical location, Sushi Iwa offers affordable prices that are out of this world.”

The editing possibilities are endless, limited only by the writer’s imagination. And you can do this completely in the flow of working on your document—no chat dialogue format to deal with. Honestly, there’s so much Chibi has to offer, I really can’t go into it all in detail. Here are a few…

Custom templates allow you to train Chibi to write precisely the content you need.
Variables save time when you find yourself writing the same things over and over.
Leads can speed up your writing.
Kickstarters help you get your content started quickly.
Summarize can condense tens of thousands of words in one shot.

Chibi also offers a completely customizable canvas you can set up however you prefer to write, and more.

What makes Chibi different from other writing tools?

Ah, with so many AI apps popping up all over, this is an excellent question. Chibi AI stands out from the crowd in several ways—as you saw in the previous section. But I absolutely must start with the community! Without a doubt, it’s the Chibi community that sets Chibi AI apart. Is it weird that I immediately refer to the community rather than some tech within Chibi AI? Our community is a dedicated space full of like-minded writers—away from social media like FB groups; away from “prying eyes.” We run monthly challenges, share helpful guides, support our users, and just have a blast. We like to think of it as our little neighborhood where we all help each other succeed.

Okay, back to the technical ways Chibi is different. Our custom NLP engine and its ability to enhance large language models give Chibi the ability to “see” your entire document, whereas other AI writing tools have what is often called a “look back” limit. Another major feature that sets us apart is our artificial narrow intelligence (ANI) models. These are models custom-built and trained purely to do one thing exceptionally well. These are different from the fine-tuning you might have heard of. Our ANI models are not just fine-tuned large language models like those from OpenAI. The huge benefit of doing this is we get to set our own quality and performance standards to meet. The result for writers is seamless. We sprinkle our ANI models all through the writing experience in the background to enhance it in many subtle ways.

The big players are entering the market. Companies like Microsoft, Google, Canva, and others. We set ourselves apart by focusing on our writers. These companies have such a large and diverse user base that they’ll remain rather generic. Whereas we’re able to continually fine-tune the writing experience specifically for our community of users and offer the absolute best results for them. I guess when you combine the Chibi community with the ChibiNLP engine, ANI models—all wrapped up in a beautiful writing experience… That’s what truly sets Chi...
From Default Definitions to Deliberate Questions
From the moment we are born, a set of defaults influences our goals, our relationships, our tastes. From fashion to friendship, many of the choices we make in life are imperceptibly constrained by default definitions. For example, the default definition of education is formal schooling. The default definition of love is monogamy. The default definition of success is wealth and power. The default definition of aging is decline. Those default definitions are the invisible puppeteers quietly manipulating our actions and directing our lives. Fortunately, even though those are the most commonly accepted definitions, we don’t have to stick to them. We can create our own definitions.

Questioning our default definitions

To prosper in the vast liminal space that is life is to create our own definitions of what is good, not based on top-down rules dictated by society, not based on biased moral imperatives, not based on the rigid path to success we have been told to follow, not based on the expectations of our peers — but based on our intimate experience of the world. To do so, we need to turn our default definitions into deliberate questions. Instead of simply accepting the defaults that govern our lives, we can ask ourselves what we truly want and what we truly believe, so we can discover our authentic ambitions. Here are some examples:

Education is formal schooling. → What do I want to learn? What do I want to teach my children?
Success is wealth and power. → What brings me joy in life?
Love is monogamy. → What values are important to me in a romantic relationship?

To do this exercise, grab a piece of paper or open your note-taking app, and go through the following steps:

1. Audit your default definitions. What are the default definitions in my life? What are the ideas I treat as facts without ever questioning them?
2. Turn them into deliberate questions. Take each default definition and rephrase it as a question. The focus of these questions should be what is truly meaningful to you.
3. Answer each question. Write down your truthful answers. Be honest when you don’t know the answer: it’s okay to admit that you haven’t figured it all out yet.

Of course, it may be that the answers to these deliberate questions resemble the ones found in the default definitions. For example, faithfulness may be an important value to you in romantic relationships, or you may believe that formal schooling is the best way to study what you want to learn. I personally went back to university to study how the brain works because I believed that working alongside neuroscientists would help me learn better and faster. The aim of deliberate questions is not to turn your life upside down. It’s to have a more mindful approach to your goals in life.

A bottom-up approach to life

Default definitions are not inherently bad — you just want to get rid of their “default” aspect and make your answers deliberate instead. It may be that you decide to pursue what is considered a conventional career path because stability is important to you — maybe you have other projects with higher levels of uncertainty, or maybe you need to take care of a loved one. It may be that you do want to buy a house, not because it is a commonly accepted marker of success, but because you are genuinely excited to build a home for yourself and your family. It may be that building wealth is indeed a fundamental factor in your definition of success.
Think about the founder of Patagonia, who gave the business away to an environmental trust and non-profit. Patagonia continues to produce outdoor clothing and camping supplies, but now all profits go to organizations fighting the climate crisis. This would not have been possible if the business hadn’t been successful in the first place. Equally, you may discover that you don’t want to stay in the same city where you grew up, and that you would like to explore the world for a while. You may realize that the career you have been pursuing is not the one that truly excites you. Asking these deliberate questions may open the door to new ideas and directions for your life. Whatever the answers you find, what matters is that these are now bottom-up definitions you have deliberately crafted for yourself. As Terry Pratchett said: “World building from the bottom up, to use a happy phrase, is more fruitful than world building from top-down.” Because the world is changing and so are we, we can play with the rules and decide what really matters to us.
Biases Against Risky Research
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here.

Recommendation: My Open Philanthropy colleague Ajeya Cotra has teamed up with Kelsey Piper at Vox to launch a newsletter about “a possible future in which AI is functionally making all the most important decisions in our economy and society.” I would have put the newsletter on my substack recommendations, but it’s not on substack, so I’m plugging it here. If you are thinking about AI these days - and who isn’t? - check it out!

A frequent worry is that our scientific institutions are risk-averse and shy away from funding transformative research projects that are high risk, in favor of relatively safe and incremental science. Why might that be? Let’s start with the assumption that high-risk, high-reward research proposals are polarizing: some people love them, some hate them. It’s not actually clear this is true,1 but it seems plausible, and for the purposes of this post I’m just going to take it as given. If this is true, and if our scientific institutions pay closer attention to bad reviews than good reviews, then that could be a driver of risk aversion. Let’s look at three channels through which negative assessments may have outsized weight in decision-making, and how this might bias science away from transformative research.

Reviewer Preferences

Let’s start with individual reviewers: how does the typical scientist feel about riskier research? As far as I know, we don’t have good data directly on how academic peer reviewers feel about high-risk, high-reward research proposals. There is some work on how academic scientists treat novelty at the publication stage, but there might be some big differences between how risky research is judged at the proposal versus the publication stage (an argument developed in more detail in Gross and Bergstrom 2021). For one, after the research is done, you can often see whether the risk paid off! In this post I’m going to focus on work looking at research proposals, and to learn about the preferences of peer reviewers, I’m going to look at Krieger and Nanda (2022), which provides some granular information about how working scientists in industry think about which kinds of pharmaceutical research projects to fund.

Krieger and Nanda study an internal startup program at the giant pharmaceutical company Novartis. The program was meant to identify and rapidly fund “transformative, breakthrough innovation” developed by teams of scientists working within Novartis. Over 150 Novartis teams submitted applications for the funding, and these were screened down to a shortlist of 12 who pitched their proposal to a selection committee. These pitches were made over video chat, due to covid-19, which meant they could be viewed by lots of people at once. About 60 additional Novartis research scientists watched some or all of the pitches, and Krieger and Nanda got them to score each research proposal on a variety of criteria, and then to allocate hypothetical money to the different proposals. What’s particularly interesting for us is that we can see how scientists rated different aspects of a proposal, and how that relates to their ultimate decision about what to (hypothetically) fund.
Participants in the study rated each proposal on:

Transformative potential (more creative, non-standard is better)
Breadth of applicability (more and higher-value propositions)
Timescale to first prototype (within 18 months is better)
Feasibility/path to execution (more feasible is better)
Team (does the team have the skill and network to achieve the goal?)

These different scores were aggregated into a weighted average that put extra weight on feasibility and the team, but put the most weight on a proposal’s transformative potential. (After all, that’s what the program was set up to fund.) Next, the study participants were asked how much money from a hypothetical budget to allocate to different projects. Note that when they’re doing this allocation, they can clearly see the weighted average of the scores they gave on each criterion, so it is obvious which proposals are supposed to get funding if you strictly follow the scoring formula that Novartis devised.

No surprise, Krieger and Nanda find that proposals with a higher score tend to get more hypothetical funding. But they also find that, all else equal, reviewers penalize projects that have greater variation among the different criteria. That is, when comparing two projects with the same weighted average, study participants give more money to a project if most of its criteria are close to the overall weighted average, and less money if some criteria are well above the average and some well below. That implies negative attributes of a project “count” for more in the minds of reviewers. Even if bad scores on some criteria are counterbalanced by higher scores on others, these kinds of projects still get less (hypothetical) funding than less uneven proposals.

But we can be even more precise. This bias against proposals with low scores on some dimensions and high scores on others is mostly driven by a particular type of divergence: proposals rated as having high transformative potential but low feasibility tend to be the most penalized. That’s consistent with peer reviewers themselves being a source of bias against novel projects. They can recognize a project as high-risk and high-reward, but when asked which projects to give research funding to, they shy away from them in favor of lower-risk but lower-reward projects. Note, though, that this data is from industry scientists, and maybe their risk preferences differ from those of their academic peers. So interpret with caution. Let’s next turn to some studies specifically about academia.

Random Averages

The previous section was about possible biases among individual reviewers. But most of the time, research proposals are evaluated by multiple reviewers, and then the scores across reviewers are averaged. And that system can introduce different problems. One way that averaging across reviewers leads to sensitivity to negative reviews is the fact that money for science tends to be tight, which means only research proposals that receive high average scores tend to be funded. If a single negative review can pull your score below this funding threshold, then negative reviews may exert excessive influence. For example, proposals submitted to the UK’s Economic and Social Research Council (ESRC) are typically scored by 3-4 reviewers on a 6-point scale, and usually only proposals that receive average scores above 4.5 make it to the stage where a panel deliberates on which proposals to fund.
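To see how much leverage a single harsh review gets under this kind of setup, here is a minimal sketch in Python. The scores are hypothetical, and the 4.5 cutoff simply mirrors the ESRC threshold described above:

```python
# Minimal sketch of score averaging against a funding threshold,
# loosely mirroring the ESRC setup (6-point scale, averages above
# 4.5 advance). All scores are hypothetical.

FUNDING_THRESHOLD = 4.5

def average(scores):
    return sum(scores) / len(scores)

unanimous = [6, 6, 6]        # three reviewers who love the proposal
with_dissent = [6, 6, 6, 1]  # the same panel plus one harsh review

for scores in (unanimous, with_dissent):
    avg = average(scores)
    verdict = "clears threshold" if avg > FUNDING_THRESHOLD else "falls short"
    print(scores, f"average = {avg:.2f},", verdict)

# [6, 6, 6] average = 6.00, clears threshold
# [6, 6, 6, 1] average = 4.75, clears threshold
```

One dissenter drags a perfect 6.00 down to 4.75: still above the cutoff here, but a second harsh review (average 4.00) would sink the proposal entirely. The data that follows shows how much a drop like that matters in practice.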
Jerrim and de Vries (2020) look at over 4,000 ESRC research proposals made over 2013-2019 and find that 81% of proposals with an average score of 5.75-6 from the peer reviewers get funded, but only 24% of proposals with an average score of 4.5-5. That is to say, if you have three reviewers who love a proposal and rate it a maximum 6/6, it’ll be funded 81% of the time; but if you add one more reviewer who hates it and gives it a 1/6, then the average of 4.75 implies it only has a 24% chance of being funded.

Of course, maybe that’s a feature, not a bug, if negative reviews actually do spot serious weaknesses. But before getting into that, we might first ask if this scenario is actually plausible in the first place: could it really be the case that three people rate a project 6/6 and another rates it 1/6? If three people think a project is outstanding, isn't it pretty unlikely that a fourth person would think it’s actually poor? This gets into the question of how consistent peer review scores are with each other, which is itself a large literature. But at least for their sample of ESRC proposals, Jerrim and de Vries find inter-reviewer correlations are very weak. Any particular reviewer’s score is only a tiny bit predictive of their peers’ scores. That means a score of 1/6 is less likely when three other reviewers rate a proposal 6/6 - but not that much less likely than random (though on average only 4% of reviewers give proposals a score of 1/6). So it is true that one really bad review can substantially reduce the probability of getting funded.

But that doesn’t necessarily mean the system isn’t working exactly as it should; perhaps the bad review noticed serious flaws in the proposal that the other reviewers missed? Even so, there are two reasons that this seemingly innocuous procedure (get expert feedback and average it) can lead to excessive risk aversion for a funder.

First, scores are asymmetrically distributed. In Jerrim and de Vries’ data, the average score is 4.4, and more than half of reviews are a 5 or 6. If you believe a proposal is really bad, it’s feasible to strongly signal your dislike by giving it a score of 1, which is 3.4 below the average. But if you really love a proposal, it’s hard to signal that with your scoring: the best you can do is give it a 6, which is just 1.6 above the average. When you average out people who really love and really hate a project, the haters have more leverage over the final score.2

Second, low levels of inter-reviewer correlation imply there’s a lot of randomness in the reviewing process. That could be bad for transformative research proposals, if they are weirder and end up getting more reviews. For example, a proposal that combines ideas from disparate sources might need more reviewers to adequately vet it, since it would need to pull in multiple reviewers to vet each of the idea’s sources. That could be a problem because, in general, there will be more variation in the average scores of proposals that receive fewer reviewers. For example, in Jerrim and de Vries’ data, on average about 25% of reviewers rate proposals as 6/6. If you h...
Perfect your workflow with Kim Dan-Yuting founder of BOOX
Welcome to this edition of our interview stories, where we talk to founders on a mission to help us think better and work smarter. Kim Dan-Yuting is the founder of BOOX, a suite of products designed to simplify e-reading and facilitate digital workflows through user-friendly e-readers. In this interview, we talked about integrating your note-taking workflow with your tablet, the importance of using eye-friendly devices, how to boost productivity by using split-screen displays, how tech companies can collaborate to meet the needs of specific user groups, and much more. Enjoy the read!

Hi Kim, thanks for agreeing to this interview! E-readers are notoriously hard to get right. What inspired you to create BOOX?

Our story began in 2008, at a time when smartphones and tablets had hardly emerged, and people had to gain knowledge with few resources and limited digital tools. It’s in that context that BOOX was founded by a group of ambitious young geeks, driven by a deep desire to simplify and digitize the reading process. We started with a question: “How can we help people study and work productively without experiencing eye strain?” Our goal was to invent reading tools that could alleviate these challenges. After much hard work, we achieved our first major milestone: the creation of our first e-reader. This accomplishment remains one of our proudest moments to this day.

Building helpful hardware is known to be a difficult challenge, so you should definitely be proud! What are the advantages of using BOOX for consuming content?

E-readers have become popular due to their eye-friendly screens and paper-like feel, making them ideal for reading and note-taking. With BOOX, you can enjoy the benefits of an electronic paper display to read or write while using your favorite apps, without straining your eyes. This means you can implement similar workflows as you would with other tablets — like highlights and annotations — but with the added advantage of a more comfortable viewing experience. Additionally, most BOOX E Ink tablets come with dual-tone front lights, making them easier to use in low-light conditions without causing the eye fatigue that is a common issue with backlit OLED/LCD displays.

In addition to native highlights and annotations, many people rely on note-taking apps to capture and process information. How does BOOX integrate with their existing note-taking workflows?

BOOX strives to provide our users with a variety of note-taking capabilities. With the built-in NeoReader, you can scribble directly on ebooks without any extra effort, and highlight, underline, or annotate the sentences that interest you. Our Notes app is an independent notepad with versatile tools to let you freely jot down your ideas. If you’re accustomed to popular note-taking platforms like Evernote and OneNote, we’ve implemented handwriting optimization to ensure a lag-free experience. We also have people using other apps like Obsidian. Another innovation we have made is the split-screen function, allowing you to read and take notes simultaneously in two separate windows side by side, providing a seamless and efficient note-taking experience.

Just to dig a little bit deeper… Many e-readers only work well with specific proprietary formats. Can you tell us what kinds of documents people can read with BOOX?

BOOX devices natively support 24 document formats, including nearly all popular ebook formats (PDF, DJVU, CBR, CBZ, EPUB, MOBI, TXT, DOC, DOCX, PPT, PPTX…), images (PNG, JPG, BMP, TIFF), and audio (WAV, MP3).
Of course, you are always welcome to download third-party apps to gain compatibility with more formats. What’s more, we have several preset navigation modes to optimize large-format PDF files so that users can conveniently view them on small-screen devices.

That sounds great. What kind of people use BOOX devices?

BOOX devices are the perfect companions for productivity enthusiasts, as we’ve heard from users worldwide. From university students and professors to musicians and researchers, our devices have helped many different people achieve their goals. One story that particularly impressed me was that of Javier Del Águila, a Spanish epidemiologist working with the World Health Organization. He shared his workflow with the BOOX Note Air2 Plus and his achievement in studying the Omicron variant of COVID-19. We’re proud to have played a role in his research on the pandemic and to have improved his work process. It’s stories like these that motivate us to continue creating innovative and effective devices.

This is an incredible story. Another exciting one is your collaboration with Connected Papers. Can you tell us more?

The collaboration between BOOX and Connected Papers is a great example of how technology companies can work together to enhance the user experience and meet the needs of specific user groups. We have 10.3″ A5 and 13.3″ A4-sized models which are excellent for reading papers in PDF and other formats. With BOOX’s optimized reading and note-taking capabilities and Connected Papers’ advanced visual tool, we aim to simplify and streamline the workflow for academic users. We are proud of this partnership and look forward to exploring further opportunities to support the community.

What about you, how do you use BOOX?

In my daily routine, I rely on three BOOX devices: the Leaf2, Note Air2 Plus, and Tab X. For my daily commute, I carry the Leaf2 with me to browse my news feed, as it fits perfectly in my handbag. Its page-turn buttons are exceptionally useful and save me the effort of tapping or swiping on the screen. During the day, I attend daily briefings with the product engineers in the morning and the marketing team in the afternoon, where I use my Note Air2 Plus to take notes. I love the writing feel of this device. On my office desk, I use the Tab X to read and reply to business emails and organize my work. It has a 13.3″ A4 size, similar to other tablets or laptops. At the end of the day, I spend my time with the Leaf2, reading my favorite books before bedtime. It’s a treasured moment when I can relax and enjoy some solitude.

You use three different BOOX devices. If people had to choose only one to get started, how can they decide which BOOX device is right for them?

BOOX offers a comprehensive product line to cater to all types of users. For those new to e-reading, I recommend the Leaf2, a compact and lightweight e-reader that comes with built-in page-turn buttons and the option of black or white colors. If you are a sophisticated E Ink tablet user, the Note Air2 Plus is an excellent choice. It offers a close-to-paper writing experience and a 10.3″ A5 size, making it easy to carry around. For professionals, we have introduced the brand-new Tab Series, a premium selection and a game changer in the industry. It features BOOX Super Refresh Technology, achieving ultra-smooth refresh rates. We have currently released two models, the 10.3″ Tab Ultra and the 13.3″ Tab X, with more to come in the future. Please stay tuned for updates!
Once they have chosen a BOOX device, how do you recommend someone get started?

When you receive your BOOX device, the first step is to get familiar with its user interface and explore its functionalities. It is packed with many possibilities to improve your workflow. To help you get started, we have an introductory video available on our YouTube channel that explains how to set up your new BOOX. We encourage you to check it out and take advantage of all the features and tools that BOOX has to offer.

And finally… What’s next for BOOX?

Our top priorities now are to keep innovating and to promote BOOX as an eye-friendly device that boosts your productivity. To achieve this, we plan to release new devices tailored to different purposes and scenarios this year, while refining the user experience with a couple of firmware updates. We are also excited about the advancements in E Ink screen technology and are exploring how we can incorporate them into our new products.

Thank you so much for your time, Kim! Where can people learn more about BOOX?

Thank you for the interview. The pleasure is all mine. If you would like to know more about our brand and products, please feel free to visit the official BOOX Shop. You can also follow us on Facebook, Twitter, Instagram, YouTube, and Reddit to catch up with all our updates and join our community.
Loneliness or solitude? The case for being alone
Being alone can sometimes feel pleasurable. A good book, some quiet time to ourselves, just us and our thoughts, away from the hustle and bustle of daily work and social obligations. But, other times, it can feel isolating. We are not simply alone, we are lonely. Why is it that being alone can lead to such dramatically different experiences?

The difference between loneliness and solitude

While loneliness and solitude are rooted in the same fundamental experience, the way we interact with that experience gives rise to two different mental states.

Loneliness is a common but uncomfortable human emotion: the subjective experience of being alone that produces a feeling of desolation. When fleeting, it’s perfectly fine to feel lonely. It can be a way to process feelings that are difficult but necessary to work through. However, when loneliness becomes a constant state, it can actually be harmful to your health. A review of the research literature suggests that loneliness increases mortality risk by 26%. And the experience really hurts. We are social animals and we need to feel that we belong. Researchers have found that the pain of loneliness and social rejection activates the same parts of the brain as physical pain.

“Why do people have to be this lonely? What’s the point of it all? Millions of people in this world, all of them yearning, looking to others to satisfy them, yet isolating themselves,” wondered Haruki Murakami in one of his novels. Isolation is the key word here: loneliness is a sense of isolation that can persist even when other people are present. That’s why knowing more people will not alleviate feelings of loneliness. It has become a common trope — but a true one — to say that we’re more connected yet more lonely than ever. Rates of loneliness have doubled in the United States in the last fifty years alone. Scientists speak of a loneliness epidemic.

In contrast, solitude is simply the state of being alone. The concept of solitude doesn’t carry any negative feelings, which is why it can be enjoyable, or just neutral. There is a wonderful poem by Robert Duncan called “Childhood’s Retreat” which perfectly captures the beauty of solitude:

It’s in the perilous boughs of the tree
out of blue sky the wind
sings loudest surrounding me.

And solitude, a wild solitude
is revealed, fearfully, high I’d climb
into the shaking uncertainties,

part out of longing, part daring myself,
part to see that widening of the world, part

to find my own, my secret
hiding sense and place, where from afar
all voices and scenes come back

—the barking of a dog, autumnal burnings,
far calls, close calls—the boy I was calls out to me
here the man where I am “Look!

I’ve been where you
most fear to be.”

How we perceive being alone makes all the difference in whether we experience it as loneliness or solitude. When we focus on the feeling of isolation from others and the world, being alone can produce a spiral of negative thoughts. When appreciated as a generative moment of self-discovery and reconnection with ourselves, being alone can yield powerful insights and support our mental health.

The science-based benefits of solitude

It’s hard to consider inserting a little solitude into our busy schedules, but spending time alone is far from a waste of time. In fact, the busier you are, the more likely you are to benefit from some quiet time. And research shows that solitude has many benefits, including:
More meaningful relationships. It may sound paradoxical, but research suggests that being able to feel comfortable on our own helps us become more comfortable around others.

Better resilience. Studies show that your ability to tolerate alone time is linked to increased happiness, better stress management, and improved life satisfaction. Basically, spending time alone makes you happier and less anxious.

Increased creativity. Being in a private, secluded space allows you to be more creative. That’s why artists, authors, and musicians seek solitude when they want to generate ideas and focus on their creative work.

Self-discovery. By spending time alone and taking a moment for self-reflection—to think about our goals, our concerns, and our sense of self—we are able to define and confirm our identities with less influence from other people, researchers found.

Increased productivity. This may be the most counterintuitive benefit of them all, but spending time alone makes you more productive. Many people work better on their own than in a busy, noisy office.

In the end, it all boils down to being intentional in the way we approach solitude. Loneliness is time alone that we didn’t choose, and therefore don’t appreciate. Solitude can be a mindful activity, if you decide to dedicate time to it and approach it as a constructive experience.

On seeking solitude

The good news is that you don’t need to set aside huge chunks of time by yourself to benefit from solitude. Just ten to twenty minutes of alone time a day could be enough to help you recharge. And if you think you don’t have time to dedicate to intentional solitude, you probably need that alone space more than ever.

To go from simply being alone to creating space for mindful solitude, make sure to put your phone and laptop away. You won’t get any of the benefits of solitude if you spend that time scrolling on a screen. Here are a few suggestions of things you could do in your alone time. However you decide to spend it, the goal is to be fully immersed in the moment, whether you actively think about interesting questions or let your mind wander.

Go for a walk. Walking alone can be a simple way to clear your mind and reflect on your thoughts while getting some exercise. Bonus points if you can do it in nature.

Meditate. Meditation allows you to focus on your inner self and find a sense of calm and clarity. It can help reduce anxiety and improve focus.

Journal. Writing down your thoughts and emotions can help you process them more effectively. Journaling is also a great way to gain valuable insights into the inner workings of your mind.

Listen to music. Music can be an amazing way to relax and unwind, especially music that resonates with you and matches your current mental state, which can help you feel more connected to your emotions.

Read a book. Besides being a lot of fun and a way to gain knowledge, reading alone is an uncomplicated way to escape into another world and, if it’s fiction, get lost in a good story.

You can also try gardening, working on a DIY project, dancing in front of the mirror, doing yoga, or practicing an instrument. Any activity that allows you to enjoy your time alone will help you appreciate those precious moments with yourself. Or you could, you know, do nothing. Just think, or let your mind wander. If you’re not used to solitude, silence can feel uncomfortable at first.
But allowing yourself to be alone with your thoughts is powerful, and can be a great addition to your mental gym. So try setting aside a bit of alone time and making it part of your daily routine.