Digital Gems

Age and the Nature of Innovation
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. The previous post also now has a podcast available here. Are there some kinds of discoveries that are easier to make when young, and some that are easier to make when older? Obviously yes. At a minimum, innovations that take a very long time basically have to be done by older innovators. So what kinds of innovations might take a long time to complete? Perhaps those that draw on deep wells of specialized knowledge that take a long time to accumulate. Or perhaps those that require grinding away at a question for years and decades, obsessively seeking the answers to riddles invisible to outsiders. What about innovations that are easier when young? Well, we can at least say they shouldn’t be the kinds of innovations that take a long time to achieve. That means discoveries that can be made with years, not decades, of study. But what kinds of innovations that don’t take long study to make are still sitting around, like unclaimed $20 bills on the sidewalk? One obvious kind of unclaimed innovation is the kind that relies on ideas that have only been very recently discovered. If people learn about very new ideas during their initial training (for example, for a PhD), then we might expect young scientists to disproportionately make discoveries relying on frontier knowledge. At the same time, we might look for signs that older scientists build on older ideas, but perhaps from a place of deeper expertise. Indeed, we have some evidence this is the case.

Age, Frontier Ideas, and Deepening Expertise

Let’s start with Yu et al. (2022), a study of about 7mn biomedical research articles published between 1980 and 2009.
Yu and coauthors do not know the age of the scientists who write these articles, but as a proxy they look at the time elapsed since their first publication. Below are several figures, drawn from data in their paper, on what goes into an academic paper at various stages of a research career. In the left column, we have two measures drawn from the text of paper titles and abstracts. Each of these identifies the “concepts” used in a paper’s title/abstract: these are defined to be the one-, two-, and three-word strings of text that lie between punctuation and non-informative words. The right column relies on data from the citations made by an article. In each case, Yu and coauthors separately estimate the impact of the age of the first and last author.1 Moreover, these are the effects that remain after controlling for various other factors, including what a particular scientist does on average (in economics jargon, they include author fixed effects). Together, they generally tell a story of age being associated with an increasing reliance on a narrower set of older ideas.

Source: Regression coefficients with author fixed effects in Tables 4 and 5 of Yu et al. (2022)

Let’s start in the top left corner - this is the number of concepts that appear in a title or abstract which are both younger than five years and go on to be frequently used in other papers. Measured this way, early career scientists are more likely to use recent and important new ideas. Moving to the top-right figure, we can instead look at the diversity of cited references. We might expect this to rise over a career, as scientists build a larger and larger knowledge base. But in fact, the trend is the opposite for first authors, and mixed at best for last authors. At best, the tendency to expand the disciplinary breadth of references as we accumulate more knowledge is offset by rising disciplinary specialization.
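The “concept” extraction rule described above can be sketched in a few lines. This is a simplified illustration, not Yu et al.’s actual pipeline; in particular, the stop-word list here is a tiny placeholder for their curated list of non-informative words.

```python
import re

# Placeholder stop-word list; the real list of "non-informative words" is much larger.
STOPWORDS = {"the", "a", "an", "of", "in", "and", "for", "with", "on", "to", "is"}

def extract_concepts(text):
    """Return the 1-, 2-, and 3-word strings that lie between punctuation
    and stop words in a title or abstract."""
    concepts = set()
    # Split on punctuation first; concepts never cross a punctuation mark.
    for fragment in re.split(r"[^\w\s]", text.lower()):
        run = []  # current run of consecutive informative words
        for word in fragment.split():
            if word in STOPWORDS:
                run = []  # stop words also break a run
                continue
            run.append(word)
            # Collect the trailing 1-, 2-, and 3-grams of the current run.
            for n in (1, 2, 3):
                if len(run) >= n:
                    concepts.add(" ".join(run[-n:]))
    return concepts

print(extract_concepts("Deep learning for protein folding, a review"))
```

On this toy title, the function yields concepts like “deep learning” and “protein folding” while discarding connective words like “for” and “a”.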
Turning to the bottom row, on the left we have the average age of the concepts used in a title and abstract (here “age” is the number of years that have elapsed since the concepts were first mentioned in any paper), and on the right the average age of the cited references (that is, the number of years that have elapsed since the citation was published). All measures march up and to the right, indicating a reliance on older ideas as scientists age.

This is not a phenomenon peculiar to the life sciences. Cui, Wu, and Evans (2022) compute some similar metrics for a wider range of fields than Yu and coauthors, focusing their attention on scientists with successful careers lasting at least twenty years and once again proxying scientist age by the time elapsed since their first paper was published. On the right, we again have the average age of cited references; these also rise alongside scientist age. On the left, we have a measure based on the keywords the Microsoft Academic Graph assigns to papers (of which there are more than 50,000). Between two subsequent years, Cui and coauthors calculate the share of keywords assigned to a scientist’s papers which recur in the next year. As scientists age, their papers increasingly get assigned the same keywords from year to year (though note the overall effect size is pretty small), suggesting deeper engagement with a consistent set of ideas.

Lastly, we can look outside of science to invention. Kalyani (2022) processes the text of patents to identify technical terminology and then looks for patents that have a larger than usual share of technical phrases (think “machine learning” or “neural network”) that are not previously mentioned in patents filed in the preceding five years. When a patent has twice as many of these new technical phrases as the average for its technology type, he calls it a creative patent.
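Kalyani’s classification rule, as described above, reduces to a window comparison and a threshold. The sketch below is an illustration of that rule only (the function names and the hard-coded five-year window and 2x multiple are taken from the prose, not from Kalyani’s code):

```python
def new_phrases(patent_phrases, prior_window_phrases):
    """Technical phrases in a patent that do not appear in any patent
    filed in the preceding five-year window."""
    return set(patent_phrases) - set(prior_window_phrases)

def is_creative(patent_phrases, prior_window_phrases, tech_class_average):
    """A patent is 'creative' when it contains at least twice as many new
    technical phrases as the average patent in its technology type."""
    return len(new_phrases(patent_phrases, prior_window_phrases)) >= 2 * tech_class_average

# Toy example: one genuinely new phrase, against a class average of 0.5 new phrases.
print(is_creative(["machine learning", "neural network"], ["neural network"], 0.5))
```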
He goes on to show these “creative” patents are much more correlated with various metrics of genuine innovation (see the patent section of Innovation (mostly) gets harder for more discussion). Kalyani does not have data on the age of inventors, but he does show that repeat inventors produce increasingly less creative patents as time goes by.

From Kalyani (2022)

This figure shows, on average, an inventor’s first patent has about 25% more new technical phrases than average, their second has only 5% more, and the third patent has about the same number of new technical phrases as average. Subsequent patents fall below average. This is consistent with a story where older inventors increasingly rely on older ideas. As discussed in more detail in the post Age and the Impact of Innovations, over the first 20 years of a scientist’s career, the impact of a scientist’s best work is pretty stable: citations to the top cited paper published over some multi-year timeframe are pretty consistent. The above suggests that might conceal some changes happening under the hood though. At the outset, perhaps a scientist’s work derives its impact through engagement with the cutting edge. Later, scientists narrow their focus and impact arises from deeper expertise in a more tightly defined domain.

Conceptual and Experimental Innovation

So far we’ve seen some evidence that scientific discoveries and inventions are more likely to draw on recent ideas when the innovator is young, and an older, narrower set of ideas (plus deeper expertise?) when the innovator is older. I suspect that’s because young scientists hack their way to the knowledge frontier during their training period. As scientists begin active research in earnest, they certainly invest in keeping up with the research frontier, but it’s hard to do this as well as someone who is in full-on training mode.
Over a 20-40 year career, the average age of concepts used and cited goes up by a lot less than 20-40 years; but it does go up (actually, it’s pretty amazing the average age of concepts used only goes up 2 years in Yu et al. 2022). I argued at the outset we might expect this. The young cannot be expected to make discoveries that require a very long time to bring about. But among the set of ideas that don’t take a long time to bring about, they need to focus on innovations that have not already been discovered. One way to do that is to draw on the newest ideas. But this might not be the only way. The economist David Galenson has long studied innovation in the arts, and argues it is useful to think of innovative art as emerging primarily from two approaches. The first approach is “experimental.” This is an iterative, feedback-driven process with only vaguely defined goals. You try something, almost at random, you stand back and evaluate, and then you try again. The second approach is “conceptual.” It entails a carefully planned approach that seeks to communicate or embody a specific preconceived idea. Then the project is executed and emerges more or less in its completed form. Both require a mastery of the existing craft, but the experimental approach takes a lot longer. Essentially, it relies on evolutionary processes (with artificial rather than natural selection). Its advantage is that it can take us places we can’t envision in advance. But, since it takes so long to walk the wandering path to novelty, Galenson argues that in the arts, experimental innovators tend to be old masters.

The Bathers, by Paul Cezanne, one of Galenson’s experimental innovators. Begun when Cezanne was 59.

Conceptual approaches can, in principle, be achieved at any point in a lifecycle, but Galenson argues there are forces that ossify our thinking and make conceptual innovation harder to pull off at old ages.
For one, making a conceptual jump seems to require trusting in a radically simplified schema (complicated schema are too hard to plan out in advance) from which you can extrapolate into the unknown. But as time goes on, we add detail and temper our initial simplifications, adding caveats, carveouts and extensions. We no longer trust the simple models to leap into the unknown. Perhaps for these reasons, conceptual innovators tend...
2022 year in review: wander and wonder
This year was not the year I expected. It was a year of darkness and doubt, a year of light and love, a year of self-discovery and community. I usually start my annual reviews with a few bullet points listing my proudest accomplishments, but it feels wrong this time. Instead, I’ll describe some of the ebbs and flows I went through and why this year has been a pivotal one. Renaissance The year started great. A smart team at Ness Labs, two wonderful PhD supervisors, a research project I cared about, a comfortable home in a neighborhood I liked. But it also started the same way every year, every week, and every day of my life had started as far as I could remember: with a sense of emptiness, as if my mind was a dissociated observer watching the movie of my life from the outside. I had become used to the familiar claws of depression. It was like a shadow following me everywhere. Some weeks were worse than others, but I always found enough interesting questions and met enough interesting people to keep on playing the game. As a silver lining, struggling with my own mental health allowed me to bring a more nuanced perspective to conversations around personal growth. Following my curiosity as a way to make a living and to persist on living — that was winning enough. Fortunately, 2022 had some surprises in store for me. Through a series of unexpected events, I experienced what I can only call a renaissance (“rebirth” in French, my native language). The first jolt happened in the Spring. I was visiting a friend in a coliving community in the French countryside, and was about to help prepare lunch for everyone when said friend gave me a piece of chocolate. An hour later, I was cutting vegetables while high on psilocybin, which gave me a newfound appreciation for food as fuel for my body. 
I’ve always been intellectually interested in nutrition — I even ran a startup in that space — but never before had I felt like I did that afternoon, staring at the dancing patterns on a beet while thanking my luck to have access to such nice food. This moment unlocked a little spark somewhere in me, something that said: life can feel good. A few days later, I went to Italy for the Indie Founders Conference organized by Rand Fishkin and Peldi Guilizzoni. The conference felt more like an intimate retreat, where it was safe to be vulnerable and to openly share our challenges. No facades, just friends. We laughed, we cried, and we bonded. I didn’t know it then, but this would be the second event of the year to significantly affect my path.

Look at these happy people Thanks so much to the team at @balsamiq for hosting the inaugural Indie Founders retreat! So much food for thought & many new friendships grazie mille!! pic.twitter.com/uCxJAdJLQZ — Anne-Laure Le Cunff (@anthilemoon) March 25, 2022

There, I met an amazing woman (whose name I won’t share for privacy reasons) who I connected with over many different topics, including neuroscience and neurodiversity research. She told me she had signed up for an Ayahuasca retreat. Ayahuasca is a potent psychedelic brew which originated from the Amazon basin. Reports written by early Christian missionaries described it as “the work of the devil”. Today, researchers around the world are investigating its therapeutic potential as an antidepressant, anxiolytic, and anti-addiction medication. I knew it wasn’t the miracle cure-all some people touted it to be, but it certainly felt worth exploring. That night, as soon as I got back to my hotel room, I looked up the retreat center she had mentioned, and I booked my spot for a month later. Working with Ayahuasca was my third life-altering experience of the year. You can read a full account of my journey with Ayahuasca here. If you’re in a rush, here’s the TL;DR.
I’m not depressed anymore, I quit drinking… And, for the first time ever, I’m truly happy to be alive. Research I could stop this annual review right here. There was no bigger accomplishment this year than breaking free from the dark companionship of depression. But I write these reviews as a record of my progress, so I can later look back and remember how it felt to be where I was. So, a few more things. While I’ve been reading papers and writing about what I learn for a little while now, this was my first year conducting my own scientific research. As a complete newbie, there was only one milestone I wanted to attain: successfully passing my PhD upgrade viva. Some context: after performing a review of the existing literature and running some initial studies, PhD candidates are required to go through an oral exam where they present their early findings and a detailed plan for the rest of the research project. I thought it would be a terrifying affair, but the examiners at my university were friendly and provided lots of useful suggestions. I passed without any corrections. After the upgrade viva, I spent three weeks at St Andrews University in Scotland to study diverse forms of intelligence across human, animal, plant, and even fungal species; I gave my first academic presentations, wrote a book chapter, and got a paper accepted for publication in a journal. I’m currently typing these words from the Netherlands, where I just completed an intensive eye-tracking training at Utrecht University. Next year, I will teach my first class for the Neuroscience & Psychology BSc students. It will be about neuroscience and the digital world. Academia is such a strange microcosm. I love being surrounded by friendly nerds asking big questions, but I don’t know if I’d enjoy spending 100% of my time there. Things are painfully slow, there’s a lot of admin, and people are overworked. I feel privileged to have one foot in academic research and one foot in entrepreneurship. 
It makes my work more interesting, and the space between the two is fun to explore. Reach Three years ago, I sent the first edition of my newsletter. I had no idea I was laying the foundations for a sustainable community-based business. Today, the newsletter is read by 55,000 subscribers, and thousands of people have completed one of the online courses we offer in the learning community. In November, I hosted the Mindful Productivity Masterclass, a four-week cohort-based course which received fantastic feedback. Students of all ages and all professions joined from everywhere in the world. This experience was a powerful reminder of how the Internet enables lifelong learning and collective intelligence. I’m grateful for the team at Ness Labs: Joe, Haikal, and Melanie, and all of the writers who contribute fantastic content to share with our readers. You all teach me so much and I could not imagine doing the work I do without you. I’m grateful for my family and for my friends, whether online or offline, whether we talk every day or once a year. You feed my sense of wonder and support my courage to wander, lose my way, and find myself. Next year, I want to reach even more curious minds and spread the message that we don’t need rigid productivity frameworks to succeed. We don’t need to be in control of everything. In any case, the economic, political and humanitarian crises of the past few years were a brutal reminder that we really cannot predict what life will throw at us. Our visibility is limited. Control is overrated. Instead, we need curiosity, consistency, and a community. In the sea of chaos, these act as a discovery engine: they help steer our boat in a direction that maximizes personal growth. Sure, we don’t know where we’re going, but we can have fun while we roam this turbulent planet of ours. We can still be active participants and shape the world around us.
That’s why I want to keep on learning, feeling, and exploring everything life has to offer — making friends, connecting ideas, co-creating spaces for play and inquiry. I know things won’t go to plan. I don’t have a map. But I’m excited to play. Who knows, maybe there will be more surprises along the way. Thank you for being part of my journey! I wish you a restful and reflective end of the year. The post 2022 year in review: wander and wonder appeared first on Ness Labs.
Age and the Impact of Innovations
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. No podcast today, as I am sick and can’t talk without coughing: maybe later. Also, there is more to say about age and innovation, so stay tuned! Scientists are getting older. Below is the share of employed US PhD scientists and engineers in three different age ranges: early career (under 40), mid-career (ages 40-55), and late career (ages 55-75). The figure covers the 26 years from 1993-2019.

Author calculations. Sources: NSF Survey of Doctorate Recipients (1993-2019), data drawn from occupation-by-age tables

Over this period, the share of mid-career scientists fell from about half to just under 40%. Most (but not all) of that decline has been offset by an increase in the share of late career scientists. And within the late career group, the share older than 65 has more than doubled to 27% over this time period.1 This trend is consistent across fields. Cui, Wu, and Evans (2022) look at more than one million scientists with fairly successful academic careers - they publish at least 10 articles over a span of at least 20 years. Cui and coauthors compute the share of these successful scientists who have been actively publishing for more than twenty years. Across all fields, it’s up significantly since 1980 (though, consistent with the previous figure, this trend may have peaked around 2015).

From Cui, Wu, and Evans (2022)

Alternatively, we can get some idea about the age of people doing active research by looking at the distribution of grants. At the NIH, the share of young principal investigators on R01 grants has dropped from a peak of 18% in 1983 to about 3% by 2010, while the share older than 65 has risen from almost nothing to above 6%.

From Rockey (2012)

This data ends in 2010, but the trend towards increasing age at receiving the first NIH grant has continued through 2020.
Is this a problem? What’s the relationship between age and innovation?

Aging and Average Quality

This is a big literature, but I’m going to focus on a few papers that use lots of data to get at the experience of more typical scientists and inventors, rather than the experience of the most elite (see Jones, Reedy and Weinberg 2014 for a good overview of an older literature that focuses primarily on elite scientists). Yu et al. (2022) look at about 7mn biomedical research articles published between 1980 and 2009. Yu and coauthors do not know the age of the scientists who write these articles, but as a proxy they look at the time elapsed since their first publication. They then look at how various qualities of a scientific article change as a scientist gets older. First up, data related to the citations ultimately received by a paper. On the left, we have the relationship between the career age of the first and last authors, and the total number of citations received by a paper.2 On the right, the same thing, but expressed as a measure of the diversity of the fields that cite a paper - the lower the number, the more the citations received are concentrated in a small number of fields. In each case, Yu and coauthors separately estimate the impact of the age of the first and last author.3 Note also, these are the effects that remain after controlling for a variety of other factors. In particular, the charts control for the typical qualities of a given author (i.e., they include author fixed effects). See the web appendix for more on this issue. Also, they’re statistical estimates, so they have error bars, which I’ve omitted, but which do not change the overall trends.

Source: Regression coefficients with author fixed effects in Table 2 of Yu et al. (2022)

The story is a straightforward one.
Pick any author at random, and on average the papers they publish earlier in their career, whether as first author or last author, will be more highly cited and cited by a more diverse group of fields, than a paper they publish later in their career. In the figure below, Cui, Wu, and Evans (2022) provide some complementary data that goes beyond the life sciences, focusing their attention on scientists with successful careers lasting at least twenty years and once again proxying scientist age by the time elapsed since their first paper was published. They compute a measure of how disruptive a paper is, based on how often a paper is cited on its own, versus in conjunction with the papers it cites. The intuition of this disruption measure is that when a paper is disruptive, it renders older work obsolete and hence older work is no longer cited by future scientists working in the same area. By this measure, as scientists age their papers get less and less disruptive (also and separately, papers are becoming less and less disruptive over time, as discussed more here).4

From Cui, Wu, and Evans (2022). There is an error in the figure’s legend: the top line corresponds to the 1960s, the one below that to the 1970s, below that is the 1980s, and below that is the 1990s.

Last up, we can even extend these findings to inventors. Kaltenberg, Jaffe, and Lachman (2021) study the correlation between age and various patent-related measures for a set of 1.5mn inventors who were granted patents between 1976 and 2018. To estimate the age of inventors, Kaltenberg and coauthors scrape various directory websites that include birthday information for people with similar names as patentees, who also live in the city the patentee lists. They then compute the relationship between an inventor’s estimated age and some version of each of the metrics discussed above.
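The disruption measure sketched above (cite the paper on its own vs. in conjunction with its references) can be made concrete. Below is a simplified illustration of that idea, in the spirit of the CD-style index these papers use; the function name and inputs are my own framing, not the authors’ code:

```python
def disruption_index(citers_of_focal, citers_of_references):
    """Disruption of a focal paper, given two sets of later papers:
    those citing the focal paper, and those citing at least one of the
    focal paper's own references.

    +1: purely disruptive (the paper is cited on its own; its antecedents
        are ignored by later work).
    -1: purely consolidating (everyone citing the paper also cites the
        older work it built on).
    """
    only_focal = citers_of_focal - citers_of_references   # cite the paper alone
    both = citers_of_focal & citers_of_references         # cite paper + its references
    only_refs = citers_of_references - citers_of_focal    # cite only the older work
    total = len(only_focal) + len(both) + len(only_refs)
    if total == 0:
        return 0.0
    return (len(only_focal) - len(both)) / total

# Toy example: two later papers cite the focal paper alone, one cites it
# together with its references -> mildly disruptive, (2 - 1) / 3.
print(disruption_index({"p1", "p2", "p3"}, {"p2"}))
```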
Once again, these results pertain to what remains after we adjust for other factors (including inventor fixed effects, discussed below).

From Kaltenberg, Jaffe, and Lachman (2021)

On the left, we have total citations received by a patent. In the middle, a measure of the diversity of the technologies citing a patent (lower means citations come from a narrower set of technologies). And on the right, our measure of how disruptive a patent is, using the same measure as Cui, Wu, and Evans. It’s a by-now familiar story: as inventors age, the impact of their patented inventions (as measured by citations in various ways) goes down. (The figures are for the patents of solo inventors, but the same trend is there for the average age of a team of inventors.) So in all three studies, we see similar effects: the typical paper/patent of an older scientist or inventor gets fewer citations and the citations it does get come from a smaller range of fields, and are increasingly likely to come bundled with citations to older work. And the magnitudes involved here are quite large. In Yu et al. (2022), the papers published when you begin a career earn 50-65% more citations than those published at the end of a career. The effects are even larger for the citations received by patentees.

The Hits Keep Coming

This seems like pretty depressing news for active scientists and inventors: the average paper/patent gets less and less impactful with time. But in fact, this story is misleading, at least for scientists. Something quite surprising is going on under the surface. Liu et al. (2018) study about 20,000 scientists and compute the probability, over a career, that for any given paper, their personal most highly cited paper lies in the future. The results of the previous section suggest this probability should fall pretty rapidly.
At each career stage, your average citations are lower, and it would be natural to assume the best work you can produce will also tend to be lower impact, on average, than it was in earlier career stages. But this is not what Liu and coauthors find! Instead, they find that any paper written, at any stage in your career, has about an equal probability of being your top cited paper! The following figure illustrates their result. Each dot shows the probability that either the top cited paper (blue), second-most cited paper (green), or third-most cited paper (red) lies in the future, as you advance through your career (note it’s actually citations received within 10 years, and normalized by typical citations in your field/year). The vertical axis is this percent. The horizontal one is the stage in your career, measured as the fraction of all papers you will ever publish, that have been published so far. From Liu et al. (2018), extended data figure 1 This number can only go down, because that’s how time works (there can’t be a 50% chance your best work is in the future today, and a 60% chance it’s in the future tomorrow). But the figure shows it goes down in a very surprising way. Assuming each paper you publish has the same probability of being your career best, then when you are 25% of the way through your publishing career, there is a 25% chance your best work is behind you and a 75% chance it’s ahead of you. By the time you are 50% of the way through your publishing career, the probability the best is yet to come will have fallen to 50%. And so on. And that is precisely what the figure appears to show! What’s going on? Well, Yu and coauthors show that the number of publications in a career is not constant. Through the first 20-25 years of a career, the number of publications a scientist attaches their name to seems to rise before falling sharply. 
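The arithmetic above (75% at the quarter mark, 50% at the halfway mark) can be checked with a quick simulation, under the stated assumption that every paper in a career draws its citation count independently from the same distribution:

```python
import random

def prob_best_ahead(n_papers, fraction_done, trials=20000):
    """Estimate the probability that the career-best (most cited) paper
    still lies in the future, after publishing `fraction_done` of
    `n_papers`, assuming citation counts are i.i.d. across papers."""
    published = int(n_papers * fraction_done)
    hits = 0
    for _ in range(trials):
        citations = [random.random() for _ in range(n_papers)]
        best_index = citations.index(max(citations))
        if best_index >= published:  # the best paper is not yet published
            hits += 1
    return hits / trials

# Under i.i.d. draws, the chance the best is ahead is just the share of
# papers not yet published: roughly 0.75 at the quarter mark, 0.50 halfway.
print(prob_best_ahead(40, 0.25))
print(prob_best_ahead(40, 0.50))
```

Note the simulation matches the flat-line result in Liu et al.’s figure precisely because the i.i.d. assumption is what “every paper has an equal chance of being your best” means.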
Since the average is falling over this period, but the probability of a top cited paper is roughly constant, it must be that the variance is rising (the best get better, the worst get worse), in such a way that the net effect is a falling average. And Yu and coauthors present evidence that is the case. In the figure below, we track the average number of citations that go to hit papers in two different ways. In dark blue, we simply have the additional citations to the top cited paper by career stage. Note, unlike average citations, it does not fall s...
Self-Motivation: how to build a reward system for yourself
Despite my best intentions, I do not always feel as motivated as I would like to be. Whether it is a work task or a chore at home, if a job doesn’t appeal then I will sometimes ignore it until the last minute. While I always meet a deadline, there is a far more effective – and less stressful – way I could motivate myself to take action sooner. Building a reward system is a powerful way to boost your productivity, reducing the need to rely on intrinsic motivation to complete the work you need (and want) to do. However, the rewards you choose must appeal to you, and for maximum impact, they need to be perfectly timed.

The science of self-motivation

In her memoir My Beloved World, Sonia Sotomayor, the first Latinx and third woman appointed to the US Supreme Court, wrote that “success is its own reward.” While achieving a goal will naturally bring happiness, this can only be the case if you are driven to hit your target. But this is not enough. It might seem like cheating, but it is becoming more widely accepted that having a separate incentive to reach a goal has many benefits. Far from being frivolous, rewards are considered by researchers to be “the most crucial objects for life.” Rewards are needed to encourage us to eat and drink, and even to mate. In evolutionary terms, the better we are at striving for rewards, the greater our chances of survival. With a treat in mind for reaching a target, you may be more likely to commit to working towards the goal, and less inclined to procrastinate. Motivating yourself with a reward also acts as a form of positive reinforcement, increasing the likelihood that the promise of a treat will incentivize you to achieve future goals as well. Part of the brain’s reward system sits within the mesocorticolimbic circuit. Dopamine neurons feed into this reward circuit, and it is understood that the offer of a reward increases the firing of these neurons.
The stimulation of the circuit leads to positively motivated behaviors and reinforcement learning. The mesocorticolimbic circuit is also responsible for what researchers call “incentive salience” – the increased firing of dopamine neurons increases our desire for the reward, which in turn creates motivation. Rewards that are related to the task are likely to be more effective. This is known as “proximity to the reward”, and scientists have noted that a related reward can be a particularly salient factor in enhancing motivation. For example, if you want to read more research papers for work, you could motivate yourself by pledging to buy a novel on your wish list if you succeed in reading one academic paper per day for one week. However, it is not only the treat itself that is important for building a successful reward system; timing your rewards correctly is crucial to ensuring self-motivation is maintained over the long term.

Building a reward system

It can take time and multiple adjustments to build a reward system that will work for you. For operant conditioning to occur – when an association is made between a behavior and a consequence – the scheduling of rewards must be carefully planned to assist us in establishing new habits. A study conducted in 2018 compared the benefit of receiving frequent rewards for completing small tasks with the promise of a reward for finishing a long project. The researchers, Kaitlin Woolley and Ayelet Fishbach, found that when a small, regular reward was available, participants experienced greater interest and enjoyment in their work than those waiting for the delayed reward. Although Woolley and Fishbach demonstrated that regular rewards incentivized individuals to keep going with a project, to build a successful reward system you should first consider trying continuous reinforcement. Continuous reinforcement is often used to begin teaching a dog a new trick.
At first, the dog will need a treat every time he sits or offers his paw, because that way he knows he is doing the right thing. Withholding a treat in the early stages of learning will either make him think he has done the trick incorrectly, or disincentivize him next time you give the command. Once the dog has got the hang of it and sits every time you ask, you can move to intermittent reinforcement. He won’t know if he will get a treat every time, but he will sit anyway, in anticipation of maybe being rewarded. As humans, we also need to start with continuous reinforcement to boost motivation. If you want to learn a programming language, for example, you will need to reward yourself every time you sit down to teach yourself. This will reinforce a positive association with the habit, making it easier to maintain regular practice. Only after continuous reinforcement has helped establish your desired habit can you move to intermittent rewards. To keep dopamine firing in your mesocorticolimbic system, you could, for instance, create a “self-motivation lottery”. It works like this: first, write down a variety of prizes on pieces of paper, and place them into a cup or jar. The prizes must be things that you will really value, such as a new pen, a fresh journal, or a meal out. Each time you reach a goal, draw out a prize at random and enjoy the rush of rewarding yourself. Treating yourself regularly with those surprise gifts will help you to maintain new positive behaviors, as you will keep performing the task in the hope of a reward coming around again.

How to select effective rewards

To make an effective reward system that fosters self-motivation, you will need to choose your carrot wisely. Rewards are personal to the individual and will therefore depend on your needs and preferences. Although going for a run is proven to improve physical health and has multiple mental health benefits, if you don’t truly enjoy it then it will not feel like a reward.
Think carefully about which rewards will truly appeal to you, and therefore motivate you. The following list of ideas may help you start thinking about which rewards will encourage you to foster self-motivation, based on your own interests:

Watching one episode of a TV show without feeling guilty
Going out for a delicious lunch, or ordering treats to enjoy at home (try to keep it healthy!)
Taking a break to walk in nature
Reading a novel
Organizing an at-home spa day
Splurging on books or new stationery
Hosting a game night with friends
Trying a new form of exercise or a workout class
Going to the movies
Enjoying a long, relaxing soak in the bath

A reward does not need to be expensive. If it appeals to you, then you will be motivated to reach your goal so that you can treat yourself. Don’t feel discouraged if you set up a reward system and find that it does not seem to work for you. The key is to work out what you need to tweak. Perhaps you need to be rewarding yourself more regularly or for smaller milestones, or you may need to think of a more appealing reward. Once you’ve got the balance of timing and rewards right, the brain’s reward system will take it from there to increase self-motivation, boost productivity, and help you meet your targets. The post Self-Motivation: how to build a reward system for yourself appeared first on Ness Labs.
Self-Motivation: how to build a reward system for yourself
Digital detoxes don’t actually work
Each Monday, I get a “digital well-being” alert on my phone. It tells me how much time I spend staring at the screen each week, and highlights the apps I use the most. It helps me cut down on unnecessary use. But a more extreme approach to dealing with technology overwhelm has become popular: digital detoxes. A digital detox is a period in which a person voluntarily refrains from using digital devices, including smartphones, computers and social media platforms. However, recent research has shown that digital detoxes can negatively impact our overall well-being. Is there a healthier, more sustainable way to improve our relationship with the digital world?

The popularity of digital detoxes

Today, the search term “digital detox” is three times more popular than it was in 2004. People have concerns about internet addiction, or worry that social media is causing them anxiety. They may also attempt a digital detox to refocus on real-life social interactions. Media hype around the supposed harmful effects of technology has also increased the popularity of digital detoxes. For example, it is common to describe a correlation between mental health problems and overuse of technology as if the latter were the cause of the problem, rather than a co-occurring symptom. It is therefore no surprise that many of us have considered ditching our devices to help us feel more present or to find more time for self-reflection.

Why digital detoxes don’t work

Although it seems that a digital detox could solve problems including FOMO and comparison anxiety, as well as give us back the hours we lose to mindless scrolling, research has suggested that digital detoxes may do more harm than good.
A collaboration between Oxford University, The Education University of Hong Kong, Reading University and Durham University has found “no evidence to suggest abstaining from social media has a positive effect on an individual’s well-being.” The researchers noted that this contrasts with popular beliefs about the benefits of digital detoxes. Moreover, this international study found that those who took a break from social media didn’t replace online socializing with face-to-face, voice, or email interactions, as the researchers had expected. Taking a break from social media therefore led to reduced overall interaction and increased loneliness, as time on social media was not replaced with other forms of socializing. In 2019, a research paper published in the Perspectives in Psychiatric Care journal showed that individuals who abstained from using social media developed a lower mood and demonstrated reduced life satisfaction. They were also lonelier than the control group. The researchers concluded that while excessive social media use can be associated with negative consequences, abstaining will not necessarily lead to positive results. Crucially, the outcome of detoxing may depend on what you use your devices for. Focusing on Instagram and Facebook, Sarah Manley and colleagues reported that abstaining from the platforms for one week had no impact on passive users. For active users—who share content and participate in conversations—taking a “social media vacation” led to a lower overall mood. Manley concluded that social media use can be beneficial for active users, but must be balanced with the risk of addiction. We have all become more reliant on our devices. While they sound like a good idea, digital detoxes are unsustainable because they cut us off from the world. This can have a negative impact on our overall mood, as well as leading to feelings of isolation and loneliness. Rather than trying to detox, we should strive to develop a better relationship with the digital world.
How to cultivate healthy digital practices

Much like the difficulties experienced by fad dieters, heavily restricting our online behavior is unsustainable. Rather than starting a detox, we should aim for digital re-enchantment. The following strategies can help to cultivate a healthier, and more realistic, relationship with technology:

Become an active participant in the digital world. Multiple research papers have shown that passively consuming information via social media may lead to upward social comparison, depression, and anxiety. Reassuringly, active participation, through comments and conversation, has been shown to increase social connection and support, as well as enhancing positive emotion and well-being. Interacting with others on social media, rather than mindlessly scrolling, can therefore support your mental health.

Cultivate awareness. A lot of the frustration with social media comes from the feeling of wasting our time. Interstitial journaling is a way to track your time meaningfully. Each time you go on social media, write down what you did. Did you just scroll through your timeline? Did you reply to a friend’s post? Did you learn something new? This will help you acknowledge when you are using your social media sensibly and when you might be getting distracted.

Make small changes. The key to cultivating new practices is to implement changes progressively. Whereas going cold turkey will be an unpleasant shock, gradually changing the way you use your phone will make it far easier to maintain a healthier digital lifestyle.

Consume a healthy information diet. Choose your sources of information wisely to assist your learning. If you get distracted by the news throughout the day, set aside 30 minutes each morning and evening to catch up on current affairs. Try to consume an information diet that is valuable to you and helps you grow.

Foster deeper connections. Harness the power of the internet to connect with like-minded people or to learn about topics that excite you. The internet has made it possible to talk at length with strangers who share your passion for any subject—make the most of it.

Digital detoxes are popular, but like a crash diet they are unlikely to boost your well-being or improve the way you consume online information. In fact, those who attempt a detox may notice low mood and feelings of isolation or loneliness. Instead, focus on using technology to your advantage: cultivate genuine connections with others, consume only information that helps you grow personally and professionally, and reflect on the way you use your devices. The post Digital detoxes don’t actually work appeared first on Ness Labs.
Deliberate doubt: the art of questioning our assumptions
Socrates, Galileo, Marie Curie, Einstein… What did these great thinkers have in common? They all practiced deliberate doubt and used it as a tool to improve their thinking and generate creative ideas. Deliberate doubt is the practice of actively questioning our beliefs and assumptions. It is about suspending our certainty and letting go of our preconceived notions in order to explore new ideas and perspectives. By turning doubt into a deliberate process, we open ourselves up to new possibilities and allow our minds to wander in unexpected directions.

A thinking tool for systematic curiosity

When we’re certain of something, we tend to stop looking for alternative explanations or possibilities. But when we doubt, we’re forced to consider other perspectives and look for evidence to support our beliefs. Of course, doubt can feel uncomfortable, but it can lead to a more nuanced understanding of a topic and can spark new ideas and insights. Let’s say that you’re working on a research project and your intuition tells you that a certain hypothesis is correct. You may become so focused on this hypothesis that you’re blind to other—equally interesting—options. By leaving room for uncertainty, you may find that a different explanation could be supported by the evidence, which might lead to new insights. By doubting your initial assumption, you open yourself up to new possibilities which can improve the quality of your research. When we’re faced with a difficult problem, it can also be tempting to rely on our preconceived notions and try to solve it in the same way that we’ve solved similar problems in the past. But if we consider alternative approaches, we may find that a different solution is actually more effective and will lead to better outcomes. Deliberate doubt can help us to develop a more open-minded and curious approach to the world. It encourages us to consider other perspectives and to seek out new information.
This approach has been used by some of the best thinkers to generate new, innovative ideas:

Socrates. The Greek philosopher is known for his method of questioning, which he called elenchus, better known today as the Socratic method. He believed that by asking questions and doubting the beliefs and assumptions of others, he could help people to think more deeply and critically about the world around them.

Galileo Galilei. Considered the father of modern observational astronomy, Galileo was known for doubting existing theories and beliefs and testing them through observation and experimentation. This method helped him to make many important discoveries, including observational evidence that the Earth orbits the Sun, which was contrary to the prevailing belief at the time.

Marie Curie. The Polish-French physicist and chemist is known for her pioneering work in radioactivity. She was the first woman to win a Nobel Prize and the only person to win Nobel Prizes in two different scientific fields (physics and chemistry). A practitioner of deliberate doubt, Curie was known for her ability to challenge existing theories and beliefs and to seek out new evidence to support her ideas.

Albert Einstein. Perhaps the most famous of them all, Einstein had an uncanny ability to think outside of the box and challenge existing theories and beliefs. In his own words to a journalist at LIFE Magazine: “The important thing is not to stop questioning. Curiosity has its own reason for existence.”

Deliberate doubt can help us challenge our assumptions, stimulate creative thinking, and improve our problem-solving skills, though that doesn’t mean you should use it all the time. And the good news is: it’s simple to start implementing its principles in your daily life and work.

How to practice deliberate doubt

Practicing deliberate doubt requires regularly challenging your own beliefs and assumptions. Ask yourself questions like: What if I’m wrong about this? What evidence do I have to support my belief? What are the alternative explanations?
Another way to practice deliberate doubt is to seek out a diverse range of experiences and expertise. Ask yourself: Are there people who have different perspectives on this matter? This way, you can broaden your understanding of the world by exposing yourself to different viewpoints. For instance, you can read books or articles by authors who have different backgrounds or opinions than your own, or you can have conversations with people who have different experiences than you do. The variety of perspectives will help you develop a more nuanced understanding of a topic, and potentially generate more interesting ideas. Finally, test your beliefs with evidence. Let’s say that you’re working on a product launch and you believe that a certain marketing strategy will be the most effective. Instead of treating this assumption as your only option, you can test it by conducting a pilot study or a small-scale experiment to see if it actually produces the desired outcome. Deliberate doubt is incredibly effective if your goal is to open your cone of uncertainty and think more creatively but, like all thinking tools, it shouldn’t be used indiscriminately.

When doubt becomes counterproductive

While deliberate doubt can be a valuable tool for generating creative ideas and exploring complex problems, it can also be counterproductive if it is not practiced in the right way. It’s important to keep in mind that deliberate doubt is not constant doubt. When practiced all the time, deliberate doubt can lead to inaction. If we’re continuously doubting our own ideas, we’ll be less likely to pursue them and see them through to completion. We can become overly hesitant, which can prevent us from making decisions. We spend so much time doubting everything that we end up not doing anything. Deliberate doubt can also lead to a lack of confidence when we apply it to ourselves. We can become self-critical and unsure of our abilities. In this case, deliberate doubt can undermine our self-esteem.
As a result, we may be too afraid to try new things or take risks. To avoid these pitfalls, it’s important to strike a balance between doubt and certainty, and to use doubt as a tool to stimulate creative thinking and exploration, rather than as a means of undermining ourselves or others.

Avoiding the pitfalls of deliberate doubt

There are a few caveats to keep in mind in order to avoid pitfalls and make the most of this valuable tool. Some of these caveats include:

Balance doubt with certainty. If we doubt everything, we may become overly skeptical and cynical. On the other hand, if we’re certain of everything, we may stop looking for alternative explanations or possibilities, and this can limit our creativity and thinking. Dance with uncertainty: find a balance between doubt and certainty.

Use doubt as a tool, not as a weapon. When we use doubt as a weapon, it can lead to a lack of confidence in ourselves and trust in others. When practicing deliberate doubt, it is important to use it to stimulate creative thinking and exploration instead.

Seek out diverse perspectives and experiences. By exposing ourselves to different viewpoints, we can broaden our understanding of the world and challenge our assumptions. This can help us to develop a more nuanced understanding of a topic and generate new ideas.

By actively questioning our beliefs and assumptions, and by exposing ourselves to diverse perspectives, we can open ourselves up to new possibilities and generate original ideas. As long as you use it as one of the many thinking tools at your disposal, deliberate doubt can be a powerful source of insights and inspiration. The post Deliberate doubt: the art of questioning our assumptions appeared first on Ness Labs.
December 2022 Updates
New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights some recent updates.

Science: Trending Less Disruptive

The post “Science is getting harder” surveyed four main categories of evidence (Nobel prizes, top cited papers, growth in the number of topics covered by science, and citations to recent work by patents and papers) to argue it has become more challenging to make scientific discoveries of comparable “size” to those of the past. This post has now been updated to include an additional category of evidence related to a measure of how disruptive academic papers are. From the updated article: …The preceding suggested a decline in the number of new topics under study by looking at the words associated with papers. But we can infer a similar process is under way by turning again to their citations. The Consolidation-Disruption Index (CD index for short) attempts to score papers on the extent to which they overturn received ideas and birth new fields of inquiry. To see the basic idea of the CD index, suppose we want to see how disruptive some particular paper x is. To compute paper x’s CD index, we would identify all the papers that cite paper x or the papers x cites itself. We would then look to see if the papers that cite x also tend to cite x’s references, or if they cite x alone. If every paper citing paper x also cites x’s own references, paper x has the minimum CD index score of -1. If none of the papers citing paper x cite any of paper x’s references, paper x has the maximum CD index score of +1. The intuition here is that if paper x overturned old ideas and made them obsolete, then we shouldn’t see people continuing to cite older work, at least in the same narrow research area. But if paper x is a mere incremental development, then future papers continue to cite older work alongside it. That’s the idea anyway; does it actually map to our ideas of what a disruptive paper is?
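The calculation described above can be sketched in a few lines of code. This is a simplified illustration with my own function and variable names; the published measure also counts papers that cite x's references without citing x itself, which the sketch includes as n_k:

```python
def cd_index(x, cites, cited_by):
    """Simplified Consolidation-Disruption index for a focal paper x.

    cites[p]    -- set of papers that p references
    cited_by[p] -- set of papers that cite p
    """
    focal_refs = cites[x]
    citing_x = cited_by[x]
    # Papers that cite x's references (excluding x itself)
    citing_refs = set()
    for r in focal_refs:
        citing_refs |= cited_by[r]
    citing_refs.discard(x)
    # n_i: papers citing x but none of its references (disruptive signal)
    n_i = sum(1 for p in citing_x if not (cites.get(p, set()) & focal_refs))
    # n_j: papers citing both x and at least one of its references (consolidating)
    n_j = len(citing_x) - n_i
    # n_k: papers citing x's references but not x itself
    n_k = len(citing_refs - citing_x)
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0
```

If every paper citing x also cites x's references, n_i is zero and the score is -1; if every citer ignores x's references, the score is +1, matching the two extremes described above.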
It’s a new measure and its properties are still under investigation, but Wu, Wang, and Evans (2019) tried to validate it by identifying sets of papers that we have independent reasons to believe are likely to be more or less disruptive than each other. They then checked to see that the CD index matched predictions. Nobel prize winning papers? We would expect those to be disruptive, and indeed, Wu and coauthors find they tend to have high CD index scores on average. Literature review articles? We would expect those to be less disruptive than original research, and their CD index is indeed lower on average than the CD index of the papers they review. Articles which specifically mention another person in the title? We would expect those to be incremental advances, and they also have lower CD index scores. Lastly, for a sample of 190 papers suggested by a survey of 20 scholars as being distinctively disruptive or not disruptive, the CD index closely tracked which papers were disruptive and which were not. Park, Leahey, and Funk (2022) compute the CD index for a variety of different datasets of academic publications, encompassing many millions of papers. Below is a representative result from 25 million papers drawn from the Web of Science. Across all major fields, the CD index has fallen substantially. Declining Disruption - from Park, Leahey, and Funk (2022) This decline is robust to a lot of different attempts to explain it away. For example, we might be worried that this is a mechanical outcome of the tendency to cite more papers, and to cite older papers (which we discuss in the next section). For any given paper x, that would increase the probability we cite paper x’s references, in addition to x. Park, Leahey, and Funk try to show this isn’t solely driving their results in a few different ways. For example, they create placebo citation networks, by randomly shuffling the actual citations papers make to other papers.
So instead of paper y citing paper x, they redirect the citation so that paper y now cites some other paper z, where z is published in the same year as x. This kind of reshuffling preserves the tendency over time of papers to cite more references and to cite older works. But when you compute the CD index of these placebo citation networks, they exhibit smaller declines than in the actual citation networks, suggesting the decline of disruption isn’t just a mechanical artifact of the trend towards citing more and older papers. Lastly, it turns out this decline in the average value of the CD index is not so much driven by a decrease in the number of disruptive papers, as it is a massive increase in the number of incremental papers. The following figure plots the absolute number of papers published in a given year with a CD index in one of four ranges. In blue, we have the least disruptive papers, in red, the most disruptive, with green and orange in the middle. Annual # of publications in four CD index ranges. Blue = 0.0-0.25. Orange = 0.25-0.5. Green = 0.5-0.75. Red = 0.75-1.0. From Park, Leahey, and Funk (2022). While the annual number of the most disruptive papers (in red) grew over 1945-1995 or so, it has fallen since then so that the number of highly disruptive papers published in 2010 isn’t much different from the number published in 1945. But over the same time period, the number of the mostly incremental papers (in blue) has grown dramatically, from a few thousand a year to nearly 200,000 per year. As an aside, the above presents an interesting parallel with the Nobel prize results discussed earlier: Collison and Nielsen find the impact of Nobel prize-winning discoveries are not rated as worse in more recent years (except in physics), but neither are they rated better (as we might expect given the increase in scientific resources). Similarly, we are not producing fewer highly disruptive papers; we simply are not getting more for our extra resources. 
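That reshuffling can be sketched as follows. This is a hypothetical illustration (data layout and names are my own) of a shuffle that preserves each paper's reference count and the publication years of what it cites:

```python
import random
from collections import defaultdict

def placebo_network(cites, year, seed=0):
    """Redirect each citation to a random paper published in the same
    year as the originally cited paper.

    cites[p] -- list of papers that p references
    year[p]  -- publication year of paper p
    """
    rng = random.Random(seed)
    # Pool all cited papers by their publication year
    pools = defaultdict(list)
    for refs in cites.values():
        for r in refs:
            pools[year[r]].append(r)
    for pool in pools.values():
        rng.shuffle(pool)
    # Each citation to a year-y paper is swapped for another year-y paper,
    # preserving the count and age profile of every paper's reference list
    return {p: [pools[year[r]].pop() for r in refs] for p, refs in cites.items()}
```

Because the pools are built from the actual citations, the placebo network keeps the aggregate tendency to cite more, and older, papers while scrambling who cites whom.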
The updated article also includes some new discussion of additional text-based evidence for a decline in the number of topics under study in science, relative to the number of papers, again from Park, Leahey, and Funk (2022). It also adds in some evidence that the rise in academic citations to older works does not merely reflect a rise in polite but inconsequential citations - at least in recent times, the citations to older work are just as likely to be rated influential citations as the citations to younger work. Read the whole thing

Creative Patents and the Pace of Technological Progress

The article “Innovation (mostly) gets harder” has a similar conclusion to “Science is getting harder”, but applied to the case of technological progress: eking out a given proportional increase along some technological metric seems to require more and more effort. The original article reviewed evidence from a few specific technologies (integrated circuits, machine learning benchmarks, agricultural yields, and healthcare) as well as some broad-based proxies for technological progress (firm-level profit analogues, and total factor productivity). I’ve now updated this article to include a discussion of patents derived from a fascinating PhD job market paper by Aakash Kalyani: …it’s desirable to complement the case studies with some broader measures less susceptible to the charge of cherry-picking. One obvious place to turn is patents: in theory, each patent describes a new invention that someone at the patent office thought was useful and not obvious. Following Bloom et al., below I calculate annual US patent grants1 per effective researcher. As a first pass, this data seems to go against the case study evidence: more R&D effort has been roughly matched by more patenting, and in fact, in recent years, patenting has increased faster than R&D effort! Is innovation, as measured by patents, getting easier? Author calculations. Annual patent grant data from here.
US effective researchers computed by dividing annual R&D spending (see figure RD-1 here) by median wage for college-educated US workers (spliced data series from Bloom et al., here). The trouble with the above figure is that patents shouldn’t really be thought of as a pure census of new inventions for a few reasons. First off, the propensity of inventors (and inventive firms) to seek patent protection for their inventions seems to have increased over time.2 So the observed increase in annual patenting may simply reflect an increase in the share of inventions that are patented, rather than any change in the number of new inventions. Second, patents vary a lot in their value. A small share of patents seems to account for the majority of their value. We don’t care so much about the total number of patents as the number of valuable patents. On the second problem at least, Kalyani (2022) shows that one way to separate the patent wheat from the patent chaff is to look at the actual text of the patent document. Specifically, Kalyani processes the text of patents to identify technical terminology and then looks for patents that have a larger than usual share of technical phrases (think “machine learning” or “neural network”) that are not previously mentioned in patents filed in the preceding five years. When a patent has twice as many of these new technical phrases as the average for its technology type, he calls it a creative patent. About 15% of patents are creative by this definition. Kalyani provides a variety of evidence that creative patents really do seem to measure new inventions, in a way that non-creative patents don’t. Creative patents are correlated with new product announcements, better stock market returns for the patent-holder, more R&D expenditure, and greater productivity growth. Non-creative patents, in general, are not.
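The flagging rule can be illustrated with a toy sketch. The real pipeline's phrase extraction and bookkeeping over the five-year window are considerably more involved; the function names and the share-based simplification here are mine:

```python
def new_phrase_share(phrases, prior_phrases):
    """Share of a patent's technical phrases that did not appear in
    patents filed in the preceding five years (prior_phrases)."""
    if not phrases:
        return 0.0
    return sum(1 for ph in phrases if ph not in prior_phrases) / len(phrases)

def is_creative(phrases, prior_phrases, tech_class_avg):
    # Flag a patent as "creative" when its share of new technical phrases
    # is at least twice the average for its technology class
    return new_phrase_share(phrases, prior_phrases) >= 2 * tech_class_avg
```

Under this sketch, a patent full of phrases already circulating in recent filings scores near zero, while one introducing mostly unseen terminology clears the twice-the-class-average bar.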
And when you look at the number of creative patents (in per capita terms - it’s the solid green line below), Kalyani finds they have been on the decline since at least 1990. From Kalyani (20...
Proprioceptive writing: a method for embodied self-reflection
For the last few years, I have been looking for ways to get to know myself better. An unexpected life event in 2019, followed swiftly by trying to maintain my freelance career while solo parenting through a pandemic, left me feeling I had lost my sense of self. Back on my feet, but, like many parents, still trying to maintain the balance of work and home life, I have been searching for a way to support my current reflective practices. Expensive and time-consuming options are off the table, so it has been refreshing to learn about a free method for boosting self-awareness: proprioceptive writing. This process combines meditation and writing, two of the most effective ways to tap into the inner self.

Rediscovering your sense of self

Proprioceptive writing was invented in the mid-1970s by author Linda Trichter Metcalf. Metcalf was working as a professor at Pratt Institute and began researching methods to help students find their writing voice. She developed the proprioceptive writing method as a tool to bring the self into focus and clarify one’s own life. The word proprioception comes from the Latin proprius, meaning “one’s own”. In medical terminology, proprioception is the sense that tells us about the location or movement of our bodies. If healthy proprioception is present, you will know whether someone has moved your finger upwards or downwards even when your eyes are closed. Conditions such as diabetes can disrupt this sense, making it difficult to perceive where your digits, or even limbs, are in space. The same is true of our emotions and imagination. It is easy to lose sense of where we are in our lives right now, and the metaphorical direction we are heading in. With emotional proprioception missing, we start to feel lost or as if we are simply coasting along. If we have ignored our inner voice for some time by not completing any reflective practice, we can become switched off to our own feelings and ideas.
Curiosity dwindles, and we may not register our everyday thoughts. Proprioceptive writing can help us to rediscover our dreams and creative energy, while rebuilding our self-trust. Furthermore, it may help us to resolve emotional conflict while dissolving inhibitions.

The benefits of embodied self-reflection

In their book, Writing the Mind Alive, Linda Trichter Metcalf and Tobin Simon describe the ritual of proprioceptive writing as “utter simplicity”. The writing task only takes around 25 minutes. During this time, one listens to inner thoughts, writing down whatever is heard. This could include feelings, emotions or worries that come to mind. These are explored through a combination of writing and inner hearing. Researchers Jennifer Leigh and Richard Bailey noted that self-focus based purely on the reflection of thoughts can lead to rumination, anxiety and neuroticism. Conversely, they found that embodied reflective practices such as proprioceptive writing reduced the likelihood of unhealthy rumination. Furthermore, this practice was found to be helpful for both personal and professional development. The combination of writing and reflection serves as a method to connect physical sensations with thoughts that an individual might otherwise remain unaware of. By learning to listen to one’s own thoughts in a supportive, empathetic manner, it is possible to develop a stronger connection to our emotions. Writing in the Journal of Vocational Behavior, Reinekke Lengelle and colleagues reported that proprioceptive writing encouraged greater vulnerability. Students who completed career questionnaires submitted answers that showed openness and depth of understanding, with richer material than would usually be expected of similar reflective exercises. Lengelle concluded that proprioceptive writing increased the development of students’ career identities and narratives.
This could, in turn, “enable them to contribute usefully to society in a way that is personally meaningful to them.” By connecting with physical sensations through practising proprioceptive writing, you are likely to experience better internal and external emotional connections. This can lead to greater empathy for both oneself and others, as well as improved confidence levels, providing the right environment for personal and professional growth.

How to practise proprioceptive writing

This self-reflection method involves writing for 25 minutes while listening to music. Professor Metcalf recommends Baroque music to aid creativity. The only equipment required is a pen and pad of plain paper. You should then follow these three steps for each session:

Write down what you hear. It takes practice to recognise your thoughts and convert them to words, so take your time writing down each feeling as it comes. It can be helpful to think of your thoughts as voices. Perhaps a voice says, “I still need to enrol in that online course”, while another says, “I can’t take on anything else right now.” In the first stage, write it all down without any judgement.

Hear what you write. Now that you have written down your thoughts, it will be easier to listen to what your mind is saying. Take time to explore each thought before you move on to another feeling or concern. For example, if you find yourself worrying that your income is lower than you would like, dig deeper into where this thought comes from and what your mind is trying to say. The process will help you listen to the story you are telling yourself.

Go deeper for each thought. With every thought you wrote down, ask yourself: “What do I mean by…?” In the salary example above, you would explore your income worries further to understand whether your concerns are related to financial difficulty, your perceived status, personal expectations, self-image, self-esteem, self-worth or another issue.
Keep your thinking slow to fully explore every thought in the above three steps, rather than letting your mind race ahead. Then, at the end of the 25 minutes, stop writing and ask yourself four review questions:

1. Which thoughts were heard but not written down?
2. How do I feel now?
3. What story am I telling?
4. Do I have any direction for future proprioceptive writing sessions?

These review questions should help to clarify the thoughts you have had, as well as providing prompts to help you get started on your next session.

Proprioceptive writing is a simple technique that combines writing and meditation to support embodied self-reflection. This method of self-reflection can reduce rumination and supports both personal and professional growth. Practising hearing intelligence can be enlightening, as it helps you discover not only what is on your mind, but also the meaning and significance behind these thoughts. By putting your thoughts into words, you can pay closer attention to feelings that might otherwise go unnoticed. It’s a way to make time to really listen to your inner self.

The post Proprioceptive writing: a method for embodied self-reflection appeared first on Ness Labs.
Reopening the mind: how cognitive closure kills creative thinking
Finding answers is a highly valued skill in today’s world, where more than ever knowledge is power. We pride ourselves on quickly resolving issues and creating consensus. In job descriptions, companies clearly state that they are looking for problem solvers. But what if this single-mindedness blinds us to more creative answers? What would happen if we became more comfortable with unsolved problems? The need for cognitive closure is the motivation to find an answer to ambiguous situations — any answer that aligns with our existing knowledge. Not only can it lead us to make mistakes based on erroneous assumptions, but it can also obscure the path to innovation.

The psychology of cognitive closure

Ideally, we would seek knowledge to resolve questions regardless of whether that new knowledge points to an answer that aligns with what we believe or what we want (“I don’t like this answer, but it is the most logical answer”). We would also accept the ambiguous nature of a situation for as long as we don’t have enough knowledge to resolve it (“I currently don’t know enough to answer that question”). That’s what we would do if we were rational agents. But dealing with uncertainty feels uncomfortable, so we try to get to an answer as fast as possible, sometimes irrationally, as long as it seems to neatly close the open loops we’ve been struggling with — thus providing us with a sense of closure. That’s why our need for cognitive closure is related to our aversion toward ambiguity.

According to Professor Arie Kruglanski and his team at the University of Maryland, the need for cognitive closure manifests itself via two main tendencies:

- The urgency tendency: our inclination to attain closure as fast as possible.
- The permanence tendency: our inclination to maintain closure for as long as possible.
When we find ourselves in an uncertain situation, urgency and permanence act as irrational sources of motivation that push us to try our hardest to eliminate ambiguity and arrive at a definite conclusion. We are compelled to find an answer, irrespective of its actual validity. Some people feel more comfortable than others in ambiguous situations. Professor Arie Kruglanski and his team designed the Need for Closure Scale (NFCS), which, in their own words, “was introduced to assess the extent to which a person, faced with a decision or judgment, desires any answer, as compared with confusion and ambiguity.” Items such as “I think that having clear rules and order at work is essential to success” and “When dining out, I like to go to places where I’ve been before so that I know what to expect” will make you score higher. Items such as “Even after I’ve made up my mind about something, I am always eager to consider a different opinion” and “I enjoy the uncertainty of going into a new situation without knowing what might happen” are reverse coded. People who score high on the NFCS are more likely to make stereotypical judgments and to distort new information so it aligns with their existing beliefs. Conversely, people who score low on the scale will display more fluid, more creative thinking, and will be more open to new ideas and exploring new environments. While our individual need for cognitive closure is mostly stable throughout our lives, it can sometimes be affected by specific circumstances. For instance, experiments show that under high time pressure, we tend to use shortcuts to process information and get to a solution faster. Just as heuristics can often be helpful, our need for cognitive closure can be beneficial in simple situations that require a quick answer.
However, when faced with more complex problems that demand creative thinking, our need for cognitive closure can get in the way by motivating us to accept any answer that fits our existing knowledge, whether explicitly or tacitly.

Cognitive closure and creative thinking

A high need for cognitive closure may lead us to select only information that matches our current knowledge, which may result in faster resolution. We may also analyze that information in ways that produce simple, quick solutions — but not always the best solution. Another way cognitive closure impacts the way we think is by making us cling to our current ideas to maintain our sense of expertise. Instead of expending cognitive resources on learning new information and dealing with the discomfort of uncertainty, we hold on to the reassuring perception of solid knowledge. Preserving the stability of our web of knowledge becomes more important than expanding it.

In contrast, a lower need for cognitive closure means we are more comfortable playing with many shades of gray and remaining in a situation where we don’t have an answer yet — and may never get to a satisfactory resolution. Of course, you don’t want your need for cognitive closure to be too low, as in many situations we do need to make a decision at some point, even if we don’t have all the information. But more often than not, a high need for cognitive closure can be blamed for rushed, unimaginative decisions. Fortunately, our need for cognitive closure can be reduced by being intentional about the way we navigate ambiguous situations and by making space for productive mistakes.

Embracing ambiguity to unlock creativity

The first step is to know where you sit on the scale. The more you know about how you tend to react in uncertain and complex situations, the better you will be able to manage your relative need for cognitive closure.
Researchers Arne Roets and Alain Van Hiel from Ghent University created a short version of the questionnaire, with only 15 items. Rate each statement from 1 (“strongly disagree”) to 6 (“strongly agree”):

1. I don’t like situations that are uncertain.
2. I dislike questions which could be answered in many different ways.
3. I find that a well ordered life with regular hours suits my temperament.
4. I feel uncomfortable when I don’t understand the reason why an event occurred in my life.
5. I feel irritated when one person disagrees with what everyone else in a group believes.
6. I don’t like to go into a situation without knowing what I can expect from it.
7. When I have made a decision, I feel relieved.
8. When I am confronted with a problem, I’m dying to reach a solution very quickly.
9. I would quickly become impatient and irritated if I would not find a solution to a problem immediately.
10. I don’t like to be with people who are capable of unexpected actions.
11. I dislike it when a person’s statement could mean many different things.
12. I find that establishing a consistent routine enables me to enjoy life more.
13. I enjoy having a clear and structured mode of life.
14. I do not usually consult many different opinions before forming my own view.
15. I dislike unpredictable situations.

Then, add up all your answers. Scores up to 30 indicate a low need for closure, and scores between 75 and 90 indicate a high need for closure.
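The scoring rule above is just a sum, which can be sketched in a few lines of Python (a hypothetical helper written for this article, not part of any published scoring software):

```python
def nfcs_score(ratings):
    """Score the 15-item short Need for Closure Scale.

    `ratings` holds 15 integers, each from 1 ("strongly disagree")
    to 6 ("strongly agree"). None of the 15 short-form items listed
    above is reverse coded, so the score is a simple sum (15-90).
    """
    if len(ratings) != 15:
        raise ValueError("expected exactly 15 ratings")
    if any(not 1 <= r <= 6 for r in ratings):
        raise ValueError("each rating must be between 1 and 6")
    return sum(ratings)

# Answering "somewhat agree" (4) to every item gives a mid-range score.
print(nfcs_score([4] * 15))  # 60: neither low (<=30) nor high (75-90)
```

A score of 60, for example, would sit in the broad middle band between the low and high thresholds mentioned above.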
If you find you have a high need for closure, here are some simple strategies you can apply to keep your mind open to competing possibilities when facing uncertain situations, and to avoid making decisions too fast.

- Design a psychologically safe environment. Our need for closure goes up when we feel threatened, and it goes down when we feel safe to make mistakes. By fostering psychological safety and encouraging creative experimentation, you and the people you work with are more likely to open your minds to the power of uncertainty.

- Fall in love with problems. Instead of trying to find answers as quickly as possible, train yourself to become comfortable with open issues that you know are unsolved. Richard Feynman recommended keeping a dozen of your favorite problems constantly present in your mind. He said: “Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps.”

- Practice mind gardening. In French, my native language, we talk of ideas as seeds that need to sprout (“faire germer une idée”). Keeping your mind open doesn’t mean you should passively wait for an answer. When you find yourself in an uncertain situation, collect nuggets of information and grow your tree of knowledge by connecting ideas together. You may not get to a definite answer, but you will still generate interesting insights. Instead of building a prison of convergence, cultivate a garden of emergence.

- Learn in public. Similarly, don’t wait until you have an answer to share it with the world, as waiting may lead you to rush to a clear solution. Instead, publish your early ideas, especially if they feel half-baked. You can do this on your blog, on social media, or in a public digital garden.

- Decide when to decide. While reducing your need for cognitive closure will allow you to explore more innovative answers to complex problems, there will be times when you need to make a decision, whether because of time pressure or other imperatives. Know when questions can remain open, and when you should move forward, even if you wish you had more information. The DECIDE framework can be a useful tool to make a decision and then evaluate the result.

Liminal states can be uncomfortable, but they offer an unparalleled time for creativity. Some people are more comfortable than others in these moments of ambiguity, and the way we handle uncertainty greatly impacts our ability to think creatively under pressure. Knowing your own level of need for cognitive closure can help you better navigate those unfamiliar spaces and ensure you don’t constrain your imagination by rushing to make a decision.
The psychology of prestige: why we play the social status game
With social media at our fingertips, we are regularly alerted to the news of a friend’s new car, an ex-colleague being awarded yet another promotion, or the lavish holiday our neighbors have somehow managed to afford. It’s hard not to get swept up in the pursuit of social status. Far from being a modern phenomenon, we have craved status ever since our primate days, when it already offered advantages within hierarchical micro-societies. However, now that status is not so closely linked to our survival, pursuing goals based on the assumed prestige our success will confer can be a bad idea. For instance, those who choose to study medicine based on the future status of being a doctor could later find themselves unfulfilled in a career they are not truly interested or invested in. Rather than striving for status, we need to find more sustainable incentives for success.

Our natural desire for prestige

The importance we confer on prestige makes sense from an evolutionary perspective. For our ancestors, being more popular was a survival advantage. Social status offered greater group protection and longevity, which meant that they were more likely to reproduce. Similarly, as modern individuals, we seek out and follow paths that will maximize our social status and capital, even if we do not realize we are doing it. Professor Cameron Anderson explained that status influences how we behave and think. For example, wearing designer clothing or driving a sports car may be part of our inbuilt desire for prestige. Such status symbols can help maintain social hierarchies. Dr Sabina Siebert from the University of Glasgow found that when faced with competition from other professions, barristers protected their prestige with the use of status symbols including professional dress, ceremonies and rituals.
She concluded that this allowed “elite professionals to maintain their superior status.” Modern society has exacerbated our natural desire for high-ranking status, with social media acting as a giant leaderboard where we compete with each other to gain the most prestige points. Eugene Wei, who has worked in media, technology and for consumer internet companies, wrote that social media is built on the idea that it offers an efficient way to accumulate social capital. Likes, retweets, and comments are felt to increase reach and boost the perception of one’s own value. It’s a “world of artificial prestige”. But farming prestige points is not without a cost.

The impact of status anxiety

When you focus on how successful you appear to others, status anxiety can occur. Your fear of not being valued by society may sadly lead you to make harmful long-term decisions. If you study law to claim the associated status of working as a lawyer, rather than because you are drawn to the career itself, you may later find yourself dissatisfied, stressed or unhappy at work. The desire to achieve status may mean you did not consider other career options, and may have turned down more suitable opportunities because of your drive to appear prestigious.

In his book Status Anxiety, philosopher Alain de Botton writes that the anxiety about what others think of us, and about whether we are judged a success or a failure, can lead us to make decisions that are self-defeating, lower our self-worth, or are at odds with our values. Status symbols such as a large house in a desirable area, multiple holidays each year, or being able to flash a Rolex on your wrist may all be ways that you feel you demonstrate your significance and value in society. However, when your drive to be outwardly successful supersedes all else, you may ignore exciting vocational work opportunities, put too little energy into personal relationships, or fail to make time for rest.
If you decline opportunities for personal growth or self-discovery while striving for status, you could progress fast, but not in the right direction. In situations in which status, rather than the achievement itself, is the goal, we will likely remain dissatisfied even once it is acquired. So what’s the alternative?

Breaking free from the social status game

It is possible to replace irrational status-seeking behaviors with healthier alternatives in which the value is found in the act itself rather than in the aimless collection of status symbols. Here are a few strategies to help you replace empty prestige with playful exploration:

- Practice metacognition to reflect on long-term goals. By becoming more aware of your thought processes, it is possible to observe patterns regarding your motivations. If you notice that you are instinctively drawn to actions based on the potential for increased status, note this down in a journal. Take time to consider whether the goal or motivation is truly aligned with your values, or if you are being coerced by a desire for prestige.

- Surround yourself with explorers. If your colleagues, friends or family are all driven by status, it is difficult not to get sucked into the pursuit of outward signs of success. Even worse, you may find yourself playing a game of one-upmanship, caught in a vicious cycle of trying to appear better than your peers. To avoid this trap, find friends online and in real life who are not playing the status game. This will help you avoid feelings of inadequacy and the desire to keep up with others.

- Explore unconventional paths. Many people have achieved success by pursuing their interests. Reserve time to read memoirs and biographies of those who have achieved their dreams not by striving for wealth or status, but by reflecting on what is important to them and following their own path.

- Focus on learning new skills. Rather than collecting status symbols, try to acquire skills that could help you grow and develop as an individual. This could include working on your communication skills, self-confidence, or problem-solving capabilities. It may even involve considering a career change.

The psychology of prestige has its roots in evolution. However, in the modern world, we have the ability to reflect on the motivations behind our pursuit of status. It is important to distinguish between wanting to achieve a goal that is aligned with our values and will truly make us feel good, and a goal we want to meet purely for its associated status in our society. If we’re aware that we’re playing the social status game, then we can reflect on whether there is an intrinsically motivated path that could provide opportunities for growth and greater overall satisfaction.

The post The psychology of prestige: why we play the social status game appeared first on Ness Labs.
Connect all your workflows with Michael Dubakov CEO of Fibery
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us work better and happier. Michael Dubakov is the CEO of Fibery, an all-in-one workspace allowing the whole company to do everything together, whether it’s research, product development, marketing, customer management, or more. In this interview, we talked about the proper metrics for productivity, how to augment organizational intelligence, what we can learn from hypertext tools from the ’80s, the benefits of combining work management and knowledge management, how to work with both structured and unstructured information, and much more. Enjoy the read!

Hi Michael, thank you so much for agreeing to this interview. Let’s start with a bit of a controversial question: what do you think is the problem with most productivity tools?

I’ll speak mostly about teams’ productivity here. Productivity tools should increase productivity, right? But the productivity of knowledge workers is extremely hard to measure. There is no good metric yet. Working hours, lines of code, or any similar metrics measure effort, but not results. We need a better metric. I think the proper metric is the quality and quantity of insights. What is an insight? It’s a piece of new knowledge. It can take many forms: a new question, a new answer to an existing question, a new theory, a new proof, a new experiment, etc. The more insights a knowledge worker generates in a given timeframe, the more productive she is. Most tools promote values like “save time”, “work faster”, etc. However, in the knowledge economy, we compete with knowledge, not efficiency. Our productivity tools should become knowledge management tools as well, thus making companies more intelligent. The second problem is that productivity tools create silos. Wikis, spreadsheets, CRMs, and project management tools create many walls and barriers inside a company. As a result, it is much harder to extract and connect information.
Connections are really important here; this is how we discover novelty. Data silos impede connections and impede insight generation. These problems are extremely hard to solve and there is no tool on the market that solves them, but at least we should embrace them and move the new generation of productivity tools in the right direction.

You are on a mission to augment organizational intelligence — what does that mean exactly?

Well, intelligence is hard to define. It is easy to understand de-augmentation, though: Engelbart demonstrated it with a brick attached to a pen. But what is intelligence augmentation? Engelbart defined it as “more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable.” Beautiful! I define intelligence as the quantity and quality of insights. This definition is shorter and includes everything Doug said. We need a tool that increases the probability of insights and the quality of insights.

Conceptually, what would that look like?

There are many things here, but let me try to nail some important traits. First, I think that the knowledge management and work management dichotomy is false; we have to unite these spaces. A dream tool should combine work management and knowledge management processes together. It should work very well with unstructured information that has poor metadata (notes, chats, text, documents, diagrams) and with structured information that has rich metadata (tasks, products, protein formulas). Second, this tool should be a single point of truth about anything important happening in a company. It should break information silos, replace many tools, and fetch data from those tools it can’t replace. As an example, a team usually uses different software for chat, task management and document management.
A dream tool should have all these things as features that are tightly coupled and work together on a single database. Third, this tool should support connectivity. All information should be connected via all kinds of links (bi-directional links, relations, transclusions). It should be possible to build ontologies and easily transform unstructured information into structured information. Interestingly, most organizations don’t try to connect data. However, true intelligence lives in connections; this is how we invent new things. Finally, this tool should support information and process evolution. Teams and organizations evolve and processes change. However, most productivity tools are relatively rigid. To summarize, we need a tool that accumulates, mixes, connects, and visualizes structured and unstructured information in a single space.

That sounds like a simple yet ambitious vision. Can you tell us how you turned these principles into an actual tool when designing Fibery?

Fibery is my second company. My first company was Targetprocess, which I started in 2004. It was software focused on agile project management practices and was acquired by Apptio two years ago. So we learned a lot about companies’ processes and problems. The most important problems to me were process connectivity and evolution. We wanted to create a tool that connects many processes in a company and evolves with the company, but, to be honest, we completely missed the knowledge management part. About two years ago I started to dig into the past and discovered many beautiful ideas. Surprisingly, hypertext tools from the ’80s were very powerful. They provided a unique environment to create, connect and share knowledge. For example, the Intermedia tool was created in 1985, and it had bi-directional links, various visualizations and features we have been reinventing over the last decade. The Internet killed all these systems, but now we have a renaissance of hypertext tools.
That is how we discovered that knowledge management is super important, and that a mix of unstructured and structured information is paramount for a real productivity tool in a knowledge economy. Fibery is five years old already, but we nailed the current vision only a year and a half ago. The deeper we dig into it, the deeper we believe in it.

That sounds amazing. So, how does Fibery work concretely?

Fibery’s core is what we call a “flexible domain”. You can create your own structures and hierarchies that represent how your company operates. Basically, you can design your database, but it is well hidden from the creator. It means that Fibery supports structured information really well. Here is a very basic map of four processes: Then you have all kinds of visualizations. You can visualize data using several Views: Timeline, Board, Table, Hierarchical List, Calendar, Graphical Report. Then we have tools to work with unstructured information (Documents and Whiteboards). Our documents are kinda tricky: we combine them with databases in an unusual way, so you have a rich text document in every entity in a database. The Whiteboard View mixes databases and free-form diagrams; you can include entities from the database and do cool things. And we pay much attention to links. Connecting and linking information is where Fibery really shines. You can select a part of text anywhere and connect it to any entity via a bi-directional link. You can connect databases via strong relations and build deep hierarchies and complex data structures. It all helps people to discover new things. Fibery has a relatively unique panel navigation, so you can quickly explore these links and get back without losing focus. Then you want to bring in data from external systems. Fibery’s power is that you can replicate any domain. You can fetch data from dozens of systems (Intercom, GitLab, GitHub, Airtable, Braintree, Zendesk, etc.) and connect this data to other databases.
For example, you can fetch Pull Requests from GitLab and connect them to Features, or you can fetch Subscriptions from Braintree and connect them to Accounts. Finally, you can automate things in Fibery: it has automation rules and buttons. It helps to keep data consistent and, well, save time.

These sound like powerful workflows! What kind of people use Fibery?

Fibery is a horizontal product, but we are mostly focusing on product development companies and startups now. Our largest customer has 500 people in Fibery and uses it for all kinds of processes, from product management to legal. Our typical customer is a product company or a startup below 100 people that uses Fibery for everything: product development, CRM, feedback accumulation, HR, strategic plans. We have more than 250 paid customers already.

And how do you personally use Fibery?

As you can imagine, we use Fibery for all our processes. In fact, we have only two major tools: Fibery + Slack. Eventually we want to get rid of Slack and add sync communication in Fibery. My favorite use case is feedback accumulation and prioritization. We have several channels of feedback: Intercom, customers’ calls, the community forum, and some random suggestions in other places. Fibery integrates with Intercom and Discourse, and fetches all communication. Thus we can easily highlight a part of text and link it to some Feature, Bug or Insight in Fibery. We write notes for every call and do the linking afterwards; here is how it looks: The best thing is that these links are bi-directional. When you navigate to some feature, you will find all the feedback linked to it inside. Eventually feedback accumulates and you can create a list that shows which features or insights are requested most often by customers and leads. It helps to decide what to take into development next. From my experience, feature prioritization is one of the hardest processes for product managers, and Fibery solves it.
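The bi-directional linking described here can be illustrated with a toy data structure (purely a sketch with invented names, not Fibery's actual data model or API): recording a link from a note to a feature automatically makes the note appear in the feature's backlinks.

```python
from collections import defaultdict

class LinkGraph:
    """Toy bi-directional link store: every forward link is also
    indexed in reverse, so any entity can list its backlinks."""

    def __init__(self):
        self.outgoing = defaultdict(set)  # entity -> entities it links to
        self.incoming = defaultdict(set)  # entity -> entities linking to it

    def link(self, src, dst):
        # One call records both directions of the relationship.
        self.outgoing[src].add(dst)
        self.incoming[dst].add(src)

    def backlinks(self, entity):
        return self.incoming[entity]

graph = LinkGraph()
graph.link("call-note-17", "feature: dark mode")
graph.link("intercom-chat-42", "feature: dark mode")
print(graph.backlinks("feature: dark mode"))
# Both pieces of feedback are now discoverable from the feature itself.
```

This mirrors the prioritization workflow in the interview: once feedback is linked to features, counting each feature's backlinks gives a rough demand signal.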
Another cool use case is that we use Fibery as a CRM. All registered accounts are added into Fibery, we also have a...
Answering Your Questions
To celebrate passing 10,000 subscribers, last week I asked for questions from readers. There were too many to answer, but here’s an initial 10. If I missed yours, or you want to submit another question, I’m going to add a reader questions section to the bottom of my future updates posts, so feel free to ask a question using this form and I’ll try to get to it in the future. Otherwise, back to normal posting next time.

One more piece of news: I’ve joined Open Philanthropy as a Research Fellow! I will continue to write New Things Under the Sun while there, but among other things I’ll also be trying to expand the New Things Under the Sun model to more writers and more academic fields. More details will be coming down the road, but if you are an academic who wants to write the definitive living literature review for your passion topic, drop me an email (matt.clancy@openphilanthropy.org) and I’ll keep you in the loop! I’m sad to leave the Institute for Progress, which continues to do outstanding work I really believe in, but I will remain a senior fellow with them. On to your questions!

Subscribe now

What is the most critical dataset that you would like to do research on but currently does not exist or is not available? - Antoine Blanchard

I’m going to dream big here: I would love to see a better measure of technological progress than total factor productivity or patents. One particularly interesting idea for this was suggested to me by Jeff Alstott. Imagine we collected the technical specifications of thousands (millions?) of different kinds of individual technologies that seek to give a representative cross-section of human capabilities: solar panels, power drills, semiconductors, etc. There is some precedent for trying to collect technical specifications for lots of technologies, but it has typically been pretty labor intensive.
However, gathering and organizing this data at a huge scale seems to be entering the realm of possibility, with the digitization of so much data and better data-scraping technology. For example, we now have some inflation indices based on scraping price data from the web at a very large scale. Once you have all this data, for each class of technology you can map out the tradeoff among these specifications to chart the set of available technologies. How these tradeoffs evolve over time is a quite direct and tangible measure of technological progress. This kind of technique has been used, for example, to model technological progress in the automobile industry (see image below). You then need a way to normalize the rate of progress across very different domains, and to weight progress across different goods so we can aggregate them up to a meaningful measure of overall progress. Lastly, to be most useful for research, you would want to link all this data up to other datasets, such as data on firm financials, or underlying academic research and patents.

Adapted from Knittel (2011)

It would be a huge undertaking, but with modern computing power, I’m not sure it’s much worse than computing many other economic statistics, from inflation to GDP. And it would help remove some serious measurement issues from research into what drives innovation.

Can we quantify the impact of information and knowledge storage/sharing innovations on the progress of innovation? Things like libraries, and more modern knowledge management systems. And obviously things like movable type and the printing press etc. What is the value of knowledge commons? - Gianni Giacomelli

Let’s start with the assumption that most good inventions draw on the accumulated knowledge of human history. If you can’t accumulate knowledge, I think most innovation would proceed at a glacial pace.
Tinkering would still occasionally result in an improvement, but the pace of change would be evolutionary and rarely revolutionary. So if it’s a question of having access to accumulated knowledge or not having access, the value of having access is probably close to the value of R&D. But our ability to store and access knowledge is itself a technology that can be improved via the means you suggest. What we want to study is the incremental return on improvements to this knowledge management system. Some papers have looked at this for public libraries, patent libraries, and Wikipedia (see the post Free Knowledge and Innovation). Having a public or patent library nearby appears to have helped boost the local rate of innovation by 10-20%. One way to interpret this is that an improvement in the quality of the knowledge commons equivalent to the difference between a local and a distant library could buy you a 10-20% increase in the rate of innovation. Nagaraj, Shears, and de Vaan (2020) find significantly larger impacts from making satellite imagery data available, in terms of the number of new scientific papers this enabled. And other papers have documented how access to a knowledge commons changes what kinds of works are cited: Zheng and Wang (2020) look at what happened to Chinese innovation when the Great Firewall cut off access to Google; Bryan and Ozcan (2020) show that requirements to make NIH-funded research open access increased citations of it. In each case, it’s clear access had a measurable impact, but it’s tough to value. As an aside, my own belief is that improving the knowledge commons gives you a lot of bang for your buck, especially from the perspective of what an individual researcher can accomplish. But of course, I’m biased.
I was wondering if there has been a significant long-term impact of the internet on economic growth, and if there is any evidence to suggest that any of the economic growth in the last 2 decades can be attributed to the rise of the internet - Daniyal from Pakistan There are at least two different ways the internet affects economic growth. First and most obviously, it directly creates new kinds of economic activity - think Uber, Netflix, and Amazon. Unsurprisingly, this digital economy has been growing a lot faster than the non-digital economy (6.3% per year, compared to 1.5% per year for the whole economy, over 2012-2020 in the USA), but since it accounts for only about 10% of the US economy, the impact on headline growth can’t have been too big yet. So, sure, the internet has contributed to faster economic growth, though the effect isn’t particularly large. Second, and more closely related to the themes of this newsletter, the internet can also affect the overall rate of innovation (including innovation in non-internet domains). It allows researchers to collaborate more easily at a distance and democratizes access to frontier ideas. These impacts of the internet have been a big theme of my writing - see the post Remote work and the future of innovation for a summary of that work, and more specifically the post The internet, the postal service, and access to distant ideas. I think on the whole, the internet has likely been good for the overall rate of innovation; we know, for example, that it seems to help regions that are geographically far from where innovation is happening keep up. It also helps enable new kinds of collaboration which, though possibly less disruptive than their more traditional counterparts, might simply not exist at all otherwise. It does seem a bit surprising that the effect is not much larger, though; why doesn’t having easy access to all the world’s written information multiply innovation by a factor of 10 or 100?
The fact that it doesn’t suggests we should think of innovation as being composed of lots of factors that matter (see this overview for some of those factors), and that it’s hard to substitute one for the other. We get bottlenecked by the factors that are in short supply. To take a concrete example, it may be that the world’s written information is now at our fingertips, but the overall number of people interested in using it to innovate hasn’t increased much. Or that written information is rarely enough to take an R&D project across the finish line, so that we’re bottlenecked by the availability of tacit knowledge. Research in developing countries is both cheaper and of lower perceived quality than that which is carried out in developed countries. To what extent are these two outcomes separable? Do you think it's conceivable that the former can improve to the extent that a large share of technologically sophisticated R&D will be outsourced in the future? - Aditya I take it as a given that talent is equally distributed around the world, but I think developing countries face at least two main disadvantages in producing research that is perceived to be high quality. First, research can be expensive, and rich countries can provide more support to researchers - not only salary support, but also all the other non-labor inputs to research. Second, rich countries like the USA have tended to attract a disproportionate share of top scientific talent. As I’ve argued, while academic work is increasingly performed by teams collaborating at a distance, most of the team members seem to initially get to know each other during periods of physical colocation (conferences, postdocs, etc). Compared to a researcher physically based in a rich country on the scientific frontier, it will be harder for a researcher based in a developing country to form these relationships.
Compounding this challenge, researchers in developing countries may face additional obstacles to developing long-distance relationships: possibly linguistic differences, internet connectivity issues, distant time zones, lack of shared cultural context, etc. Moreover, we have some evidence that in science, the citations a paper receives are better predicted by the typical citations of the team member who tends to get the least citations on their own work. That means the returns to having access to a large pool of collaborators are especially high - you can’t rely on having a superstar; you need a whole team of high performers. Lastly, the...
Answering Your Questions
The Uncertain Mind: How the Brain Handles the Unknown
The Uncertain Mind: How the Brain Handles the Unknown
Our brain is wired to reduce uncertainty. The unknown is synonymous with threats that pose risks to our survival. The more we know, the more we can make accurate predictions and shape our future. The path forward feels more dangerous when we can sense essential gaps in our knowledge. In fact, fear of the unknown has been theorized to be the “one fear to rule them all”—the fear that gives rise to all other fears. Unfamiliar spaces and potential blind spots make us uncomfortable. This fear makes sense from an evolutionary perspective, but can be unnecessarily nerve-wracking—and sometimes paralyzing—in our modern world. Fortunately, we have also evolved an ability that’s deeply human: metacognition, or thinking about thinking. Metacognitive strategies can help us think better and manage the anxiety that arises from the unknown. How the brain reacts to uncertainty Humans react strongly to uncertainty. A study from researchers at the University of Wisconsin–Madison shows that uncertainty disrupts many of the automatic cognitive processes that govern routine action. To ensure our survival, we become hypervigilant to potential threats. And this heightened state of worry creates conflict in the brain. First, uncertainty impacts our attention. The sense of threat degrades our ability to focus. When we feel uncertain about the future, doubt takes over our mind, making it difficult to think about anything else. Our mind is scattered and distracted. We feel like we’re all over the place. The underlying biology is still poorly understood, but research in primates conducted by Dr. Jacqueline Gottlieb and her team at Columbia University’s Zuckerman Institute reveals that uncertainty leads to major shifts in brain activity, both at the micro-level of individual cells and at the macro-level of signals sent across the brain. Put simply, their results suggest that our brain redirects its energy towards resolving uncertainty, at the expense of other cognitive tasks.
Uncertainty also affects our working memory. You can think of your working memory as a mental scratch space where you jot down temporary information. Working memory is attention’s best buddy. It’s what helps you visualize the route to a new place when you drive, and what keeps several ideas in mind as you write down a sentence. Our working memory capacity is limited. Cognitive load is the amount of working memory resources we use at any given time. A high cognitive load means that we’re using a lot of our working memory resources. And uncertain situations force us to use additional working memory resources. In the words of Samuli Laato, a researcher at the University of Turku: “Uncertainty always increases cognitive load. Stressors such as health threat, fear of unemployment and fear of consumer market disruptions all [cause] cognitive load.” Cognitive overload makes it harder to keep crucial information in mind when making decisions, or to think creatively by connecting ideas together. Because it has such a big impact on our cognitive functioning—decreasing our attention and using up more of our working memory resources—uncertainty often leads to anxiety and overwhelm. The good news is: the heavy load of uncertainty is not inevitable. Studies suggest that responding to uncertainty is resource-intensive, but metacognitive strategies can help us reduce its impact. By using thinking tools, we can offload some of the burden uncertainty puts on our mind, so we can regain control of our attention and free up our working memory resources—and, ultimately, think more clearly in times of uncertainty. A thinking tool for dealing with uncertainty Uncertainty is not a binary concept—“I am certain or I am uncertain.” Rather, uncertainty is multifaceted, with many flavors that should be treated differently. Former United States Secretary of Defense Donald Rumsfeld famously said: “…there are known knowns; there are things we know we know.
We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know.” The Uncertainty Matrix, sometimes called the Rumsfeld Matrix, is a tool that can help make decisions when facing an uncertain situation. It can be used to differentiate between different types of uncertainties, and to come up with possible solutions for each. The matrix consists of four quadrants: Known-Knowns, Known-Unknowns, Unknown-Knowns, and Unknown-Unknowns. Each quadrant represents a different type of uncertainty, and each has its own set of possible solutions. Known-Knowns are uncertainties that are known to us, and that we can plan for. For example, if we know that there is a high possibility of a layoff at our company, we can make a plan for how to deal with it. Known-Unknowns are uncertainties that we know exist, but where we don’t have enough knowledge to make a plan. For example, we may not know if our company will be acquired by another in the future. Or, you may be aware of the inherent uncertainties of leaving your job to work on a venture of your own, but you can’t make a step-by-step plan of what to do because you don’t have enough data yet. Unknown-Knowns are uncertainties that we’re not aware of but that we tacitly understand (hidden facts), which may lead to biases and assumptions in our decisions. Unknown-Unknowns are uncertainties that we don’t know about at all. For example, a new technology may be developed that makes our product obsolete. In other words, “unknown unknowns are risks that come from situations that are so unexpected that they would not be considered.” Once we know what type of uncertainty we’re dealing with, we can come up with possible solutions. For example, if we’re dealing with a Known-Known, we can exploit the factual data at our disposal to make a contingency plan, which will allow us to mitigate known risks.
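The four quadrants and the responses the matrix pairs with them can be written as a simple lookup table. This is an illustrative sketch only; the dictionary and function names are invented, and the quadrant-to-response pairings follow the examples given in the article:

```python
# The Uncertainty Matrix as a lookup table. Quadrant names and the
# paired responses are taken from the article; the code is a sketch.
UNCERTAINTY_MATRIX = {
    "Known-Known": "Use the factual data at hand to make a contingency plan.",
    "Known-Unknown": "Run experiments to gather more information.",
    "Unknown-Known": "Surface tacit assumptions and check them for bias.",
    "Unknown-Unknown": "Use market research and strategic intelligence to uncover blind spots.",
}

def recommended_response(quadrant: str) -> str:
    """Return the suggested response for a given type of uncertainty."""
    return UNCERTAINTY_MATRIX[quadrant]
```

Framing the matrix this way makes the point that the quadrant is the decision: once you have classified an uncertainty, the appropriate response follows almost mechanically.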
When dealing with a Known-Unknown, we can conduct experiments to gather more information, so we can close some of our knowledge gaps and turn those Known-Unknowns into Known-Knowns. For Unknown-Knowns, we can explore our assumptions—the things we don’t know we know—and identify biases in those assumptions, so we can potentially replace them with factual data. Finally, in the case of Unknown-Unknowns, we can conduct market research and use strategic intelligence to try to uncover blind spots. It’s a good practice to have in place, but it should be noted that there is no guarantee we will be able to turn Unknown-Unknowns into Known-Unknowns. There will always be events we could not have predicted. The Uncertainty Matrix is a useful tool for dealing with uncertainty, and can help us make better decisions when faced with it. It’s even better when used as part of a team, as different people may have different perspectives on the same uncertainties. “The oldest and strongest emotion of humankind is fear, and the oldest and strongest kind of fear is fear of the unknown,” wrote H.P. Lovecraft. While the fear of the unknown is deeply rooted in our biology, it is possible to elevate ourselves above our automatic reactions so we can make the most of uncertainty. Metacognition can be a great ally in reducing anxiety, freeing our working memory resources, and making better decisions when navigating unfamiliar spaces. The post The Uncertain Mind: How the Brain Handles the Unknown appeared first on Ness Labs.
The Uncertain Mind: How the Brain Handles the Unknown
Single-tasking: the power of focusing on one task at a time
Single-tasking: the power of focusing on one task at a time
We are all juggling multiple obligations, roles, and responsibilities across our personal and professional lives. Multitasking seems like it should be the perfect solution when faced with multiple demands and limited time. Doing two things at the same time is faster than doing them one after the other… Right? I’m a freelance medical copywriter. When I sit down to work, I get email notifications, have multiple tabs open, keep my phone nearby, and face other distractions, including a never-ending personal to-do list. While I like to kid myself that quickly answering an email and then returning to writing an article is a great feat of multitasking, my output, as well as the scientific evidence, tells me otherwise. In fact, psychiatrist Edward Hallowell defined multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one”. Trying to multitask can not only hurt our productivity, but also our ability to learn. Fortunately, there is an alternative way to boost your efficiency: single-tasking. Illustration by DALL·E The dangers of multitasking Despite being an established word in the English language, when multitasking was first coined in the 1960s it was not with human productivity in mind. Rather, its meaning was related to computers performing more than one task at once. As humans, although it might seem that we’re performing multiple tasks at the same time, the reality is that we only work on one task at a time. The multitasking illusion is achieved by opening an email, saving a document, and streaming an audiobook one after the other so quickly that it appears simultaneous. In other words, we perform multiple tasks in series, rather than in parallel. As Canadian author Michael Harris puts it: “When we think we’re multitasking we’re actually multi-switching”. Multitasking makes us feel busy, but rather than being productive, we are lowering our efficiency.
Researchers Kevin Madore and Anthony Wagner investigated what happens in the brain when we try to handle more than one task at a time. They found that “the human mind and brain lack the architecture to perform two or more tasks simultaneously.” That’s why multitasking leads to decrements in performance when compared to performing tasks one at a time. Furthermore, it is worrying that those who multitask often inaccurately consider their efforts to be effective, as studies have demonstrated that multitasking leads to an over-inflated belief in one’s own ability to do so. Not only are we bad at multitasking, but we can’t seem to see it. While micro-level multitasking, such as responding to an online work chat while producing a report, will lead to lost efficiency, it’s important to note that macro-level multitasking can be achieved when you are balancing several projects at once. However, in most cases, research shows that single-tasking is the most efficient way of working, as it avoids switching costs and conserves energy that would be expended by mentally juggling multiple competing tasks. Single-tasking boosts more than just productivity To single-task, we must relearn how to focus our attention on one task, rather than being drawn into another project or social distraction. In 2016, an analysis of 49 studies found that multitasking negatively impacted cognitive outcomes. For young adults in education, multitasking, such as studying and texting, was found to reduce educational achievement and increase the amount of time it took to complete homework. Students who multitasked in class failed to offset the damage done to their final grades, even if they put in additional hours of study at home to try to make up for it. It is therefore difficult to combat the damage caused by multitasking. In contrast, single-tasking can help you meet your targets more efficiently.
By consciously blocking out distractions, you counteract the stop-start nature of task-switching and instead reach a flow state. This ensures you can focus solely on the current brief without interruption, leading to increased productivity in a shorter space of time. Focusing on one task can, surprisingly, boost creativity. Whereas multitasking creates a constant stream of distraction, the tedium of focusing on a single task gives your brain the space it needs to explore new paths that you might otherwise not have considered. By focusing on one workstream, inspiration and creativity can bloom because you are not trying to split your focus in multiple directions at once. By dedicating yourself to one task, you will complete tasks more effectively and therefore feel more confident about your capabilities at work, and less stressed about keeping up with deadlines or targets. How to single-task With studies demonstrating that multitasking drains your energy and diminishes your productivity, those of us trying to multitask are at risk of falling behind. Failing to complete tasks, having to work overtime, or feeling exhausted by a never-ending to-do list will likely lead to stress or anxiety. Fortunately, there are three strategies which will help you implement a single-tasking approach to work: Design a distraction-free environment. Both your digital and physical environment should be free of distractions to enable you to focus solely on one task. Turn off email notifications, and instead only check your emails when you start work, at lunchtime, and an hour before you finish. Put your phone in your bag or leave it in a different room to reduce the urge to check it. Close any tabs or browsers that are not relevant to your current task to avoid the temptation to get sucked into the latest sale or any breaking news. Use the Pomodoro technique. The Pomodoro technique involves working for 25 minutes and then taking a 5-minute break.
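The Pomodoro cadence can be sketched as a tiny timer. This is an illustrative sketch only: the function name is invented, the work and break durations are parameters, and the sleep function is injectable so the schedule can be inspected without actually waiting:

```python
import time

def pomodoro(rounds: int, work_min: int = 25, break_min: int = 5, sleep=time.sleep):
    """Run `rounds` Pomodoro cycles: focused work, then a short break.

    Returns the schedule that was executed, as (phase, minutes) pairs.
    """
    schedule = []
    for _ in range(rounds):
        schedule.append(("work", work_min))
        sleep(work_min * 60)   # 25 minutes of single-tasking
        schedule.append(("break", break_min))
        sleep(break_min * 60)  # 5 minutes away from the task
    return schedule
```

Calling `pomodoro(4)` would walk you through four cycles, i.e. two hours of focused work punctuated by short breaks.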
During the 25 minutes of work, you must be completely focused on the task. Breaking your time down in this way offers certainty that you will be able to focus solely on one task for a relatively short amount of time, rather than setting a more overwhelming time target, such as a whole morning. Using a timer is beneficial for keeping you on track and ensuring you take breaks. For maximum productivity, be sure to return to work as soon as the break is over. Take regular breaks. In addition to the 5-minute Pomodoro breaks, you need to regularly take meaningful breaks to refresh and recharge. Leave your screens behind and go for a walk at lunchtime, or commit to reading a novel for thirty minutes. Focused work requires energy, so you will need to make sure you factor in respite to reduce the risk of burnout. Many of us think we can multitask, but an unfortunate risk of multitasking is that we develop an over-inflated perception of just how effectively we juggle multiple tasks. For micro-tasks, single-tasking is a far more effective way to complete projects, boost creativity, and even reduce stress levels. As we have become accustomed to so-called multitasking, learning to focus on one thing takes time, but it is worth the effort. By creating an environment free from distractions, using techniques to boost your focus, and incorporating regular breaks, you are likely to become more efficient and ultimately more successful. The post Single-tasking: the power of focusing on one task at a time appeared first on Ness Labs.
Single-tasking: the power of focusing on one task at a time
AI and I: The Age of Artificial Creativity
AI and I: The Age of Artificial Creativity
A new generation of AI tools is taking the world by storm. These tools can help you write better, code faster, and generate unique imagery at scale. People are using AI tools to produce entire blog posts, create content for their company’s social media channels, and craft enticing sales emails. The advent of such powerful AI tools raises the question: what does it mean to be a creator or knowledge worker in the age of artificial creativity? Artificial creativity is a new liminal space between machine and human, between productivity and creativity, which will affect the lives of billions of workers in the coming years. Some jobs will be replaced, others will be augmented, and many others will be reinvented in an unrecognizable way. If your work involves creative thinking or knowledge management, read on for a primer on what’s going on with the latest generation of AI tools, and what it means to be a creator or a knowledge worker in the age of artificial creativity. Illustration by DALL·E The advent of Generative AI Artificial creativity, also known as computational creativity, is a multidisciplinary field of research that aims to design programs capable of human-level creativity. The field is not new. Already in the 19th century, scientists were debating whether artificial creativity was possible. Ada Lovelace formulated what is probably the most famous objection to machine intelligence: if computers can only do what they are programmed to do, how can their behavior ever be called creative? In her view, independent learning is an essential feature of creativity. But recent advances in unsupervised machine learning raise the question of whether the creativity exhibited by some AI software is still the result of simply executing instructions from a human engineer. It’s hard not to wonder what Ada would have thought had she seen what computers have become capable of creating.
In the words of Sonya Huang, Partner at Sequoia: “As the models get bigger and bigger, they begin to deliver human-level, and then superhuman results.” To understand what’s going on and why it matters for the very future of knowledge work and creative work, we need to understand the difference between Discriminative AI and Generative AI. There are two main classes of statistical models used by AI. The first one, which has been used the longest and is what you’ll find in classical AI, is called “discriminative”: it discriminates between different kinds of data instances. The second class of model, much more recent, is called “generative”: it can generate new data instances. It’s a bit easier to understand with an analogy. Let’s say you have two friends: Lee and Lexi. They’re both brilliant and they’re doing great at school, but the way they study is very different. When preparing for an exam, Lee learns everything about the topic and researches every single detail. It takes a lot of time, but once he knows it, he never forgets it. On the other hand, Lexi creates a mind map of the topic, trying to understand the connections between ideas in that problem space. It’s less systematic, but a lot more flexible. In this story, Lee uses a discriminative approach whereas Lexi uses a generative approach. Both approaches work very well, and it’s hard to tell the difference from the outside, especially when the goal is to perform well on a specific exam. But, as you can imagine, Lexi is likely to do much better with her generative approach in situations where coming up with novel ideas is required. That’s why discriminative models are often used in supervised machine learning (which is great for analytical tasks like image recognition), while generative models are preferred in unsupervised machine learning (which is better for creative tasks like image generation). For many years, Generative AI was constrained by a number of factors. 
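The Lee/Lexi analogy can be made concrete with a toy one-dimensional example (all class names and numbers below are invented for illustration). A discriminative model only learns the boundary between classes; a generative model learns what each class looks like, and so can also produce brand-new examples of a class:

```python
import random
import statistics

# Toy training data: a single feature measured for two classes.
# (Labels and values are invented for illustration.)
data = {
    "cat": [1.0, 1.2, 0.8, 1.1],
    "dog": [3.0, 3.2, 2.8, 3.1],
}

# Discriminative approach: learn only the decision boundary between
# the classes -- here, the midpoint between the two class means.
boundary = (statistics.mean(data["cat"]) + statistics.mean(data["dog"])) / 2

def discriminate(x: float) -> str:
    return "cat" if x < boundary else "dog"

# Generative approach: learn what each class "looks like" (its mean
# and spread), which also lets us sample new instances of a class.
params = {label: (statistics.mean(xs), statistics.stdev(xs)) for label, xs in data.items()}
_rng = random.Random(0)

def generate(label: str) -> float:
    mu, sigma = params[label]
    return _rng.gauss(mu, sigma)
```

Both models classify this toy data equally well, which mirrors the point in the analogy: from the outside it is hard to tell the approaches apart, until you ask for something new, which only the generative model can provide.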
Those models were difficult and expensive to run, requiring elaborate workload orchestration to manage compute resources and avoid bottlenecks, and only organizations with deep pockets could afford the exorbitant cost of cloud computing. But things are changing fast. New techniques, more data, cheaper computing power—we’ve come to a point where any developer can now build an AI application from their living room. For an affordable cost, these applications can solve problems, come up with new ideas, and transform the way we work. The growing landscape of AI applications The artificial creativity space is moving so quickly that it would be impossible to map the entire landscape without missing some of the new applications that are launched every day. However, this map with more than 180 AI tools gives you an idea of the thriving ecosystem as of 2022. This map was initially created using the three classical categories of artificial creativity: linguistic, visual, and musical creativity. However, the range of creative tasks AI applications can perform has widely expanded in recent years, so the map also includes an additional category (scientific creativity) and a catchall category for all the weird and original ways generative models are used to augment human creativity. 1. Linguistic creativity. Have you ever found yourself staring at a blank page, unsure where to start? AI applications may mean the end of writer’s block as we know it. And there is a huge market for those AI writing tools, as evidenced by the exponential growth in search volumes. Some tools like Jasper, Lex, and Rytr position themselves as general-purpose writing assistants. You just need to feed them a prompt or a paragraph, and they can complete those initial thoughts with original content. This is one of the most promising categories of AI tools: Jasper, which was founded in 2021, recently announced a record $125 million fundraising round at a $1.5 billion valuation.
Others are specialized, addressing specific pain points. Lavender will write your sales emails, Surfer will generate SEO-optimized blog posts, Copy.ai will produce high-conversion marketing copy for your website, and Nyle will create product descriptions at scale. Code is another area of linguistic creativity where AI can change the way we work. Replit’s Ghostwriter promises to become your “partner in code”, using AI to help you write better code, faster. It generates boilerplate functions and files, provides suggestions, and refactors code—all thanks to AI. GitHub has a similar solution called Copilot, which they’ve dubbed “your AI pair programmer”. Other tools allow you to use AI to code websites in a couple of clicks. These writing and coding tools are evolving extremely fast. Soon, typing everything manually will feel outdated and inefficient. 2. Visual and artistic creativity Long gone are the days when AI was mostly used for image recognition. AI-generated art is everywhere. Tools like Midjourney, Deep Dream Generator, and Stability AI allow anyone to type a few words and get back an image. Not sure what to type? Websites like Lexica offer massive libraries of pre-tested prompts you just have to copy and paste. Many services such as Astria, Avatar AI, and AI Profile Picture allow you to train a model on photos of yourself, so you can create a series of AI-generated avatars to use on social media. You can ask Tattoos AI to design a unique tattoo for you, or Interior AI to create interior design mockups based on photos you upload. The output of visual creativity tools doesn’t have to be static. Video generation has also come a long way. Recently, Sundar Pichai, the CEO of Google, shared a long, coherent, high-resolution video that was created by AI just from text prompts. 1/ From today's AI@ event: we announced our Imagen text-to-image model is coming soon to AI Test Kitchen.
And for the 1st time, we shared an AI-generated super-resolution video using Phenaki to generate long, coherent videos from text prompts and Imagen Video to increase quality. pic.twitter.com/WofU5J5eZV — Sundar Pichai (@sundarpichai) November 2, 2022 Opus lets you turn text into movies. Tavus allows you to record one video, and to generate thousands, automatically changing some words. You could record one video sales pitch, and change the name of the person it’s addressed to in one click. And Colossyan provides you with AI actors ready to deliver the lines you provide. 3. Audio and musical creativity Will we ever need to reach out to potential guests to invite them on our podcast? Maybe not. Podcast.ai is a podcast that is entirely generated by AI. Listeners are invited to suggest topics or even guests and hosts for future episodes. Powerful text-to-speech applications have also hit the market. Using advanced machine translation and generative AI, Dubverse automates dubbing so you can quickly produce multilingual videos. You can generate entire songs with AI apps like Soundful or Boomy. Melobytes allows you to transform your audio files so you can become a rapper. Innovative apps like Endel (whose founder we have interviewed here) use AI to create personalized soundscapes to help their users focus, relax, and even sleep. The possibilities are endless. 4. Scientific creativity Scientific research requires rigor and creativity to solve complex problems and invent innovative solutions. In that realm too, AI is coming to the rescue. Elicit uses language models like GPT-3 to automate parts of researchers’ workflows, allowing researchers to ask a research question and to get answers from 175 million papers. Genei automatically summarizes background reading and produces reports. In biochemistry, Cradle uses AI to predict a protein’s 3D structure and generate new sequences, saving days of work for scientists who use it. 
Wizdom continuously monitors billions of data points about the global research ecosystem to automatically provide actionable insights to their users. By unlocking data and making it accessible and digestible, all these AI applications are making research fas...
AI and I: The Age of Artificial Creativity
Tana: the all-in-one tool for thought?
Tana: the all-in-one tool for thought?
Notion, Evernote, and Roam have long been the gold standard for online collaboration and note-taking. However, a new player has emerged that promises to be the all-in-one tool for thought everyone has been waiting for. This tool is called Tana. The end of context switching Knowledge work often requires us to switch between tools for thought, and this can make the process of thinking and learning tedious. Tana’s vision is to create a tool that ends this context switching. Tana combines the best features from Notion, Roam, and Airtable, and it allows you to easily transition between free-flowing thoughts, collaboration, and structured data. Tana could be the perfect tool for people who feel like they are at the intersection of the note-taking styles of architect, gardener, and librarian. Tana requires a mindset shift in order to use it effectively: from thinking about information in terms of files to thinking of it in terms of nodes. In Tana, everything is a node. This means that every piece of information — whether it be a task, a note, a file, or even a person — is represented as a node in a graph. Those nodes are connected through bi-directional links, which means that you can link to any piece of information from any other piece of information. This approach allows you to easily see the relationships between different pieces of information and to quickly find what you are looking for, without having to switch between different applications. Because everything is connected in a graph, you no longer need to think of information in terms of files and folders. Bi-directional links are powerful, but they are not new. A feature that is truly unique to Tana is Supertags. Supertags are like templates for your nodes. They allow you to create a template once and use it in multiple places.
This makes it easy to keep track of information and find it when you need it, and it creates a database that you can search and view on any page. Updated productivity workflows Tana is still in early access, but it’s already showing a lot of promise. It’s easy to use and has a lot of potential to change the way we think about and work with information. Let’s go through some productivity workflows that feel like magic with Tana. Task management Task management in Tana can be as simple as using checkboxes on any node. For explicit to-dos, you can add the tag #todo to the node. You can add tasks anywhere: from the node you are currently working on, from your daily note, or via the quick add feature to capture your thoughts from any node. You do not need to worry about remembering where you kept these tasks: you can view all your to-dos in the sidebar and filter them further using live search, which ensures that these nodes will not fall through the cracks. This makes it frictionless to capture and organize your tasks. The #todo tag also has a due date to indicate when a task needs to be done. This will show up as a reference at the bottom of the day the task is due. Tana’s live search makes it a great option for task management. For example, you can find all of your outstanding tasks with these steps: Go to your home node and run the command “Find todos…” with the shortcut Cmd/Ctrl + K. Hit enter and open the “Find todos…” node. In the search parameters, add a new node in the search and write “NOT DONE”. Doing this will create a node containing all your outstanding tasks. This is just one way you can manage your tasks more efficiently, but Tana is so flexible that you can design practically any task management workflow that suits your needs. Building a knowledge base As we discussed earlier, Supertags are a feature that’s unique to Tana. With Supertags, you can easily add templates to your nodes. Let’s see how it works by building a database for all the books you have read.
List down the books you have read as a new node, and add a tag to it. Let’s add the “book notes” tag to our nodes. Click on the tag, where you will see a configure option. This allows you to add fields, similar to Notion’s databases. You can add fields such as date, number, user, URL, and even create your own custom fields. Once you are done configuring your Supertag, try it out by clicking on the nodes with the book notes tag. Here, you can add values to each field. To create a dashboard of all the books you’ve read, click on the tag and go to the list of #book notes. This will create a database and show a list of all the nodes you tagged with book notes. You can then sort, filter, group, and view the database from different perspectives. You can open this list from anywhere in Tana by using the command menu and typing “Find nodes with tag #book notes” This is only one of the many ways you can use Supertags in Tana. This feature is incredibly powerful and can unlock productivity workflows that were previously not possible without cobbling together several tools. Interstitial journaling Interstitial journaling is a journaling technique where you write down a few thoughts when taking breaks from your tasks, and note the time you took these notes. With interstitial journaling, you combine note-taking, task management, and time tracking into one unique workflow. It can make your breaks more mindful, where you reflect on your previous task, plan your next steps, and jot down your thoughts so you can focus on the work at hand. It can also keep you accountable when working, as you have a record of the time you spent working and time spent resting. While you can do interstitial journaling with any tool, it is greatly enhanced with Tana. Let’s see how you can use Supertags to enhance your journaling. Go to your daily note and create a new node. Write down the current time and type whatever you are thinking about. 
If you are working on a task, you can mention it by using @ and typing the name of the task. This will link your interstitial journaling node with the initial todo node. Add the tag “Interstitial Journaling” to the node, and configure the tag to add fields for each of your journaling nodes. Add goals, self-review, next plans, and anything you want to jot down into your fields. By finding nodes with the tag #Interstitial Journaling, you will have a log of all the work you have done. This is useful for doing a weekly review, or for looking back at the progress you have made on your tasks and projects. Limitations of Tana Although Tana is an incredibly powerful tool, there are some limitations, which is to be expected considering that it is still in early access and relatively new in the tools for thought space. First, Tana is a cloud-based web app, so you might also find it a bit slower than tools for thought that store data locally. As we just mentioned, Tana is still in early access and some features are still being developed. You might find some features buggy, such as the panels feature, where it can be difficult to place and resize panels. Another limitation is that the learning curve for Tana is relatively steep. It might come easily to power users who are used to the other tools for thought that Tana draws inspiration from, but for the majority of users, the concepts and principles that Tana uses are not intuitive and may take some time to get used to. Concepts such as “everything is a node” and multiple-view databases will take some time to digest before they become second nature. However, there are many good videos from the team on how to use Tana. We also have several easy-to-follow Tana tutorials here at Ness Labs to help you get started. Finally, there are some features that are unavailable in Tana, which may be a dealbreaker to some depending on their use cases.
For example, Tana is not yet available on mobile devices, making it unsuitable for people who frequently need to access their notes while away from the computer. In terms of task management, users who like to timeblock may be disappointed that there is no calendar for them to timeblock and schedule their tasks. There is no API either, which limits integrations with the workflows you currently use. However, it is still early days and the team may address these gaps in the future. The good news is that the Tana team is very responsive to feedback and is working hard to improve the platform. In addition, Tana has a very active community on Slack where members help each other out and share their tips and tricks. Some useful resources include the Tana Pattern Library, a shared workspace containing patterns from the community that you can import into your own database with one click. However, before jumping into Tana, beware of the shiny toy syndrome. It’s common to want to jump ship to the latest toy everyone is talking about, but think about your current use cases for your tools for thought, whether there is some important feature missing, and consider whether switching to Tana is worth the time and effort. Overall, Tana is a great tool for thought with a lot of potential. It’s well worth checking out if you are looking for an all-in-one tool to manage your tasks, notes, and projects, and to collaborate with your team members. Tana is still in early access, but you can sign up for the waitlist here. The post Tana: the all-in-one tool for thought? appeared first on Ness Labs.
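As a thought experiment, the “everything is a node” graph described in this review can be sketched as a tiny data model. This is an illustrative toy in Python only, not Tana’s actual implementation; the class names (Workspace, Node) and sample node titles are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Toy node: any piece of information (task, note, file, person...)."""
    text: str
    links: set = field(default_factory=set)  # titles of linked nodes

class Workspace:
    """Minimal graph of nodes with bi-directional links."""
    def __init__(self):
        self.nodes: dict[str, Node] = {}

    def add(self, text: str) -> Node:
        self.nodes[text] = Node(text)
        return self.nodes[text]

    def link(self, a: str, b: str) -> None:
        # Linking a -> b also records the backlink b -> a,
        # so either node can be reached from the other.
        self.nodes[a].links.add(b)
        self.nodes[b].links.add(a)

ws = Workspace()
ws.add("Project Apollo")
ws.add("2022-11-01 meeting notes")
ws.link("2022-11-01 meeting notes", "Project Apollo")

# The backlink appears automatically on the project node:
print(ws.nodes["Project Apollo"].links)
```

The point of the sketch is that the “database” is just the set of backlinks: nothing is filed into folders, and any node can be found from any node it is connected to.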
Taking Your Questions
Dear reader, This newsletter recently got its 10,000th subscriber. To celebrate, I thought I would try something new and take reader questions. So: ask me anything by using this Google form. I’ll try to get through as many questions as I can in the next post, which will hopefully come out the week of November 14th. Cheers everyone and thanks for your interest in this project, Matt
Use timeboxing to regain calmness and control with Marie Ng, founder of Llama Life
Welcome to this edition of our interview series, where we meet with founders on a mission to help us work better and happier. Marie Ng is a long-time Ness Labs reader and the founder of Llama Life, a uniquely designed tool to manage timeboxed working sessions. The quirky branding, the attention to details, the simple features… Everything has been crafted to help you whiz through your to-do list. In this interview, we talked about the concept of time boxing, how it may be particularly useful to people who suffer from time blindness but can help everyone, the power of whimsical effects to maintain motivation, and how to set reasonable expectations to avoid overloading yourself with work. We also talked about time management and ADHD, and what Marie and her team have in mind for the future of Llama Life. Enjoy the read! Hi Marie, thank you so much for agreeing to this interview. First, can you tell us more about timeboxing? For sure, thank you for having me! I was actually doing timeboxing before I knew it was called timeboxing! I’ve also heard it referred to as “time blocking”. Essentially, it’s about being more mindful and purposeful in how you’re spending your time — so you set aside a fixed amount of time to do a particular task, or to do a particular piece of work. By setting a fixed amount of time, you’re creating a positive constraint, and also a bit of pressure to encourage focus to get things done. Timeboxing uses a principle called Parkinson’s Law. Parkinson’s Law states that the work you have to get done fills the time allotted to it. If you’ve ever noticed yourself procrastinating because you think “oh it’s not due till next week”, and then scrambling to get it done at the last minute, then that’s Parkinson’s Law. You knew you had over a week to do the task, so that’s how long it took to get it done. If you had the same task, but a shorter deadline, you would often increase your focus, waste less time, and get it done in that shorter time. 
Timeboxing plays on this principle, at a micro-level — it’s like creating many little deadlines throughout your day. I use timeboxing for every aspect of my life: getting ready in the morning, doing household chores, doing work tasks… Everything. I suffer from something called “time blindness”, which may sound a little strange, but it just means I have a hard time keeping track of time — how long things might take, how long I spend on something, and generally just knowing where the hours in the day go. This, combined with the challenge of having ADHD, makes timeboxing an essential method for me. Do you think everyone should try it, even if they don’t suffer from time blindness? I think we can all benefit from being more purposeful and efficient with our time. There’s a lot of distractions these days. A lot of us are working remotely, trying to juggle home life with work, with family, all in the same space. There’s also social media, which is really designed to catch your attention and hold it. Now this may be ok if that’s something you’re intending to spend time on and enjoying it, but not ok if it becomes a distraction and is pulling you away from other things which you need to do. And if we’re all more purposeful with our time, it helps to create more time to spend on other things which we may want to do or experience. Above all, I think this helps to reduce stress, because no one likes feeling that they’re behind on what they need to get done, or feeling that they’re missing out on things that they want to do. So, this is why you decided to build Llama Life. There are two main reasons why I made Llama Life. The first is, I was teaching myself how to code. When Covid first hit, everyone was learning a new skill, so I decided it was about time I took the plunge and started to learn web development. So Llama Life started as a project to practice what I was learning by actually building something. 
The second reason is that I really wanted to create a product to help myself. I had been doing timeboxing for a long time just using timers, but I wanted a way to be able to quickly and easily attach those timers to specific tasks, and I just couldn’t find a product that worked the way I wanted it to. It makes such a big difference to me in terms of how I manage my day, and importantly how I feel at the end of the day, that I thought it was worth bringing the product to life and sharing it with others. So now Llama Life’s mission is about helping people achieve calm, focused productivity. The kind of productivity where it feels immersive, enjoyable, fun and effortless. I have to ask… Why is Llama Life called this way? Pre-Covid, I went on a soul-searching journey with one of my best friends. We traveled to Peru, did a lot of hiking, and did the trek to Machu Picchu. As part of the trip, we also ended up visiting a small village to get to know the people and see how they live. There were around twenty people in this village. They had no modern conveniences – no running water, electricity, no internet… But they had a lot of llamas! The llamas were their livelihood. So they would use the wool from the llama’s backs to make sweaters, scarfs, beanies etc to sell.  And what struck me about these people was, although they had no modern conveniences, they were very happy, calm and content with their way of life. And that feeling stayed with me.  So years later when it came to naming Llama Life, I was trying to think of a name that would embody our mission of “calm, focused productivity”. And the name “Llama” just came to mind almost immediately. The interesting thing is, over time, customers started calling it “Llama Life”. Previously it was just “Llama”. Customers started to say stuff like “I want to live the llama life” and they were posting this on Twitter. 
And it occurred to me that this was a much better name for the product, because it helped people aspire to a certain lifestyle — a llama life — which is much more powerful than aspiring to be just an animal! That’s very true! I love the story behind the name. More specifically, how does Llama Life work? Llama Life is all about helping you work through your todo-list, rather than just making never-ending lists. It does this by letting you set a fixed amount of time to each task (with a countdown timer) and by making it super fun and rewarding to use. For example, when you complete a task, you get a confetti animation! It’s also full of whimsical little sound effects, all of which are designed to help boost your motivation, encourage focus, and get stuff done. A lot of other apps let you set 25-minute timers, but 25-minute timers never worked for me. I find it much easier to start with short timers, and work my way up. Therefore, Llama Life is flexible and allows you to set a timer of any duration, depending on what works best for you. This means you can set a timeboxed session starting with something very achievable, for instance five minutes. And this helps our users transform big overwhelming tasks into more manageable bite-sized chunks of productivity. I imagine this can be used for many different use cases, like study sessions for instance. Yes, there are so many use-cases! We do have students using it for study sessions, and also a lot of indie hackers / startup founders using it to make the most of their time. It makes a lot of sense for founders because you’re often trying to do several different roles at once, and there’s only so much time you have in the day.  To help with this, Llama Life shows you the total time it would take to complete all the tasks on your list, as well as an estimated finishing time of the day. This helps with planning — it makes sure you don’t overload yourself (guilty!) 
and encourages you to set reasonable expectations for what you can achieve in a given time slot. We also have executives using it to plan and keep to meeting agendas so they don’t run over time. Actually… ‘Overtime’ is a feature Llama Life has, which was hotly requested. Essentially when your task timer runs out, it starts counting the extra time, so you can get a sense of how long things actually take versus how long you had planned. And it’s being used for non-work tasks too! We had a customer the other day who used it to keep herself on track to clean different rooms of her house, before guests showed up for dinner that evening! I think the most interesting thing is that people are using Llama Life as part of a workflow. So we’re not trying to compete with Todoist, Notion, Asana, Trello, etc. Llama Life is not meant to be a place to store your master list of todos, or manage and collaborate on a project. It’s designed to be the “tip” of the workflow. Most of our customers store their todos and projects somewhere else and then transfer them to Llama Life for their focus session during the day. Llama Life is very much a tool to help you get through today. As such we’re also focusing on integrations, to make the transfer of tasks as frictionless and easy to do as possible. There’s indeed something powerful about the idea of simply getting through today. So, what kind of people use Llama Life? I think the thing which ties all of our customers together is that everyone shares a goal of wanting to increase their focus, and make the best use of their time. Sometimes those are people who are already productive and are looking to ‘level-up’. But very often it’s people who are struggling with focus, for example people with ADHD. 
We don’t specifically ask people if they have ADHD, but we know they make up a large part of our customer base, because they take time to email me and explain the challenges they have (which are always very relatable to me, being someone who was diagnosed with ADHD much later in my life). What about you, how do you use Llama Life? I use Llama Life’s “Preset Lists” a lot. A Preset List is a template list of tasks that you can create, save and then re-use as many t...
The default effect: why we renounce our ability to choose
Why is it that we like having choices, but we don’t like choosing? Being able to decide between several options makes us feel in control. Yet, we tend to exhibit a preference for the default option when presented with a selection of choices. This is called the default effect, and it rules many aspects of our lives from the products we buy to the career we build. Choosing the default option The default effect is our tendency to go with the status quo, even when a different option would be better for us. Many studies show that we tend to generally accept the default option—the one that was preselected for us—and that making an option a default increases the likelihood that such an option is chosen. One of the theories behind the default effect is that humans are hardwired to avoid loss. We feel a strong aversion to any kind of loss. This aversion is so strong that it can override our logical thinking and lead us to stick to what seems like the safest path. Another theory relates to the cognitive effort needed to consider alternative options. It’s much easier to go with what’s right in front of us, compared to researching and evaluating other potential choices. Opting for the safest path may seem like a good idea but it can often lead to suboptimal decisions. For example, we might choose the default health insurance plan, even though there are better options available. Or we might stay in our current job, even though we’re unhappy, because the idea of starting over is too daunting. In each of these cases, we’re letting the default effect guide our decision-making, and as a result, we’re not reaching our full potential. Then, many years later, we look back, surprised to find ourselves in a less-than-ideal situation. 
There’s a saying, often misattributed to Lao Tzu, that goes: “If you do not change direction, you might end up where you are heading.” In other words, you cannot be surprised about finding yourself in a certain position if you decided to stick to the default path. We shouldn’t blame ourselves for falling prey to the default effect. It’s a powerful evolutionary force that’s hard to resist. Our survival instinct tells us to avoid risky situations and potential losses. But we can learn to recognize when the default effect is influencing our decisions and take steps to overcome it. Breaking free from the default effect While there are some situations in which the default effect can be beneficial—for example, by only having healthy options in your fridge—letting it guide all of your daily decisions can lead you to live a life you have not chosen. Make space for metacognition. We’re often so busy thinking about how to get things done, we forget to think about why we want to get these things done. Metacognition is “thinking about thinking”, it’s an awareness of your own thoughts, an examination of the underlying patterns that guide your decision. Block some time in your calendar to reflect on your recent choices and what led you to them. Journaling is a great metacognitive strategy, but you can also think out loud with a friend or colleague. Practice intentional decision-making. It doesn’t have to be about big decisions. Next time you notice yourself grabbing the exact same snack between two meetings, ask yourself: is there another option? Or, when you’re about to walk into a meeting room to have your weekly chat with your employee, ask yourself: can we have this chat somewhere else? These little acts of intentionality will train your mind to not always stick to the default routine. Project yourself into the future. While it’s great to live in the present, it can also be helpful to imagine the path forward. 
To avoid blindly taking one step at a time and “ending up where you are heading”, consider where you want to go. It doesn’t have to be very precise. You can start by describing a perfect day. Is the default option leading to your ideal destination? Or should you change direction? Your annual review at the end of the year can be a good time for such an exercise. In Robert Frost’s famous words: “Two roads diverged in a wood, and I— I took the one less traveled by, And that has made all the difference.” Breaking free from the default effect so you can choose your own path is not easy, but it can make all the difference. The post The default effect: why we renounce our ability to choose appeared first on Ness Labs.
Tutorial: Collaborative task management in Tana
Tana is a powerful tool for thought that allows you to easily turn raw notes into tasks. Its goal is to end context switching and copy-pasting, so you can accomplish all of your goals from an all-in-one workspace. You can start experiencing the power of Tana by creating a simple solo workflow for task management. But Tana also makes it simple to collaborate in teams. Follow this tutorial to learn how to manage your tasks as a team with Tana. How to manage tasks as a team with Tana Managing your tasks as a team with Tana is as easy as one, two, three. You only need to create a new workspace, decide which workspace tags you’ll use, create shared tags, and set up a team workflow. Step 1. Create a new workspace and decide which workspace tags you’ll use. You can accept invitations to other workspaces by clicking the plus symbol at the bottom of the sidebar, or you can create your own workspace. By navigating to “Options” and clicking on the “Allow content from…” section, you can choose which workspace tags you want to utilize. Step 2. Create shared tags. To work across different workspaces successfully, it’s best to create the supertags in the workspace where you want the data to live. If you create a tag in your personal space and attempt to use it in a shared workspace, only you will be able to view it. You can still utilize tags from other workspaces. Once the shared tags exist, you can add nodes that use them either from your personal workspace or from the shared workspace itself. This is really useful, since you can write everything out first and then move it to the workspace when you’re ready to share it. As you’ll see, Tana will suggest moving such nodes to the shared workspace; nodes can also be moved between workspaces with the “Move to” command. Step 3. Set up a team workflow. Set up calendar tags. Since a new workspace doesn’t come with a built-in calendar, you must create calendar tags to make this feature available.
On the today page of your new workspace, enter #day, #week, and #year. Then configure the hierarchy: set #week as the child supertag of #year, and #day as the child supertag of #week. Set up task and project databases. This is comparable to what we discussed earlier about managing your own tasks. Because this is for a team, and the tasks and projects will come from the team workspace rather than your own workspace, there is an additional “Assignee” user field. Set up people and organizations. This will serve as your team’s and customers’ database. Make a “person” tag and configure it to become a supertag. Make sure the supertag contains the following fields: “Organization” as an instance field (setting #organization as the source supertag), “Email” as an email field, “Phone” as a number field, and “Twitter” or another preferred social network. Make a tag called “organization” and configure it to become a supertag. Make sure the supertag contains the following fields: “Shortcode URL” as a link field, and “Employees” as a search node (type in #person, make a field called “Organization”, and set the value as PARENT). Make a tag called “team member” and configure it to become a supertag. To obtain the person fields, remember to “extend an existing tag” to #person in the supertag’s advanced settings. Make databases for people and organizations with the names “People” and “Organizations” and include them in the sidebar. Set up work logs. This is similar to the time log that was previously explained, except that since this is a collaborative process, the individual who performs the task is identified. Make a tag called “work log” and configure it to become a supertag.
Make sure the supertag contains the following fields: “Start Time”, “End Time”, “Task” as a dynamic options field that searches nodes with the tag ‘task’ that are not done yet, “Date” as a date field, and “Who” as an instance field. For the “Date” field: open the advanced section, go to initialize expression, click fx, type in ‘formatDate’ with ‘CREATED’ and ‘DATE_REFERENCE’ as its child nodes, and switch back to edit mode by clicking fx again. Set hide field conditions to always using Cmd/Ctrl + K. For the “Who” field: type in #person as the source supertag. Open the advanced section, go to initialize expression, click fx, type in ‘filter’ with ‘childrenOf’ as its child node, and type in the name of your company as the child node of ‘childrenOf’. Type in a new ‘Email’ field under ‘filter’ and type in ‘CURRENT_USER’. Set hide field conditions to always using Cmd/Ctrl + K. Check ‘build title from fields’ in the advanced section and type in ‘${Tasks}: ${Who} ${Start Time} – ${End Time}’ in the expression field. By setting the child supertag to #worklog, you can have the tag “work log” appear automatically when you hit enter inside the “Work Logs” section of your daily template. That’s it! Tana makes it simple to set up a simple yet powerful team management system. Not only is the user interface attractive and the layout simple, but collaboration is also made easier by the ability to view what others have accomplished. Have fun with Tana, and feel free to join the Ness Labs Learning Community to discuss Tana and other tools for thought! The post Tutorial: Collaborative task management in Tana appeared first on Ness Labs.
Tutorial: How to manage your tasks with Tana
Tana is a brand-new tool for thought that claims to put a stop to context-switching. It enables you to begin by entering data and then readily find it using searches rather than figuring out where to place it before you write it. Benefitting from both database-based note-taking like Notion and block-based note-taking like Roam Research and Obsidian, Tana perfectly balances spontaneous and structured data. It is performing so admirably that it attracted many Personal Knowledge Management experts in a short amount of time. The core of Tana includes powerful features such as fields, supertags, live queries, and views, which enable users to create extremely complex workflows without having to install any further plugins. In this tutorial, you will learn how to manage your tasks with Tana so you can increase your productivity and remove any unnecessary friction from your daily workflow. Primer on Tana First, you need to log in to Tana. Before we get started, let’s have a quick look at some of the core design principles that govern the way Tana works. It will be much easier to design a task management system with Tana once you understand these ideas.  Workspace. Your private workspace is located at the top of your sidebar. This is your exclusive workplace, which no one else may use. But you can also use Tana in collaboration with other people. Each workspace allows you to manage access rights, add tags specifically for that workspace, and decide whether or not to accept tags from other workspaces. There is a library specific to each workspace, and you can export nearby structures for the entire workspace.  Nodes. Tana’s nodes are similar to Roam’s blocks. They make up the core of Tana’s network-based structure and outliner functionality. Supertag. Tana’s superpower is called a supertag. By letting a tag contain more data pieces, they elevate a basic tag to a higher level. 
You can specify the attributes of a supertag or add nodes to it, and all instances where you use a specific supertag will utilize these values as default metadata or schema. Fields. In Tana, fields are similar to properties in Notion. These fields can be configured any way you like, and if you turn a tag into a supertag, they will be accessible to you whenever you’re ready to fill them in. Inheritance. Tana’s inheritance feature enables you to create a supertag that inherits the fields of another supertag while maintaining its own distinctive fields. It may sound complex, so let’s use an example: related tags like “person” and “customer”. Because a customer is a person, the fields you create for the “person” supertag can be inherited by the “customer” supertag. Emergence. When several supertags are used on the same node, Tana merges the fields you set up for each supertag. This feature is called “emergence”. For instance, the fields from a “task” supertag and a “work” supertag will both emerge underneath the node, allowing you to capture those fields in one note. As you can see, Tana is an effective task management tool because it combines the two complex realms of databases and bidirectional connections. Though it’s useful to have an overall idea of how Tana works, you don’t need to fully understand these principles to start managing your tasks with Tana. Next, I’ll share a basic approach to get you started. How to manage your own tasks with Tana With Tana, managing your own tasks is easy. Simply create your own “Tasks and Projects” database, choose how to automate your days using a template, and handle your accumulated tasks at the end of the day. Step 1. Set up a “Tasks and Projects” database. Create tags called “task” and “project”. Configure the “task” tag to make it a supertag.
Include the following fields in your “task” supertag:

- “Do Date” as a date field
- “Due Date” as a date field
- “Status” as a fixed options field (with “To Do”, “Doing”, and “Done” options)
- “Related Project” as an instance field, setting #project as the source supertag

Then configure the “project” tag to make it a supertag as well. Fill out the following fields in your “project” supertag as needed:

- “Due Date” as a date field
- “Status” as a fixed options field (with “To Do”, “Doing”, and “Done” options)
- “Tasks” as a search node (type in #task, add the “Related Project” field so the search is connected to the task database, and set its value to PARENT)

Create search nodes for your task and project databases: enter #task for the task database and #project for the project database. Both databases can be viewed as cards and grouped based on status. Give them whatever names you like, then pin them both to your sidebar.

Step 2. Decide what your day will look like by automating your days using a template.

If your private workspace doesn’t already have a “day” tag, create one, and configure it to make it a supertag. Include the following fields in your “day” supertag:

- Agenda: create references to the tasks you want to do on a particular day in this section, choosing tasks from your task database.
- Time Log: do your interstitial journaling here. Create a “time log” supertag with fields such as Start Time, End Time, Notes, and Task as a dynamic options field (create a search node on it and type in #task and NOT DONE). Use the “build title from fields” feature under the advanced section to automatically set the name of your time logs, and type in ‘${Task}: ${Start Time} – ${End Time}’ in the title expression field.

Step 3. Handle your accumulated tasks at the end of the day.

You can use Tana’s “Quick Add” feature if you enjoy journaling and writing down your thoughts as they come to you. However, I highly recommend the practice of interstitial journaling.
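As an aside, the title expression syntax uses ${Field} placeholders, which happens to mirror the placeholder syntax of Python’s string.Template. As a rough illustration of how such an expression resolves from a time log’s fields (a hypothetical Python sketch, not Tana’s actual engine; the field values below are made up):

```python
from string import Template

class FieldTemplate(Template):
    # Tana field names can contain spaces ("Start Time"), which the default
    # string.Template placeholder pattern disallows; widen the braced pattern.
    braceidpattern = r"[^}]+"

# Hypothetical field values for one "time log" entry.
fields = {
    "Task": "Write newsletter draft",
    "Start Time": "09:00",
    "End Time": "10:30",
}

# The title expression from the tutorial, resolved against the fields.
title = FieldTemplate("${Task}: ${Start Time} – ${End Time}").substitute(fields)
print(title)  # Write newsletter draft: 09:00 – 10:30
```

In Tana itself, this substitution happens automatically: whenever a time log’s fields change, the node’s title is rebuilt from the expression.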
Interstitial journaling is the most straightforward method to incorporate note-taking, tasks, and time tracking, and it works great with Tana. To manage the tasks you accumulated through interstitial journaling, all you need is a simple habit: add a search node that looks for tasks created within the previous 24 hours, and go through these tasks at the end of each day. And you’re done! This is a simple three-step process to set up a task management system in Tana. It enables you to incorporate both your planned and spontaneous ideas throughout the day. Have fun with Tana, and feel free to join the Ness Labs Learning Community to discuss Tana and other tools for thought. The post Tutorial: How to manage your tasks with Tana appeared first on Ness Labs.
Are Technologies Inevitable?
Dear reader,

This week’s post is not the usual thing. I designed New Things Under the Sun to feature two kinds of articles: claims and arguments. Almost everything I write is a claim article (or an update to one). Today’s post is the other kind of article, an argument. The usual goal of a claim article is to synthesize several academic papers in service of assessing a specific narrow claim about innovation. Argument articles live one level up the chain of abstraction: the goal is to synthesize many claim articles (referenced mostly in footnotes) in service of presenting a bigger picture argument. That means in this post you won’t see me talk much about specific papers; instead, I’ll talk about various literatures and how I think they interact with each other. Also, this article is really long; probably about twice as long as anything else I’ve written. Rather than send you the whole thing in email, I’m sending along the introduction below, an outline, and a link to the rest of the article, which lives on NewThingsUnderTheSun.com. Alternatively, you can listen to a podcast of the whole thing here.

Cheers everyone and thanks for your interest,

Matt

Take me straight to the whole article

Are Technologies Inevitable?

Introduction

In a 1989 book, the biologist Stephen Jay Gould posed a thought experiment:

“I call this experiment ‘replaying life’s tape.’ You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past… then let the tape run again and see if the repetition looks at all like the original.” (p. 48, Wonderful Life)

Gould’s main argument is:

“…any replay of the tape would lead evolution down a pathway radically different from the road actually taken… Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel.”
(p. 51, Wonderful Life)

Gould is interested in the role of contingency in the history of life. But we can ask the same question about technology. Suppose in some parallel universe history proceeded down a quite different path from our own, shortly after Homo sapiens evolved. If we fast forward to the 2022 of that universe, how different would the technological stratum of that parallel universe be from our own? Would they have invented the wheel? Steam engines? Railroads? Cars? Computers? Internet? Social media? Or would their technologies rely on principles entirely alien to us? In other words, once humans find themselves in a place where technological improvement is the rule (hardly a given!), is the form of the technology they create inevitable? Or is it the stuff of contingency and accident? In academic lingo, this is a question about path dependency. How much path dependency is there in technology? If path dependency is strong, where you start has a big effect on where you end up: contingency is also strong. But if path dependency is weak, all roads lead to the same place, so to speak. Contingency is weak. Some people find this kind of thing inherently fun to speculate about. It’s also an interesting way to think through the drivers of innovation more generally. But at the same time, I don’t think this is a purely speculative exercise. My original motivation for writing it was actually related to a policy question. How well should we expect policies that try to affect the direction of innovation to work? How much can we really direct and steer technological progress? As we’ll see, the question of contingency in our technological history is also related to the question of how much remains to be discovered. Do we have much scope to increase the space of scientific and technological ideas we explore? Or do we just about have everything covered, and further investigation would mostly be duplicating work that is already underway?
I’ll argue in the following that path dependency is probably quite strong, but not without limits. We can probably have a big impact on the timing, sequence, and details of technologies, but I suspect major technological paradigms will tend to show up eventually, in one way or another. Rerun history and I doubt you’ll find the technological stratum operating on principles entirely foreign to us. But that still leaves enormous scope for technology policy to matter; policies to steer technology probably can exert a big influence on the direction of our society’s technological substrate. The rest of the post is divided into two main parts. First, I present a set of arguments that cumulatively make the case for very strong path dependency. By the end of this section, readers may come close to adopting Gould’s view: any change in our history might lead to radically different trajectories. I think this actually goes too far. In the second part of the essay, I rein things in a bit by presenting a few arguments for limits to strong path dependency. The rest of the piece goes on to make the following argument:

Part One: The Case for Strong Path Dependency
- Small scale versions of replaying the technology tape point to path dependency being at least big enough to notice
- The landscape of possible technologies is probably very big, because:
  - Combinatorial landscapes are very big
  - Technology seems to have an important combinatorial element
- Our exploration of this space seems a bit haphazard and incomplete
- From the constrained set of research and invention options actually discovered, an even smaller set get an early lead, often for highly contingent reasons, and then enjoy persistent rich-get-richer effects

Part Two: The Limits of Path Dependence
- It may not matter that the landscape of technological possibility is large, if the useful bits of it are small. This may be plausible because this might be the case for biology
- It is probably possible to discover the small set of universal regularities in nature via many paths
- Human inventors can survey the space of technological possibility to a much greater degree than in biological evolution
- A shrinking share of better technologies combined with our ability to survey the growing combinatorial landscape can yield exponential growth in some models

Read the whole thing here

As always, if you want to chat about this post or innovation in general, let’s grab a virtual coffee. Send me an email at mattclancy at hey dot com and we’ll put something in the calendar. New Things Under the Sun is produced in partnership with the Institute for Progress, a Washington, DC-based think tank. You can learn more about their work by visiting their website.
Bringing clarity to your ideas with Masry CEO of Walling
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think and work better. Ahmed Elmasry is the founder and CEO of Walling, a visual workspace to organize your ideas, tasks and projects. Users have been raving about the way it helps them be more productive, synthesize complex information, and work collaboratively. In this interview, we talked about the challenges of traditional note-taking and project management apps, how visual thinking increases productivity, how to combine flexibility with efficiency, the benefit of choosing the path of least resistance, and much more. Enjoy the read!

Hi Masry, thank you so much for agreeing to this interview. What do you think usually gets in the way of turning ideas into action?

Thank you so much for having me! I believe most tools focus on helping you organize ideas and store information, which is not really the challenging part; even a pen and paper can help you jot down ideas. The real challenge is to be able to turn those ideas into action, or to communicate them to someone such as your team or clients.

The linear structure of note-taking and project management apps can bury your ideas and make it difficult to navigate through them. On the other hand, whiteboard apps provide a visual experience but they lack organization. They can be great for brainstorming, but you can’t share ideas and project information effectively with someone in a freeform canvas format. The lack of organization would make it challenging for them to understand the context of the information given and collaborate with you on turning the ideas into actions. True productivity happens when you are able to actually get things done. The tools you are using also shouldn’t slow you down, generate busywork or get in the way of your creativity.

How does Walling address those challenges?
Walling provides a balance between organization and visuality, a unique experience that brings clarity to your work and helps you to communicate ideas more effectively with your team. Studies show that your brain processes visuals thousands of times faster than text. The visual experience of Walling puts your ideas, tasks and all components of your work side by side. It empowers you to step back and see how everything fits together. We also find this experience to improve team communication significantly.

Unlike whiteboard apps, Walling combines the visual experience with organization. Not only are you visually collecting ideas, but your ideas on Walling are organized and actionable too.

One thing you will also notice when you use Walling is how frictionless and fast everything is. No jumping between pages or layer after layer of clicks to reveal your ideas and tasks. We designed Walling for professionals and teams who want to get work done faster and move forward with decisions more efficiently.

That sounds amazing. So, how does Walling work concretely?

A new page in Walling is called a wall. Each wall is divided into sections, where each set of ideas or tasks is organized in one section. In Walling, you organize your ideas and tasks in blocks called bricks. They are like mini documents that contain all the rich text information you can possibly gather, all associated with the idea or task. Bricks can also be tagged, contain due dates or be assigned to a wall member. For example, if you are organizing a new project on a wall to redesign a website, you can start by visually organizing the project brief and requirements in the first wall section. Then you can break down the target audience information in separate bricks under another section. You can manage the tasks of the project in a section dedicated to tasks.
One of the best features of the wall sections is the ability to view them in different views, so while your target audience ideas are organized side by side, your tasks can be managed in a Kanban board or a table. You can also have more sections for collecting inspirations in a moodboard, or sharing a list of files. All organized in one visual place within a fast, flexible experience.

You can invite your team to your wall to collaborate with you in realtime. Our users love how easy it is for their teams or clients to get started with Walling. No learning curve or unnecessary bells and whistles. A straightforward experience that helps them to get work done faster.

How does Walling compare to all-in-one tools such as Notion or Coda?

Right off the bat, the first difference you will notice when you try Walling is how visual everything is. This brings a lot more clarity to your work than the linear and text-based experience of apps such as Notion or Coda. The extreme flexibility of all-in-one tools makes them inefficient for most use cases. You always have to go through overwhelming setup steps or search for a template to fit your use case, but I believe you should never force your ideas into a predetermined structure too early. At Walling we are more focused on helping you to organize your ideas and projects than forcing you to use Walling for everything. This enables us to be more efficient at what we offer. For example, you can easily track all your tasks from different projects on Walling inside the “My Tasks” tab. I’m not sure if you can achieve something similar in Notion/Coda, and if you can, it will probably be an overcomplicated setup that’s not efficient.

We find Notion/Coda to be great for building wikis and sharing documents, which is why integrating with Notion/Coda is on our roadmap to help you connect your wikis and documents inside your projects on Walling.

What kind of people use Walling?
Walling is for everyone who needs to organize their ideas and projects, which is a pretty wide spectrum of users. We have social media managers that use Walling to manage their ideas and campaigns for their social media accounts that have millions of followers. We have managers from big companies like Microsoft, InVision and Ubisoft using Walling to organize and share ideas with their teams. Creatives and design agencies love the visual experience of Walling to organize and manage their projects. Entrepreneurs and small business owners depend on Walling every day to organize their work. The flexibility and ease of use of Walling make it appealing to a large variety of people who want to organize their work and improve their productivity.

Were there any other surprising use cases you didn’t expect people to use Walling for?

The ability to generate a public link of a wall to share it with anyone has surprised us with several use cases we never expected. From designers building beautiful walls of creative briefs to share with their clients, to schools using the walls as notice boards or to inform their students about lesson plans and upcoming events. The visuality and organization of the walls make them perfect for sharing ideas and information.

And how do you personally use Walling?

My team and I use Walling every day to communicate and organize ideas. Our work typically consists of small projects, such as designing and developing new features, creating or redesigning a page on our website, and planning our newsletters and emails. Each of these projects is organized into a wall where we collaborate on ideas, tasks and references in different sections.

If I also need to share some ideas with a contractor or a social media influencer we are working with, I organize the ideas on a wall and share a public link of the wall with them.
It’s not because Walling is our tool, but because it’s the path of least resistance and the most efficient way for us to quickly organize ideas and share them with someone.

How do you recommend someone get started?

Go ahead and create a free account on Walling.app. We designed the experience of Walling to be as frictionless as possible, with little to no learning curve. Start a new wall and see how easy it is to collect and organize ideas. You will also find example walls in your account to get a glimpse of what you can do on Walling and the visual experience of the app. If you are working within a team and it’s not easy for you to move to a new tool, you can use Walling on your own first to organize ideas and share them with your team using the public link feature of the walls.

Also make sure to check out and subscribe to our YouTube channel. We regularly publish new video tutorials there.

And finally… What’s next for Walling?

We are continually working on more improvements to give our users a better experience with Walling. Improving the overall experience of the phone app to match the desktop app is our top priority right now.

Integrating with other apps to make Walling the single source of all ideas and project information is also on our list. Our long term vision for Walling is to be the default tool for everyone to organize and manage their work. From managers, creatives, solopreneurs, small business owners, to remote teams, startups and marketing agencies.

Thank you so much for your time, Masry! Where can people learn more about Walling?

You can sign up for Walling on our website. We also publish new updates frequently, so you can follow us on Twitter to stay updated with all the new features and improvements we release. Thanks again for having me! I’m excited to hear the feedback from the Ness Labs community about Walling. The post Bringing clarity to your ideas with Masry, CEO of Walling appeared first on Ness Labs.
The science of motivation: how to get and stay motivated
When your motivation vanishes, what can you do to get it back? Many of us will buy an inspirational book or watch motivational videos, thinking this will help us get our mojo back. But these tricks are unlikely to be successful. In reality, motivation only starts to build again once we have taken the first steps and gained some momentum in our task. In the words of Lao Tzu: “The journey of a thousand miles begins with a single step.” Motivation is all about getting started and consistently taking action, making sure we get back on track when we fall off the wagon.

Why we do what we do

There are two types of motivation: intrinsic and extrinsic. When interest or enjoyment in an activity comes from within us, we experience intrinsic motivation. A violinist, for example, may desire to improve as a musician because playing brings intense joy, rather than to pursue fame or awards. Intrinsic motivation associated with doing what you love therefore strongly correlates with sustained behavioural change and improved well-being, because the activity itself brings pleasure. With intrinsic motivation, an activity provides its own inherent reward. Researchers in organisational psychology note that “intrinsic motivation is key for persistence at work”. If you enjoy what you do, the activity and the goal will collide so that both your interest and experience of work are enhanced. Extrinsic motivation, conversely, is driven by influences outside of us. You may want to progress in your career to earn more money, achieve recognition within the workplace, or to avoid sanctions. With extrinsic motivation, the outcome you desire is separate to the activity you engage in to achieve it, which will make dips in motivation more likely. But it doesn’t mean that extrinsic motivation is bad: the recipe for motivation is a bit more complex than that.
The ingredients of motivation

Motivation has been studied for many years, and one of the most popular schools of thought is Self-Determination Theory. This theory uses empirical methods to highlight the importance of self-regulation, the process of taking in social values before transforming them into our own values and self-motivations. Writing in the American Psychologist journal, Richard Ryan and Edward Deci highlighted the three innate psychological needs which must be satisfied to enhance self-motivation and mental health: competence, autonomy and relatedness. If we feel competent in a behaviour, either as a result of feedback, communication or rewards, our intrinsic motivation will be greater. However, this is only the case if we have a sense of autonomy over the action.

Relatedness is often more relevant to extrinsic motivation. If a behaviour is valued by a manager, client, or friend, we will feel a sense of connectedness with them, which will lead to internalisation of an extrinsic motivation. Self-Determination Theory therefore demonstrates that our psychological needs must be met for self-regulation to occur, so that both intrinsic and extrinsic motivation can be maintained.

Why is it so hard to stay motivated?

We like to think of ourselves as curious learners. But many of us will have experienced a sense of apathy at some point in our personal or professional lives. Whether we have stayed in a stagnant job or spent night after night mindlessly scrolling on the sofa, demotivation can affect us all. There are many causes of demotivation. Perhaps a challenge feels too difficult. If getting out of your comfort zone causes intense fear or anxiety, you may lose any drive you once had and abandon the task. It’s not just goals that are too hard that can cause demotivation. If a goal is too easy or will not lead to a suitable reward, you may lack the drive to pursue it despite it being achievable.
Similarly, if you set goals that do not suit you, you may be unable to self-regulate and see how they relate to you. Without self-regulation, neither intrinsic nor extrinsic motivation will be present, and you won’t manage to stay motivated. In some cases, the absence of clarity in your aspirations may deter you from pursuing them. You might know that you’re unhappy in your current role, but feel unsure about where to begin with instigating a career change. Whether your motivation for change is internal or external, you cannot sustain motivation without having an aim clearly in focus. As you can see, there are many reasons why you may have slip-ups in motivation. Fortunately, there are ways to get and to maintain motivation. What matters is that when you recognise a lull in momentum, you get back on track as quickly as possible. Try to “never miss twice in a row” by acting on any demotivation and not allowing it to persist.

Getting, and maintaining, motivation

Motivation will only come once we have started a task or behaviour, not before we get going. Rather than letting tasks accumulate until they feel insurmountable, the best strategy is to generate the momentum required to conquer a long-term goal by consistently showing up every day. As Confucius put it: “The man who moves mountains begins by carrying away small stones”. But, of course, that’s easier said than done. When your get up and go has gone, the following five strategies will help you to rekindle your motivation.

1. Focus on the right goals. Intrinsic motivation will occur naturally if you choose a goal you care about. If you don’t feel committed or connected to the goal, you’ll need to rely mostly on willpower, which isn’t sustainable in the long term. You can also use the Goldilocks rule to confirm that a goal is neither too easy nor too hard.

2. Create a motivation routine. Block out time first thing each morning to focus on the goal that matters to you. Prioritising this time reaffirms your commitment to the task.

3. Practice self-reflection. To promote motivation, you need to take care of yourself. Allow time for self-care, such as reading and exercise, and commit to self-reflection. Using a metacognitive method such as journaling gives you space to reflect on your motivation, progress and any setbacks, so that you can continue to move forwards.

4. Use the motivation clinic. The 3C model of motivation can help you dig deeper to figure out which component of motivation exactly is the source of the problem and, crucially, which strategy you should employ to get back on track.

5. Plan to bounce back. We will all slip up, but having a safety net in place will help to prevent demotivation setting in. Finding an accountability buddy can help keep you focused and celebrate your successes. Most importantly, remind yourself that you will never regret persevering with hard, but valuable, work once it is done.

Self-regulation is an important part of motivation, and it will only occur if you feel competent, autonomous and understand how your goal relates to you. By focusing on the right goals, using self-reflection, and implementing safety nets, you can boost your motivation and ensure a prompt comeback at times of demotivation. The post The science of motivation: how to get and stay motivated appeared first on Ness Labs.
Everything is aiming: forget the target and focus on your aim
We live in a world obsessed with outcomes. At school, we’re encouraged to climb an artificial leaderboard that reflects our test scores. At work, performance is based on reaching specific targets, sometimes known as OKRs for “Objectives and Key Results.” In this goal-based society, success is defined by how our peers evaluate our track record. But what if you’re not excited about this definition of success? What if you’re feeling lost and want to find your way — not the default path, but your own path? Kyūdō, the Japanese martial art of archery, offers an alternative philosophy where aims matter more than goals, and where success is the process itself.

The difference between goals and aims

People tend to use the words “goal” and “aim” interchangeably, but those words have very different definitions. Archery offers the perfect metaphor to understand the difference between a goal and an aim, and there’s no better way to illustrate it than the story of a German professor who fell in love with the art of the bow. Eugen Herrigel (1884 – 1955) moved to Japan in the 1920s to teach philosophy. There, he decided to train in Kyūdō as a way to better understand Japanese culture. He was fortunate to be taught by legendary archer Kenzō Awa, who was known as the man of “one hundred shots, one hundred bullseyes.” The training was too slow for Herrigel’s taste; he kept missing his target after months of practice and complained about his lack of progress. The archery master replied: “The more obstinately you try to learn how to shoot the arrow for the sake of hitting the goal, the less you will succeed.” The master encouraged him to forget about the goal, and to focus on the way he was aiming — how he held the bow, the way he positioned his feet, the way he was breathing while releasing the arrow. The goal is the target we want to achieve, while the aim is the course we set to reach that target. A goal fixates on the finish line, while an aim considers the trajectory.
When we focus on our aims, the process becomes the goal. And we’re more likely to reach our goal when we become fully aware of our aim. This is the essence of the way of the bow. As James Clear puts it: “It is not the target that matters. It is not the finish line that matters. It is the way we approach the goal that matters. Everything is aiming.”

How to define your aims

Thomas Fuller (1608 – 1661), one of the first English writers to have enough patrons to be able to live by his pen, wrote: “A good archer is not known by their arrows but by their aim.” Letting go of outcomes doesn’t mean abandoning your ambitions. Instead, focusing on your aims is a mindset shift that allows you to break free of your illusion of control so you can zero in on your output. When we focus on our aims rather than our end goals, we learn how to design a daily life where the process itself is so fulfilling that it doesn’t matter whether we ever reach a hypothetical finish line. Success is enjoying the process. However, we have all been so well-trained in obsessing over outcomes, it can be difficult to change the way we direct our energy and attention. Metacognition (“thinking about thinking”) can help us untangle those deeply ingrained patterns. The AIMS Self-Reflection Questionnaire is a simple thinking exercise to break free from a goal-based approach to life and help you focus on your aims instead. AIMS stands for Aspiration, Implementation, Metacognition, and Success. You just need a pen and paper, a timer (for example on your phone), and about half an hour to complete it. Are you a member of the Ness Labs learning community? You can access a version with additional details on each question and a downloadable workbook to write down your answers.

1. Aspiration (10 minutes)

To refocus on intrinsic motivation, the first section of the questionnaire encourages you to reconnect with your dreams.

When you were a kid, what did you want to be when you grew up?
What experiences filled you with awe and wonder?
What do you want to learn?
What are some past projects you enjoyed working on, including abandoned projects?
What excites you the most about the future?

2. Implementation (10 minutes)

The second part of the questionnaire is about the process of aiming towards your aspirations, when you forget about the outcome and enjoy the journey instead.

What does an ideal day look like to you?
What things would you like to say no to if you could?
When do you feel most energized?
Who are the people you trust and can count on to support you?
What things would you do if you supported yourself unconditionally?

3. Metacognition (5 minutes)

In the third part of the questionnaire, you will reflect on the ways you can avoid living your life on autopilot, how you can monitor your progress, and where to get the help you need when you feel stuck.

What are your favorite modes of thinking?
What are your self-reflection tools of choice?
Where do you seek advice when you feel stuck?

4. Success (5 minutes)

The last part draws on your answers to the questions in the previous sections. Looking at what you wrote in each corresponding section, complete the following sentences:

In the future, I would like to…
I will direct my time, energy and attention towards these aspirations by…
I will reflect on my progress by…
To me, success means…

That’s it. You have now completed the AIMS Self-Reflection Questionnaire. It’s a simple way to rethink your relationship with ambitions by focusing on the trajectory rather than the finish line. And it’s hopefully also a nice way to remember some of your past experiences and to get excited about the future. The post Everything is aiming: forget the target and focus on your aim appeared first on Ness Labs.
Define every problem: how to write a personal problem statement
To solve a problem, you first need to understand the problem. As Irish author Derek Landy puts it: “Every solution to every problem is simple. It’s the distance between the two where the mystery lies.” A problem statement can help bridge that gap. It’s a brief summary of a problem you want to address. It’s most commonly used in research to clearly identify the problem to be solved before taking action. However, it has many benefits that can be harnessed outside of a research context, and can be especially helpful to tackle personal problems. The power of defining your problems In research, problem statements are valued because they crystallize the issue at hand, help researchers to avoid preconceived ideas, and ensure that each study has a clear direction before work begins. Researcher Max Kush explains that problem statements should explore the gap between your current state and your future goal. The concise description should emphasize all of the facts that need to be addressed to move forwards. However, he goes on to highlight that unfortunately, problem statements often incorrectly assume that everyone understands the issue, leading to weak, error-laden or incomplete statements. Using a series of ‘five W’ questions (who, where, what, when and why) that ensure a comprehensive understanding of the problem can help avoid that pitfall. You will notice that the problem statement doesn’t include a ‘how’ question. The statement can suggest options, but it doesn’t define the final answer. In a team setting, the problem statement is a tool for constructive conversation: it allows group members to discuss potential solutions together. According to physics Professor Mahyuddin Nasution and colleagues, designing a problem statement can work for any “interests that require answers.” That’s why this powerful method can be adapted to better understand problems you face in your personal and professional life. 
A bridge between problem and solution Whether you’re facing a challenge or feel like there’s a potential area of growth you’d like to explore, writing a problem statement for a personal or professional issue can be beneficial in many situations. Problem statements are often helpful during times of transition. If you’re considering a career change, a problem statement will help you to fully understand the crux of any issues you might be facing now. By looking ahead to the future you envision, you can identify the gap between your current position and your ideal work situation. Having a problem statement in place will help you avoid rushing into a new role that might not truly deliver the growth you’re looking for. Problem statements work in your private life as well. For example, you may feel stuck in a relationship. Writing a problem statement will help to identify communication gaps, and the mismatch between your current situation and how you feel your relationship should be. You can also explore how to improve your physical and mental health with a problem statement. It helps you reflect on why you want to take better care of your health, and how you might go about it. It’s a way to investigate why your behaviors do not always match your intentions, so that you can begin to address this disparity. By reflecting on the gap between ideal and reality, it becomes easier to understand the crux of a problem and lay the groundwork for potential solutions. With a personal problem statement in place, it’s also far easier to communicate the issues to others who may be able to help you. Instead of feeling anxious or overwhelmed, you will start to see the issue as a puzzle to be solved rather than a source of stress. Writing your personal problem statement Crafting a personal problem statement involves an audit of the current situation, followed by an assessment of how this differs from the state you’re aspiring to.
To write your problem statement, open your note-taking app or start a fresh page in your notebook or journal before exploring the following questions:
What is your ideal?
What is your reality?
What are the consequences of your current situation?
What can you propose as improvements?
A personal problem statement might be: “I would like to read more books (ideal), but I spend two hours scrolling on social media every day while commuting (reality), which impacts my mental health and my creativity (consequences). Instead, I should put my phone in my backpack and take my Kindle out as soon as I get on the bus in the morning (potential solution).” Another statement could be, “I want to improve my fitness (ideal), but I am too tired to exercise when I get home from work (reality), which impacts physical health and mental wellbeing (consequences). Instead, I could start walking or cycling to work to fit exercise into my day more easily (potential solution).” If you’re struggling to articulate your answers to the above questions, you may in addition go through the series of ‘five W’ questions (who, where, what, when and why) to get to the core of the problem at hand:
What is the problem? (the gap between ideal and reality)
Who is experiencing the problem? (you, a friend, family member, colleague…)
Where is the problem occurring? (at home, at school, at work…)
When does the problem occur? (every day, week or month, during specific events, when around certain people…)
Why does the problem occur? (gap in skills, knowledge, communication…)
While problem statements are typically used in research, writing a statement for the issues you face in your personal and professional life can also be a powerful way to better understand those issues. With a crystallized view of the problem, you can explore potential solutions to close the gap between ideal and reality. And, who knows, one of these may become a favorite problem of yours.
At work, you may have witnessed the hasty launch of a project that later heads down the wrong path, or that requires extensive additional work due to poor initial planning; a problem statement written upfront helps prevent exactly that. The post Define every problem: how to write a personal problem statement appeared first on Ness Labs.
Remote Breakthroughs
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps: Apple, Spotify, Google, Amazon, Stitcher. Remote work seems to be well suited for some kinds of knowledge work, but it’s less clear that it’s well suited for the kind of collaborative creativity that results in breakthrough innovations. A series of new papers suggests breakthrough innovation by distributed teams has traditionally been quite difficult, but also that things have changed, possibly dramatically, as remote collaboration technology has improved. Subscribe now Distant and Colocated Collaboration Are Not Alike We can begin with Van der Wouden (2020), which looks at the history of collaboration between inventors on US patents, over the period 1836 to 1975. To build a useful dataset, he has to extract the names and locations of inventors from old patent documents, which have been digitized into super messy text files by Google. These digitized patents are rife with misspellings (because optical character recognition is very imperfect for old documents) and lack almost any standardization. It’s a ton of work that involves fuzzy matching text strings to a big list of names which in turn is drawn from the US census, modern patent documents, and an existing database of inventor names. And that’s only the first step - it just tells you the names of people mentioned in a patent, not whether those names are inventors, rather than lawyers or experts. To figure out who is an inventor, Van der Wouden uses a set of classification algorithms that predict the probability a mentioned name is an inventor, trained on a dataset of known inventors linked to patents. It’s not a perfect method, but it is able to find an inventor on about 90% of historical patents.
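To give a flavor of the fuzzy matching step, here is a toy sketch using Python’s standard library. The reference names, the cutoff, and the helper are my own illustrative assumptions, not Van der Wouden’s actual pipeline, which also handles location extraction and inventor classification:

```python
import difflib

# Hypothetical reference list; the paper matches against names drawn from
# the US census, modern patents, and an existing inventor database.
KNOWN_NAMES = ["Thomas Edison", "Nikola Tesla", "George Westinghouse"]

def match_name(ocr_name, cutoff=0.8):
    """Return the closest known name to an OCR-mangled string,
    or None if nothing is similar enough."""
    hits = difflib.get_close_matches(ocr_name, KNOWN_NAMES, n=1, cutoff=cutoff)
    return hits[0] if hits else None
```

With this sketch, a mangled “Tomas Edison” still resolves to “Thomas Edison”, while a name absent from the reference list returns None.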
Moreover, the people it identifies as top patent holders, and the number of patents they hold, closely match other lists of top patentees in US history. He also has to do similar work to pull out the locations mentioned on a patent. Now that he has an estimate of how many people worked on each patent, and where they lived, Van der Wouden can start to look at how common collaboration and remote collaboration are. We can see that collaboration really began to take off in the 1940s and that the probability a team of inventors didn’t reside in the same city rose from under 5% in 1836 to over 10% by 1975. From Van der Wouden (2020) Van der Wouden next tries to measure the complexity of a patented invention with an approach originally used in another paper, Fleming and Sorenson (2004).1 Fleming and Sorenson attempted to create a measure of how “fussy” technological classifications were, based on how well they seem to play nice with other technologies (fussy is my term, not theirs, but I think it captures what they’re going for in a colloquial way). If a technological classification is frequently attached to a patent alongside a wide range of other classifications, they’re going to say this isn’t a very “fussy” technology. It can be used in plenty of diverse applications. On the other extreme, if a classification is only ever assigned to a patent with one other classification, then we’re going to assume the technology is very sensitive and very fussy. It only works well in a very specific context. While this measure is a bit ad hoc, Fleming and Sorenson also did a survey of inventors and showed their measure is correlated with inventors’ self-assessments of how sensitive their own inventions are to small changes, and that this measure is not merely picking up how novel or new the technology is; it’s picking up something a bit different.
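As a rough illustration (my own sketch, not Fleming and Sorenson’s actual formula, which involves more careful normalization), one could proxy how “fussy” a technology class is by how few distinct partner classes it has historically been combined with, relative to how often it is used:

```python
from collections import defaultdict

def fussiness(patents):
    """patents: iterable of sets of technology classes, one set per patent.
    Returns a score per class: higher means the class has been combined
    with fewer distinct partners per use, i.e. it is 'fussier'."""
    uses = defaultdict(int)       # class -> number of patents using it
    partners = defaultdict(set)   # class -> distinct co-assigned classes
    for classes in patents:
        for c in classes:
            uses[c] += 1
            partners[c].update(classes - {c})
    return {c: uses[c] / max(len(partners[c]), 1) for c in uses}
```

Under this toy scoring, a class that appears on many patents but always with the same single partner scores high (fussy), while one that pairs freely with many different classes scores low.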
Returning to Van der Wouden (2020), his measure says a patent is more complex if it involves more technologies, and if these technologies are “fussy.” There are two key results: complex patents are more likely to be the work of teams. And among patents by a team of inventors, the inventors are more likely to reside in the same city if the patent is more complex. It seems that, at least over 1836-1975, it is hard to do complex work at a distance. Lin, Frey, and Wu (2022) pick up Van der Wouden’s baton and take us into the present day. They look at the character of both patents and academic papers produced by colocated and remote teams over 1960-2020 (actually 1975-2020 for patents), focusing on how disruptive a paper or patent is. To measure disruption, they use an increasingly popular measure based on citations. To simplify a bit, the idea here is that if a paper or patent is disruptive, you’re not going to cite the stuff it cites, because the paper or patent has rendered those older ideas obsolete. After Einstein, you no longer cite Newton. On the other hand, if a paper is an incremental improvement within a given paradigm, you are likely to cite it as well as its antecedents. The disruption measure quantifies this notion: for some focal document, it’s based on how many citations go to the focal document alone relative to how many citations go to the focal document as well as the documents cited by the focal document. Across 20mn research articles and 4mn patents, Lin, Frey, and Wu find that, on average, the farther away the members of the team are from one another, the less likely the paper is to be disruptive. From Lin, Frey, and Wu (2022) So, over 1836-1975 the patents of inventors who reside in the same cities tended to be more complex, in the sense that they either drew on more technologies, or more technologies that don’t have a long history of successfully being combined with other technologies.
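As an aside, this citation-based disruption measure is simple enough to sketch in code. The version below is my simplified illustration with assumed inputs; the published index adds weighting and time-window details, but the core ratio is the one described above:

```python
def disruption(focal_citers, focal_refs, cites):
    """Simplified disruption score for a focal document.
    focal_citers: set of later docs citing the focal work.
    focal_refs: set of docs the focal work cites.
    cites: dict mapping each later doc to the set of docs it cites."""
    ref_citers = {d for d, cited in cites.items() if cited & focal_refs}
    only_focal = focal_citers - ref_citers   # cite the focal work, ignore its references
    both = focal_citers & ref_citers         # cite the focal work and its references
    only_refs = ref_citers - focal_citers    # cite only the references
    total = len(only_focal) + len(both) + len(only_refs)
    return (len(only_focal) - len(both)) / total if total else 0.0
```

In this toy version, a work cited only on its own merits, with its antecedents ignored (as with Einstein rendering Newton obsolete), scores +1, while a work always cited alongside the older work it builds on scores -1.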
And over 1975 to 2020, patents with inventors residing in the same city were more likely to be disruptive, in the sense that they are more likely to receive citations that do not also reference earlier work. Does Distance Inhibit Strange Combinations? These measures are not picking up exactly the same thing, but neither are they as different as they might seem at first. As discussed in a bit more detail here, Lin, Evans, and Wu (2022) find that papers that draw on novel combinations of ideas (in this paper, proxied by the kind of journals a paper cites) are also more likely to be disruptive. In other words, it might well be that the reason Lin, Frey, and Wu find papers by distant teams are less likely to be disruptive is that dispersed teams have a harder time connecting different ideas. We’ve got a few pieces of evidence that support the notion that remote teams have a harder time making novel connections across ideas. First, both Berkes and Gaetani (2021) and Duede et al. (2022) find some evidence that colocation is an important channel for exposure to intellectually distant concepts. As discussed here, Berkes and Gaetani (2021) show that:
The patents of inventors residing in denser parts of cities comprise a more diverse set of technologies
The set of technologies that comprise the patents of denser parts of cities is more unorthodox: two different technologies might rarely originate from the same geographical location, but when they do that area is more likely to be a dense part of a city
The patents of inventors residing in denser parts of cities are more likely to feature unusual combinations of technologies themselves.
That’s all consistent with the idea that being physically around lots of different kinds of inventive activity increases the chances you draw an unexpected connection between two disparate concepts. Duede and coauthors provide some fine-grained evidence from academia.
They have a big survey where they ask thousands of academics across many fields about citations they made in some of their recent work. Among other things, they asked respondents how well they knew the cited paper, as well as how influential the citation was to the respondent’s work. In the latter case, respondents rated their citations on a scale from “very minor influence”, which meant the respondent’s paper would have been basically unchanged without knowledge of the cited reference, to “very major influence”, which meant the cited reference motivated the entire project. If we have a way to measure the geographic distance between the authors and the “intellectual distance” between the citation and the author’s normal expertise, we can see how the two are related: does being close in space facilitate learning about ideas you wouldn’t normally know about? Computing distance in space is straightforward: Duede and coauthors just code whether authors are in the same department, same institution, same city, or same country. To measure intellectual distance, they rely on the similarity of the title and abstract of the citing and cited paper, as judged by a natural language processing algorithm. The algorithm judges papers to be more similar if they contain words that are themselves more closely related to each other. Duede and coauthors find if you and the author of a paper you cite are at the same university, then you are indeed more likely to say you know the cited work well and that it was influential on you. But what’s interesting is that the strength of this relationship is stronger if the cited and citing paper are less similar to each other. In other words, if you cite a paper that’s surprising, given the topic you are working on, you are more likely to say you know that paper well and that it influenced you if the author is at the same university.
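To give a flavor of how such text similarity can be computed: Duede and coauthors use word-embedding models that reward related (not just identical) words, but the much cruder bag-of-words cosine below illustrates the same “more similar text means smaller intellectual distance” logic:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of word-count vectors built from two texts.
    Unlike an embedding-based measure, this only rewards exact word
    overlap, so treat it as a toy stand-in."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(v * v for v in va.values()))
    norm_b = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Two abstracts on nearby topics share more vocabulary and score closer to 1; intellectually distant pairs score near 0.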
That’s quite consistent with colocation being a useful way to learn about ideas you wouldn’t otherwise encounter in the course of your normal knowledge work. The second line of evidence is larger, but less direct: physical proximity seems to be quite important for helping people form new relationships, especially relationships that wouldn’t have been formed in the course of ordinary knowledge work. I’ve looked at this line of evidence...
Bridging chaos and coordination with Cara Borenstein co-founder of Stashpad
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think faster and work better together. Cara Borenstein is the co-founder of Stashpad, a fast and easy-to-use notepad designed to help developers stay organized as they work without breaking their flow. In this interview, we talked about the power of embracing messiness, how to use a daily brain-dump to set your intentions, how to practice graceful context switching, how to cultivate calm in an increasingly noisy and complex world, and much more. Enjoy the read! Hi Cara, thank you so much for agreeing to this interview. Before we start talking about Stashpad, let’s get a little philosophical. Lots of software is about structure and organization. Why do you think we’d do well to instead embrace messiness? It’s a great question! At Stashpad, we’re focused on how individual developers go about their day. The reality is that our daily experience can often be a bit messy. You plan projects, research solutions, debug issues, and brainstorm workarounds. You need somewhere to stash things as you’re working so that you can make sense of it all and stay on top of things. Problem-solving is messy and involves various moving parts, and to navigate this effectively, you need a way to manage your working memory. Cal Newport, the author of Deep Work, has talked about the value of having a messy space for your thoughts as part of your tooling. In fact, the scratchpad is his favorite productivity tool: “It’s a way to offload things out of your brain where you can still see them, look at them, organize and make sense of them, without having to keep all of these things in your mind at the same time.” Our personal scratch notes should not be a place where we feel pressure to make them look beautiful and polished — that would only be an impediment to getting things down. 
It would create a high bar for what is acceptable, and would discourage us from capturing our thoughts in a raw form. The point of our scratch notes is to serve us as we’re going about our day — to help us keep track of things and make sense of things. We like to parallel the scratchpad to RAM (Random Access Memory) on a computer. RAM is your working memory. It’s a cache of all the things that you need while you’re executing on a task. This is different from things that are on your hard drive (“disk”). Disk is your long-term storage. It’s where things go when you’re done working on them. We’ve found that too often, writing notes is assumed to mean writing a finished product to disk. But messiness can also mean interruptions, which makes it harder to get in the flow. How does Stashpad address this challenge? Step one is acknowledging that our day involves juggling different threads, and sometimes context-switching is inevitable. Our goal is to make capture as frictionless as possible, so that you’re not trying to keep too much in your head. Whenever an idea comes up that you want to put away for later — whether someone mentioned something to you, you came across some new information, or it just popped into your head — it should be super easy to jot it down, and in such a way that you’ll be able to find it later. With Stashpad, you can jump to the right context in two keystrokes. Something comes up, and it’s not urgent enough to pre-empt what you’re doing, but it still may be worth revisiting later? Stash it. We’re not completely eliminating the context-switch — we’re trying to make it as quick and seamless as possible. The outcome is that you can get back to focusing on your Main Thing right away, and you’re also not dropping the ball later. Stashing quick ideas as you’re going about your day is key to maintaining flow and momentum. 
If you don’t have a good way of doing this, you either get sidetracked wondering what to do with this idea that came up, or you try to stow it away in your mind — but it may very well continue to vaguely take up some of your attention and prevent you from fully focusing on what’s in front of you. As Sophie Leroy, Associate Professor at the University of Washington Bothell School of Business, puts it: attention residue is “when our attention is focused on another task instead of being fully devoted to the current task at hand.” By getting things out of your head and into written form, you can reduce the chances that attention residue from one task will pollute your focus in your next task. Another key to effective stashing is being able to easily retrieve what you stashed. Or, put another way, how can you quickly stash things without your stash devolving into complete chaos? If you think about the classic pen and paper approach to note-taking, this is where it can start to break down. It can be convenient and enjoyable to jot things down on a piece of paper, but finding something you wrote down two weeks ago — or yesterday — or something you don’t remember when you thought of can get difficult. When it comes to retrieval and keeping things in order, it’s important to be able to flexibly compartmentalize your notes, jump between these compartments, and send things to the right place. More generally, we’ve found that existing solutions tend to be either convenient for capture, but unable to handle complexity over time; or they’re great at organizing, but too heavy and unapproachable to be your tool of choice for capture. We’re putting a lot of work into making Stashpad the best option for capturing thoughts and finding them later — so that you don’t get bogged down on either side of this process. As humans, we’re “single core machines” — meaning we can focus on one thing at a time. But our day-to-day involves juggling multiple threads going on at the same time. 
Stashpad is designed to ease the tension between these two realities. When did you start tackling this challenge? We actually first started out on a somewhat different problem. Theo and I noticed at our respective engineering jobs that knowledge sharing between engineers often did not happen as smoothly as one would hope. And specifically, the team wiki, which was supposed to be where a lot of this knowledge sharing happened, often fell short when it came to supplying complete and up-to-date information. So in 2019, we set out to improve knowledge sharing in engineering organizations by building a more approachable wiki. After working on this for a few months, we found that not very many people were interested in our new wiki. We ultimately realized that in people’s day-to-day workflows, their team’s wiki tool was not a major pain point. In fact, it wasn’t really a big part of their workflows at all. We went back to the drawing board and asked around a hundred developers more detailed questions about their knowledge management practices at work. And we found that 90% of them frequently used what we like to call a “barebones” notepad — something like Apple Notes, Notepad++, untitled text files, etc. And that occasionally, certain things would get transferred over from their notepad into collaborative tools like Slack, Jira, or Google Docs when they were worth sharing. That’s when we realized that, subtly, this barebones notepad plays a major role in how we do our work. And we noticed that many developers weren’t very pleased with their setup. The truth is that the notepad hasn’t seen much innovation in the last few decades. So we decided to build a better developer notepad. This was the summer of 2020. So, how does Stashpad work exactly? At the core of Stashpad are bytes and stacks. A byte is a short note. A stack is a kind of byte that can contain other bytes. Stacks let you structure your notes and add hierarchy in whatever way is useful to you.
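To make the data model concrete, here is a hypothetical sketch of the byte/stack hierarchy in Python. This is my own illustration of the model as described, with invented method names, not Stashpad’s actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Byte:
    """A short note. A Byte with children behaves as a stack."""
    text: str
    children: list = field(default_factory=list)

    def stash(self, text):
        """Capture a new note under this stack (hypothetical API)."""
        child = Byte(text)
        self.children.append(child)
        return child

    def find(self, query):
        """Recursively search this stack for notes matching a query."""
        hits = [self] if query.lower() in self.text.lower() else []
        for child in self.children:
            hits.extend(child.find(query))
        return hits

# Home is the top-level stack and the default dumping ground.
home = Byte("Home")
```

The key property the sketch captures is that a stack is just a byte with children, so any note can grow into a container without ceremony, and search can walk the whole hierarchy from Home.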
Navigating around your notes quickly is key. To that end, you can open a stack in a tab, or expand a stack inline. You can pin a stack, and jump to a recent stack. You can search for a stack or for any note, with filters to help narrow your query. You can also capture notes to the right stack without needing to navigate to it. There’s a shortcut to pull up Stashpad from anywhere. Home is your top-level stack. It is where you can create your first stacks. It’s also a good default “dumping ground” for notes when you’re not sure where they should go. You can act on your notes. An advantage of jotting down short notes is that your notes are more modular, making it easy to perform commands on specific bytes. Moving, deleting, re-arranging, formatting, copying to clipboard — these are all actions you can perform via the keyboard. You can access all of the available actions by hitting cmd/ctrl + K at any time in the app. In general, you can do anything in Stashpad from the keyboard. And of course, there’s markdown support and code syntax highlighting. Soon, you’ll be able to customize the keyboard shortcuts to your liking. You’ll also be able to define flexible queries that will dynamically generate stacks with the relevant content. We will offer image support and other file attachments. There will be a mobile app with hosted sync. There will be a public API and hooks for getting content in and out of your Stashpad, as well as integrations with other popular tools. What kind of people use Stashpad? Software engineers at companies like AWS, Coinbase, and Twitter use Stashpad every day to manage their notes and thoughts as they do their work. In particular, people who work on complex projects and who often have multiple threads of work going on at the same time benefit especially from a better working memory solution.
Before finding Stashpad, our users would often rely on a combination of untitled text files, Slack messages to themselves, and even pen and paper as a quick way to capture information as they were working. Some users then paste this information into a more robust long-term knowledge store like Evernote or Notion. Stashpad is at least as fast at capture as these simple options and it works better...
How to figure out a career change
Once upon a time, an organization could take on a young, new employee, and know that with slow and steady development, that individual would loyally climb the career ladder, remaining with them until retirement age. Nowadays, the concept of a career for life has become outdated. Our professional lives are becoming increasingly squiggly, with a new normal that allows us to move frequently and fluidly between not only roles, but careers too. Even those in vocational, so-called lifelong jobs may feel the urge to leave, as I know only too well myself. My transition from working as a doctor to becoming a freelance writer came with huge uncertainty. Thinking about changing careers is likely to provoke inner turmoil, stress, disruption and even feelings akin to a personal crisis. It’s therefore reassuring that following certain strategies can make the process as painless and as exciting as possible. The extinction of the single career With the idea of a lifelong career becoming archaic, there has been much interest in what a modern career journey looks like and what millennials and younger generations will expect. In 2021, research found that 49% of employees “had changed careers from a wide range of industries”. However, in many cases it had taken an individual years to upskill, network and prepare themselves financially for making the change successfully. This preparation may be particularly pertinent when individuals are recognized to be making frequent transitions that cross significant boundaries including industry, occupation, labor market and geographical location. Furthermore, modern career moves are often driven by opportunism: if our current role has little potential for growth or promotion, we will apply for roles elsewhere to ensure professional development. Not everyone seems to be equally suited to career change.
A study by Carole Kanchier and Wally Unruh showed that occupational change is more likely for those who place a higher value on personal fulfillment and intrinsic job rewards. Career changers are also likely to have higher self-esteem than those who stay put. Knowing when it is time to leap It can be difficult to know when it’s time to attempt a career change. However, there are three categories of tell-tale signs that may alert you to the fact that you are ready to move on.
1. Physical signs
Lack of energy. You want to feel that you are getting a buzz from, and thriving at, work, even if the role itself is demanding and you feel tired at the end of the day. If you feel lethargic, drained or apathetic about your role, this could be a sign that, far from inspiring you, your current role has no joy left in it.
Struggling to get out of bed each morning. Tiredness is common, but work should not cause you to dread getting up each morning. Pay attention to this sign, because it could indicate that your current role is having a negative impact on your mental health.
2. Psychological signs
Boredom. Every job has tedious tasks and you will need to accept that each role will have its less enjoyable elements. However, if you always find work boring, it is time to look for alternative jobs or careers.
Poor concentration. Work that does not captivate you is far harder to focus on. If you find yourself taking too many breaks, regularly being distracted by your phone, or making poor progress with tasks, then you could be in a job that does not suit you.
Feeling stuck. If it appears that there is no opportunity for growth or progression, you may feel frustrated or bored. Ask about upskilling, professional courses, sideways moves or promotion. If this confirms that you have hit a wall, consider alternatives.
Dreaming of a new career. If you feel you are in the wrong job, you may fantasize about your dream role. If these dreams are intense or persistent, it could be time to start preparing for a new venture.
Feeling envious. You may notice you feel envious of friends who have a much smaller salary, but clearly have enormous job satisfaction. Money may not be as important to you as you once thought, and considering a lower paid job may increase your options for a career change.
Financial motivation. If you feel you only go to work for the money, then it is likely that you are dissatisfied with your role. It is important to recognize this sign, as you may be able to earn a similar salary with far greater satisfaction elsewhere.
3. Behavioral signs
Reading about other careers. To distract yourself from a role you dislike, you may spend hours reading about the careers of others you admire.
Not talking about your job. If you find you avoid talking about your job at parties, this could be a sign of your dissatisfaction.
Coasting along. If you are not making the effort to perform well, consider whether your heart is truly in your current career.
Lack of interest in your employer. Appreciating a role often leads to interest and emotional investment in the company. If you feel indifferent, perhaps this company is not for you.
Figuring out a career change I was certain that I wanted to transition to a new career, but finding the right alternative took time and taking the plunge was daunting. If you notice one or more of the above signs, there are several ways to figure out your career change so that you can make the move with confidence. Firstly, define your career goals by considering your own personal values, how much purpose you assign to your occupation, and the careers that appeal to you. Write down the skills you have and those that you want to acquire, as well as your financial requirements. Reflect on whether flexibility is important to you, and how happy you truly feel now. Explore alternative career options objectively and be open to both the pros and cons.
This should help to confirm whether you are changing careers for the right reasons.

Next, you will need to accept that career change is often slow. If you want to go freelance, you may need to start a business that you can run alongside your current employment so that you remain financially secure. To transition into a new sector, you may need to consider lower-paid roles that will allow you to gain experience and new skills before you can progress to a more senior position. Commit to expanding your professional network to discover work opportunities that may not be obvious. If this is difficult, you may even find it helpful to work with a mentor or career coach to support you while you metamorphose.

Career change is common in the modern workforce, but when figuring out how, when, and which role to transition to, you may experience extreme stress and internal friction. The shift will not happen overnight, so take your time to reflect on your current job and dream career, while also exploring your motive for change. Even if you have undergone rigorous training or development, if you are unhappy, disillusioned, or bored by your role, persevering is unlikely to be the right choice. By thoroughly investigating the alternatives, you will feel more confident in making a leap of faith and dedicating yourself to a career that you love.

The post How to figure out a career change appeared first on Ness Labs.
How to figure out a career change
Getting everyone on the same page with Michael Villar founder of Height
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us work better without sacrificing our mental health. Michael Villar is the founder of Height, the all-in-one tool for sharing project management across the entire company. Before starting Height, Michael co-founded another productivity startup, and was an early engineer and product designer at Stripe. In this interview, we talked about interconnected companies, how to reconcile project management with personal productivity, how to foster cross-collaboration between different types of teams, how to create “swimlanes” of work, and much more. Enjoy the read!

Hi Michael, thank you so much for agreeing to this interview. What inspired you to build Height?

I’ve always been interested in collaboration tools; I actually founded a previous company in the space called Kickoff. We were acquired in 2013 by Stripe, which is where I saw how project management works at a high-growth company. Spoiler alert: we kept switching tools, from GitHub issues, to Asana, to Dropbox Paper, to Phabricator, to Jira. We kept outgrowing existing tools because they are fairly rigid and enforce workflows, while our company kept growing: new teams would get created, new people would be hired, new workflows would be spun up, and our overall project management strategy kept changing.

When I left Stripe, I knew I wanted to build a project management tool that evolves with your growing company, and that unlocks cross-collaboration between various types of teams instead of, for example, feeling like it was only designed for one type of team, like just engineering.

How would you define an interconnected company?

One trend I’ve noticed is that work is becoming increasingly cross-functional: at modern companies, many people work across multiple projects and teams as needs arise.
For example, a designer might work primarily with the platform team, but may also support projects on the growth team, or even help the recruiting and marketing teams as needed. Or, as another example, in order to launch a new product, people from many teams come together to make the release a success: marketing, engineering, product, design, customer success and support, and sales.

And yet, with the existing project management tools out there, people end up siloed, with a situation like engineering and product using Jira, marketing and comms on Asana, and design on Dropbox Paper. This results in information about different aspects of the project getting scattered across different tools, rather than being centralized in one place to track the state of the entire project. This makes it hard to keep everyone working on a project on the same page, and ends up meaning more meetings to discuss project progress, since there’s no one-stop shop to see how the project is going.

The inverse of this chaotic picture is a truly interconnected company, one which works like a well-oiled machine, shipping high-caliber features and projects quickly, and without any last-minute scrambling or nasty surprises. Company culture helps determine to what level this happens, but using one project management tool that keeps teams in sync, with centralized information about the state of work, can help make that dream a reality.

And how does Height help interconnected companies do their best work?

We’ve really designed Height from the beginning to be a tool that works for every type of team, and to act as a centralized hub for all things project management. This is reflected in what we’ve prioritized building: visualizations like Spreadsheet, Kanban, Calendar, and Gantt; custom attributes; powerful integrations that enable actual workflows; and a robust privacy and sharing model.
[Screenshots: Spreadsheet view, Kanban board, Calendar view, Gantt chart]

Maybe less obvious, but equally important: Height has real-time chat per task. This has been a gamechanger for our customers — chat per task ensures all conversation about work happens in context, is searchable, and only notifies the people for whom the info is relevant, instead of an entire channel worth of people.

The default these days, especially in a remote-first world, is people having conversations about their tasks or work in Slack or Teams. Conversation in these types of tools is not attached to specific tasks; it’s freeform, and topics can (and do!) change frequently, making it easy to lose track of decisions, and hard for remote teams working asynchronously. By keeping all conversation for a specific task in one place — the task — it’s easy for people to get caught up in other timezones, or after coming back from a holiday. And it sounds small, but Height also tells you who read your messages, so you don’t need to ping them individually and ask if they saw your message!

All teams have different workflows, which can be hard to seamlessly integrate together. How does that work in Height?

With Height, we’re building a tool with which you implement your own workflows. Rather than creating an opinionated methodology or system on how project management should be run, we believe there are a million different ways to run a successful company, and Height should accommodate your way of working. For example, a small company just getting going might simply use a few lists of tasks, and assign people to the right tasks. A larger company with multiple engineering teams can organize its work in sprints or releases, and marketing teams can plan their launches with dates and calendars… You get the point. We’re building the features you need to manage your work, and we’re doing so in a way that ensures you can change the way you work over time too.
When you hire a Head of Product or a VP of Engineering who comes in with new ideas for streamlining work or updating workflows to improve efficiency, we’ve made it super easy to reflect those changes in Height fast, through a combination of our super powerful search filters and keyboard actions to bulk-edit tasks. Because Height doesn’t believe there’s one best way to work, companies can change workflows and continue using Height as their own needs and goals evolve.

Team collaboration software can sometimes get so tedious that it creates friction and lowers personal productivity. How does Height address this challenge?

Our goal is to contain the source of truth for all things projects, so it needs to be extremely easy to update that information. When tasks are kept up to date, making decisions becomes a lot more efficient. When you can find all the latest information about a project in one place, including any conversation about it from the people working on it, suddenly you no longer need project status meetings or other synchronous communication.

The way we keep task data in Height up to date starts with making it incredibly easy to create and manage tasks. Think about how easy it is to write bullet points in a document — we’ve effectively replicated that same experience in Height, just making your tasks exist in a structured form. To get into more granular detail, here’s how we brought the ease of making tasks in documents to Height: press `Enter` to edit and create tasks, `Tab` to create a subtask, use all the keyboard shortcuts you would expect, multi-select to edit tasks in batch, and more. We also introduced the Command palette, in which you can run any feature right from the keyboard, and you can assign any custom shortcut to any command, which makes it even more powerful.

Secondly, you keep task data up to date by making it incredibly easy for the ICs who are doing the work to update their tasks.
As a bit of an aside: when it takes extra effort, or five clicks, to update your task, what ends up happening is you either don’t keep tasks updated, or you update your tasks on Friday afternoon just before your stand-up meeting. This means the task data is no longer fresh, and that other stakeholders have probably been DMing you to ask about project status all week. When people no longer trust the shared task management tool to be an up-to-date source of truth on project status, that tool has become dead weight without providing its real value.

The most important way we’ve found of helping ICs keep their tasks up to date in Height is by investing in powerful integrations. Every integration we build, we build with the question: how do we make this the most useful version of itself? For example, our GitHub and GitLab integrations allow you to automatically update tasks from a pull request. You can customize when task statuses should change and to what, so that when you link a task to a pull request, it automatically changes the task status to “In progress” (or whatever custom status your team uses), and similarly, when you merge a pull request, the task is marked as “Done”. This makes it super easy for your colleagues in product and support to stay on top of what’s happening without DMing you or having to figure out how to navigate through GitHub/GitLab to see updates.

What kind of people use Height?

It’s still early days for us, but already companies from 1 to 1,000 employees use Height every day, many of which grew 10x or more while using Height. From established tech companies, to early-stage web3 startups, to dev and marketing agencies, each of these companies has different teams using Height, including engineering, marketing, ops, support, HR, IT, design, legal, and product teams. None of these companies are the same, and this is exactly what we are striving for.

And how do you personally use Height?
We organize projects and tasks with lists, one of the more unique features of Height. A task can belong to many lists, which makes it easy for teams to cross-collaborate, but also easy to find these tasks. We have lists for features (e.g. #feature-chat, #feature-filters), bug triaging, quick improvement sessions, customer requests (e.g. ...
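The pull-request-driven status flow Michael describes (link a PR to a task to move it to “In progress”, merge the PR to mark it “Done”) can be sketched generically. This is a hypothetical illustration, not Height’s actual API: the `Task` class, the `STATUS_ON_EVENT` mapping, and `handle_pr_event` are all invented for the example.

```python
# Hypothetical sketch of a PR-driven task status workflow.
# A team-configurable mapping decides which pull-request events
# change a linked task's status, and to what.

STATUS_ON_EVENT = {
    "opened": "In progress",  # linking/opening a PR starts work
    "merged": "Done",         # merging a PR completes the task
}


class Task:
    """Minimal stand-in for a project management task."""

    def __init__(self, name: str, status: str = "Backlog"):
        self.name = name
        self.status = status


def handle_pr_event(task: Task, event: str) -> str:
    """Update a linked task's status when its pull request changes state.

    Events with no mapping (e.g. a comment) leave the status unchanged.
    """
    new_status = STATUS_ON_EVENT.get(event)
    if new_status is not None:
        task.status = new_status
    return task.status
```

In a real integration the events would arrive as webhooks from GitHub or GitLab, and the mapping would be the per-team customization the interview mentions; swapping `"In progress"` for a team’s custom status is a one-line change.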