Public Health & Medicine

Science is getting harder
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. Audio versions of this and other posts: Substack, Apple, Spotify, Google, Amazon, Stitcher. One of the most famous recent papers in the economics of innovation is “Are Ideas Getting Harder to Find?” by Bloom, Jones, Van Reenen, and Webb. It showed that more and more R&D effort is necessary to sustain present rates of technological progress, whether we are talking about Moore’s law, agricultural crop yields, healthcare, or other proxies for progress. Other papers that look into this issue have found similar results. While it is ambiguous whether the rate of technological progress is actually slowing down, it certainly seems to be getting harder and harder to keep up the pace. What about in science? A basket of indicators all seem to document a trend similar to what we see with technology. Even as the number of scientists and publications rises substantially, we do not appear to be seeing a concomitant rise in new discoveries that supplant older ones. Science is getting harder. Before diving into these indicators, I want to head off one potential misunderstanding. My claim is that science is getting harder, in some sense, not that science is ending or that we are on the verge of running out of ideas. Instead, the claim is that discoveries of a given “size” are harder to bring about than in the past.

Raw paper output

We’ll actually start with an indicator that shows no evidence of a slowdown though. Since scientists primarily communicate their discoveries via papers, the first place to look for evidence of increasing difficulty of making discoveries is in the number of papers scientists publish annually. The figure below, drawn from Dashun Wang and Albert-László Barabási’s (free!) book on the Science of Science, compares publications to authors over the last century.
From Wang and Barabási (2021). At left, we can see the number of papers and authors per year has increased basically in lockstep over the twentieth century. Note, the axis is a log scale, so a straight line indicates exponential growth. Meanwhile, at right, the blue dashed line shows that the number of papers per author has hovered around 2 for a century; rather than falling, it is actually on the rise in recent decades. (As an aside, the solid red line at right is strong evidence for the rise of teams in science, discussed more here.) So there is absolutely no evidence that scientists are struggling to find stuff worth writing up. But that’s not definitive evidence, because scientists are strongly incentivized to publish, and what constitutes a publishable discovery is whatever editors and peer reviewers think is publishable. If fewer big discoveries are made, scientists may just publish more papers on small discoveries. So let’s take a more critical look at the papers that get published and see if there are any indicators that they contain smaller discoveries than in the past.

Nobel Prizes

Let’s start by looking at some discoveries whose importance is universally acknowledged. The Nobel prize for discoveries in physics, chemistry, and medicine is one of the most prestigious scientific prizes and has a history long enough for us to see any long-run trends. Using a publicly available database on Nobel laureates by Li et al. (2019), we can identify the papers describing research that is eventually awarded a Nobel prize, and the year these papers were published. Note that several papers might be associated with any given award. For each award year, we can then ask: what share of the papers related to the discovery were published in the preceding twenty years? The results are presented below, though I smooth the data by taking the ten-year moving average.

Share of papers describing Nobel-prize-winning work, published in the preceding 20 years.
10-year moving average. Author calculations, based on data from Li et al. (2019). Prior to the 1970s, on average about 90% of the papers associated with an award had been published in the preceding twenty years. But by 2015, the ten-year moving average was closer to 50%. So recent discoveries seem to have a harder time getting recognized as Nobel-worthy, relative to a few decades ago. We can also compare the importance of different discoveries that won Nobel prizes. In 2018, Patrick Collison and Michael Nielsen asked physicists, chemists, and life scientists to pick the more important discovery (in their field) from sets of two Nobel-prize-winning discoveries. For example, they might ask a physicist to say which is more important: the discovery of Giant Magnetoresistance (awarded the Nobel in 2007) or the discovery of the Compton effect (awarded in 1927). For each decade, they look at the probability that a randomly selected discovery made in that decade would be picked by their survey respondents over a randomly selected discovery made in another decade. The results are below:

Probability a discovery in a given decade is rated more important than a discovery in another decade. From Collison and Nielsen (2018).

A few points are notable from this exercise. First, physicists seem to think the quantum revolution of the 1910s-1930s was the best era for physics and it’s been broadly downhill since then. That’s certainly consistent with discoveries today being, in a sense, smaller than the ones of the past, at least for physics. In contrast, for chemistry and physiology/medicine, the second half of the twentieth century has outperformed the first half. Within the second half of the century, there is no obvious trend up or down for chemistry and medicine in the Nobel prize data. While that’s better than physics, it remains consistent with the notion that science might be getting harder.
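To make the “share of recent papers” indicator concrete, here is a minimal sketch of the calculation in Python. The award records below are invented for illustration; they are not the Li et al. (2019) data, and real analysis would also need to handle prizes with shared papers and missing publication years.

```python
def share_recent(awards, window=20):
    """For each award year, compute the share of the prize's associated
    papers that were published in the preceding `window` years."""
    return {
        year: sum(1 for p in papers if year - window <= p < year) / len(papers)
        for year, papers in awards.items()
    }

def trailing_average(series, window=10):
    """Trailing moving average over a {year: value} series."""
    years = sorted(series)
    out = {}
    for i, y in enumerate(years):
        start = max(0, i - window + 1)
        chunk = [series[years[j]] for j in range(start, i + 1)]
        out[y] = sum(chunk) / len(chunk)
    return out

# Toy data: award year -> publication years of the prize-winning papers.
awards = {1960: [1945, 1950, 1955], 1961: [1930, 1955], 2015: [1980, 1990, 2010]}
shares = share_recent(awards)       # e.g. shares[1960] == 1.0, shares[1961] == 0.5
smoothed = trailing_average(shares)
```

A falling `smoothed` series over award years is exactly the pattern the chart above shows: a shrinking share of prize-winning papers published in the two decades before the award.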
As we can see in the first figure here, the number of papers and scientists rose substantially between 1950 and 1980, which naively implies that the number of candidates for Nobel-prize-winning discoveries should also have risen substantially. If we are selecting the most important discovery from a bigger pool of candidates, we should expect that discovery to be judged more important than discoveries picked from smaller pools. But that doesn’t seem to be the case. So Nobel prize data is also consistent with the idea that discoveries today aren’t what they used to be. Whereas it used to be quite common for work published in the preceding twenty years to be recognized for a Nobel, that doesn’t happen nearly so much today. That said, an alternative explanation is that the Nobel committee is just trying to work through an enormous backlog of Nobel-worthy work which it wants to recognize before the discoverers die. In this explanation, we’ll eventually see just as many awards for the work of today. But it’s not clear to me this is how the committee is actually thinking: recent work is still awarded quickly when the committee thinks the discovery is sufficiently important. For example, Jennifer Doudna and Emmanuelle Charpentier were awarded a Nobel for their work on CRISPR in 2020, less than a decade after the main discoveries. And when you look specifically at the work performed in the 1980s, it doesn’t seem particularly notable, relative to work in the 40s, 50s, 60s, and 70s, despite the fact that many more papers were published in that decade.

Top Cited Papers

Still, perhaps the Nobel prize is simply too idiosyncratic for us to learn much from. Next, let’s look at another indicator of big discoveries, one which shouldn’t be biased by the sort of factors peculiar to the Nobel: membership among the most highly cited papers in a given field.
For example, if we look at the top 0.1% most highly cited papers of all time in a particular field, we could ask how easy it is for a new paper to join their ranks. If that has fallen over time, then that’s further evidence that today’s papers aren’t making the same contributions as yesterday’s. On the other hand, we might think it should get harder and harder to climb to the top 0.1%, even if discoveries are not getting smaller. After all, if discoveries are of constant size, earlier works have more time to get citations; it may not be possible for later papers to catch up, even if they are just as good. But there are also some factors that lean in the opposite direction. First, if work is only cited when relevant, then newer work should have an easier time being relevant to newer papers. Since the number of new papers grows over time, that gives one advantage to the new; they can be tailored to a bigger audience, in some sense. Second, the most esteemed papers of all time may actually stop being cited at high rates, because their contributions become part of common knowledge: it is no longer necessary to cite Newton when talking about gravity, or even Watson and Crick when asserting DNA has a double-helix shape. So let’s see whether there has been any change in how easy or hard it is to become a top-cited paper, noting this won’t be the last piece of evidence we look at. The closest paper I know of on this question is Chu and Evans (2021), which looks at the probability of a new paper ever becoming one of the top 0.1% most cited, even for just one year. But this paper does not plot this probability against time, like the previous charts: instead, it plots it against the size of a field, measured by the number of papers published per year. In the scatterplot below, each point corresponds to a field in a year.
On the horizontal axis is the number of papers published in the field in that year, and on the vertical axis the probability that a paper in that field and year is ever among the top 0.1% most cited. The colored lines are trends for each of these ten fields. Note this figure only includes papers published in the year 2000 or earlier. Since the analysis is conducted with data from 2014, every paper has more than a deca...
Audio: Science is getting harder
This is an audio read-through of the initial version of Science is getting harder. Like the rest of New Things Under the Sun, the underlying article upon which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Building an extension of your mind with Tobias van Schneider co-founder of mymind
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better, grow our knowledge, and be more creative without sacrificing our mental health. Tobias van Schneider is the co-founder of mymind, a privacy-first tool designed to work like your actual mind. And they are serious when they say they care about privacy: it means no social features, no collaboration, no vanity metrics, no tracking, and no ads. In this interview, we talked about the memex as described by Vannevar Bush in 1945, how our mind doesn’t think in lists and folders, the use of artificial intelligence to catalog and find the information that matters to us, the concept of memory indexing, how we might think differently if no one was watching, and more. Enjoy the read! Hi Tobias, thank you so much for agreeing to this interview. First things first, what inspired you to build mymind? Thank you for having me! Like anything we have created, mymind came from a need and frustration of our own — “our” being me and my business partner, Jason. For better or worse, much of our life is intertwined with the Internet. We save posts on Instagram with home or design inspiration. We save tweets on Twitter with quotes or memes we want to hold onto. We keep tabs open in our browser with articles we want to read or research for a project. We put notes in an email or in one of a handful of notes tools. We save photos and screenshots to our Camera Roll of anything and everything. Everything we deemed important was scattered, buried in clutter or forgotten and lost forever. Many tools promised to help organize one or more of those things — and we tried every one of them. We would temporarily feel some small sense of satisfaction keeping up with each new tool, creating folders and complex organizational systems, only to ultimately abandon it. What we wanted was simple: If we see something we like and want to remember, we want to save it within a second.
And if we’re trying to remember it later, we want to be able to find it within seconds. That’s it. Many people will relate to those pain points. More specifically, what do you think is the problem with the way existing tools approach note-taking and knowledge management?  Our tools are too complicated, bloated, outdated or specific. They ask us to learn new systems and keep up with them, managing folders, adding tags, creating categories, managing filters and curating our content. We are forced to create a Frankenstein system pieced together from a mix of tools that all do different things. We can’t necessarily find anything with this system, but we feel some pleasure in managing our chaos. Then we’re confronted with an aberration that doesn’t fit our perfectly calculated system, and it all falls apart. And how is mymind different? Rather than requiring you to adopt new mental models, mymind is meant to get out of the way and support the way your brain already works. We don’t think in lists and folders, we think visually and chaotically. Especially in our fast-paced world today. With a fire-hose of ephemeral information blasting at our brains every day, our minds are constantly jumping from one thought, image or piece of information to the next. Folders, lists, categories, tags, boards, groups, filters and other structures are antiquated systems that can’t keep up with the pace of our minds and our current lifestyles. mymind doesn’t ask or require anything from you. Put simply: it’s one place to save everything, without worrying about where it goes or what category it fits into. It’s one place to find everything, without wondering where you put it. It’s meant to remove all the friction from our workflow and allow us to just flow. So it works in a similar way to our actual mind.  That’s the idea. A few years ago, we discovered this essay published in 1945 by engineer and inventor, Vannevar Bush. 
It was titled “As We May Think.” Bush predicted the modern information age and outlined an abstract machine called “the memex.” He envisioned a mechanical device (computers did not yet exist) that would help people collect and sort information. He wrote: “Consider a future device (…) in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.” That last line, “enlarged intimate supplement to his memory,” really stuck with us. Of course, we have computers today – which are likely far beyond anything Bush ever imagined. We have Google, which gives us all the information we could ever ask for and more. Yet what we still didn’t have was an intimate supplement to our own memory. A Google for our own minds, with all of our personal dreams, inspiration, memories, ideas and notes within reach. Fun fact: we originally called it As We May Think, before eventually opting for the simpler “mymind”. mymind uses artificial intelligence to catalog and find the information that’s important to us. It gives us visual cues to help us identify and find that information. It works with any trigger we can recall – an image, a word, a theme — to help us find what we need. It is, for all intents and purposes, an intimate supplement to our memory. Talking about memory, can you tell us more about the concept of memory indexing? Even scientists don’t fully understand the mysteries of the human mind. We’re certainly not scientists ourselves, but we do know this: our memories are scattered all over different parts of our brain, depending on the type of memory. We rely on connections between neurons to complete the picture. Those connections are stronger if they are related to a strong emotional or sensory experience. In recounting a conversation you had with a friend, you might recall the visual interior of the coffee shop.
Or the smell of the coffee you were drinking. Or the sound of the music playing over the speakers. These are all data points in your mind. If you can access one of them, they can trigger each other so you can eventually recall the entire memory. The fewer access or trigger points a memory has, the harder it will be for you to recall it. It’s that moment when you’re telling a story and trying to recall the name of a book you read or quote you appreciated, but you can’t quite reach it. If someone threw triggers at you, it might help: What color was the book? What was the author’s name? What picture was on the cover? Like stepping stones across a stream, every trigger point leads you closer to your memory. And mymind, in essence, works the same way. It uses contextual and visual clues to provide trigger points to your memory. So if you can remember any piece of something you want to remember, you can find it instantly in mymind. That’s fascinating. Some people do like a bit of organization. How can people organize their extended mind? Everything in your mind is already organized. It just looks different than your standard folder and file structure. mymind uses artificial intelligence to automatically index everything you save inside it. It analyzes an image of a car and recognizes the car, the type of car it is, the color of the car, the brand, everything in the background behind it. It scans an article you save and extracts key words and phrases that you’re most likely to recall, giving you trigger points when you want to find it later. It reads the words in the notes you save, whether they’re typed in our notes editor or a picture of your own handwriting. All of this is filed away behind the scenes, so you don’t have to categorize or organize it yourself. From there, you can organize it instantly in infinite ways. You can search for a color and mymind will show you everything you’ve ever saved with that color.
You can search for an object like a tree, and mymind will show you every visual, note or screenshot containing the word or image of a tree. You can search for a keyword like “interior,” add an additional keyword like “wood” and you’ll see only the images you’ve saved with wood interiors. The possibilities are much more fluid and current than anything you would organize yourself. As I mentioned before, we believe folders and traditional organizing of that nature is a thing of the past. What we think about, work on or consume today doesn’t fit so neatly in one category or folder. It’s ephemeral, fluid and dynamic. Plus, managing these old systems is exhausting. We spend hours managing our information and creating a false sense of productivity, when all we’re really doing is moving data around. However, we understand there are some cases where you want some input on this organization. For those cases, you can use tags. If you’re saving random notes from your French lessons, for example, you might tag every card with “french” so they’ll all turn up together in your search. Or if you’re working on a design project and collecting article research, visual inspiration, colors and images, you can tag all of them with “project x” so you can instantly see it all in one place when you need it. We also understand that it will take some time to fully undo the old way of organizing ourselves. It’s our hope that these small gestures in mymind, like tags, will bridge the gap as people adapt to an evolved way of organizing their lives. You describe mymind as “private first” — what does that mean exactly? Our relationship with technology today is not healthy. Every app asks you to share your activity, your data, your ideas, your content, your creations, your hot takes, your selfies… We are constantly curating and performing for an audience. Constantly collaborating. Sharing to a feed. Looking for engagement and being pressured to engage. 
The nature of our apps today also changes how we use them. We were curious how we might behave or think differently if no one were watching. Would we be i...
Servant leadership: why being a servant leader is worth the work
Servant leadership may sound like a contradiction in terms. Isn’t the role of a leader to guide and manage, rather than follow and serve? However, being a leader and being of service are not only compatible, their combination can lead to better outcomes than the sum of their parts. Instead of blindly following organisational goals, servant leaders prioritise the well-being and development of individuals within their team. This results in better engagement, better mental health, and better personal growth. Despite its many benefits, servant leadership can be time consuming, requires a lot of mental energy, and can lead to slow decision making, so it’s important to mitigate the few challenges associated with managing people this way.

The role of the servant leader

The concept of servant leadership has been around for millennia, but it was Robert K. Greenleaf who first coined the term in his 1970 essay The Servant as Leader. Greenleaf described the characteristics of servant leaders, and provided examples of how such an approach to leadership could make a substantial difference to society. The first peer-reviewed servant leadership scale was not published until after Greenleaf’s death. In 1998, Richard S. Lytle, Peter Hom and Michael Mokwa developed the “service orientation scale”, in which ten factors were identified as forming the core principles of servant leadership. As a servant leader, you should… Be empathetic. If you know your team well, you will understand their strengths, weaknesses, and what they need. Learning more about who you are working with, and taking an interest in them as a person, will help you create a psychologically safe space for your team. Actively listen. Giving a team member your full attention shows that you value their contributions. This avoids employees feeling that they are passive workers whose thoughts and feelings are not appreciated. Create an environment that promotes healing.
If the work environment feels healthy, your team will not only be happier, but more productive as well. Be aware of other people’s mental, emotional, and physical needs, and create a workplace that allows people to practise self-care. This will help those who have had a negative experience in a previous job to heal and grow in their new role. Be self-aware. Practise self-reflection to identify your own strengths and weaknesses, and define how you can contribute to the team and organisation overall. Be honest about your limitations, as there may be someone else in your team whose strengths can fill these gaps, resulting in better team success. Be persuasive. Without being coercive, servant leaders should encourage people to take the desired actions. Without falling into the trap of the middle ground, aim to build agreement and encourage shared drive within your team. Conceptualise the vision. Make sure that everyone in the team can visualise the overall goal, so that people can clearly see the direction they should be taking. Having a “North star” can be especially helpful when faced with complex decisions. Work on foresight. Through experience, servant leaders should be able to anticipate the future and avoid unnecessary hurdles for their team members. This process may involve exploring the weaknesses of previous projects to identify what went wrong and what can be improved upon moving forwards. Be an effective steward. Instead of shouting orders, lead your team by example, setting the tone and taking responsibility for your own actions. This aspect of servant leadership helps to build trust and respect amongst team members. Commit to the growth of people. We are more than just our jobs. If you want your team to work effectively you will need to invest in team members as people. Provide appropriate growth and development opportunities — even when it doesn’t seem directly related to the job — and make sure to support  their individual career dreams. Build community. 
Having a sense of belonging will help the team get more done. Encouraging authentic relationships within your team is vital to make work more enjoyable, more creative, and more productive. As you can see, being a servant leader is not an easy feat. In addition, it comes with a set of specific challenges that need to be considered in order to make the most of this unique leadership style.

The challenges of servant leadership

In a paper exploring the neuroscience of servant leadership, Grant Avery, a leading expert in project management, reported that servant leadership is ideal for complex projects due to the increased engagement and improved collaboration of teams. Furthermore, he explains how servant leadership can address the problem of social pain at work. He wrote: “Recent neuroscientific research on how social pain is experienced by the brain suggests that social pain — a phenomenon that is often triggered by unthinking managers — is highly damaging to teamwork and problem solving in projects. Servant leadership reduces social pain and, so, also the damaging effect on individual and team effectiveness that it creates.” Vaneet Kashyap and Santosh Rangnekar also linked servant leadership with improved overall employee wellbeing, a key factor in deciding to remain with an organisation. Being a servant leader may therefore help to reduce staff turnover. However, although servant leadership may help to inspire trust, generate results, and motivate action from teams while preserving their mental health, it’s not without its risks. As with many management styles, it takes a significant amount of time to develop servant leadership skills. During this time, decision making is likely to be much slower. At first, the team may also find it hard to adapt to a leader who may appear to have less authority, and may wrongly consider such a servant leader to be “weaker” than others.
However, this only becomes a problem in teams that have low motivation and cohesion. Servant leadership can only ever be as effective as the leader’s motivation allows it to be. Being a servant leader necessitates a high level of authenticity, and retraining to work in this way requires hard work. This management style does not have a specific end goal — boom, you’re a servant leader! — and is instead one of constant fine-tuning and development, which can be frustrating for some. But the many benefits make it worth a try.

How to practise servant leadership

We have seen that there are many benefits to servant leadership, as well as some risks. The following strategies may help you develop as a servant leader in a safe manner. Preach by example. Instead of giving people a set of instructions, demonstrate the behaviours you want to see in your team members. Explain why your colleagues’ work is essential. When the team understands the bigger picture, they will recognise how their work fits within the vision. This helps to drive engagement and motivation. Encourage collaboration. Working together effectively improves problem solving, creates efficient processes and will lead to great innovation and success. Don’t be a bottleneck to collaboration: remove yourself from the equation, and let your team members brainstorm ideas together. Support growth and development. Help your team members develop professionally and personally to boost the range of hard and soft skills within your team. This could mean giving them an educational stipend, letting them take time off to attend conferences, or asking them to regularly share what they recently learned with the rest of the team. Show empathy and compassion. An individual who feels that their thoughts and emotions are being considered will be happier, more effective, and more likely to stay in their role. Make space for honest, vulnerable conversations with your team members. Ask for feedback.
Encourage an environment of open discussion, and ask for honest feedback from your team. Whether it’s about processes or communication, these suggestions will help you to improve the way you manage them. Take care of your own health. It’s easy to forget about your own well-being when you are so focused on taking care of your team members. A servant leader needs to be able to support their team over the long run, which means they need to take care of their own mental and physical health. Take breaks, recharge your batteries, and make sure you have time and energy to do the things you love outside of work. Training can help you learn how to apply these strategies and become a servant leader. However, it’s not easy to switch to an organisational model built on the principles of servant leadership. Dr Nathan Eva and colleagues wrote: “Regardless of the quality of a training program, we contend that it is unlikely that self-centred, dogmatic, narcissistic people can be trained to be other-centred, sensitive, empathetic, socially sensitive servant leaders.” They added: “As with virtually every major organisational change, moving an organisation from a command and control culture to one based on servant leadership will take several years to complete. Thus, organisations attempting to implement servant leadership cultures need to be patient.” While it requires a bit of patience and determination, servant leadership can help tackle many of the challenges faced within the modern workplace, such as burnout and emotional exhaustion. Adopting this management style can boost business performance and ensure employees feel valued and respected. The post Servant leadership: why being a servant leader is worth the work appeared first on Ness Labs.
How to switch from Roam Research to Logseq
Ever since Roam popularised networked thinking, a few alternatives have become available. Logseq is a local-first, plain-text, block-based outliner app similar to Roam Research. While partly inspired by Roam, Logseq is not just a clone; it has features that make it a unique tool for thought. If you are considering switching from Roam to Logseq, read on to explore key considerations and steps to ensure the transition goes smoothly.

Why you may want to switch from Roam to Logseq

While they may seem similar, Roam and Logseq have fundamental differences that may convince you to make the switch.

Open Source

Unlike Roam, Logseq is open-source software. There is no guarantee that the software companies you depend on these days will not go out of business. With open-source software, you have peace of mind that you can still access your notes, even if the Logseq team calls it quits tomorrow. Your notes will outlive these companies.

Built-in features

Logseq offers a few built-in features that make it stand out from Roam, such as: Built-in spaced repetition system. By adding #card to a block, you can turn it into a flashcard. You can also type “/cloze” if you want to make cloze deletion flashcards. Reviewing your flashcards is as simple as clicking on Flashcards in the left sidebar. Advanced task management. While Roam can handle task management by creating to-do lists and ticking them off, it is not as advanced as Logseq’s. Logseq adds to Roam’s task management features by allowing you to differentiate between task states, prioritise your tasks, add deadlines and schedules, and track how long it took you to complete a task. You can read more about task management in Logseq here. Built-in PDF highlighter. By dropping a PDF file in your notes, you can open it in Logseq and highlight text. These highlights can be used as block references, making Logseq suitable for researchers reading PDF journals.
Researchers may also appreciate that Logseq can link directly to their PDFs with the Zotero integration. While these workflows can be replicated in Roam, these features are built into Logseq and do not require you to use workarounds or plugins. Plain-text notes If accessing your notes with other apps is crucial, you may consider switching to Logseq, a plain-text-based app. While most plain-text-based apps do not support the block structure implemented in Roam, Logseq does, which means you can access your notes both with Logseq and with other apps that can read plain text. Instead of relying on proprietary formats, Logseq can ensure the durability of your ideas. Security Because your notes are stored on your computer, you do not need to worry about their security: only you have access to them. Though with Logseq you lose out on the sync feature of hosted options, you can still keep your files in a cloud service like Google Drive or Dropbox and sync your notes across your devices securely. Why you may want to stick with Roam As you have seen, there are some fundamental differences between Logseq and Roam. However, if these features are not relevant to your core concerns, you may not necessarily want to switch. You may consider sticking with Roam if the following features are essential to your workflow: Collaborative knowledge management. Because your graph is hosted on Roam’s servers, you can share your notes with others for collaboration. Logseq uses local files and is not collaboration-friendly; it is better suited to building your own digital garden. Syncing between apps. If you want to access your Logseq graph on other devices, you may need to use cloud services to set up your sync configuration. Because Roam hosts your notes for you, you can access your notes with multiple devices without any tinkering. Kanban boards, Pomodoro timers, and tables.
If features such as Kanban boards, Pomodoro timers and tables are essential to your workflow, you may consider sticking with Roam instead of migrating to Logseq. These features do not transfer nicely to Logseq, and if your notes rely heavily on them, it might not be worth it to migrate. As usual, you should not switch if you do not need to. While the shiny toy syndrome might tempt you to switch your tools for thought, switching apps can cost you time and energy that could be spent refining your craft. How to migrate from Roam to Logseq in three steps If you have decided you want to switch from Roam to Logseq, you only need to go through three simple steps to migrate. 1. Export your files from Roam Open your Roam database and click on the three dots in the top right corner. Then, select the “Export All” option, set the export format to JSON, and export all your notes. Roam will download your database into a zipped folder, which you need to unzip. At this point, you should have a JSON file that is ready to be imported into Logseq. 2. Set up Logseq Next, launch Logseq. You can either download the desktop app or use the web version of Logseq by opening a local folder. If you want to use the web app, open a local folder by clicking on the open button at the top right corner. You can then create a new folder or open an existing folder. This is where Logseq will store all your Markdown files. Just follow the same process if you download the desktop app. 3. Import your notes into Logseq Once Logseq launches, click the three dots in the top right corner and select “Import”. There will be an option to import a JSON export of your Roam graph. This process might take some time, depending on the size of your Roam graph. Getting used to Logseq Great job, you are now all set up with Logseq! If you have previously used Roam, you might feel at home with Logseq.
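As a quick sanity check on the migration, the JSON file you exported in step 1 can be inspected programmatically. Here is a minimal sketch, assuming the usual layout of a Roam export (a top-level list of pages, each with a "title" and optional nested "children" blocks); the function names are my own:

```python
import json

def count_blocks(blocks):
    """Recursively count blocks, including nested children."""
    total = 0
    for block in blocks:
        total += 1 + count_blocks(block.get("children", []))
    return total

def summarise_export(path):
    """Print a page and block count for a Roam JSON export."""
    with open(path, encoding="utf-8") as f:
        pages = json.load(f)
    blocks = sum(count_blocks(p.get("children", [])) for p in pages)
    print(f"{len(pages)} pages, {blocks} blocks")
```

Comparing these counts with what Logseq reports after the import can reveal pages that failed to come across.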
Some features, such as alt/option-dragging to create a block reference, page and block backlinks, and the sidebar, are implemented the same way. However, there are some small differences you need to know about when making the switch so you can get used to them. Daily Notes page vs Journal. In Roam, you are greeted by the Daily Notes page as you open the app. Logseq, however, calls this Journals. They both work the same way, act as a frictionless entry point for note-taking, and can be accessed via the left sidebar. Case-sensitive links. In Roam, “Productivity” and “productivity” are two different pages. In Logseq, “Productivity” and “productivity” are considered one and the same page. Query syntax. Queries are done differently in Logseq, which uses its own syntax. You can learn more about it here. Aliases. Unlike Roam, Logseq lets you use aliases to link to a page with a different word. For example, by writing “alias:: latte” in the first block of the page [[coffee]], you can search for [[latte]] or click on [[latte]] to be taken to the page [[coffee]]. You can also add multiple aliases by separating them with a comma, such as “alias:: latte, americano, espresso, mocha”. Readwise integration. Unlike Roam, Logseq does not have a way to integrate automatically with Readwise yet. If you want to connect Readwise to Logseq, you can do so via Obsidian. Launch Obsidian and pick the same vault you used for Logseq. With the official Readwise Obsidian plugin, pages for your Readwise notes will be created in Obsidian, which also show up in Logseq as both apps share the same folder. Linking unlinked references. In Roam and Logseq, you can check unlinked references to a page at the bottom of the page. This allows you to connect your notes in meaningful and surprising ways and helps you to generate new ideas and thoughts.
However, unlike in Roam, where linking your unlinked references is as easy as clicking the link button, you need to manually select the term and type in double brackets to link it. Whether you decide to migrate from Roam to Logseq or stick with Roam, what matters is not the tool you use but how you use it to shape your thinking. If you would like to learn more about using Logseq, join our community’s digital gardening support group. The post How to switch from Roam Research to Logseq appeared first on Ness Labs.
How to switch from Notion to Roam Research
Are you considering switching from Notion to Roam? Both have their advantages: Notion is an all-in-one workspace with features such as tables and templates, while Roam focuses on helping you connect your ideas and explore the connections between them. As a general rule, Roam is best for gardeners, while Notion is best for architects. If you feel like you could benefit from a different way to approach your notes, read on to explore key considerations and steps to ensure the transition from Notion to Roam goes smoothly. Why you may want to switch from Notion to Roam Beware of the shiny toy syndrome: switching tools always requires a bit of work, and it may not always be worth the hassle. If you are wondering whether you should migrate from Notion to Roam, here are some key differences to consider before making the switch. Serendipity In Notion, to look up a piece of information, you need to go to a project folder, open the specific page, and unroll the particular header. If you’ve tried using Notion and don’t like the structure it imposes on you, Roam may be a better fit: it encourages you to explore your knowledge graph, increasing serendipity as you organically create connections between your notes to generate new ideas. If you’re more of a gardener than an architect, you might want to switch to Roam. Creativity Notion is better for organising information, while Roam is better for creative exploration. Because you don’t need to structure your notes in Roam, patterns emerge organically over time. By exploring your notes and these patterns, Roam helps you to practise combinational creativity and idea sex. Frictionless note-taking Notion’s file-cabinet approach requires you to think about where to put a note before you even create it. Roam allows you to skip this step and go straight to note-taking. Simply dump the note into the Daily Note and organise it later.
Instead of deliberating about where the note should fit and how exactly you will use it later, Roam helps you to focus on note-taking by making it as frictionless as possible. It’s also easier to create a new note in Roam, where you can use the search bar or even create one inline by typing double brackets. In addition, Roam becomes more useful over time, as more references are effortlessly created as you write new notes. Bi-directional linking One of the main features that make Roam different from Notion is its approach to bidirectional linking. You can link to an existing page or create a new page by using double brackets or a hashtag. Even a page without any content of its own can have other notes linked to it, and it’s effortless to refer to those pages under the linked references. While the linked references themselves are powerful, Roam can also show pages that mention the term of your current page but haven’t been linked yet. You can then link these pages back to the current page by clicking on Link. You can also do this at the block level, where each bullet point can be linked to other pages and you can easily look at the pages it links to, similar to the page-level implementation mentioned above. Notion also has bi-directional linking, which they call backlinks. These links are created by using @ mentions targeting specific pages. While Roam shows your backlinks at the bottom of each page, Notion shows them at the top, below the page title. These backlinks are, however, hidden by default in Notion and must be clicked to be revealed. Research Compared to most note-taking apps, Roam’s ability to help you collect and connect information is second to none. This is why Roam is particularly suitable for research work.
With the sidebar panel, you can open many pages at once to view and expand on your research across several workstreams. Roam also makes it easy to cross-reference content with bi-directional links and block references. Journaling Like note-taking, journaling is also frictionless in Roam. One of the ways to journal in Roam is by documenting your day in the Daily Notes page. Because Roam allows you to create timestamps, it is a suitable app for practising interstitial journaling. Why you may want to stick with Notion As we have seen, there are some fundamental differences between Notion and Roam. However, if these features are not relevant to some of your core concerns, you may not necessarily want to switch. You may actually consider sticking with Notion if the following features are essential to your workflow. Productivity workflows When it comes to productivity workflows, there are endless possibilities with Notion. With task lists, calendars, Kanban boards, countless templates from the community, and multiple database views, Notion excels in creating the perfect workflow for your needs. Databases in Notion can be linked to other documents, filtered for different views, and expanded to provide additional context. While Roam offers features such as task lists and Kanban boards, it is more focused on exploratory knowledge management than productivity workflows. Collaboration One of Notion’s main features is powerful collaboration. Notion allows you to create a team dashboard or project management list and share it with members of your team. Sharing options are granular as well: you can give access to documents and revoke access as needed. Roam, on the other hand, offers limited options for sharing your notes, such as sharing the whole database with an individual, making your database public, or sharing individual pages. The latter option, however, can make the rest of your database vulnerable if a tech-savvy person wants to snoop around.
Integrations Unlike Notion, Roam does not have a public API. If you rely on integrations with other apps in your workflow, it can be hard to switch to Roam, as it requires you to manually copy information into Roam. How to easily migrate from Notion to Roam in three steps As we explored earlier, Notion and Roam are very different from each other. Fortunately, it’s not hard to switch between the two note-taking apps. 1. Export your Notion notes Open the database containing the notes you want to export. Then, click the three dots in the top right corner, set the export format as Markdown & CSV, pick the scope of the export (the current database or everything), and click Export. If you would like to download pages contained within the page as separate files, switch on the “Include Subpages” option. Notion will export your notes and images as a zip folder, which you can expand to find your notes in Markdown. 2. Import your notes into Roam Open your Roam database, click the three dots in the top right corner, and click on Import Files. Select the Markdown files you downloaded earlier. This step also gives you the chance to rename some of your notes. 3. Create a page for all your images Because Notion stores your images on its servers, all your images are exported in a folder separate from your notes. This might lead to missing links in your exported notes. Instead of going through each note and pasting your images, create a page in Roam called “Exported Images” and put all your images in that document so you can easily find the missing ones. An alternate method of migration While the steps mentioned above can be used to migrate your notes, you can also opt to manually migrate each and every one of your notes. This alternative method allows you to migrate only the relevant notes, especially the notes you are currently using for your work. Migrating your notes manually also allows you to audit your notes and focus on the ones that matter.
To do so, simply: Create a new note in Roam, either by using the search bar or by creating a new page inline from the Daily Notes or any other page, using double brackets or the hashtag function. Copy and paste the content from Notion, making sure to include any images as well. Add relevant metadata to the note, especially the metadata associated with your note in Notion. Add backlinks in your notes for easy reference and organisation. Getting used to Roam Now that you’ve successfully migrated to Roam, let’s explore some changes that you need to make to your workflow. Backlinks and block references Similar to how links work in Notion, you can create links to other notes by using double brackets or a hashtag. However, it is easier to create backlinks in Roam due to how fast the fuzzy search is. Unlike in Notion, you can also create links to individual blocks by typing (()). Productivity features While Roam’s productivity features are not as extensive as Notion’s, you can use features such as Kanban boards, task lists, and Pomodoro timers in Roam. Similar to how Notion does it, you can use these features by typing a forward slash inline and typing in the feature you want, or by using keyboard shortcuts such as cmd + Enter to create a to-do list. Unlinked references One of the most powerful features in Roam that Notion lacks is the ability to link your notes with the unlinked references function. This allows you to serendipitously generate connections between your notes as you roam and explore your database. Make sure to incorporate this into your workflows to make the most of Roam! In summary, if you prefer structure and organisation in your tool for thought, stick with Notion. If you prefer less structured notes that prioritise brainstorming and discovery, more akin to a digital garden, you might want to consider switching to Roam. As always, it is not about the tool you use, but how you use it to shape your thinking.
If you would like to learn more about using Roam, join the Roam support group in our community. P.S. Want to learn how to make the most of Roam? Join Roam Essentials, a short course to master 20% of the features tha...
How to switch from Roam Research to Obsidian
Roam Research has revolutionised the tools for thought space by bringing networked thinking front and centre with bidirectional links and knowledge graphs. The app made it easy to connect ideas between notes, which was complicated with existing approaches that worked like a filing cabinet. Ever since Roam popularised networked thinking, a few alternatives have become available. One of the most popular ones is Obsidian, which is local-first, plain-text, and incredibly extensible, with many plugins to make your knowledge base truly personal. If you are considering switching from Roam to Obsidian, read on to explore key considerations and steps to ensure the transition goes smoothly. Why you may want to switch from Roam to Obsidian Despite many common features, Roam and Obsidian have fundamental differences that may convince you to make the switch. Privacy One of the main concerns for cloud-based apps like Roam is privacy, and this concern is even more central when it comes to tools for thought. Indeed, tools for thought are used for note-taking, journaling, brainstorming, and the storage of various forms of personal data, some of which can be highly sensitive. While the recent implementation of encryption does help, cloud storage means that your notes may not always be secure. While you may not be concerned about notes from the books or podcasts you have consumed, your personal information is a different story. Obsidian is entirely local, which means your notes are extra safe. This may be a feature that will make you consider switching. Durability Unlike Roam, Obsidian is a “shell” app: your notes are separate from the application, which merely helps you manipulate them by editing your local files. Ever since the dawn of the Internet, many companies whose products people depended on have gone out of business. Roam is no exception to this risk.
If Obsidian were gone tomorrow, you would still be able to access your notes. By opting for shell apps such as Obsidian, you can ensure the durability of your notes. Portability Roam is a very powerful note-taking app with features such as Pomodoro timers and Kanban boards. However, these features often do not transfer well into other apps, and the more you use them, the more you lock yourself into Roam. There might be a time in the future when you have to switch apps, and because Roam’s features and syntax do not transfer nicely, you may end up losing information. If accessing your notes with other apps is crucial, you may consider switching to Obsidian, a Markdown-based app, where you can access your notes with other apps that can read Markdown. Instead of relying on proprietary formats, using Obsidian could help you become tool-agnostic and not depend on one app to access your thoughts. Your notes are portable and can easily be accessed with other apps. As we have seen, there are some fundamental differences between Roam and Obsidian. However, if privacy, durability, and portability are not your core concerns, you may not necessarily want to switch. In addition, if features such as Kanban boards, block references, queries, task management, and calculations are essential to your workflow, you may consider sticking with Roam instead of migrating to Obsidian. These features do not transfer nicely to Obsidian, and if your notes rely heavily on them, it might not be worth it to migrate. Ultimately, you should not switch if you do not need to. Beware of the shiny toy syndrome. Switching note-taking apps takes a lot of time and energy, and those resources may be better spent refining your writing and thinking instead of tinkering with what already works. How to migrate from Roam to Obsidian in three steps If you have decided you want to switch from Roam to Obsidian, there are only three simple steps you need to go through. 1.
Export your files from Roam Open your Roam database and click on the three dots in the top right corner. Then, select the “Export All” option, set the export format to Markdown, and export all your notes. Roam will download your database into a zipped folder, which you need to unzip. At this point, you should have a folder containing all your notes from Roam in Markdown format. 2. Launch Obsidian and open the folder as a vault Next, launch Obsidian. You can download it for free here. Once Obsidian launches, click “Open folder as vault” and choose the folder you unzipped earlier. 3. Use the Markdown Format Converter to format your notes In Obsidian, click “Open Markdown Importer”, which can be found in the left sidebar. Turn on all the Roam Research options and start the conversion. Doing so will convert Roam’s Markdown conventions into a format compatible with Obsidian. For example, Roam treats # and [[ ]] as pages, while Obsidian only treats [[ ]] as pages. The Markdown Format Converter will convert your pages using # into [[ ]] to match Obsidian’s format. It will also convert syntax such as highlighting, which is ^^ ^^ in Roam and == == in Obsidian. Optional steps when migrating to Obsidian You should be all good to go, but if you want to take it to the next level, there are some optional steps that you can follow to make your migration as seamless as possible. Enable additional features One of the problems with Roam is empty notes. These notes are created when you want to add a link to an existing note and accidentally misspell it, resulting in a new empty note without any links. You can delete those notes using this script. Roam also has a what-you-see-is-what-you-get (WYSIWYG) editor, while Obsidian requires you to toggle between edit and preview modes. However, Obsidian has recently released its WYSIWYG editor, called “Live Preview”.
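To make the syntax mapping concrete, here is a rough sketch of the kind of substitutions the importer performs. This is an illustrative approximation, not the plugin’s actual code; the function name and regular expressions are my own:

```python
import re

def roam_to_obsidian(line: str) -> str:
    """Convert a couple of Roam Markdown conventions to Obsidian's.

    A simplified, hypothetical sketch: the real Markdown importer
    handles many more cases (block references, attributes, etc.).
    """
    # Roam highlights use ^^text^^; Obsidian uses ==text==
    line = re.sub(r"\^\^(.+?)\^\^", r"==\1==", line)
    # Roam treats #tag as a page link; Obsidian's convention is [[tag]]
    # (a Markdown heading like "# Title" is left untouched, since no
    # word character immediately follows the #)
    line = re.sub(r"(?<!\S)#([A-Za-z0-9_-]+)", r"[[\1]]", line)
    return line
```

For example, `roam_to_obsidian("^^key idea^^ about #productivity")` yields `==key idea== about [[productivity]]`.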
If you are used to writing with a WYSIWYG editor, you can turn it on by going to Settings, opening the Editor tab, and selecting “Live Preview” as the default editing mode. Import your workflows Ideally, you should learn how to use Obsidian and adapt your workflow, but to make the transition easier, you can use some scripts and plugins to make your experience more similar to the one you used to have with Roam. For example, most Roam users will normally input their information into the Daily Notes page. You can use this Python script to convert the format of your Daily Notes from Roam’s to Obsidian’s, and enable Daily Notes, Fold Heading, Fold Indent, and templates in the Obsidian settings. If you do not like the paragraph approach in Obsidian and would like to return to outliners for a more Roam-like experience, you can install this plugin and use this snippet for bullet-point lines. Clean up your notes Switching to Obsidian may be an opportunity to do a minor cleanup. Use the Note Composer plugin if you would like to merge notes. You can filter your completed tasks with this plugin if you previously used Roam for task management. Finally, if you have images or videos uploaded to Roam’s servers, you can download them and store them locally with this script. Getting used to Obsidian Great job, you are now all set up with Obsidian! It may feel a bit confusing at first, so let us look at the key differences you need to know. 1. Automatic note-making versus clicking to make a note. Roam automatically creates a new note when you type its name inline between double brackets or after a hashtag. In contrast, Obsidian will only create a new note when you click on that inline link. This can prevent typo-filled empty pages, which are a common problem with Roam. 2. Freedom to use bullets or freeform. With Obsidian, you can type your notes in either bullet-point or free form, while Roam is limited to bullet-point writing. 3. Ease of publishing.
Because Obsidian’s Markdown style is similar to that of most CMSs, it is much easier to publish. A simple copy and paste from Obsidian should transfer all the syntax nicely into your blog page, whereas in Roam you need to do some exporting and converting to make it look the same. While this might sound minor, it can be a massive time-saver if you regularly publish content online. 4. Complete customisability. While you can customise Roam with CSS styles, you have more native customisation options with Obsidian. It may be worth exploring all the different settings to make sure your tool for thought feels truly personal. Whether you decide to migrate from Roam to Obsidian or stick with Roam, what matters is not the tool you use but how you use it to shape your thinking. And if you would like to learn more about using Obsidian, join the Obsidian support group in our community. The post How to switch from Roam Research to Obsidian appeared first on Ness Labs.
Planning your intentions and ambitions with Ashutosh Priyadarshy founder of Sunsama
Welcome to this edition of our Tools for Thought series, where we meet with founders on a mission to help us achieve our ambitions without sacrificing our mental health. Ashutosh Priyadarshy is the founder of Sunsama, a daily planner for busy professionals that helps you keep track of your tasks and calendar in one place. In this interview, we talked about the power of consistent intentions, how a concrete plan can support your long-term ambitions, their no-shame approach to late tasks, how to achieve our goals by trying to do less every day, and more. Enjoy the read! Hi Ashutosh, thank you so much for agreeing to this interview. You are on a mission to help ambitious people do their best work. Can you tell us more? One thing we noticed about ambitious people is that they live in a world where there is always more work to be done than can actually be accomplished, and their goals are things that take months or years to accomplish. For them, success happens not by finishing all the work but by focusing their time and attention on their most important work consistently. Travis, my co-founder, and I always felt that if we could figure out how to be focused, intentional, and ambitious each day, then all we had to do was repeat that each week, each month, and each year. If we did that, we would be successful in the long run. That sounds like a wise philosophy. So, how does Sunsama work? Sunsama is a guided daily planner. Each morning, you get an email from Sunsama that reminds you it’s time to plan your day and teaches you the principles of working with focus, calm, and intention. Then, you open up Sunsama, where you go through a four-step process to plan your work day. First, Sunsama helps you pull your tasks and obligations from all your tools into one place. You can pull in meetings, emails, tasks from project management tools, etc. Sunsama then helps you estimate how long that work will take and nudges you to pick a reasonable and achievable workload.
Finally, Sunsama can automatically put your tasks on your calendar so that you have a concrete plan of when to do your work. Once your day is planned, you don’t need to spend any more time thinking about what to do. You just work your way down your Sunsama list for the day. Such a simple approach. Can you tell us more about the process of going from a chaotic task list to a calm work schedule? There are three big things Sunsama does to help you feel like you aren’t drowning in work. First, Sunsama aggregates all your task tools in one place and lets you select just the tasks you want to work on for that day, side by side with your meetings. Combining your meetings and tasks gives you a realistic idea of what you can get done in a day. For example, on a day where you have four meetings, it’s not realistic to get another eight hours of heads-down work done. Second, Sunsama helps you estimate your time and see how much you are trying to get done in a day. It’s counterintuitive, but if you start with a smaller and more achievable list of work for the day, you’ll actually get more done. Finally, Sunsama helps you drag and drop tasks right onto the calendar. This is the moment where you have to stop pretending, because you’ve only got so much space on your calendar. So, what about those tasks that you don’t get to tackle on a particular day? In Sunsama, tasks that you don’t finish simply roll over to the next day. If something was important today, it’s likely going to be important the next day, and you should not be penalized for not finishing something. We avoid using any aggressive red coloring or callouts for a task being overdue. Things taking longer than expected is the norm; there’s no shame in that. However, if a task rolls over multiple days in a row, Sunsama auto-archives it. When you’ve got a task you haven’t made progress on multiple days in a row, that’s usually an indicator that the task isn’t particularly important or urgent.
A part of teamwork that takes a lot of time (and space in calendars) is meetings. How does Sunsama help manage this challenge? First, we let you see your meetings side by side with your tasks. That way, meetings don’t feel like a constant distraction to your workday; they’re just another thing on your to-do list. In addition, Sunsama lets you add notes and action items directly to a meeting, and then you can share that out to your whole team. Are there any other ways Sunsama helps teams collaborate more effectively? Sunsama basically functions as an “async” daily standup for teams who want to work closely together. Inside Sunsama, you can see your colleagues’ daily plans and get a clear idea of what they’re working on. If you use Sunsama, you can stop having standup meetings entirely. That sounds great. Another thing… Many people struggle with setting too many goals or unachievable goals. What’s your advice to set reasonable goals? I mentioned this earlier, but the simplest thing you can do is try to do fewer things each day. It’s counterintuitive because ambitious people want to get more done. The thing you have to realize is that successful people get a few things done consistently instead of trying to do everything. It’s also helpful to be realistic with how long work can take. For example, if I have a day where I’ve scheduled out five and a half hours of focused work time, that basically eats up an entire 8-9 hour work day. Work tends to expand. I think you can be in the top percentile of focused people if you are cranking out five hours a day of heads-down work. I couldn’t agree more; it’s impossible to be highly focused for more than a few hours each day anyway. Yes. The people who need Sunsama the most are people who need to balance heads-down “maker” work with the day-to-day obligations of keeping a business running. For a lot of people, that means doing some kind of technical, product, or creative work, but also attending meetings and doing emails.
What about you… How do you personally use Sunsama? I start each work day by planning my day in Sunsama. One thing I rely on Sunsama for is balancing my daily work “chores” while also making sure to move substantial projects forward each day. In Sunsama, I’ve got a recurring task that reminds me to go through all my various inboxes and notification tools; this helps me make sure that inbound/reactive work doesn’t stretch out across my whole day. I try to keep it to an hour so that I can spend the rest of the day on a handful of deep work priorities like building out the product. As a founder, my favorite thing is my end-of-week review, where I can see how much of my weekly work hours were aligned with my weekly objectives (the big important projects) and how much just goes into the day-to-day operations and work chores. It’s always nice when founders actually use their products. And finally… What’s next for Sunsama? Our team is feeling more ambitious than ever before because we just hit profitability. Now that we don’t have to worry about running out of money, we’re planning to go much deeper with all of our integrations. Thank you so much for your time! Where can people learn more about Sunsama and give it a try? Sure, you can check out our website to start a 14-day free trial of the product; you don’t have to put in a credit card to start. You can also follow our journey on Twitter. The post Planning your intentions and ambitions with Ashutosh Priyadarshy, founder of Sunsama appeared first on Ness Labs.
The false compromise fallacy: why the middle ground is not always the best
Picture this: you are having a debate with a colleague regarding the best next steps for a complex project. You have both been presenting your arguments, the tone is friendly, but you cannot seem to agree on the best way forward. So you decide to find a middle ground. Sounds reasonable enough, right? Well, it’s often a very bad idea, and it has a name: the false compromise fallacy. When it’s hard to find a resolution, it can be tempting to search for the middle ground to resolve the conflict. But by making us abandon the search for the most suitable resolution, the false compromise fallacy can lead to misleading conclusions and poor decision-making at work and in our personal lives.
The birth of a false compromise
Our tendency to seek compromises is not new. We can find documented instances of compromises in the public speaking of ancient Rome, supported by a codified “art of speaking in public” (Ars Oratoria). At the time, it was known as the “argument to moderation” (Argumentum ad Temperantiam). But not all compromises make sense. A false compromise occurs when a resolution cannot be found between two opposing views, and so the middle ground is accepted as the “best of both worlds” instead. Here is a famous example of a false compromise. If you know that the sky is blue, but someone else argues that it is yellow, a compromise might see you meeting in the middle to conclude that the sky is green. Of course, this agreement settles the difference of opinion in a wholly unsatisfactory way, as there is no truth in the sky being green. Furthermore, both parties will likely remain convinced that the sky is the colour they believe it to be. A false compromise only provides the illusion of a resolution. The false compromise fallacy is sometimes referred to as “bothsiderism”.
Researchers Scott Aikin and John Casey reported that the functional problem with finding the middle ground is the belief that one view must be balanced with an opposing belief, regardless of how contrived the resulting viewpoint might be. Aikin and Casey explain that the issue represented by the false compromise fallacy is the belief “that there are two (or more sides) and one must presumably give both sides their due.” However, the evidence on one side may be in bad faith, incomplete, or incompetently understood. Just because someone presents an argument, it does not mean that it is as valid as another point of view. Trying to meet in the middle with a false compromise could lead you further from the truth or away from the correct conclusion. There may be specific times when you are more likely to acquiesce to a false compromise. Jan Albert van Laar and Erik C. W. Krabbe found that compromises are more likely to be fashioned when two parties conflict in both their preferences and their opinions on the correct course of action. Depending on your individual circumstances, you may be more likely to experience a difference of opinion when you are with work colleagues, family members, or in social situations.
The danger of false compromises
False compromises may seem innocent, especially when getting to the correct answer does not particularly seem to matter in everyday life. However, when the topic being discussed and the potential outcome are of great importance, a false compromise could cause harm. In their paper No Place for Compromise: Resisting the Shift to Negotiation, David Godden and John Casey state that leaning towards compromise may cause you to abandon your rational beliefs. They argue that although it might be tempting to yield when faced with a contrasting view, if both sides will be left dissatisfied by a compromise, then it is better to resist the temptation of a false compromise.
As well as leaving both parties dissatisfied with the outcome, false compromises can prevent a discussion from moving forward. Had the exploration of the difference in opinion continued for longer, more evidence could have been presented and analysed. By persevering, an objectively better outcome might have been reached. A false compromise can also dangerously speed up decision-making. This is particularly true if the compromise brings an abrupt end to a debate: hurriedly agreeing to a decision may prevent you from considering second-order consequences. Luckily, there are ways you can avoid falling into the worst pitfalls of the false compromise fallacy.
How to manage false compromises
We will likely all have had experience of false compromises, with various resulting outcomes. Learning to manage this fallacy may help you to avoid unnecessarily meeting in the middle.
Consider if consensus is needed. You may try to please as many people as possible, especially when your relationship with others is important, such as in work settings. However, to avoid making a false compromise, you need to question whether reaching a collective agreement is necessary. In some situations, the decision that is objectively right may not meet everyone’s approval, but that doesn’t mean it’s wrong.
Evaluate the strength of evidence. Both parties in a debate will bring their own evidence to the table. However, this doesn’t mean the evidence should be given the same weight. Strong evidence may include peer-reviewed literature, up-to-date research, information from reliable sources, or expert opinions. Weaker evidence could be based on hearsay or personal preferences.
Be open to extreme decisions. Sometimes, the best decision will be the most extreme one. If you are sure of the evidence and likely outcome, trying to meet in the middle doesn’t make sense. As uncomfortable as it may feel, you should instead be prepared to hold fast to your point of view.
The false compromise fallacy can lead to misleading conclusions, poor decision-making, and dissatisfaction for all parties involved in the process. Rather than searching for a compromise, make sure to evaluate whether consensus is truly needed and how strong the evidence is for all arguments. With this in mind, you may find that the right decision is one of the more extreme options, rather than the middle ground. The post The false compromise fallacy: why the middle ground is not always the best appeared first on Ness Labs.
A New Things Under the Sun Update
Dear Reader, Change is afoot! Since December 2020, I have been splitting my time between writing New Things Under the Sun and teaching economics at Iowa State University. I loved teaching and Iowa State has been fantastic. But, to use some economist lingo, my comparative advantage is in writing New Things Under the Sun, and I have believed for a while that the project could have a bigger impact if I were able to specialize completely in it. Accordingly, this is my last day at Iowa State University. Beginning May 22, I will be joining the Institute for Progress (IFP) as Senior Innovation Economist, where my job will be to work full time on New Things Under the Sun and related projects. You may recall the Institute for Progress has been New Things Under the Sun’s partner since January: they are a new non-partisan think tank with a mission to accelerate scientific, technological, and industrial progress. This new arrangement is possible thanks to them and grant support from Open Philanthropy. While I am excited to officially be part of IFP, I will continue to work remotely from Iowa and retain sole editorial control over New Things Under the Sun. I continue to believe IFP is doing great stuff, and being affiliated with an organization that is trying to effect actual change is a good influence on me (and I hope I can be a good influence on them!). Among other things, working with IFP provides a constant nudge to think about how academic work sheds light on questions that matter. So what does it look like to specialize in this synthesizer/communicator role I’ve carved out? I guess we’ll find out! But here is my preliminary sketch. First, the most obvious requirement of this job is knowing the academic work well. Over the last few years, despite my best efforts, my to-read list has only gotten longer. So I’m going to read more. Second, part of the job is seeing connections between ideas.
This is especially important as one of the things that makes New Things Under the Sun unique is that I try to keep articles up-to-date with the academic frontier. That means I can’t write articles and then forget about their content. To keep what I write and read perpetually accessible, I am going to try to build up a spaced repetition memory system.1 Third, I plan to write more. Well, actually, I plan to at least meet the goal I set for myself in January, after partnering up with IFP, to write three articles per month. I’m a bit embarrassed to say I’ve only hit this goal once, in February. Partly that’s due to covid finally catching up to me and then my kids during April, but it’s also because so far my time has been split and sometimes other deadlines assert priority. I don’t think New Things Under the Sun needs to be a really frequent publication (every original piece is designed to be perpetually relevant, with maintenance), but I at least want to get into a rhythm of producing something every ten days or so. Lastly, I’m going to try and meet with more of the producers and “end-users” of academic research. What do people whose work is related to innovation wish they knew? What do academics studying innovation think about their own field? I think it’s obvious my work would benefit from more of this kind of tacit knowledge. I’ve already had a few conversations like this with readers and academics. But I thought it might be helpful to make this a formal invitation: if you ever want to chat about something innovation related, feel free to drop me an email and we can set up a virtual coffee. I can be reached at mattclancy at hey dot com. Now that I’m working fully remote, hopefully this can be a good substitute for some of that serendipity around the water cooler I’ll be missing. If Zoom is not your thing, I also plan to visit Washington DC for a few days every quarter, to work at the IFP offices, and I hope to meet with people in person during those visits.
I’m sure most of you are more likely to pass through DC than you are to pass through Des Moines. Beyond that, I have plenty of other ideas for improving New Things Under the Sun, which I will also incrementally work on. But first, I’m taking a break! I’ll be taking off next week. Cheers all, and thanks for your interest in New Things Under the Sun. Excited about this next step. Matt 1 This isn’t my first experiment with spaced repetition memory systems. During covid-19, I built an online intermediate microeconomics course using the Orbit platform developed by Andy Matuschak, which implements personalized spaced repetition. Check it out if you’re curious about spaced repetition or if you need to learn calculus-based microeconomics!
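For readers unfamiliar with how a spaced repetition memory system works, here is a minimal sketch of the classic Leitner-box scheduling idea. This is hypothetical illustration code, not how Orbit actually works: cards you recall correctly move up a box and come back less often, while cards you miss drop back to box 1 for frequent review.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical box-to-interval mapping: higher boxes are reviewed less often.
INTERVALS = {1: 1, 2: 3, 3: 7, 4: 21}  # box number -> days until next review

@dataclass
class Card:
    prompt: str
    box: int = 1
    due: date = field(default_factory=date.today)

def review(card: Card, recalled: bool, today: date) -> None:
    """Promote a recalled card to the next box; demote a missed card to box 1."""
    card.box = min(card.box + 1, max(INTERVALS)) if recalled else 1
    card.due = today + timedelta(days=INTERVALS[card.box])
```

The key property is that well-known material costs almost no review time, while shaky material resurfaces quickly, which is what makes the technique practical for keeping a large body of reading "perpetually accessible."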
Self-organized knowledge management with George Levin CEO of Hints
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better, learn faster, and make the most of our mind. George Levin is the CEO and Co-Founder of Hints, an all-in-one knowledge management app to get information captured and self-organized. In this interview, we talked about the biggest challenges faced by knowledge workers, why complex systems tend to fail, the importance of revisiting your knowledge to consolidate it, the power of self-organizing your notes, and more. Enjoy the read! Hi George, thank you so much for agreeing to this interview. What do you think are some of the biggest challenges faced by knowledge workers? I believe that knowledge is an opportunity. Every piece of information — a note, an article, a link, a screenshot, or a shower thought — is the beginning of something new, a hint that can spark changes in our lives and make us better. Unfortunately, we often miss out on these opportunities. A lot of valuable information is slipping through our fingers. Before launching Hints, we conducted over a hundred in-depth interviews to understand how people work with new information and what obstacles they face. We identified three stages: capturing, organizing, and revisiting. Each stage has its own challenges. Let’s start with capturing. You have probably heard the saying: “If you didn’t write it down, it never happened.” The biggest challenges here are multitasking and context switching. It’s hard to pause your work, especially if you’re jumping between tasks and calls, to save important info. Then comes organizing. Our notes, screenshots, links, and tasks are often scattered across multiple apps and devices. With significant effort, we can set up complex knowledge management systems, but most solutions require a lot of discipline to maintain. When we run out of time or energy, such complex systems fail. Finally, we get to the revisiting part.
Capturing and organizing are useless without revisiting. Without it, things don’t move forward. Articles you save are not read, TED Talk videos aren’t watched, and new ideas are not developed. This will definitely resonate with readers. When did you decide to tackle those challenges? In December 2019, I sold my advertising technology startup Getintent, where I worked with Alex and Gleb. Then in 2020, I built a video distribution platform for vloggers. Unfortunately, it didn’t work out. I decided to slow down and take some time for myself. During my eight years of entrepreneurship, I’ve noticed the importance of serendipity. A sudden conversation in a coffee shop or a “random” book recommendation could change my life for the better. I learned to keep my eyes and ears open to catch small hints. This approach brought me to the knowledge management and note-taking community. I decided to build something in this field, but I didn’t want to create a better Roam or Notion. I tried to find a unique problem that hadn’t been solved yet. I invited Alex and Gleb, and very soon we found it. As members of a network of business communities, we talked in FB, WhatsApp, and Slack groups, sharing experiences and giving each other recommendations. This valuable knowledge from conversations stayed on the surface for a few days, only to soon be lost. We built a script that scraped discussions in these groups and created a self-organizing community wiki on the fly. Then I remembered a story from one of my early investors about how he built a search engine for financial information and happily sold it to a bank, only to later find out that a company named Google did the same but for the whole web. I realized that the problem we solved was relevant beyond our niche communities and applied to all knowledge workers who are dealing with a lot of valuable information around them. So in August 2021, we decided to build Hints.
And how does Hints address those challenges faced by knowledge workers? The Hints app offers users the easiest way to capture, organize, revisit and share new knowledge on the fly. We are mobile-first, but we have desktop and web apps as well.  The main goal of our quick and intuitive capturing is to avoid context switching. If you see something important, you can save it without opening the app in less than a second and stay in your flow. Then all your new knowledge gets auto-organized. Finally, the revisiting stage is where static notes become active hints. Your hints will be shown to you in an interactive story format. Our recommendation engine resurfaces the most valuable hints and reminds you about them. Information can come into many formats. What kind of formats does Hints support? Notes, screenshots, photos, images, tasks, voice-to-text memos, reminders, videos, files, lists, calendar events, links. Every piece of information can be a hint, an opportunity that could change your life. That kind of flexibility sounds incredibly powerful. More specifically, how does Hints work? You can capture notes, URLs, YouTube videos, and screenshots on your phone by forwarding them to the Hints app. Also, we support SMS, WhatsApp, and Telegram bots. You can send a text, convert it into tasks and set a reminder via our bots while in the messenger. The most developed is our Telegram bot. It will allow you to create a calendar invite and add your co-workers. Other bots will catch up soon. You can also capture anything directly from the app or via our Apple Shortcuts widget.    On the desktop, you can capture a selected text from websites, emails, and messages by pressing Command+Shift+J. Or Command+Shift+K for screenshots. Auto-organizing will group your captured hints by common categories such as meeting notes, people, articles, videos, etc. We call these categories flows because you can decide what flow you want to open depending on your mood and needs. 
Revisiting looks like Instagram stories. You can open your revisiting feed when you don’t have much energy and want to browse something. While browsing, you can make a change, archive an old hint, or add a reminder or a tag. This format is very engaging and interactive. I started to use it instead of Instagram and Twitter when I wanted to zone out. You will be surprised how many good hints you captured two months ago and completely forgot about. I love the concept of swipeable stories to refresh our knowledge. In general, Hints seems to be a great tool to reduce friction in knowledge management. Absolutely. First of all, with Hints, you stay in your flow and don’t need to jump between apps to write something down. That’s already a significant relief. Then, you don’t need to think about folders and where to place your hints. Everything is self-organized, and you know where to find it. Finally, you don’t need to think about remembering what you saved. You will be reminded about it. Things you capture will be moved forward to change your life for the better. Amazing. What kind of people use Hints? They are professionals who have a lot of work and valuable information that they don’t want to miss. In my case, the Hints app has already changed my life. Nothing falls through the cracks. I stay on top of my things without relying on my discipline. I can go to bed without thinking about what opportunity I might have missed today. And finally… What’s next for Hints? Our next big step is collaboration and B2B. We want to stay free for individuals and rely on B2B pricing when startups and SMBs start using Hints. For them, Hints can be where all new knowledge is captured and distilled before it moves to in-depth project management tools. Without Hints, businesses miss out on the potential opportunities within the new ideas and insights that team members encounter every day. Thank you so much for your time! Where can people learn more about Hints and give it a try?
You can sign up on our website and follow our journey on Twitter. The post Self-organized knowledge management with George Levin, CEO of Hints appeared first on Ness Labs.
The dangers of apophenia: not everything happens for a reason
Humans love patterns. Sometimes that’s helpful, but other times… Not so much. Apophenia is the common tendency to detect patterns that do not exist. Also known as “patternicity”, apophenia occurs when we try to make predictions, or seek answers, based on unrelated events. Apophenia can lead to poor decision-making. For instance, many people choose their lottery numbers based on the birthdates of family members. As the numbers are picked at random, however, this approach won’t increase their chance of winning. In rare cases, apophenia can even be an indicator of some mental conditions. Let’s have a look at how apophenia works, and how you can both detect and manage this phenomenon.
The science of apophenia
Apophenia is the propensity to mistakenly detect patterns or connections between unrelated events, objects, or occurrences. The term was first coined in 1958 by German psychiatrist Klaus Conrad during his study of schizophrenia. However, it is an effect of brain function that is not limited to those with a form of psychosis, and is now commonly recognised in healthy individuals as well. In schizophrenia, Conrad found that those who developed “apophany” started experiencing abnormal meanings in their daily life. For example, an individual might “see” various signs that they interpret as instructions meant only for them. They might be certain that an experience is proof that they are being watched, talked about, followed, or prepared for an event. In reality, these episodes are unconnected, have no pattern, and do not represent any form of sign or instruction. The delusions of schizophrenia can be all-consuming and sometimes terrifying. In healthy individuals, apophenia may not lead to such alarming consequences, but can still have a significant impact on one’s decision-making processes. For example, you may sail through three green traffic lights in a row and see this as evidence that you are on a lucky streak.
Because of this perceived pattern, you might confidently place a substantial bet on a horse race or football match. Your perception of your likely luck might therefore lead you to make a more reckless financial decision than if you had not noticed an auspicious pattern. This over-interpretation of patterns in healthy individuals could be an evolutionary survival instinct. Our ancestors may have benefitted from pattern interpretation as part of everyday life. For example, upon hearing a rustling in the trees behind them, they could either assume that the noise was due to the wind or to a predator. Fleeing because they assumed there was a predator could save their life, and there would be no harm done if the assumption turned out to be wrong. Conversely, assuming the rustling was due to the wind could have put their life at risk. Believing a false positive over a false negative could, therefore, increase our chances of survival.
From fun imagery to financial risk
Mild apophenia is common and occurs in many domains, such as finance, arts, and politics. Although it is not usually dangerous, apophenia can lead to risky behaviours or wrong beliefs about the meaning of a pattern. Here are some areas where you may encounter apophenia:
Visual illusions. Have you ever seen non-existent images in clouds, dirt, toast, or household objects? For example, you might see a phoenix in the clouds, a man in the moon, or a face in your sandwich. Pareidolia is a common form of apophenia that involves imagery. For some people, these images become signs of something significant, such as a message from a loved one or a sign of something yet to come. The artist Salvador Dali experimented with pareidolia to create paintings in which faces would be recognised, despite the painting breaking the mould of what a face truly looks like.
Financial decisions. In 2017, psychologists Zack W. Ellerby and Richard J. Tunney investigated how we make decisions.
They reported that those who notice an illusory pattern may start to believe that the outcome of an event is not determined by chance, but instead by previous outcomes or choices. This can lead an individual to make a choice based on probability matching, rather than by selecting the choice with the highest probability of being successful. For instance, gamblers might start to believe that a win is coming because they see a pattern in lottery numbers, at the roulette wheel, or at the races. If they have two small wins in a row, this pattern may create the strong belief that they will certainly have a third win. This could lead them to place a large bet, which would be a risky financial decision based on a perceived pattern. The same can be true of trading decisions or business investments.
Political theories. By weaving together various signs or coincidences, an irrational set of beliefs can turn into a conspiracy theory. For example, at the height of the pandemic, some individuals believed that the government had an ulterior motive for locking down the population. Psychologists hypothesised that finding a pattern, and therefore a conspiracy theory, to explain the government’s policies was a coping mechanism for those who felt their power or safety was under threat. Believing a conspiracy theory, however, can lead people to shun scientific evidence and make poor choices.
Mental health. Occasionally, apophenia can be a precursor to delusional thoughts. Finding meaning in something random was described by researchers as an important factor in the formation of paranormal and delusional beliefs, and has been found to be implicated in vulnerability to schizophrenia.
The balance between embracing and managing apophenia
Dali showed that apophenia can be an exciting vehicle for discovering illusory patterns that could feed your creativity.
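The probability-matching idea mentioned above can be made concrete with a quick illustrative calculation (the numbers are hypothetical, not from Ellerby and Tunney's study). Suppose one outcome of a binary event occurs 70% of the time: always betting on it wins 70% of your bets, while spreading your bets to "match" the perceived frequencies wins only about 58%.

```python
# Illustrative sketch: why probability matching underperforms maximizing.
# Assume a binary event where outcome A occurs with probability p.

def maximizing_accuracy(p: float) -> float:
    """Always bet on the more likely outcome."""
    return max(p, 1 - p)

def matching_accuracy(p: float) -> float:
    """Bet on each outcome in proportion to how often it seems to occur."""
    return p * p + (1 - p) * (1 - p)

p = 0.7
print(f"maximizing: {maximizing_accuracy(p):.2f}")  # wins 70% of the time
print(f"matching:   {matching_accuracy(p):.2f}")    # wins only ~58% of the time
```

Whenever the probability differs from 50/50, matching is strictly worse than simply backing the most likely outcome, which is why acting on a perceived pattern instead of the base rates tends to be costly.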
However, it is important to put in place strategies that will prevent you from making risky decisions or acting on erroneous beliefs because of apophenia. To avoid the pitfalls of apophenia, you must first pay attention to any biased assumptions you make when faced with false patterns. For example, three green lights in a row will not have any connection to your chance of winning the lottery that weekend. Secondly, work on accepting that not everything happens for a reason. Everyone has highs and lows in life, and there may not be any obvious cause for this. You are more likely to be successful in the long run by making rational decisions based on the available evidence, rather than making choices based upon perceived signs from the universe. Finally, perform your own research. If you think a horse might win the Grand National because you saw its name appear several times in unrelated situations, do some fact-finding before you place a bet. Compared to following so-called “signs” from the universe, your own research will give you a far more realistic idea of the actual odds. Apophenia can help you to think more creatively, but big decisions should be made only when the facts are clear. If several signs suggest that you should leave your job and start your own business, it can be very exciting. However, be critical of your thought processes, and give yourself time to assess the reality of the patterns you perceive. If, after doing plenty of market research, creating financial projections, or even starting the side business alongside your day job, you find that this new venture shows signs of being successful, then it might be time to embrace it. Although apophenia may have an evolutionary basis, placing belief in a perceived pattern could lead you to make riskier decisions.
To protect yourself from the drawbacks of apophenia, pay attention to biased thoughts, accept that not everything happens for a reason, and ensure you fully research your options before you commit to a decision. The post The dangers of apophenia: not everything happens for a reason appeared first on Ness Labs.
Control your time to free your mind with Nunzio Martinello founder of Akiflow
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us achieve more without sacrificing our mental health. Nunzio Martinello is the Founder and CEO at Akiflow, a powerful tool that allows you to consolidate all the apps you use into one place so you can block time for your tasks and see everything you need to get done in your calendar. In this interview, we talked about the power of building a single source of truth for your productivity and time management workflows, how to deal with large amounts of incoming information, how to protect your time and avoid distractions by blocking “focus mode” sessions in your calendar, and more. Enjoy the read! Hi Nunzio, thank you so much for agreeing to this interview. We often waste lots of valuable time on unproductive tasks. Why do you think that is? Being productive has become more and more complicated. With more than ten apps on average, our workspace is getting bigger, and our to-dos are often scattered between project management apps, notes, calendars, etc. Communication apps are often misused and are a constant distraction. Tasks keep coming from multiple sources throughout the day, and prioritizing and planning them properly is a non-stop job that we often fail at. Even with a consolidated task list, it is very tough to be realistic about how much work you can accomplish without a calendar that provides context on how much time we have in a day at work. And I could list a dozen more reasons why it’s getting so hard to sit and focus on the right thing to do. After years of trying all possible tools, methodologies, and automation to be productive, I figured out that no app was actually helping in keeping myself organized. How does Akiflow address these challenges? First of all, Akiflow is a single source of truth. All your tasks from multiple apps and calendars are consolidated via API, in real-time. 
We built a bunch of features, such as the Command Bar to make capturing a new task blazingly fast. Organizing, prioritizing, and planning activities is much faster with our keyboard shortcuts and the unified tasks and calendars view. We believe in time blocking, so we made tasks and calendars interact in the best possible way. A task can be added to the calendar for visual planning, it can block time and our smart notifications will help keep you on track and focused throughout the day. We then added a lot of features to make repetitive actions faster and easier, like sharing availability or joining calls. So far our users reported at least one hour of time saved per day, and that’s the metric we are most proud of. One hour each day is a lot of saved time! How does Akiflow work, exactly? Most people start their day by checking their outstanding conversations in their email inbox or Slack from mobile or desktop. There are two types of conversations: those that can be answered right away and those that generate a task and can be saved to go into Akiflow. Once they are done, they open Akiflow, where they find all their tasks coming from their conversations, their PM tools, or tasks added from their phone. Sometimes, a user might find interesting articles online, or some ideas come up. They hit opt+space and use the command bar to add them to the Inbox. They can also assign labels, plan, or snooze for later, all of these actions helping them to get organized even further. At this point, they open their “Today” page, where they find their schedule of the day next to their calendar. Some tasks might have been added to the calendar and locked to ensure no one can book a meeting during their focused time. This helps make time for tasks, being mindful that time is limited and results in better planning. They can adjust their schedule by considering new “urgent” tasks from their Inbox, and then they are ready to start working. 
As the day goes on, Akiflow sends notifications about what they should be working on, exactly when they need them, based on calendar events and tasks. That sounds like a powerful workflow. Can you tell us a bit more about your integrations? Nowadays, tasks come from so many different tools that the only way to be well organized and prioritize them properly is to consolidate them in a single app. Unfortunately, this activity is very time-consuming and happens multiple times a day. That’s why we built API integrations to do it automatically. Tasks assigned to you on project management platforms are automatically added to your Inbox. For example, with one click you can turn a Slack message or an email into a task in Akiflow. At the moment, we have built nine native integrations, as well as a Zapier integration which allows our users to import tasks from more than a thousand different apps. With so much incoming information, one of the biggest challenges for knowledge workers is to stay organized… I agree! Just bringing in tons of information would not be a good solution. That’s why we added a lot of features like labels and folders to organize tasks into projects, priority management, external linked content, and more. Akiflow makes it very easy to organize your inbox with flexible sorts and filters. We recently added a powerful search feature to quickly find events, tasks, people, and email addresses. We also made sure to make it the fastest possible experience. We have a keyboard shortcut for every action and a Command Bar to make the whole experience easier and faster. Another big struggle for knowledge workers is distractibility. How does Akiflow tackle this? First of all, not having to jump between different apps such as calendars and task lists helps to avoid distractions.
Every time a user works on “imported” content, we send the user straight back to that specific item, which means that you don’t have to go through your email Inbox or Slack app — the most distracting places in your workspace — to check the messages you saved. We also provide a focus mode, to help commit to a single activity and avoid distractions. I personally believe that locking a task in the calendar is a great way to protect your time and to avoid being distracted by colleagues, who are now informed that you are in the middle of your focused time.

What kind of people use Akiflow?

Clearly, the way that people work has changed in recent years. In the modern workspace, everybody feels busy. Everybody is working hard and trying to balance their professional, social, and personal lives. Our user base varies quite a bit but is mostly founders, managers, and autonomous workers who have to juggle operational and administrative projects and keep up with their deadlines. Akiflow is for all those looking to organize their routines and schedules without spending too much time on it.

What about you… How do you personally use Akiflow?

I use Akiflow to keep up with my personal and professional lives. For example, I like to create events for those habits that I do every day, such as going to the gym and having dinner at a fixed time. By doing so, such habits stand out from the other tasks and have their own time blocked on my calendar. As CEO of a startup, my tasks vary between operational and administrative, so I like to set some recurrent tasks for those little things that I have to do constantly but easily forget amidst bigger commitments. Pulling tasks from as many tools as possible also comes in handy, as sometimes someone will tag me in a Notion or Slack comment and I could miss it if not for Akiflow creating tasks about it.

And finally… What’s next for Akiflow?

We are going to release our mobile apps soon and we’ll add even more integrations!
The ability to capture tasks from multiple apps and devices, and to always have access to your to-do list, is critical to providing a solid solution. Right after, we’ll work on improving the way people interact and collaborate with each other. Alongside all that, we want to add AI capabilities to the platform to organize and plan tasks and ultimately optimize the user’s to-do list and schedule. Rather than replace the activities of a knowledge worker, we believe that AI and machine learning can help people to accomplish tasks and empower them every day to achieve more.

Thank you so much for your time, Nunzio! Where can people learn more about Akiflow and give it a try?

You can learn more about Akiflow’s features on our blog and start a free trial on our website. You can also follow us on Instagram and Twitter where we publish content around productivity and the future of work.

The post Control your time to free your mind with Nunzio Martinello, founder of Akiflow appeared first on Ness Labs.
Weak arguments and how to spot them
We consume an inordinate amount of information, whether it’s blog posts, podcasts, social media content, or online videos — a constant stream of data and claims we need to process and assess. When you are pressed for time, how can you quickly tell the difference between a strong argument and a weak argument, and why does it matter? Some weak arguments are more obvious than others. Displays of certitude with little substance are often a tell-tale sign. Michel de Montaigne, one of the most prominent philosophers of the French Renaissance, wrote: “He who establishes his argument by noise and command shows that his reason is weak.” But other weak arguments can be disguised behind a cloak of seemingly sound statements. For instance, the progression from one point to another seems logical up to a point, but breaks down before managing to provide sufficient support for the conclusion. Let’s have a look at how you can quickly spot these, especially when you need to make a quick judgment.

The nature of a weak argument

Not all bad arguments are weak in nature. An argument can be bad because it is invalid. A classic example is solving a mathematical equation: if you made a mistake in the proof, it would not be considered “weak”; it would simply be invalid. Invalid arguments are often easier to spot because you just need to look for logical errors in the deductive process. A bad argument can also be strong, but built on false premises. For instance: “Playing video games leads to violent behavior. This person plays a lot of video games, and therefore, they are likely to exhibit violent behavior.” The argument is strong, but it’s still bad, because the premise that playing video games is linked to violence is not true. So what exactly is a weak argument? You need two ingredients. First, inductive reasoning: the argument should move from specific observations to broad generalizations. Second, an uncertain premise.
The specific observations used to build the argument should either have a low probability or be based on personal opinions rather than facts. If the conclusions follow neither with certainty nor with high probability, then even if the argument sounds logical, you are faced with a weak argument. Here is an example of a weak argument: “Charlie is a woman. Some women like poetry. Therefore, Charlie likes poetry.” In this case, the premise “some women like poetry” has a low or unclear probability, so the argument is weak. Or the weak argument can be based on a personal opinion rather than a fact: “Charlie is a woman. Most women hate mathematics. Therefore, Charlie hates mathematics.” You may not always have the time to apply all the mental gymnastics to figure out whether an argument is strong or weak, but luckily there are some mental models you can apply to quickly analyze arguments, especially when consuming longer pieces of content.

How to quickly spot weak arguments

While philosophers have devised many methods to evaluate the quality of arguments, there are three critical thinking tools you can use to quickly distinguish a weak argument from a strong argument.

Look for arguments using the “surely” operator. In his book Intuition Pumps and Other Tools for Thinking, philosopher Daniel C. Dennett explains: “The word surely is as good as a blinking light locating a weak point in the argument (…) It marks the very edge of what the author is actually sure about and hopes readers will be sure about.” While it’s not always an indicator of a weak argument, it is still a sign that you need to consider the statement with healthy skepticism. This works with similar words such as obviously, evidently, etc.

Compare the conclusion of the argument to a coin toss. If you are better off tossing a coin to know whether the conclusion is true, the argument is weak. For instance: “About 50% of humans I met are female. Charlie is human.
Therefore, Charlie is female.” In this case, even if the premise is true, you only have a 50% chance of the conclusion being true — you may as well toss a coin! Any argument based on a premise with a low or uncertain probability would not pass the coin toss test, and can be safely classified as a weak argument.

Map the argument onto the pyramid of disagreement. In his essay How to Disagree, Paul Graham places types of argument into a seven-point hierarchy going from weakest to strongest. The weakest type of argument is name-calling, followed by ad hominem. Graham writes: “An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators’ salaries should be increased, one could respond: Of course he would say that. He’s a senator. This wouldn’t refute the author’s argument, but it may at least be relevant to the case. It’s still a very weak form of disagreement, though. If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?” The hierarchy of disagreement can help you spot weak arguments.

The “surely” operator, the coin toss test, and the hierarchy of disagreement are three simple tools to add to your thinking toolbox. Use them whenever you are reading a long argumentative essay to quickly spot potential weak arguments, or at least to know that your alarm bells should go off and that you should tread with healthy caution.

The post Weak arguments and how to spot them appeared first on Ness Labs.
The psychology of negative thinking
Of course, we all have negative thoughts from time to time. After all, our thought processes are affected by what we experience around us, and it’s normal to experience both good and bad times. However, when negative thinking becomes the norm, it can contribute to mental health problems including social anxiety, low self-esteem, and even depression. To avoid falling into that pattern, let’s explore the science of negative thinking and how you can develop a more mindful relationship to your thoughts.

The science of negative thinking

Our thought processes are intimately connected to the way we feel. When you’re feeling content, your thoughts tend to reflect this. In times of happiness, you may be more satisfied with your career progress, perceive your personal relationships as more secure, or have a better body image. Conversely, if you’re anxious or unhappy, you may notice that negative thoughts start to emerge. This could include feeling stressed about work, worrying about your appearance, or questioning the loyalty of your friends. In the 1970s, psychologist Aaron Beck theorised that negative thought patterns, which he labelled “negative schemas”, reinforced negative emotions. In his book Cognitive Therapy, Beck explained: “A central feature of the theory is that the content of a person’s thinking affects their mood.” It’s an endless loop: when you’re already feeling anxious or depressed, succumbing to negative thought patterns is unfortunately likely to worsen the way you feel. Beck’s work has been cited frequently in the last fifty years, including by psychologist Leigh Goggins and colleagues, who stated that “negative interpretative bias” could be a factor in maintaining a depressed mood. Furthermore, research suggests that amongst university students, automatic thoughts were strongly correlated with self-esteem.
If you regularly experience negative thoughts, this cognitive distortion can sadly worsen already poor mental health, leading to low mood, poor self-esteem, and anxiety. To make things worse, a bias towards negative thinking will increase the likelihood that you’ll spend time ruminating on mistakes or dwelling on things that didn’t go as well as you had hoped. Negativity bias, or the propensity to focus on negative experiences, can cloud your judgement. Decisions will appear more complex than they truly are, which will make it harder to know how to handle difficult situations. Depression and negative cognitions have a reciprocal link in which one worsens the other, and vice versa. With both factors present, a vicious cycle is set in motion. Learning how to recognise and manage negative thoughts could therefore be the key to breaking this cycle of poor mental health, as well as helping you to avoid the pitfalls of negativity bias.

The principles of managing negative thoughts

We all have negative thoughts, but certain principles have been shown to be beneficial in managing how often they occur, as well as helping to reduce the impact a negative thought might have. First, you need to recognise negative thinking when it arises. Automatic negative thoughts often coexist with poor mental health. In some people, they will have been present for many years, and recognising them can take some time. When a situation triggers a thought, pay attention to it. Negative thoughts might include: “I am going to fail at this interview”, “I will never lose weight”, “No one cares about me”, etc. Did you notice how all of these are all-or-nothing, catastrophizing thoughts? Once you are confident in recognising negative thoughts when they arise, you can begin to interrogate your automatic thinking patterns. Rather than allowing a negative thought to control your emotions, ask yourself if the thought is truthful or helpful.
If the negative thought provides no value, it’s time to shift your focus by rewiring your thought patterns. It can be tempting to try to force positive thoughts in the hope that they might replace negative ones. However, managing negative thinking involves transmuting our thoughts rather than replacing them. This process requires you to change the way you respond to your negative thoughts, as well as controlling how much impact they have. Let’s have a look at some practical ways to apply these principles.

How to transmute your negative thoughts

Negative thought patterns can become ingrained. But you can adopt simple strategies to recognise and detach from those negative schemas, making them less influential on your emotions. This in turn may help to break the endless loop of low mood, anxiety and low self-esteem.

1. Create distance from your thoughts. Pay attention to your automatic thoughts and start to label them as subjective thoughts. For example, you may say out loud or internally: “I’m having the thought that I am no good at my job” or “I’m having the thought that I am all alone.” Labelling your thoughts in this way will help you to detach from the critical inner voice that makes a distorted thought seem like the truth. Similar to a meditation practice, this is a way to merely observe the thought, rather than actively engage with it.

2. Start a thought diary. Journaling in a thought diary is a great way to manage negative thinking. Write down the date, the time, the event that triggered an emotion, and the resulting negative thought. In his book, psychiatrist Dr Daniel Siegel explains that you need to “name it to tame it.” Being able to name your emotions and the resulting thought will help you to understand the relationship between external triggers and internal beliefs.

3. Use de-catastrophizing techniques. Negative thinking often leads to catastrophizing.
If making a mistake leads you to believe that your worst-case scenario is likely to happen, de-catastrophizing can prevent a spiral of negative thinking. You may find it helpful to ask yourself:

What am I worried about?
Is it likely that my worry will come true?
What is the worst that could happen if my worry did come true?
If my worry comes true, what is most likely to happen?
Despite my worry, am I likely to be ok in one week (or month, year, and so on)?

Once recognised, negative thoughts can be managed to reduce their impact on your emotional wellbeing. This in turn will break the cycle of negative thinking. By paying attention to your thoughts and interrogating their validity, you can prevent cognitive distortions from skewing your beliefs and impacting your mental health.

The post The psychology of negative thinking appeared first on Ness Labs.
The TEA framework of productivity: managing your time, energy, and attention
A few weeks ago, I was having dinner with fellow founders, and I learned about a productivity method that’s deceptively simple but incredibly powerful: the TEA framework, which stands for time, energy, and attention. This approach feels appealing because it is rooted in essential human principles, rather than creating the artificial need for a complex productivity system. It may seem obvious that we need time to produce any work, that we need energy to sustain our effort, and that we need attention to focus on the work. But, somehow, we sometimes get so obsessed with systems that we forget about those three fundamental pillars of productivity. While the core tenets of the TEA framework are easy to grasp, it has far-reaching implications for the way you live and work.

The three pillars of productivity

The TEA acronym was coined by entrepreneur Thanh Pham, host of The Productivity Show podcast. After studying many productivity systems, he saw the need for a simpler, more holistic framework, comprising three key pillars:

Time. It all starts with the way you manage your schedule, your priorities, and how you invest your time — not only the quantity of time you devote to certain tasks, but the quality of this time. For instance, some time investment today may save you lots of time tomorrow.

Energy. Your mind and your body are tools that need fuel. Deep work requires mental and physical energy. No mental and physical fuel, no meaningful productivity.

Attention. To direct your attention, you need to know what your goals are. Then, you have to sustain your attention by staying focused on the goal and by avoiding distractions.

If any of the three pillars is missing, your productivity and well-being at work will suffer. If you have energy and attention, but not enough time, you will feel overwhelmed. Lots of time and attention, but not enough energy, and you’ll end up exhausted. Finally, lots of time and energy, but not enough attention, and you’ll be distracted.
You need all three pillars to be productive without sacrificing your mental health. Some people have expanded the framework and named it TEAM instead, to account for the relationship between motivation and productivity, but I would argue that motivation is a factor of your mental and physical energy. The more motivated you are, the more energy you will have to tackle your goals. Conversely, if you feel demotivated, you are likely to experience low energy levels.

How to apply the TEA framework of productivity

The TEA framework is simple but has implications for many areas of your life and work. In essence, it boils down to three principles:

Don’t spend your time, invest your time. What can your present self do for your future self? Answering this question is a great way to decide how to invest your time. For instance, you’ll find that scrolling on social media and revenge bedtime procrastination are probably not it. Instead, you could automate some tedious tasks, or book one full afternoon to record videos in a batch, or plan a trip to visit fellow founders in other cities and learn from them. But don’t overdo it. If you suffer from time anxiety, it may be tempting to try and always invest your time in a directly meaningful way. But idleness can also be a way to invest your time, letting your mind wander so your imagination can run wild and generate new, fresh ideas in the process.

Fuel your body and your mind. Whether it is the food you eat, the amount of sleep you get, or the content you consume, make sure to give your body and your mind enough energy. Cook yourself healthy meals (or buy some healthy ready-made dishes if you’re not into cooking), don’t cut down on sleep, nourish your mind with thought-provoking content, and consolidate your ideas with journaling… There are many ways to sustain your levels of energy.
Again, sometimes, it means doing absolutely nothing — which may not feel productive, but will recharge your batteries for later, better, more enjoyable work.

Plan for distraction. If you find it hard to stay focused, don’t fret: it’s completely normal. Our mind is designed to be distracted, to keep on scanning the room around us for new information — or potential danger. Instead of beating yourself up, try to plan your work around your goals and triggers. If your goal is to write a report for an upcoming meeting, you will need a few hours of uninterrupted work. What triggers could get in the way of your focus? Is it your phone? Chatty colleagues? Adapt your workspace to minimize these distractions, whether it’s leaving your phone in another room, blocking distracting apps, or locking yourself up in a meeting room with a “do not disturb” post-it note.

Again, these principles may sound obvious, but it’s easy to get lost in the weeds. Before you start studying complex productivity systems, consider improving the way you manage your time, your energy, and your attention by applying the TEA framework of productivity. As is often the case, self-reflection is a powerful tool to track your progress and make sure you apply these ideas in a thoughtful way.

The post The TEA framework of productivity: managing your time, energy, and attention appeared first on Ness Labs.
April 2022 Updates
New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights two recent updates. One of those updates was pretty big, so I will end up copying the entire updated post below, rather than an excerpt. But first, one announcement and one shorter update.

Endless Frontier Fellowship

First, I wanted to do a quick plug for a new fellowship that’s probably of interest to some readers of this newsletter. It’s a one-year science and tech policy fellowship for talented early career individuals, called the Endless Frontier Fellowship. Fellows spend an immersive year embedded as policy entrepreneurs at EFF’s anchor organizations, the Institute for Progress (New Things Under the Sun’s partner), the Federation of American Scientists, or the Lincoln Network. It’s paid! If you want to apply, the deadline is May 2. More details here.

Covid-19 and Innovation

Second, the article Medicine and the Limits of Market Driven Innovation has been updated with some discussion of a new paper by Agarwal and Gaule (2022), which describes how the biomedical R&D machine responded to covid-19. It’s a bit hard to excerpt the updates, but two points emphasized are:

Agarwal and Gaule provide some additional evidence that confirms work done by other papers using earlier data. Biomedical R&D is responsive to the size of the profit opportunity associated with diseases: they find a 10% increase in the size of the market for a drug is associated with about 4% more clinical trials.

Against this benchmark, the response of biomedical R&D to covid-19 was a huge outlier. According to their estimates, the size of the “market” for a covid-19 treatment (based on global mortality from the disease) was bigger than the market for any other disease they considered. Even so, the number of new clinical trials was 7-20 times larger than their model would have predicted.

Covid-19 was strange in other ways as well.
One of the main arguments of Medicine and the Limits of Market Driven Innovation is that private biomedical R&D generally responds to profit opportunity only with projects that do not require much fundamental research. While we have pretty good evidence that this is the case, covid-19 represents a big counter-example. As discussed a bit in the new update, covid-19 did in fact lead to a major shift in the kind of research done throughout science (discussed in more detail here).

Data on Combinatorial Innovation

Lastly, I’ve written a fairly large update to a post originally called “Innovation as Combination: Data.” That was the fifth New Things Under the Sun post I ever wrote, and it wasn’t quite in the style of today’s posts. I now try to make each piece make a specific claim, drawing on a set of related papers, but that piece was more a roundup of some related articles. I’ve rewritten it to make a specific claim, which is encapsulated in the new title: “The best new ideas combine disparate old ideas.” It’s about 50% new material, with the set of articles covered going from 4 to 7. Rather than excerpt so much, I reproduce the whole updated post below; enjoy!

The Best New Ideas Combine Disparate Old Ideas

Where do new ideas and technologies come from? One school of thought says they are born from novel combinations of pre-existing ideas. To some extent that’s true by assumption, since everything can be decomposed into a collection of parts. But this school of thought makes stronger claims. One such claim is that new combinations - those pulling together disparate ideas - should be particularly important in the history of ideas. And it turns out we have some pretty good evidence of that, at least from the realms of patents and academic papers (and also computer programming). To get at the notion that new ideas are combinations of older ideas, these papers all need some kind of proxy for the pre-existing ideas that are out there, waiting to be stitched together.
They all ultimately rely on classification systems that either put papers in different journals, or assign patents to different technology categories. These journals or technology classifications are then used as stand-ins for different ideas that can be combined. A paper that cites articles from a monetary policy journal and an international trade journal would then be assumed to be combining ideas from these disciplines. Or a patent classified as both a “rocket” and a “monorail” technology would be assumed to combine both ideas into a new package technology.

New Combinations in Patents and Citations

A classic paper here is Fleming (2001), which uses highly specific patent subclasses to proxy for combining technologies. There were more than 100,000 technology subclasses at the time of the paper’s analysis, each corresponding to a relatively narrow technological concept. Using a sample of ~17,000 patents granted in May and June 1990, Fleming calculates the number of prior patents assigned the exact same set of subclasses. He shows patents assigned combinations without much precedent tend to receive more citations, which suggests patents that combined rarely paired concepts were indeed more important. For example, as we go from a patent assigned a completely original set of subclasses to a patent with the maximum number of prior patents assigned the same set of subclasses, citations fall off by 62%. This flavor of result holds up pretty well to a variety of differing methods. For example, Arts and Veugelers (2015) track new combinations in a slightly different way than Fleming, and use a different slice of the data. Rather than counting the number of prior patents assigned the exact same set of technology classifications, they look at the share of pairs of subclasses assigned to a patent that have never been previously combined.
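Both of these novelty measures reduce to simple set operations on patents' subclass labels. As a toy sketch (the data and function names are made up for illustration; the actual papers work with full USPTO subclass assignments and citation windows):

```python
from itertools import combinations

def fleming_precedent_count(patent_subclasses, prior_patents):
    """Fleming (2001)-style measure: how many prior patents were assigned
    the exact same set of subclasses? Zero means the combination as a
    whole is completely original."""
    target = frozenset(patent_subclasses)
    return sum(1 for p in prior_patents if frozenset(p) == target)

def new_pair_share(patent_subclasses, prior_patents):
    """Arts & Veugelers (2015)-style measure: the share of subclass pairs
    on this patent that no prior patent has ever combined."""
    seen_pairs = set()
    for p in prior_patents:
        seen_pairs.update(frozenset(pair) for pair in combinations(sorted(p), 2))
    pairs = [frozenset(pair) for pair in combinations(sorted(set(patent_subclasses)), 2)]
    if not pairs:
        return 0.0
    return sum(1 for pair in pairs if pair not in seen_pairs) / len(pairs)

# Toy prior art: the pairs AB and BC exist, AC does not.
prior = [{"A", "B"}, {"B", "C"}]
print(fleming_precedent_count({"A", "B", "C"}, prior))  # 0: no prior patent has exactly {A, B, C}
print(new_pair_share({"A", "B", "C"}, prior))           # 1/3: only the AC pair is new
```

Note how the two measures can disagree, as the text describes: a patent assigned {A, B, C} is maximally novel for Fleming (no identical prior set) even when most of its pairwise combinations already exist.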
This differs a bit from Fleming because they are only interested in patents that are the first to be assigned two disparate technology subclasses, and also because a patent might be a new combination and still be assigned no new pairs. For example, given subclasses A, B, and C, if the pairs AB, BC, and AC have each been combined before, but the set of all three (ABC) has not, then Fleming will code a patent assigned ABC as highly novel and Arts and Veugelers will not. Arts and Veugelers (2015) examine ~84,000 US biotechnology patents granted between 1976 and 2001, looking at the citations received within the next five years. About 2.2% of patents that forge a new connection between different technology subclasses go on to be one of the most highly cited biomedical patents of all time, compared to just 0.9% of patents that fail to forge new connections. And patents that don’t become these breakthroughs still get more citations if they forge novel links between technology subclasses. Moreover, the direction of this relationship is robust to lots of additional control variables. As a final example, He and Luo (2017) also establish this result, measuring novel combinations in yet another way, and using an even broader set of data. He and Luo look at ~600,000 US patents granted in the 1990s that contain five or more citations to other patents. Rather than relying on the technology classifications assigned directly to these patents, they look at the classifications assigned to cited references. They assume a patent combines ideas from the classifications of the patents it cites. They also use a much coarser technology classification system, which has just 630 different technology categories, rather than the over 100,000 used in the previous two papers. To measure novel combinations, they look at how frequently a pair of technology classifications are cited together relative to what would be expected by chance.
That means they end up with lots of measures of novelty for each patent, one for every possible pair of cited references. To collapse down the set of novelty measures for each patent, they order the pairs of cited references from the least conventional to the most, and then grab the median and the 5th percentile. As a measure of the importance of these patents, we can look at the probability that they are a highly cited patent for the year they were granted and for their technology class. In the figure below, they divide patents up into deciles and compute the probability that a patent whose novelty measure falls into that decile is a hit patent. Because they are adapting some earlier work, they set these indices up in a kind of confusing way: in the left figure below, moving from left to right we get increasingly conventional patents, while in the right figure, moving from left to right we get increasingly more unconventional patents.

(Figure from He and Luo, 2017)

The figure above shows that when you focus on the most unusual combination of cited technologies made by a patent (the right figure), then more atypical patents have a significantly higher chance of being a hit patent. When you focus on the median, you find a more complicated relationship: you don’t want all the combinations made to be totally conventional nor totally unconventional and strange. There’s a sweet spot in the middle. Perhaps patents that are completely stuffed with weird combinations are too weird for future inventors to understand and build on?

Addressing some potential problems

The link between unusual combinations of technology classifications and future citations received is pretty reliable across these papers. But before taking these results too far, there are a few potential issues we need to look into. The first potential issue is a form of selection bias. One challenge from this literature is that we typically only ever look at patents that are ultimately granted.
But suppose patent examiners are biased against patent applications that make unusual combinations. If that’s the case, then patents making unusual combinations will only make it through if they are so valuable that their merits overcome this deficit. That would, in t...
Unlocking the power of less with Francesco D’Alessio, creator of Bento
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us be more productive without sacrificing our mental health. Francesco D’Alessio is the creator of Bento, a methodology that limits you to three tasks per day for better prioritisation. The best way to apply the methodology is to download the app, which is available on iOS and soon on Android. In this interview, we talked about how to design your workflows to balance your energy levels, how limits can help you achieve more, the biggest challenges in personal productivity, why we should be more intentional with our to-do lists, how to go from task overload to true focus, and more. Enjoy the read!

Hi Francesco, thank you so much for agreeing to this interview. What do you think are some of the biggest challenges people face when managing their productivity?

Thank you to everyone at Ness Labs for having me. Personally, I think one of the biggest challenges with productivity right now is prioritising tasks. For knowledge workers, the tasks we assign ourselves are either overreaching or underwhelming, leaving us feeling unaccomplished by the end of the day. We add more and more to our lists, naturally, without considering the value of each task. This overwhelm can lead to burnout, workplace stress, and a lack of success at the end of your day. This can then compound further, having an impact on your next day’s productivity too. It can turn into a really destructive pattern. By introducing better, more systematic ways to select your tasks with Bento, we hope to change the approach many knowledge workers use to accomplish their most important tasks. Our goal is to help you select more intentional tasks.

Many people have tried to tackle these challenges. What makes Bento different?

Bento isn’t just an app, but a methodology. Our key objective is to combine healthy, mindful practices to build a framework first, then an app second.
You can use Bento anywhere — though, of course, we want you to do it with the Bento app, as we’ve built it to be the single best place to implement the method. Another big challenge we face day-to-day is balancing energy levels. Many apps that offer a more intentional approach fail to address managing energy alongside workflow elements. With Bento, you can apply simple strategies that order your tasks based on your energy levels, tailored to each box you create. Our vision is that limits can help you accomplish your most meaningful tasks. That’s why Bento has a “3×7 limit” — it limits you to three tasks (one large, one medium, one small) and seven boxes in total. Seven Bento boxes is plenty to build your own Bento for a week of tasks. Was there an “aha” moment that convinced you to bring Bento into the world? Bento was born from a pain point I saw in many people’s experiences through reading comments, speaking to people in offices, and my own love for Japanese culture. It was only in early 2021 that I decided to pop my developer friends Karl and Robin a message to see if they were interested in collaborating to build an app. Many late-night calls over the next year then helped us produce a beautiful application and thoughtful methodology, which was a really fulfilling experience for all of us. Okay, so let’s say I have made my Bento for the day. What do I “eat” first? Once you have your three tasks ready, you apply a workflow. A workflow is very simply the order in which tasks are completed, with the goal of balancing your energy levels. You can choose one of the three workflows: Eat That Frog is taken from the classic productivity book of the same name by Brian Tracy — with this workflow, you focus on your largest task first, move on to a medium task, and finish with your small task. Climb The Summit is a balanced approach to your day. 
You begin your day with a medium-energy task, moving to the biggest task, and finishing with your small task. Slow Burn is perfect for slow starters to the day. You begin with your smallest task, and then move on to a more demanding task, gradually working your way towards the largest task, which is great for afternoon peaks of energy. We designed Bento to be flexible, so you can assign a workflow to a box each time you create one, perfect for the ever-changing energy levels you might face. That sounds like an incredibly simple and powerful method. But, let’s be honest: even with the best of planning and intentions, we often get distracted. Distraction is something everyone faces; sadly, none of us has escaped it. Whilst it isn’t possible to remove distractions entirely, inside Bento we designed our focus experience around the concept Cal Newport introduced in “Deep Work” — a classic productivity read about the value of limiting distractions. Bento’s one-task focus mode helps you hone in on a single task at a time. The goal behind this is to block the view of every other task and to focus on your one primary goal. A significant challenge with task overload is the element of exposure to what’s next. If you eliminate this by removing the other tasks on your list, you can only direct your mind’s attention to your true focus target. A subtle distraction that gets overlooked is context switching. One-task focus also reduces the context switching that commonly comes from just seeing the other tasks on your list. So, should people fully switch to Bento and forget about their current to-do lists? Short answer: No, Bento complements existing applications. We believe Bento is a layer you can add to your existing tools, using Bento as the focus framework for getting less done. In the next few weeks, we’ll be introducing the Bento Method course — a guide on how to apply the exact method to the tools you use every day. 
This will allow people to use Bento where they see fit, though we still maintain that the Bento app is the best place to apply the Bento Methodology. Looking forward to the course! What kind of people use Bento? Right now, we’re seeing a lot of productivity folks using it alongside their existing tools, thanks to the way Bento complements other apps. From our beta testing, we actually discovered that Bento can be used in a wide variety of situations. For example, we spoke with a dad who started using Bento with his daughter who has autism, and found that the timer system and focus on three tasks helped her train her focus — this is something we’re eager to explore more. The methodology and app are so wide-reaching that we’re finding many people who suffer from workplace stress, task overwhelm, or prioritisation struggles getting huge value from Bento, many of whom are knowledge workers. What about you… How do you personally use Bento? Bento is one of those concepts that can be layered over whatever you use. Right now, I’m using Bento daily as I narrow down what matters in my own Sunsama account. Obviously, things are added to my backlog throughout the day, but my Bento box helps me to stay focused on what matters if all else fails. When I complete my Bento box items, I tend to feel a sense of success from accomplishing those intentional tasks I laid out the night before. And finally… What’s next for Bento? Our next goal for Bento is Android, which is coming very soon. In the meantime, we’ll launch the official Bento course with templates for existing applications like Notion, ClickUp and many more to offer people a way to implement the Bento methodology inside their existing tools. After that, our goal is to bring Bento to more devices, add synchronisation, and explore how Bento can suggest ways to work more mindfully and effectively on tasks. Thank you so much for your time, Francesco! Where can people learn more about Bento and give it a try? 
Thank you for sharing this folks, we can’t wait for people to try Bento. Bento is available to download on iOS, and there’s a waitlist for Android. You can also follow our journey on Twitter. The post Unlocking the power of less with Francesco D’Alessio, creator of Bento appeared first on Ness Labs.
How to design a sustainable workplace at home and in the office
You are likely to spend around 90,000 hours at work over your lifetime. If that number doesn’t seem big, consider this: that’s ten years of your life. Depending on where you work, you may have little agency over the design of your workplace — hospital workers and flight attendants are rarely consulted when it comes to sustainability practices — but, in many cases, we do have the ability to make our workplace more sustainable. Whether it’s changing your own habits or convincing the people you work with to make more sustainable choices at work, small changes can have a big impact. Let’s have a look at the benefits of a sustainable workplace, and some simple steps you can take at work to be more mindful of our planet. Save money, save the planet First, why would you want to make your workplace more sustainable? Beyond doing what’s right for our planet and for future generations, designing a sustainable workplace has many practical, and often immediate, benefits: Reduced costs. It may sound obvious, but saving energy will reduce your bill, purchasing second-hand furniture will reduce the cost of decorating your office, and taking public transportation or cycling to the office will save you money compared to using a car. Increased creativity. Upcycling an old desk you found at a thrift shop will require a lot more creativity than buying a new one and following the three-step assembly instructions. Whether it’s to reuse materials, increase the energy efficiency of a project, or figure out how to increase the lifespan of the products you use at work, making your workplace more sustainable often requires creative thinking. Better work satisfaction. This is especially true for bigger companies. The HP Workforce Sustainability Survey reports that 61% of office workers say sustainable business practices are a “must-have” for companies, and a paper suggests that improved sustainability standards can reduce annual quit rates. 
The good news is: anyone can contribute to designing a more sustainable workplace, whether it’s just you working from home, or if you’re working from an office with your team. Three ways to design a sustainable workplace Of course, making your workplace more sustainable is not about applying a few quick fixes. As Andrew Cameron writes in the journal Strategic Direction: “This is not about a one‐off conference or a newsletter, it is about permanently changing the way decisions are made and the way people work to enable the organization to function, in a different and ultimately more relevant way. You will know when you have succeeded when environmental and sustainability considerations are an instinctive part of the decision‐making process at all levels.” That being said, there are some easy wins that can help you get started. If you work as part of a team or in an office, these small changes can help spark conversations around workplace sustainability. And if you work on your own or at home, you may use these as a starter pack of sustainability practices, which can prompt you to research and improve the sustainability of other aspects of your workplace. 1. Use deforestation-free products Avoid printing documents as much as possible, and if you absolutely must, use deforestation-free paper. And no, that doesn’t necessarily mean recycled paper. A study published in Nature Sustainability shows limited benefits of recycled paper, and even indicates that if all paper was recycled, emissions could increase by 10%! This is because recycling paper relies more on fossil fuels and electricity from the grid compared to producing virgin paper. Maybe that will change and recycling paper will be increasingly powered by renewable energy, but for now, this is not the best way to make your workplace more sustainable. Instead, make sure the paper you use is FSC certified. FSC stands for Forest Stewardship Council. 
This is a certification confirming that the forest is being managed according to strict environmental, social and economic standards, preserving biological diversity and benefiting the lives of local people. The FSC certification is also helpful for other workplace products. For instance, you may want to check that your bamboo-based laptop stand comes from sustainably managed crops, instead of areas where the land has been specifically deforested to grow bamboo. 2. Save energy There is a direct connection between the amount of electricity you use at work and the environment. Much of our electricity is generated in thermal power plants, which use fossil fuels, biofuels, or nuclear fuel to heat water and produce steam. When you consume less power, you reduce the amount of greenhouse gas emissions released by power plants. Of course, you’re not expected to reduce your work hours so your computer uses less electricity — though taking more breaks is always a good idea — but there are small steps you can take that will have a big impact. For instance, LED bulbs use 70 to 90% less energy than incandescent bulbs. They also have a longer lifespan: up to 40 times longer than an incandescent bulb! There are other habits you can develop to save energy in the workplace, such as turning off appliances that are not in use, making sure your office is properly insulated instead of relying on the heater or air conditioner, and turning off the lights whenever you leave a room. 
If your workplace already has an item that is working as intended, the most sustainable choice is to keep on using it instead of replacing it. When the item doesn’t do the job any more — maybe it’s broken and can’t be repaired — the second most sustainable choice is to purchase a second-hand replacement. This is of course more easily done at an individual level or for small teams, but if you can, it is worth going to a second-hand store, especially when it comes to purchasing office furniture. And vintage works for electronic devices too! French startup Back Market has reached a $5.7B valuation for its marketplace where people can buy refurbished devices without generating additional waste. It’s another good way to save money while making a sustainable choice. Small changes add up Individually, some of these changes may seem like they have a low impact on climate change, but they do add up when everyone chips in. By purchasing workplace products that don’t directly harm the environment and that aren’t made in a socially irresponsible way, we can send a signal to the companies manufacturing the products we use every day at work and collectively encourage a shift towards more sustainable practices. By saving energy, we can reduce our greenhouse gas emissions. And by going vintage, we can avoid generating additional waste. Designing a sustainable workplace is also an opportunity to be more mindful about the way we work, and to have conversations about the impact we want to have and the legacy we want to leave. The post How to design a sustainable workplace at home and in the office appeared first on Ness Labs.
When Extreme Necessity is the Mother of Invention
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. Audio versions of this and other posts: Substack, Apple, Spotify, Google, Amazon, Stitcher. We all know the proverb “Necessity is the mother of invention.” This proverb is overly simplistic, but it gets at something true. One place you can see this really clearly is in global crises, which vividly illustrate the linkage between need and innovation, without any fancy statistical techniques. Let’s look at three examples. Crisis #1: Covid-19 Global Pandemic Our first crisis is the one we’re all most familiar with: the covid-19 global pandemic. During 2020-2022, the big thing we suddenly needed was medical treatment for covid-19. Agarwal and Gaule (2022) look at what happened to the number of new clinical trials (for all diseases) in the wake of the pandemic.1 No surprises: the number of new clinical trials shot up as the magnitude of the disease became clear, with essentially all of the increase coming from trials related to covid-19. From Agarwal and Gaule (2022) In the end, these trials succeeded and we got a suite of effective vaccines in record time: necessity was the mother of invention. Covid-19 had other effects too. For one, it forced the world to embark on an unprecedented experiment in remote work. Bloom et al. (2021) is a short paper that looks at the share of patent applications, filed in the USA, that relate to remote work. Bloom and coauthors scan the text of patent applications for words related to remote work, such as “work remotely”, “telework”, “video chat”, and many others. As we can see in the figure below, covid-19 induced a step change in the share of patents related to working remotely. Again, necessity was the mother of invention. Update to Bloom et al. (2021) by Mihai Codreanu Crisis #2: Oil Shocks Our second crisis is the oil price shocks of the 1970s. 
After a long period of relatively stable and predictable energy prices, the price of oil abruptly shot up due to disruptions to Middle Eastern supply in the 1970s. The energy crisis created an urgent need to pivot away from dependence on suddenly unreliable oil supplies. Suggestive evidence that the US economy managed to do just that comes from the following figure from Hassler, Krusell, and Olovsson (2021). The black line is the share of GDP spent on energy; the dashed line tracks the price of energy in the USA. From Hassler, Krusell, and Olovsson (2021) Around 1985 the link between the share of GDP spent on energy and the price of energy seems to have changed (in the figure, the black line moved from above the dashed one to below). That suggests the economy got better at getting more GDP out of less energy. But it’s still not 100% clear how the timing of this all played out; was this really that closely related to the oil shocks? To more precisely estimate the pace of innovation related to energy, Hassler, Krusell, and Olovsson (2021) use some fairly basic economic modeling. They assume economic output is produced by labor and energy, and that technology comes in two flavors, one for each. If the technology for energy gets twice as good, it’s as if you’ve got twice as much energy to play with (when in fact, better technology allows you to use the energy you’ve got twice as efficiently). Similarly, if labor technology gets twice as good, it’s as if you’ve got twice as much labor to work with. The cool thing is that if you accept their pretty simple model, you end up with a way to measure a concept like “technology”, which is normally so nebulous, with some very simple and readily available data. 
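To make this concrete, here is a toy sketch of the kind of inversion involved. Assuming a CES production function (a standard setup in this literature, though the paper’s exact formulation may differ), the energy cost share satisfies s_E = (A_E·E/Y)^ρ with ρ = (σ−1)/σ, which can be inverted to back out the energy-augmenting technology A_E from measurable quantities. All numbers below are purely illustrative, not the paper’s data:

```python
# Toy inversion in the spirit of Hassler, Krusell, and Olovsson (2021).
# Assumed CES production function: Y = [(A_L*L)^rho + (A_E*E)^rho]^(1/rho),
# with rho = (sigma - 1) / sigma, where sigma is the elasticity of
# substitution between labor and energy. Under competitive pricing the
# energy cost share is s_E = (A_E * E / Y)^rho, which inverts to the
# expression below.

def energy_technology(Y: float, E: float, energy_share: float, sigma: float) -> float:
    """Back out energy-augmenting technology A_E from output Y, energy
    input E, the energy cost share s_E, and the elasticity sigma (!= 1)."""
    rho = (sigma - 1.0) / sigma
    # s_E = (A_E * E / Y)^rho  =>  A_E = (Y / E) * s_E^(1/rho)
    return (Y / E) * energy_share ** (1.0 / rho)

# Illustrative numbers: output 100, energy input 10, a 5% energy cost
# share, and sigma = 0.5 (substitution is hard, as the paper assumes).
print(energy_technology(100.0, 10.0, 0.05, 0.5))  # ~200.0
```

Note that with σ < 1, a rise in the energy share (as during the oil shocks) maps into a lower measured A_E, so the post-shock decline in the share, despite high prices, is what shows up as rapid energy-technology growth.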
If you assume the economy uses labor and energy efficiently, you can do some math, move things around, and show that the productivity of the energy technology can be expressed as a function of GDP per capita, the share of spending on energy in the economy, and our ability to substitute labor for energy and vice-versa. That’s almost all stuff we can measure. When Hassler, Krusell, and Olovsson plug data into this equation and make some sensible assumptions about our ability to substitute labor for energy (they assume it’s quite hard), you get the following striking chart tracking our ability to convert energy into economic output. Here, the blue line is a measure of how technology multiplies the energy supply, so that having one barrel of oil in 2020 is like having 3 in 1950. Estimated productivity of energy. From Hassler, Krusell, and Olovsson (2021). Now it’s crystal clear: the oil shocks knocked the productivity of energy technology out of its stagnation and into a steady upward trend. Necessity was the mother of invention. An aside: people sometimes argue that one reason technological progress slowed in the 1970s is that we moved from technological progress that took abundant energy for granted to technological progress that did not. Hassler, Krusell, and Olovsson’s work is broadly supportive of that narrative. This is just three data points, so don’t get too excited, but there does seem to be a negative correlation between the pace of progress in technology that converts energy into output and technology that converts labor into output. In other words, when the oil shocks forced us to expend more effort on reducing demand for fossil fuels, that may have come at the expense of other forms of technological progress that we had become accustomed to. From Hassler, Krusell, and Olovsson (2021) Crisis #3: World War II Our last crisis is World War II. 
We could point to many innovations born out of the exigencies of World War II: radar to defend against attack from the air; penicillin produced at industrial scale; and the Manhattan project to develop the first atomic bomb. But let’s focus on the need to build a lot of airplanes. When President Roosevelt set a target of 50,000 planes in 1940, many viewed this goal as simply impossible: contemporary economists Robert Nathan and Simon Kuznets believed the US simply didn’t have the productive capacity to do it (Ilzetzki 2022). And yet, in reality, the US eventually succeeded in producing 100,000 planes in just one year. During the war, there was a 1,600% increase in the number of aircraft produced, and US spending on aircraft alone reached 10% of 1939 GDP. How did the US manage to do the seemingly impossible? The following figure from Ilzetzki (2022) gives some clues. It shows total US aircraft produced (measured by weight), as well as the capital and labor used to produce aircraft, relative to 1942 levels. From Ilzetzki (2022) Initially, the US made more airplanes by using more labor and more capital to make airplanes. But after 1943, something surprising happened: the increase in capital and labor slowed or even stopped, but we kept on increasing how many planes we made! In order to meet their ambitious targets, airplane manufacturers were forced to discover new efficiencies. And they did! Necessity was the mother of invention. Ilzetzki actually goes much further, and tracks the productivity of individual airplane manufacturers. He shows that, on average, individual manufacturers became more productive when they received more plane orders, and that this effect was greatest for the manufacturers who were already operating closest to capacity. In other words, the manufacturers who had the least ability to meet their aircraft orders by increasing labor or capital were also the ones who most improved their productivity! 
Invention Has Two Parents The above examples illustrate how sudden new necessities can indeed drive innovative effort. And I’ve written elsewhere about evidence that demand for new technologies, even in non-crisis settings, can also spur innovative effort. For example, the private sector tends to do more R&D on treatments for diseases that become more profitable to treat, and automobile manufacturers developed more fuel efficient vehicles in response to fuel efficiency standards and high energy prices. But we need to be careful not to take this too far. You cannot will technologies into being, simply because someone needs them (if so, we wouldn’t have waited so long for mRNA vaccines and atomic bombs). Invention has two parents. A truer proverb might be “Necessity and knowledge are the parents of invention.” We can also see this in some of the examples just cited. As discussed in a bit more detail here, most of the new clinical trials for covid-19 were not for fundamentally new kinds of drugs. Instead, they were largely attempts to re-deploy existing drugs to a novel use case. In other words, they were attempts to take what was already known to be safe and see if it had beneficial effects on covid-19. Most of these failed. The covid-19 vaccines that eventually succeeded rested on deep foundations of fundamental research that went back decades. Covid-19 was the impetus to transform this knowledge into effective new treatments (though these efforts were already underway before covid-19), but it didn’t give us the knowledge that made that possible. Most of the radical technologies developed during World War II, such as radar and the atomic bomb, relied on breakthroughs in fundamental science that preceded the war. 
In a 2020 review of the activities of the US Office of Scientific Research and Development, which oversaw these and many other technological breakthroughs of the war, Gross and Sampat write “the time for basic research is before a crisis, and since time was of the essence, ‘the basic knowledge at hand had to be turned to good account.’” Ilzetzki shows much of the improvement in airplane manufacturing came from adopting techniques that had been shown to be effective in other sectors, rather than inventing new processes out of whole cloth. Specifically, airplane manufacturers that faced capacity constraints were more likely to adopt production line processes (instead o...
Audio: When Extreme Necessity is the Mother of Invention
This is an audio read-through of the initial version of When Extreme Necessity is the Mother of Invention. To read the initial newsletter text version of this piece, click here. Like the rest of New Things Under the Sun, this underlying article upon which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Building your digital legacy with Kazuki Nakayashiki, co-founder of Glasp
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and achieve our intellectual and creative ambitions. Kazuki Nakayashiki is the co-founder of Glasp, a social web clipper that allows users to share their highlights and notes as they read, without any back-and-forth between the web and a note-taking app. In this interview, we talked about the nature of human legacy, the knowledge isolation problem, serendipitous spaced repetition, social knowledge management and collective intelligence, learning in public, the impact of social accountability on note-taking, and more. Enjoy the read! Hi Kazuki, thank you so much for agreeing to this interview. Glasp stands for “Greatest Legacy Accumulated as Shared Proof” — can you tell us more about what it means? Thank you so much for having me. I am a huge fan of Ness Labs and I am honored to be here today. First of all, we believe that one of the most noble pursuits is for people to learn, experience, and pass their knowledge on to future generations. The present in which we stand today is built on what our predecessors have built in the past. When we talk about legacy, it does not necessarily mean to leave a successful business or a lot of money behind. Of course, it is wonderful to be able to leave these things to future generations, but I don’t believe that these are the greatest legacies, in that not everyone can leave them behind. Instead, I believe that the greatest legacy is to live a courageous life. It is in the attitude of not being daunted by difficulties, not being overly pessimistic, and betting on the possibilities and hopes of humanity. And I believe this means leaving and weaving our knowledge, wisdom, and history for the next generation. However, even though we are standing on the shoulders of our predecessors, we do not know how or by whom all that knowledge was accumulated. 
Through Glasp, we want to empower people to leave, share, and weave the greatest legacy. Our mission is to democratize access to other people’s learning and experiences that they have collected throughout their lives. By doing so, we may be able to help others who may try to follow a similar path in the future.  This is an ambitious mission. Was there an “aha” moment that inspired you to build Glasp? I had a near-death experience at the age of 20 when I had a sudden subdural hematoma that paralyzed the left side of my body. My doctor at the time told me that I could have a cardiopulmonary arrest at any moment, and I was hospitalized and underwent emergency surgery. I remember the sense of fear and emptiness that welled up from the depths of my body, which words cannot express, as I was confronted with the reality that the normalcy of yesterday was suddenly taken away from me and that my existence might disappear from this world. Since that time, I have wanted to leave behind a legacy that would allow me to feel that I have made even a small contribution to the future of humanity — a legacy that would be the proof and meaning of my existence in this world. I do not know if what I leave behind is really useful to anyone. It is a matter of subjectivity. However, just as someone’s trash is someone else’s treasure, I believe that by leaving my learning and experiences behind, they can become useful to someone somewhere in the future. If you leave your knowledge, wisdom, and insight in a completely personal space, no one will be able to access it after you die. Given the fact that collective learning has made us humans smarter across generations, I think keeping knowledge in a silo is a huge loss for humanity as a whole. In other words, the problem we are addressing is the isolation of knowledge. The world is full of countless wonderful personal apps, but my near-death experience and the process of searching for the meaning of life led me to the current idea of Glasp. 
That’s incredibly inspiring, thank you so much for sharing Glasp’s origin story. How does Glasp work exactly? Glasp is a social web clipper that people can use to highlight and organize quotes and thoughts from the web without having to switch back and forth between screens, and access other like-minded people’s learning at the same time. You can get an idea of what Glasp is all about by looking at what our users are saying on the “Wall of Love” on the website. After breaking our mission into specific components, we decided to focus on the overlap between curation, knowledge management, and community. Some of the advice I recently received from Jeremy Brown on Twitter overlaps with these components, as well as with Michael Simmons’ ideas of public note-taking and learning in public. For curation, we currently offer a Chrome Extension and a Safari Extension, which, when installed, display a small popup like the Kindle’s highlighter when you select text, and will allow you to curate text that resonated with you. It allows for easy highlighting and note-taking without interrupting the reading experience. When reading a particular article, all highlights and notes for that article can be viewed from the right sidebar, and be easily copied and pasted into note-taking apps, markdown style. You can also add tags and comments, and see what others have highlighted on that website directly on the page. In terms of knowledge management, as you can see on my page, Glasp organizes your highlights and notes for you and allows you to filter by topic or full-text search, so you can easily access quotes, thoughts, ideas, and insights that you have found important in the past. The social nature of Glasp also allows you to access other people’s highlights and notes on your page (called “marginalia”), so you can build on others’ perspectives and deepen your knowledge. 
In the future, we plan to add a feature that will allow you to backlink your findings with your past highlights or those of others. I also believe that the uniqueness and fun of Glasp’s approach to knowledge management lie in its ability to resurface what you have learned. Spaced repetition is one of the most proven methods for remembering what you have read, but the sad reality is that reviewing flashcards is tedious and setting up the system is cumbersome. With Glasp, others interact with your highlights, which provides accidental and automatic opportunities for review. This is unique in that other curators resurface your highlights, and I think it is also interesting that the curators resurface the creator’s work. As for community, Glasp allows you to connect and learn from other people with similar interests through the learning byproduct: highlights and notes. Glasp’s home feed allows you to see what the people you are following are learning and what insights they have gained. You can also search content by topic, so you can see what friends, colleagues, influencers, and other people you trust or who share similar interests are learning. You can check each site’s top highlights and find your favorite authors as well. In particular, newsletter writers or content writers can share their learning process with Glasp (called “learning in public”). Deep engagement and direct feedback from audiences and followers can be a great way to get ideas and inspiration for their future content. Having learning partners is very inspiring and fun. Glasp can enhance one’s learning process by making the learning process social. For example, we are collaborating with the Month to Master learning cohort program run by Michael Simmons to help learners weave and share what they learned. When it comes to bookmarking and highlighting, a big challenge is that many people end up building a graveyard of random links they never end up actually learning from. 
How does Glasp address this challenge? As you say, saving random links often leads to this dysfunction, and I think this is a problem that is not limited to bookmarking and highlighting, but extends to our information society in general. One important aspect is the difference between read-it-later apps and Glasp. There are two processes of selection when we collect information. One is broad and shallow; the other is narrow and deep. The former is where read-it-later apps show their strength: a place to store a vast amount of information that is of some interest, is relevant, or may be useful in the future, standing in for your short-term working memory. The latter is where Glasp and other highlighter apps show their strength: a place to store important information that has passed that primary sifting and that you want to keep for a longer period. If the number of items stored is huge and their quality is never checked, the collection becomes difficult to maintain, organize, and manage, and will most likely end up as a graveyard of random links. Fortunately, Glasp’s core user action is not bookmarking but highlighting and leaving notes, so the barrier to action is higher than with bookmarking or read-it-later apps. Furthermore, the possibility that highlights and notes may be seen by others creates social accountability, raising that barrier even higher. When you hear the word “highlight”, you probably associate it with education. Those familiar with the research may know that highlighting, on its own, is not a very effective learning technique. However, research also suggests that highlighting becomes more effective when moderate incentives and pressure are designed into the process.
In other words, the pressure that someone might see your highlights works as a social accountability function, which can increase the likelihood of saving something better and more valuable to you. While some may argue that the volume of a person’s digital legacy may be reduced by this approach, we place more value on the insight, idea, emotion, and...
Building your digital legacy with Kazuki Nakayashiki, co-founder of Glasp
How developing mental immunity can protect us from bad ideas
Every day, a new video goes “viral”, and an “infectious” idea starts spreading. Mental immunity is a psychological theory studied by an emerging field known as cognitive immunology. With origins dating back 70 years, this field of research is based on the premise that there is an immune system not only for the body, but for the mind as well. People with a healthy mental immune system are more likely to detect misinformation. A strong cognitive immune system can also help spot bad ideas at an earlier stage, so you may avoid wasting time, energy or money. Let’s explore the concept of cognitive immunology, together with a list of strategies you can employ to help strengthen your mental immune system. A mental immune system The concept of mental immunity was formulated by Professor Andy Norman, director of the Humanism Initiative at Carnegie Mellon University. Despite the field of cognitive immunology being in its infancy, mental immunity research has deep roots dating back to the 1950s. The mental immune system is believed to function in a similar way to the body’s physical immune system. The purpose of the physical immune system is to detect pathogens, including bacteria and viruses, so that they can be eradicated from our bloodstream and organ systems before they have a chance to cause damage. Similarly, a healthy mental immune system will detect harmful or incorrect information that enters our mind, so that it can be recognised as such and then promptly rejected. In a paper about immunology’s theories of cognition, philosopher Alfred Tauber explained that developing an “immune self” requires us to actively distinguish between the self and the foreign, so that foreign information can be interrogated and potentially defended against. This way, the mental immune system sifts through ideas, information and other forms of external stimuli to identify, and therefore protect us from, the adverse outcomes associated with misinformation.
The benefits of mental immunity Both factual information and misinformation have the potential to spread through the population far faster than ever before. We have access to constant, real-time updates from online news platforms, as well as information shared via online magazines, social media, and unregulated websites. While factual information is a great asset, unreliable information and the inability to spot bad ideas may lead to poor decision making. Professor Sander van der Linden from the Department of Psychology at the University of Cambridge published a study which showed that the public could be inoculated against misinformation regarding climate change. In the study, the public’s cognitive immunity to misinformation was reinforced when they were given a pre-emptive warning about politically motivated attempts to spread misinformation on the human causes of global warming. The results showed that this was an effective way to strengthen their immunity to false information. In his 2021 book Mental Immunity, Professor Andy Norman explains that the immune systems of our minds can be strengthened against ideological corruption and mind parasites, which increases our capacity for critical thinking. This in turn helps us to spot and remove bad ideas before they can cause harm. Furthermore, developing greater cognitive flexibility allows us to change our minds faster when new, better-evidenced information is presented to us. In short, moving away from rigid thinking patterns improves our relationship with information and our resultant actions. How to strengthen your mental immune system You can increase your mental immunity by making your mind more resistant to misinformation, which will lead to better cognitive flexibility and decision making. To keep using the same analogy, these strategies work in a similar way to vaccination: they support your mind in recognising the threat of bad ideas. 1. Build awareness of misinformation.
Misinformation is spread for a variety of reasons. It can be passed on innocently, especially when shared from person to person in a general conversation. However, research suggests that the spread of false content can also occur more deviously for political gain or polarisation, to generate income for media outlets, as a personal or industrial form of propaganda, or as a result of social media algorithms. Remember that misinformation is common, that fake news is designed to appear genuine, and train yourself to immediately interrogate the information or data you are presented with. This will help make your mind more resistant to bad ideas. 2. Develop healthy meta-beliefs. A meta-belief is a belief that one holds following a thorough reasoning process or cognitive interrogation to check the validity of the belief. In their 2020 paper, Gordon Pennycook and colleagues explained that “theories of belief should take into account what people believe about when and how beliefs and opinions should change — that is, meta-beliefs.” The team found that people who were politically liberal were more likely to believe that opinions and beliefs should change according to evidence. Those who were religious, or held paranormal or conspiratorial beliefs, were less likely to agree that beliefs should change. Developing meta-beliefs strongly correlates with mental immunity. To strengthen your mental immune system, be prepared to assess and re-adjust previously held beliefs if new evidence comes to light. This way, your opinions are continuously being amended based on the latest evidence. 3. Practise self-reflection. When practising self-reflection, you should start to pay attention to your patterns of consumption. If this process of reflection indicates that you are drawn to the same news sources, or solely rely on influencers or social media platforms for your updates, your information diet may not be varied enough. 
For greater mental nourishment, diversify your information sources and dig deeper into the underlying research to fully understand whether you are unconsciously being sold misinformation. It can also be helpful to develop a note-making practice so you can capture your thoughts and consciously reflect on the content you consume. Mental immunity is an emerging theory and more research is needed. However, the initial investigations have shown that a strong mental immune system helps filter external information to avoid falling prey to false data or flawed ideas. This cognitive system can be strengthened by building your awareness of the rampant nature of misinformation, developing healthy meta-beliefs, and reflecting on your patterns of information consumption. Once reinforced, stronger mental immunity will allow you to promptly detect misinformation, reject plans that are unlikely to succeed, and increase your cognitive flexibility to quickly adapt when presented with new evidence. Definitely a concept worth experimenting with! The post How developing mental immunity can protect us from bad ideas appeared first on Ness Labs.
Making time for what matters with the co-founders of Agenda
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us achieve more without sacrificing our mental health. Drew McCormack and Alexander Griekspoor are the co-founders of Agenda, a date-focused note-taking tool that allows you to seamlessly plan and document your projects. In this interview, we talked about the nature of time, the delicate balance between design and simplicity, their just-in-time approach to resurfacing notes, how to make sure formatting doesn’t get in the way of taking great notes, and much more. Enjoy the read! Hi Drew and Alex, thank you so much for agreeing to this interview. Let’s start with a bit of a philosophical question. What’s your relationship with time? Trying to predict what will happen is fundamental to science, and for someone like me it is like candy to a toddler. When I was working as a physicist, time was often just a parameter in an equation that I was trying to solve. Dynamical systems like the weather and the stock market have always intrigued me, and I built a whole university career around that. The excitement of building a model or program that attempts to make a prediction, and then testing to see if it works, is hard to match. In my daily life, time feels more like a river, a constant stream. You look forward to something in the future, and before you know it, it belongs to the past, and you almost can’t grasp the anticipation you felt when it hadn’t happened yet. But I don’t look back too often, generally living in the now, with a vague and evolving idea of where I want to go in the future. Despite this complex relationship we have with time and how crucial it is in our daily lives, many tools solely focus either on documenting or planning — how is Agenda different? There are lots of note-taking apps around, and you can ask yourself: do we need another one? We felt there was room for one, because none of the existing offerings have much of a relationship to time.
It’s almost like your notes exist in a timeless vacuum, and yet you know yourself that each one has a logical temporal context. The note might be from the beginning of a project, or belong to that meeting on March 3rd when Joan joined the team. Many notes are not timeless, and you miss that context in most note-taking apps. That was the inspiration for Agenda — to add temporal context. Is this a note you are taking in a particular meeting on a particular date? Is it planning for the future? Or is it a record of what happened in the past? Agenda orders your project notes into a timeline, flowing from future to past, giving you that context. Notes usually begin in the future or present, and over time, flow back down the timeline to become breadcrumbs of the past. The timeline is really what makes Agenda unique in the note-taking world. Was there an “aha” moment when you decided to start building Agenda? The idea for Agenda came from my partner in crime, Alex. He was running his own software company, and spent a lot of time in meetings, as well as organizing a team of developers. He found that in his meetings, people would often forget what they wanted to discuss, or would come back after the meeting with “Oh, I meant to ask you…” To make things go smoother, he developed a system of taking notes. Alex would have one text file per project. He would enter new notes at the top of the file, and when he finished with one meeting, he would immediately create the note for the next meeting at the top. During the week, he would add anything that he thought needed to be discussed to that future meeting note. When the meeting finally came around, he would locate the note at the top, and use it as the agenda for the meeting. Anything postponed during the meeting itself, or requiring a follow-up action, would be copied into a new note for the following meeting, and so forth. Alex has written a detailed account of this here.
This process worked great, but text files also have limitations, and Alex wanted to build a dedicated app. We have been friends and scientific colleagues for many years, and I was intrigued by the idea, and joined the team. I say “team”, but it really is just the two of us, with some intermittent help from others with design and programming tasks. Anyone who has ever tried Agenda will likely recognize the genesis of the idea in the app. Notes are organized into projects, which are equivalent to the plain text files Alex used before. By default, new notes appear at the top of the project timeline, and can be used for future planning.  As the items in the note are completed, the note becomes history, and you add a new note. If you need to check anything, you scroll down — back through history — and see why you decided to do what you did. Can you tell us a bit more about how Agenda works exactly? I mentioned the project timeline already, and that is directly inspired by Alex’s text files, but when you develop an application, you have the flexibility to add features not easily achieved purely with text. In Agenda, notes can have a date, but they can also be linked to an event in your calendar. You can also link tasks from a list to a reminder in Apple’s Reminder app. Our philosophy is to integrate with the existing apps you already use, rather than trying to build a kitchen sink app that does it all. Agenda is very much focussed on note-taking, but integrates with your calendar and other apps. One of the most important features of Agenda — and one that isn’t really possible to achieve with just text files — is an overview we call “On the Agenda”. You can flag any note as being on-the-agenda by simply clicking on an orange dot at the top. Once you do that, the note appears in an On the Agenda overview in the left sidebar. I use On the Agenda myself all the time. I keep notes there that are current, and which I want to access quickly. 
It might be a note for a meeting that day; a task checklist for a feature I am programming; or a recipe for the evening meal — anything I want to find quickly. And once something is no longer relevant, I take it off the agenda, knowing that if I need it, it is still there in the project. What also goes well beyond plain text files is the Agenda editor. It supports styles, so your notes have structure, as well as lists and checklists. I actually stopped using a todo app for my tasks, because I find it much faster and easier to use Agenda. I typically only add a linked reminder if I want to be reminded at a particular time to do something. You can add attached files and images to your notes too. One of the nice features of the new release, Agenda 14, is that you can now edit files in your notes. In the past, you could drag in an Excel file or PDF, but you couldn’t change it once it was in Agenda. With Agenda 14, you can double click on the attachment to open Excel, edit your spreadsheet, and save straight to the file in Agenda. Same with PDFs: open in Apple’s Preview, add some markup, and save changes directly to Agenda. Agenda also goes well beyond many note taking apps in terms of organizing, automation and searching. You can include tags in your notes, and link to outside resources or other notes. Agenda 14 improves upon this with backlinking, tag autocompletion, and a tag manager.  And for the real pro user, there are note templates and actions for inserting the date and other useful information. These allow you to build note content dynamically. You can even automate Agenda using x-callback URLs (macOS and iOS) and Shortcuts (iOS). That’s exciting. Many note-taking tools sacrifice design for the sake of functionality, but Agenda offers a beautiful note-taking experience. Can you tell us about your design process? The two of us have always had a strong focus on design in our apps. 
Alex has won several Apple Design Awards, which are like the Oscars of the app world, and Agenda won one in its first year. That success was largely thanks to our designer, Marcello Luppi. Marcello is also a long time friend who we knew from the scientific world, but who had transitioned into app design.  We originally began the project with no designer, and after around 6 months of programming, we showed what we had to some friends, and… They hated it! They didn’t understand the concept at all, and it looked like it had been designed by a couple of software engineers — go figure! We called in the help of Marcello, and he completely transformed the app, both visually and functionally, into the award winner it became. Marcello was so successful that Apple poached him away from us a year or so after we received the award. Marcello polished the whole appearance of the app, but one aspect we were determined to get right from the beginning was the text editor. There are lots of great markdown editors around today, where you edit plain text files with markdown formatting such as “# This is a heading”, that type of thing. We loved markdown for its ease of entry, but we thought it was a compromise to have that formatting in the final document. With Agenda, we wanted the best of both worlds. Why can’t you type “# This is a heading”, and have the text change instantly into an elegant bold heading style? Why do I have to see the # in my heading? We wanted Agenda notes to look and feel like real, well formatted documents, and that is what we have tried to achieve. In the current editor, you can type “# This is a heading”, and it will become a well-formatted heading. You can type “[ ] Bananas”, and it will turn into a checklist for your groceries. So you have the ease of entering markdown, but you end up with elegantly styled documents. And it goes much further than headings and lists. You can add tags, tables, links, and preformatted blocks, just by entering text. 
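The editor behavior Drew describes, where a typed markdown token instantly becomes a styled element, can be pictured with a tiny sketch. This is purely illustrative and not Agenda’s actual implementation; the shortcut table and the `apply_shortcut` function are invented for the example:

```python
# Illustrative sketch (not Agenda's real code): mapping markdown-style
# shortcuts typed by the user to paragraph styles, stripping the token.
import re

# Hypothetical shortcut table: typed prefix pattern -> paragraph style.
SHORTCUTS = [
    (re.compile(r"^#\s+(.*)"), "heading"),
    (re.compile(r"^\[ \]\s+(.*)"), "checklist-item"),
    (re.compile(r"^[-*]\s+(.*)"), "list-item"),
]

def apply_shortcut(line):
    """Return (style, text) for a typed line, removing the markdown token."""
    for pattern, style in SHORTCUTS:
        m = pattern.match(line)
        if m:
            return style, m.group(1)
    return "body", line

print(apply_shortcut("# This is a heading"))  # ('heading', 'This is a heading')
print(apply_shortcut("[ ] Bananas"))          # ('checklist-item', 'Bananas')
```

The point of the sketch is the “best of both worlds” idea from the interview: the user types plain markdown, but the stored note keeps only the styled text, with no `#` left in the heading.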
That sounds like such an elegant way to capture all sorts of notes. Now — a big challenge with many note-taking apps is the retrieval of older notes. How does Agenda...
How to increase your creativity by cultivating creative self-efficacy
Do you think of yourself as someone who is not creative? Creative work can be challenging, and many people lack confidence in their own ability. Psychologists have reported that being unsure, anxious or defeatist about your creative potential can become a self-fulfilling prophecy that hinders your performance. Creative self-efficacy is the internal belief that you have the ability to complete creative tasks effectively. If you can learn to leave behind the fixed mindset of “I am not a creative person”, you will be able to make more room for personal growth, exploration, and innovation. Believing in your creative potential The concept of self-efficacy was first coined by Dr Albert Bandura. Bandura closely studied the relationship between performance and belief in oneself. He noted that those who had a strong sense of efficacy, or belief in oneself, approached challenging tasks with the determination to succeed. People with high self-efficacy tend to set goals, to become deeply engrossed in the task, and to continue their efforts despite difficulties or setbacks. Rather than feeling threatened by a challenge, they approach it with the confidence that they are in control and will eventually master the task. Conversely, Bandura noticed that those who tend to back away from difficult tasks do so because of self-doubt and a fear of failure. With little determination to succeed, they are more likely to dwell on their perceived weaknesses. For people with low self-efficacy, obstacles can quickly lead to abandoning the task, compounding an internalised belief that they are incapable of succeeding. Creative self-efficacy is a specific form of self-efficacy that was first investigated by Dr Pamela Tierney and Dr Steven Farmer. The researchers described creative self-efficacy as “the belief one has the ability to produce creative outcomes”. The greater the belief in your own creativity, the more successful you will be in pursuing your creative goals. 
Tierney and Farmer also reported that creativity can be impacted by your confidence in managing the overall demands of your career. If you feel that you are capable of succeeding at work, then you are also more likely to demonstrate good creative performance within your role. The most interesting part is that although job self-efficacy is a predictor of confidence in your personal creative ability, creative self-efficacy is the greater predictor of your creative performance. This is corroborated by the results of Dr Gay Lemons, who found that creative success is influenced more by belief in one’s own ability than by actual creative competence. As you can see, creative self-efficacy is a psychological attribute that greatly influences creative performance, with the potential to expand what we can achieve. How to cultivate creative self-efficacy Learning how to believe in your own creative ability is as important, if not more so, than developing your creative skills. While it is important to practise and explore new creative skills, cultivating creative self-efficacy can have a great influence on your creativity. Here are some practical ways to cultivate your creative self-efficacy. 1. Develop a creative network. By building a strong professional network of people who are driven to produce excellent creative work, you can start imitating part of their creative self-efficacy to increase your own. Remember that creativity is not restricted to the arts. Everyday professional dilemmas can be solved creatively, whether they relate to project management, delivery of information, or organisation of a complex budget. Watch how your peers apply creative thinking to manage everyday tasks, and start emulating some of these patterns. 2. Get creative support. Identify people whose creative efforts are often successful, and ask whether you can work under their guidance.
This could be a quick brainstorming session, a creative review, or just sharing some helpful resources. Support should go both ways: consider whether there is scope to offer your co-workers some of your time to help with their creative growth. 3. Cultivate creative autonomy. The professional freedom to expand on your basic duties and responsibilities can increase your creative self-efficacy. As a bonus, perceived autonomy also has a positive impact on our mood. Creative autonomy involves fostering a growth mindset and self-directed ways of working. If you are a manager, take a step back and try to avoid excessively supervising your team. Instead, make your team feel empowered to succeed via their own methods. Remember that your creativity is more closely linked to creative self-efficacy than to your actual creative competence. Beyond its immediate benefits, cultivating creative self-efficacy can help you feel more motivated and productive, and can be an opportunity to build a strong professional network. The post How to increase your creativity by cultivating creative self-efficacy appeared first on Ness Labs.
The emerging theory of authentic leadership
Being “authentic” has become a bit of an overused buzzword, and has lost some of its meaning. However, despite the concept not being fully mature in a theoretical or experimental sense, early research has shown that authentic leadership may improve team performance compared to traditional management. Authentic leadership is an emerging theory that encourages managers to be genuine, self-aware and transparent when guiding their team. Let’s explore the potential benefits of authentic leadership, and the strategies you can employ to authentically support your team in being as successful as possible. The benefits of authentic leadership Authentic leadership is a concept that was first formulated by Harvard professor and former Medtronic CEO Bill George, who was adamant that new laws alone could not repair the corporate crises of the early 2000s. Instead, he claimed that new leaders and innovative styles of leadership were required to give corporations a chance of financial recovery. Whereas a traditional leader in a large corporation might value profits above people, an authentic leader carefully balances tough ethical dilemmas with financial optimisation. Bill George considered that there are five essential dimensions of an authentic leader: purpose, values, heart, relationships and self-discipline. According to him, an authentic leader should work compassionately, valuing both the company and its employees. So, why do teams value an authentic leader? Authentic leadership is seen as an antidote to unethical leadership. Fred Luthans and Bruce Avolio noted that an authentic leader is likely to appear more reliable and trustworthy to those who work with them. Instead of a manager with a “work persona”, people enjoy working with a manager who behaves like their true self — a manager who is self-aware, who has developed a supportive professional relationship with each individual in the team, and who has a good understanding of their thoughts, emotions, and belief systems.
Traditional leadership might involve a manager working in a way that does not necessarily align with their own personal values. This can be confusing for colleagues, who might be left second-guessing what is expected of them. Researchers reported that this lack of clarity about what is expected can lead to a team working without direction. This is likely not only to reduce job satisfaction, but could also lower overall productivity. In contrast, authentic leadership can make it far easier for co-workers to recognise your values, and to predict or follow your instructions. It will require less effort to understand what you expect, helping the team to work in a more constructive and cohesive manner. In a study of 51 teams, authentic leadership improved teams’ virtuousness — their drive to be the very best they could be. In turn, increased virtuousness led to greater team potency — the ability to succeed. The researchers concluded that authentic leadership can foster team motivation, thereby improving overall team performance. Win-win! How to become an authentic leader Most people do not undergo leadership training before becoming a leader, and so have to learn to lead on the job. Although research into authentic leadership is in its infancy, some principles can be helpful when leading a team. 1) Define your ideals. Authentic leadership lies in upholding your personal and professional values. Before you can lead authentically, you will need to define your own ethical values and ideals of leadership. Although there will usually be a corporate goal in sight, those values should still guide your decisions as a leader. 2) Practise self-reflection. Self-reflection through journaling, self-awareness exercises, or investing in a career coach may help you to identify your strengths, weaknesses, and cognitive patterns such as likely reactions to certain situations.
It will also help you to develop emotional intelligence so you can become more aware of how your team is feeling and support them appropriately. 3) Foster relational transparency. People are more likely to enjoy working with you and respect the decisions you make if you are transparent about your thought processes. The line between personal and professional does not have to become overly blurred, but it is important that your colleagues don’t feel like you have a hidden agenda.   It takes courage, but openly sharing your strengths, weaknesses, and thought processes with your team shows them that you have nothing to hide, and that you are — like them — eager to keep on learning and growing. This level of transparency suggests that personal and professional growth is something to be supported and celebrated. Authentic leadership remains an emerging but promising theory. Learning to lead in a new way takes time, but defining your own ideals, practising self-reflection and developing relational transparency with your co-workers is likely to lead to improved cohesion, satisfaction, psychological safety, and performance. Give it a try! The post The emerging theory of authentic leadership appeared first on Ness Labs.
What is neurodiversity?
People think, learn, behave, and experience the world around them in many different ways. Some of this diversity is due to neurological differences. Neurodiversity refers to those variations in neurocognitive functioning. Let’s have a look at the origin of the term, and its usefulness in research and practice. A short primer on neurodiversity The term “neurodiversity” is relatively new: it was coined by social scientist Judy Singer in the late 1990s in relation to autism, but has since come to encompass many other neurodevelopmental conditions such as attention deficit hyperactivity disorder (ADHD), dyslexia, dyscalculia, and more. People of standard neurodevelopmental and cognitive functioning are referred to as “neurotypical”, while “neurodivergent” is used to refer to people whose brain functions differ from what is considered standard — sometimes collectively referred to as neurominorities. Central to neurodiversity is the idea that naturally occurring variations in the human brain should be seen as differences rather than deficits. Some people consider neurodiversity to be related to the concept of biodiversity — a term you will mostly see being used for the purpose of advocating for the conservation of species. In the words of Dr Robert Chapman: “Proponents of the neurodiversity movement […] challenge the pathologization of minority cognitive styles and argue that we should reframe neurocognitive diversity as a normal and healthy manifestation of biodiversity.” There is currently no definitive list of neurodevelopmental conditions that should be included under the umbrella term of neurodiversity, and some researchers even advocate for an entirely different definition that doesn’t rely on contrasting neurocognitive differences between individuals.
As Dr Nancy Doyle explains: “A definition has emerged for psychologists and educators which positions neurodiversity within-individuals as opposed to between-individuals.”

Figure: The spiky cognitive profile of neurodivergence (adapted from Doyle, 2020)

She adds: “The psychological definition refers to the diversity within an individual’s cognitive ability, wherein there are large, statistically-significant disparities between peaks and troughs of the profile, known as a ‘spiky’ profile. A neurotypical is thus someone whose cognitive scores fall within one or two standard deviations of each other, forming a relatively ‘flat’ profile, be those scores average, above or below.” It’s important to keep in mind that neurodiversity has no official definition, and the idea does not align with the usually discrete approach to diagnosis used in medical practice — the latest Diagnostic and Statistical Manual of Mental Disorders includes more than 150 discrete diagnoses. However, it doesn’t need to be excessively controversial. Two complementary research models Because the concept of neurodiversity has initially emerged as part of the social sciences, there is currently no consensus within the scientific community as to how to use it in clinical contexts. That’s partly because the clinical model and the social model consider disability from two different perspectives. While the clinical model seeks to cure or manage disabilities, the concept of neurodiversity is based on the social model of disability, which identifies systemic barriers to the social integration of people with functional differences. The two models have their respective critics, but they are not incompatible. In different ways, both clinical research and neurodiversity research seek to contribute scientific evidence to reduce impairments experienced by neurodivergent people: clinical research focuses on treatment, and neurodiversity research focuses on adapting environments to the diverse needs of individuals.
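Doyle’s definition is quantitative, so the “spiky versus flat” distinction can be illustrated with a small sketch. This is a hypothetical toy, not a clinical tool; the 2-SD cutoff and the mean-100/SD-15 scoring scale are illustrative assumptions, not part of Doyle’s method:

```python
# Toy illustration of the "spiky profile" idea: compare the spread between
# the peaks and troughs of a set of standardized cognitive subtest scores.

def profile_spread_sd(scores, sd=15):
    """Spread between highest and lowest subtest scores, in SD units.

    Assumes standardized scores on a scale with SD 15 (as on many
    cognitive batteries); the scale is an assumption for this sketch.
    """
    return (max(scores) - min(scores)) / sd

def is_spiky(scores, threshold_sd=2.0):
    """Flag a profile whose peaks and troughs differ by more than ~2 SD."""
    return profile_spread_sd(scores) > threshold_sd

# A relatively "flat" profile: all scores within about 1 SD of each other.
flat = [98, 104, 101, 95]
# A "spiky" profile: large disparities between subtests.
spiky = [130, 92, 118, 85]

print(is_spiky(flat))   # False
print(is_spiky(spiky))  # True
```

The sketch just makes the definition concrete: a flat profile stays within a narrow band, while a spiky one shows the large peak-to-trough disparities Doyle describes.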
In a comment published in The Lancet Psychiatry, Dr Edmund Sonuga-Barke and Dr Anita Thapar write: "Rather than a complete reliance on disorder-based concepts and related treatment approaches, we can see many advantages of incorporating the concept of neurodiversity alongside mainstream research and clinical practice." "Indeed, there is no contradiction between traditional approaches that look to give neurodiverse individuals additional resources through clinical treatment and neurodiverse approaches that look to adapt environments and transform neurotypical attitudes: both approaches are beneficial and together will improve the lives of neurodiverse people." In addition, there is growing support for a "transdiagnostic" approach that cuts across traditional diagnostic categories. Researchers from the University of Cambridge explain: "Removing the distinctions between proposed psychiatric taxa at the level of classification opens up new ways of classifying mental health problems, suggests alternative conceptualizations of the processes implicated in mental health, and provides a platform for novel ways of thinking about onset, maintenance, and clinical treatment and recovery from experiences of disabling mental distress." Instead of — often artificially — imposing categories onto a multidimensional and complex space, a transdiagnostic approach allows clinicians to account for the massive heterogeneity within diagnoses and for the common co-occurrence of many conditions, which together make a rigid taxonomy too limiting to properly support people. The idea is to consider continuous dimensions within the population, as opposed to distinct categorical entities.

Supporting neurodiversity

The concept of neurodiversity is particularly useful in environments such as schools and the workplace, where changes can be implemented to foster inclusivity and bolster people's individual strengths while providing support for their different needs.
For instance, adjustments can be made to accommodate diverse physical needs, such as letting people fidget, having a dedicated space for quiet breaks, or offering noise-cancelling headphones. A lot of these adjustments may even be helpful for all employees. Clear communication and documentation, flexible hours, and a school or workplace culture that emphasises kindness — all of these are good practices to implement, regardless of initiatives specifically targeted at supporting neurodiversity. Neurodiversity is still an emerging paradigm which has been described as a “moving target”, but it already offers several practical implications for leaders who want to build more inclusive environments and researchers who want to support people across the multitude of conditions that may escape categorical labels. Hopefully this short primer will make you want to learn more! The post What is neurodiversity? appeared first on Ness Labs.
Steering Science with Prizes
New scientific research topics can sometimes face a chicken-and-egg problem. Professional success requires a critical mass of scholars to be active in a field, so that they can serve as open-minded peer reviewers and can validate (or at least cite!) new discoveries. Without that critical mass, working on a new topic might be professionally risky. But if everyone thinks this way, how do new research topics emerge? After all, there is usually no shortage of interesting new things to work on; how do groups of people pick which one to focus on? One way is via coordinating mechanisms: a small number of universally recognized markers of promising research topics. The key ideas are that these markers are:

- Credible, so that seeing one is taken as a genuine signal that a research topic is promising
- Scarce, so that they do not divide a research community among too many different topics
- Public, so that everyone knows that everyone knows about the markers

Prizes, honors, and other forms of recognition can play this role (in addition to other roles). Prestigious prizes and honors tend to be prestigious precisely because the research community agrees that they are bestowed on deserving researchers. They also tend to be comparatively rare, and they are followed by much of the profession. So they satisfy all three conditions. Coordination isn't the only goal of prizes and honors in science. But let's look at some evidence about how well prizes and other honors work at steering researchers towards specific research topics.
Howard Hughes Medical Institute Investigators

We can start with two papers by Pierre Azoulay, Toby Stuart, and various co-authors. Each paper looks at the broader impacts of being named a Howard Hughes Medical Institute (HHMI) investigator, a major honor for a mid-career life scientist that comes bundled with several years of relatively no-strings-attached funding. While the award is given to provide resources to talented researchers, it is also a tacit endorsement of their research topics, and it could be read by others in the field as a sign that further research along that line is worthwhile. We can then check whether the topics elevated in this manner go on to receive more research attention by seeing if they start to receive more citations. In each paper, Azoulay, Stuart, and coauthors focus on the fates of papers published before the HHMI investigatorship has been awarded. That's because papers written after the appointment might get higher citations for reasons unconnected to the coordinating role of public honors: it could be, for instance, that the increased funding resulted in higher-quality papers, which resulted in more citations, or that increased prestige allowed the investigator to recruit more talented postdocs, which resulted in higher-quality papers and more citations. By restricting our attention to pre-award papers, we don't have to worry about all that. Among pre-award papers, there are two categories: those written by the (future) HHMI investigator themselves, and those written by their peers working on the same research topic. Azoulay, Stuart, and coauthors look at each separately. Azoulay, Stuart, and Wang (2014) looks at the fate of papers written by an HHMI investigator before their appointment. The idea is to compare papers of roughly equal quality, but where in one case the author gets an HHMI investigatorship and in the other case doesn't.
For each pre-award paper by an HHMI winner, they match it with a set of "control" papers of comparable quality. These controls are published in the same year, in the same journal, with the same number of authors, and with the same number of citations at the point when the HHMI investigatorship is awarded. Most importantly, each control paper is also written by a talented life scientist, with the same author position (for example, first author or last author, which matters in the life sciences), but one who did not win an HHMI investigator position. Instead, this life scientist won an early-career prize. If people decide what to work on and what to cite simply by reading the literature and evaluating its merits, then whatever happens to the author after the article is published shouldn't be relevant. But that's not the case. The figure below shows the extra citations, per year, for the articles of future HHMI investigators, relative to their controls who weren't so lucky. We can see there is no real difference in the ten years leading up to the award, but then, after the award, a small but persistent nudge up for the articles written by HHMI winners.

From Azoulay, Stuart, and Wang (2014)

That bump could arise for a number of different reasons. We'll dig into what exactly is going on in a minute. But one possibility is that the HHMI award steered more people to work on topics similar enough to the HHMI winner's that it was appropriate to cite their work. A simple way to test this hypothesis is to see if other papers on the same topic also enjoy a citation bump after the topic is "endorsed" by the HHMI, even though the authors of those articles didn't get an HHMI appointment themselves. But that's not what happens! Reschke, Azoulay, and Stuart (2018) looks into the fate of articles written by HHMI losers on the same topic as HHMI winners.
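The matched-control design of Azoulay, Stuart, and Wang (2014) described above can be sketched as exact matching on the observable characteristics they use. This is a simplified illustration under assumed field names and data layout, not the authors' actual code; a real implementation would draw candidates from a large bibliometric database.

```python
def find_controls(treated, candidates):
    """Return "control" papers that exactly match a treated (future HHMI
    winner's) paper on the characteristics described above: publication
    year, journal, number of authors, author position, and citations at
    the time of the award. Controls are authored by early-career prize
    winners who did not receive an HHMI appointment."""
    keys = ("year", "journal", "n_authors", "author_position",
            "citations_at_award")
    return [
        c for c in candidates
        if all(c[k] == treated[k] for k in keys)
        and c.get("author_won_early_career_prize", False)
        and not c.get("author_won_hhmi", False)
    ]

# Hypothetical toy data: only the first candidate matches on every characteristic.
treated = {"year": 2001, "journal": "Cell", "n_authors": 4,
           "author_position": "last", "citations_at_award": 30}
candidates = [
    {"year": 2001, "journal": "Cell", "n_authors": 4,
     "author_position": "last", "citations_at_award": 30,
     "author_won_early_career_prize": True},
    {"year": 2001, "journal": "Cell", "n_authors": 3,  # author count differs
     "author_position": "last", "citations_at_award": 30,
     "author_won_early_career_prize": True},
]
controls = find_controls(treated, candidates)
```

The point of matching on pre-award citations is baked into the design: treated and control papers start from the same citation level, so any post-award divergence can be attributed to the award itself rather than to paper quality.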
For each article authored by a future HHMI winner, Reschke, Azoulay, and Stuart use the PubMed Related Articles algorithm to identify articles that are on similar topics. They then compare the citation trajectory of these articles on HHMI-endorsed topics to control articles that belong to a different topic, but were published in the same journal issue. As the figure below shows, in the five years prior to the award, these articles (published in the same journal issue) have the same citation trajectories. But after the HHMI decides someone else's research on the topic merits an HHMI investigatorship, papers on the same topic fare worse than papers on different topics!

From Reschke, Azoulay, and Stuart (2018)

Given the contrasting results, it's hard not to think that the HHMI award has resulted in a redistribution of scientific credit to the HHMI investigator and away from peers working on the same topic. So maybe awards don't actually redirect research effort. Maybe they just shift who gets credit for ideas? The truth seems to be that it's a bit of both. To see if both things are going on, we can try to identify cases where the coordination effect of prizes might be expected to be strong, and compare those to cases where we might expect it to be weak. For example, for research topics where there is already a positive consensus on the merit of the topic, prizes might not do much to induce new researchers to enter the field. Everyone already knew the field was good, and it may already be crowded by the time HHMI gives an award. In that case, the main impact of a prize might be to give a winner a greater share of the credit in "birthing" the topic. In contrast, for research topics that have been hitherto overlooked, the coordinating effect of a prize should be stronger. In these cases, a prize may prompt outsiders to take a second look at the field, or novice researchers might decide to work on that topic because they think it has a promising future.
It’s possible these positive effects are enough so that everyone working on these hitherto overlooked topics benefits, not just the HHMI winner. Azoulay, Stuart, and coauthors get at this in a few different ways. First, among HHMI winners, the citation premium their earlier work receives is strongest precisely for the work where we would expect the coordinating role of prizes to be more important. It turns out most of the citation premium accrues to more recent work (published the year before getting the HHMI appointment), or more novel work, where novelty is defined as being assigned relatively new biomedical keywords, or relatively unusual combinations of existing ones. HHMI winners also get more citations (after their appointment) for work published in less high-impact journals, or if they are themselves relatively less cited overall at the time of their appointment. And these effects appear to benefit HHMI losers too. The following two figures plot the citation impact of someone else getting an HHMI appointment for work on the same topic. But these figures estimate the effect separately for many different categories of topic. In the left figure below, topics are sorted into ten different categories, based on the number of citations that have collectively been received by papers published in the topic. At left, we have the topics that collectively received the fewest citations, at right the ones that received the most (up until the HHMI appointment). In the right figure below, topics are instead sorted into ten different categories based on the impact factor of the typical journal where the topic is published. At left, topics typically published in journals with a low impact factor (meaning the articles of these journals usually get fewer citations), at right the ones typically published in journals with high impact factors.
From Reschke, Azoulay, and Stuart (2018)

The effect of the HHMI award on other people working on the same topic varies substantially across these categories. For topics that have not been well cited at the time of the HHMI appointment, or which do not typically publish well, the impact of the HHMI appointment is actually positive! That is, if you are working on a topic that isn't getting cited and isn't placing in good journals,...
Audio: Steering Science with Prizes
This is an audio read-through of the initial version of Steering Science with Prizes. To read the initial newsletter text version of this piece, click here. Like the rest of New Things Under the Sun, the article upon which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Productivity addiction: when we become obsessed with productivity
The business and productivity app market is worth billions of dollars. Every day, there is a new productivity tool popping up, a book about productivity being published, and millions of people reading and sharing content related to personal productivity. Productivity started as a measure of efficiency in the production of goods and services. Somehow, along the way, many of us have become addicted to it.

Why are we so obsessed with being productive?

At its core, productivity addiction is based on the same reward systems as other addictions. By providing constant reinforcement — for example, financial rewards in the form of salary increases, or social rewards in the form of work recognition — productivity can become a goal in and of itself, resulting in compulsive behaviours. This phenomenon is perhaps more common than you might think. Two nationally representative studies carried out in Norway and Hungary reported similar results. In Norway, Dr Cecilie Andreassen and her team found that between 7.3% and 8.3% of Norwegians are addicted to work. In Hungary, a team led by Dr Zsolt Demetrovics suggests that 8.2% of Hungarians working at least forty hours a week are at risk of work addiction. Dr Mark Griffiths estimates the prevalence of work addiction in the United States to be around 10%, mentioning some estimates as high as 15% to 25%. It doesn't help that being addicted to productivity may be a "mixed-blessing addiction" (a term originally used to describe work addiction in the 1980s), making it more socially acceptable and potentially hiding the negative effects for longer. Similar to someone who is addicted to exercise, a productivity addict may initially be successful in their career, earn a lot of money, and receive encouraging work accolades. But, in the long term, being obsessed with productivity can have unintended consequences, such as burnout, family issues, and health problems.
The BBC ran a story about productivity addiction in which Dr Sandra Chapman, from the Center for BrainHealth at the University of Texas, explained: “The problem is that just like all addictions, over time a person needs more and more to be satisfied and then it starts to work against you. Withdrawal symptoms include increased anxiety, depression and fear.”

Are you addicted to productivity?

At least in the Western world, our education has often taught us to tie our self-worth to how much we contribute to society. The more we contribute, the better. “I work, therefore I am.” Being productive feels like a way to improve our self-worth. This positive reinforcement can make it hard to realise we may be falling prey to productivity addiction. However, there are five tell-tale signs you may be addicted to productivity:

1. You don’t want to “waste” any time. Productivity addicts may suffer from time anxiety, an obsession with spending our time in the most meaningful way possible. As Dr Alex Lickerman described it, time anxiety stems from these recurring questions: “Am I creating the greatest amount of value with my life that I can? Will I feel, when it comes my time to die, that I spent too much of my time frivolously?” Trying to always optimise the way you spend your time and struggling to do nothing may be signs of productivity addiction.

2. You tend to turn hobbies into side projects. Let’s say you become interested in gardening, and really enjoy spending time in the garden, learning about different kinds of flowers and plants, and caring for them. You may be tempted to turn this hobby into something more productive, maybe by starting a newsletter about gardening, or a small business selling gardening guides.

3. You feel guilty when you don’t hit your targets. Whether it’s inbox zero or tackling a long to-do list, being addicted to productivity may make it hard to fall asleep in the evening because you haven’t managed to be as productive as you had hoped. Instead of closing your laptop and forgetting about it until the next day, you may struggle to properly disconnect because of the guilt you feel around not hitting these (sometimes artificial) targets.

4. You always make work a priority. Are you rushing to finish dinner with your family so you can get back to work? Cancelling plans with friends so you can finalise a presentation? Cutting short your night of sleep to attend an early meeting hosted in a different timezone? While most people have to make concessions from time to time, productivity addicts will tend to always choose work over other important areas of their lives.

5. You constantly feel busy. Dr Brené Brown, a research professor at the University of Houston, describes being “crazy busy” as a numbing strategy that allows us to avoid facing the truth of our lives. She half-jokingly wrote: “I often say that when they start having 12-step meetings for busy-aholics, they’ll need to rent out football stadiums.” This numbing strategy may even give us the illusion of productivity.

Luckily, productivity addiction is not a disease, and a few simple changes can help us escape its trap before we start experiencing its negative consequences.

How to manage productivity addiction

There is no one-size-fits-all solution to get rid of our obsession with productivity, but practising mindful productivity is a great way to manage productivity addiction.

1. Make space for self-reflection. Recovering from productivity addiction starts with understanding its source and mechanisms. What are the rewards that make you obsess about your productivity? Is it money, recognition, something else? What patterns have you noticed in the way you work that hurt other areas of your life, such as time with your family or sleep? Journaling can be a great way to reflect on your relationship with productivity.

2. Define meaningful priorities. For many, work is an important part of their identity. But it doesn’t have to be the only defining aspect of your worth. What else do you care about? What are areas you would like to explore outside of work? Are your priorities aligned with your values? Instead of automatically creating endless task lists, ask yourself: what would be a meaningful goal I can work towards?

3. Don’t pin the butterfly. Remember that not all hobbies need to become hustles. Try to keep some hobbies that are just that — hobbies. Spaces of self-expression where you can experiment and play whenever you feel like it, outside of the constraints of productivity.

4. Reconsider your relationship with time. Time anxiety can lead to a daily feeling of being rushed that leaves us overwhelmed and panicky. We think we are making the most of our time, but instead we are rushing through our precious time without savouring it. Take breaks, become comfortable with doing nothing, and most importantly, define what “time well spent” means to you so you can make space for these moments.

5. Create your own system. Instead of relying on prescriptive productivity methods that may not work for you and create even more stress, progressively design your own system by experimenting and iterating. Incorporate your meaningful priorities, hobbies, and insights about the way you work best to ensure you can achieve your goals without sacrificing your mental health.

Finally, pay attention to your triggers. As a recovering productivity addict, you may need to be careful about not falling back into old patterns whenever you start a new job, take up a new hobby, or set a new exciting goal. Practising self-reflection and paying attention to your mental health will ensure the way you work is more enjoyable and more sustainable. The post Productivity addiction: when we become obsessed with productivity appeared first on Ness Labs.