Digital Gems

The false compromise fallacy: why the middle ground is not always the best
Picture this: you are having a debate with a colleague regarding the best next steps for a complex project. You both have been presenting your arguments, the tone is friendly, but you cannot seem to agree on the best way forward. So you decide to find a middle ground. Sounds reasonable enough, right? Well, it’s often a very bad idea, and it has a name: the false compromise fallacy. When it’s hard to find a resolution, it can be tempting to search for the middle ground to resolve the conflict. By making us abandon the search for the most suitable resolution, the false compromise fallacy can lead to misleading conclusions and poor decision making at work and in your personal life. The birth of a false compromise Our tendency to seek compromises is not new. We can find some documented instances of compromises in ancient Rome’s public speaking, supported by a codified “art of speaking in public” (Ars Oratoria). At the time, it was known as the “argument to moderation” (Argumentum ad Temperantiam). But not all compromises make sense. A false compromise occurs when a resolution cannot be found between two opposing views, and so the middle ground is accepted as the “best of both worlds” instead. Here is a famous example of false compromise. If you know that the sky is blue, but someone else argues that it is yellow, a compromise might see you meeting in the middle to conclude that the sky is green. Of course, this agreement settles the difference of opinion in a wholly unsatisfactory way, as there is no truth in the sky being green. Furthermore, both parties will likely remain convinced that the sky is the colour they believe it to be. A false compromise only provides the illusion of a resolution. The false compromise fallacy is sometimes referred to as “bothsiderism”. Researchers Scott Aikin and John Casey reported that the functional problem with finding the middle ground is the belief that one view must be balanced with an opposing belief, regardless of how contrived the resulting view-point might be. Aikin and Casey explain that the issue represented by the false compromise fallacy is the belief “that there are two (or more sides) and one must presumably give both sides their due.” However, the evidence on one side may be in bad faith, incomplete, or incompetently understood. Just because someone presents an argument, it does not mean that it is as valid as another point of view. Trying to meet in the middle with a false compromise could lead you further from the truth or away from the correct conclusion. There may be specific times when you are more likely to acquiesce to a false compromise. Jan Albert van Laar and Erik C W Krabbe found that compromises are more likely to be fashioned when two parties conflict in both their preferences and their opinions on the correct course of action. Depending on your individual circumstances, you may be more likely to experience a difference of opinion when you are with work colleagues, family members, or in social situations. The danger of false compromises False compromises may seem innocent, especially when getting to the correct answer does not particularly seem to matter in everyday life. However, when the topic being discussed and the potential outcome are of great importance, a false compromise could cause harm. In their paper No Place for Compromise: Resisting the Shift to Negotiation, David Godden and John Casey state that leaning towards compromise may cause you to abandon your rational beliefs. 
They argue that although it might be tempting to yield when faced with a contrasting view, if both sides will be left dissatisfied by a compromise then it is better to resist the temptation of a false compromise. As well as both parties feeling dissatisfied with the outcome, false compromises can also prevent a discussion from moving forward. Had the exploration of the difference in opinion continued for longer, more evidence could have been presented and analysed. By persevering, an objectively better outcome might have been reached. A false compromise can also dangerously speed up decision making. This is particularly true if the compromise brings an abrupt end to a debate: hurriedly agreeing to a decision may prevent you from considering second-order consequences. Luckily, there are ways you can avoid falling into the worst pitfalls of the false compromise fallacy. How to manage false compromises We will likely all have had experience of false compromises, with various resulting outcomes. Learning to manage this fallacy may help you to avoid unnecessarily meeting in the middle. Consider if consensus is needed. You may try to please as many people as possible, especially when your relationship with others is important, such as in work settings. However, to avoid making a false compromise, you need to question whether reaching a collective agreement is necessary. In some situations, the decision that is objectively right may not meet everyone’s approval, but it doesn’t mean that it’s wrong. Evaluate the strength of evidence. Both parties in a debate will bring their own evidence to the table. However, it doesn’t mean the evidence should be given the same weight. Strong evidence may include peer-reviewed literature, up-to-date research, information from reliable sources, or expert opinions. Weaker evidence could be based on hearsay or personal preferences. Be open to extreme decisions. Sometimes, the best decision will be the most extreme one. If you are sure of the evidence and likely outcome, trying to meet in the middle doesn’t make sense. As uncomfortable as it may feel, you should instead be prepared to hold fast to your point of view. The false compromise fallacy can lead to misleading conclusions, poor decision-making, and dissatisfaction for all parties involved in the process. Rather than searching for a compromise, make sure to evaluate whether consensus is truly needed and how strong the evidence is for all arguments. With this in mind, you may find that the right decision is one of the more extreme options, rather than the middle ground. The post The false compromise fallacy: why the middle ground is not always the best appeared first on Ness Labs.
A New Things Under the Sun Update
Dear Reader, Change is afoot! Since December 2020, I have been splitting my time between writing New Things Under the Sun and teaching economics at Iowa State University. I loved teaching and Iowa State has been fantastic. But, to use some economist lingo, my comparative advantage is in writing New Things Under the Sun, and I have believed for a while that the project could have a bigger impact if I were able to specialize completely in it. Accordingly, this is my last day at Iowa State University. Beginning May 22, I will be joining the Institute for Progress (IFP) as Senior Innovation Economist, where my job will be to work full time on New Things Under the Sun and related projects. You may recall the Institute for Progress has been New Things Under the Sun’s partner since January - they are a new non-partisan think tank with a mission to accelerate scientific, technological, and industrial progress. This new arrangement is possible thanks to them and grant support from Open Philanthropy. While I am excited to officially be part of IFP, I will continue to work remotely from Iowa and retain sole editorial control over New Things Under the Sun. I continue to believe IFP is doing great stuff, and being affiliated with an organization that is trying to effect actual change is a good influence on me (and I hope I can be a good influence on them!). Among other things, working with IFP provides a constant nudge to think about how academic work sheds light on questions that matter. So what does it look like to specialize in this synthesizer/communicator role I’ve carved out? I guess we’ll find out! But here is my preliminary sketch. First, the most obvious requirement of this job is knowing the academic work well. Over the last few years, despite my best efforts, my to-read list has only gotten longer. So I’m going to read more. Second, part of the job is seeing connections between ideas. This is especially important as one of the things that makes New Things Under the Sun unique is that I try to keep articles up to date with the academic frontier. That means I can’t write articles and then forget about their content. To keep what I write and read perpetually accessible, I am going to try to build up a spaced repetition memory system.1 Third, I plan to write more. Well, actually, I plan to at least meet the goal I set for myself in January, after partnering up with IFP, to write three articles per month. I’m a bit embarrassed to say I’ve only hit this goal once, in February. Partly that’s due to covid finally catching up to me and then my kids during April, but it’s also because so far my time has been split and sometimes other deadlines assert priority. I don’t think New Things Under the Sun needs to be a really frequent publication - every original piece is designed to be perpetually relevant, with maintenance - but I at least want to get into a rhythm of producing something every ten days or so. Lastly, I’m going to try and meet with more of the producers and “end-users” of academic research. What do people whose work is related to innovation wish they knew? What do academics studying innovation think about their own field? I think it’s obvious my work would benefit from more of this kind of tacit knowledge. I’ve already had a few conversations like this with readers and academics. But I thought it might be helpful to make this a formal invitation: if you ever want to chat about something innovation related, feel free to drop me an email and we can set up a virtual coffee.
I can be reached at mattclancy at hey dot com. Now that I’m working fully remotely, hopefully this can be a good substitute for some of that serendipity around the water cooler I’ll be missing. If Zoom is not your thing, I also plan to visit Washington DC for a few days every quarter to work at the IFP offices, and I hope to meet with people in person during those visits. I’m sure most of you are more likely to pass through DC than you are to pass through Des Moines. Beyond that, I have plenty of other ideas for improving New Things Under the Sun, which I will also incrementally work on. But first, I’m taking a break! I’ll be taking off next week. Cheers all, and thanks for your interest in New Things Under the Sun. Excited about this next step. Matt
1 This isn’t my first experiment with spaced repetition memory systems. During covid-19, I built an online intermediate microeconomics course using the Orbit platform developed by Andy Matuschak, which implements personalized spaced repetition. Check it out if you’re curious about spaced repetition or if you need to learn calculus-based microeconomics!
Self-organized knowledge management with George Levin CEO of Hints
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better, learn faster, and make the most of our mind. George Levin is the CEO and Co-Founder of Hints, an all-in-one knowledge management app to get information captured and self-organized. In this interview, we talked about the biggest challenges faced by knowledge workers, why complex systems tend to fail, the importance of revisiting your knowledge to consolidate it, the power of self-organizing your notes, and more. Enjoy the read! Hi George, thank you so much for agreeing to this interview. What do you think are some of the biggest challenges faced by knowledge workers? I believe that knowledge is an opportunity. Every piece of information — a note, an article, a link, a screenshot, or a shower thought — is the beginning of something new, a hint that can spark changes in our lives and make us better. Unfortunately, we often miss out on these opportunities. A lot of valuable information slips through our fingers. Before launching Hints, we conducted over a hundred in-depth interviews to understand how people work with new information and what obstacles they face. We identified three stages: capturing, organizing, and revisiting. Each stage has its own challenges. Let’s start with capturing. You have probably heard the saying: “If you didn’t write it down, it never happened.” The biggest challenges here are multitasking and context switching. It’s hard to pause your work, especially if you’re jumping between tasks and calls, to save important info. Then comes organizing. Our notes, screenshots, links, and tasks are often scattered across multiple apps and devices. With significant effort, we can set up complex knowledge management systems, but most solutions require a lot of discipline to maintain. When we run out of time or energy, such complex systems fail. Finally, we get to the revisiting part. Capturing and organizing are useless without revisiting. Without it, things don’t move forward. Articles you save are not read, TED Talk videos aren’t watched, and new ideas are not developed. This will definitely resonate with readers. When did you decide to tackle those challenges? In December 2019, I sold my advertising technology startup Getintent, where I worked with Alex and Gleb. Then in 2020, I built a video distribution platform for vloggers. Unfortunately, it didn’t work out. I decided to slow down and take some time for myself. During my eight years of entrepreneurship, I’ve noticed the importance of serendipity. A sudden conversation in a coffee shop or a “random” book recommendation could change my life for the better. I learned to keep my eyes and ears open to catch small hints. This approach brought me to the knowledge management and note-taking community. I decided to build something in this field, but I didn’t want to create a better Roam or Notion. I tried to find a unique problem that wasn’t solved yet. I invited Alex and Gleb, and very soon we found it. As members of a network of business communities, we talked in FB, WhatsApp, and Slack groups, sharing experiences and giving each other recommendations. This valuable knowledge from conversations stayed on the surface for a few days, only to soon be lost. We built a script that scraped discussions in these groups and created a self-organizing community wiki on the fly.
Then I remembered a story from one of my early investors about how he built a search engine for financial information and happily sold it to a bank, only to later find out that a company named Google did the same but for the whole web. I realized that the problem we solved was relevant beyond our niche communities and applied to all knowledge workers who are dealing with a lot of valuable information around them. So in August 2021, we decided to build Hints.  And how does Hints address those challenges faced by knowledge workers? The Hints app offers users the easiest way to capture, organize, revisit and share new knowledge on the fly. We are mobile-first, but we have desktop and web apps as well.  The main goal of our quick and intuitive capturing is to avoid context switching. If you see something important, you can save it without opening the app in less than a second and stay in your flow. Then all your new knowledge gets auto-organized. Finally, the revisiting stage is where static notes become active hints. Your hints will be shown to you in an interactive story format. Our recommendation engine resurfaces the most valuable hints and reminds you about them. Information can come into many formats. What kind of formats does Hints support? Notes, screenshots, photos, images, tasks, voice-to-text memos, reminders, videos, files, lists, calendar events, links. Every piece of information can be a hint, an opportunity that could change your life. That kind of flexibility sounds incredibly powerful. More specifically, how does Hints work? You can capture notes, URLs, YouTube videos, and screenshots on your phone by forwarding them to the Hints app. Also, we support SMS, WhatsApp, and Telegram bots. You can send a text, convert it into tasks and set a reminder via our bots while in the messenger. The most developed is our Telegram bot. It will allow you to create a calendar invite and add your co-workers. Other bots will catch up soon. You can also capture anything directly from the app or via our Apple Shortcuts widget.    On the desktop, you can capture a selected text from websites, emails, and messages by pressing Command+Shift+J. Or Command+Shift+K for screenshots. Auto-organizing will group your captured hints by common categories such as meeting notes, people, articles, videos, etc. We call these categories flows because you can decide what flow you want to open depending on your mood and needs. Revisiting looks like Instagram stories. You can open your revisiting when you don’t have energy and want to browse something. While browsing you can make a change, archive an old hint, add a reminder, or a tag. This format is very engaging and interactive. I started to use it instead of Instagram and Twitter when I wanted to zone out. You will be surprised how many good hints you captured two months ago and completely forgot about them.  I love the concept of swipeable stories to refresh our knowledge. In general, Hints seems to be a great tool to reduce friction in knowledge management. Absolutely. First of all, with Hints, you stay in your flow and don’t need to jump between apps to write something down. That’s already a significant relief. Then, you don’t need to think about folders and where to place your hints. It’s self-organized, and you know where to find everything. Finally, you don’t need to think about remembering what you saved. You will be reminded about it. Things you capture will be moved forward to change your life for the better.   Amazing. What kind of people use Hints? 
They are professionals who have a lot of work and valuable information that they don’t want to miss. In my case, the Hints app has already changed my life. Nothing falls through the cracks. I stay on top of my things without relying on my discipline. I can go to bed without thinking about what opportunity I could miss today. And finally… What’s next for Hints? Our next big step is collaboration and B2B. We want to stay free for individuals and rely on B2B pricing when startups and SMBs start using Hints. For them, Hints can be where all new knowledge is captured and distilled before it moves to in-depth project management tools. Without Hints, businesses miss out on the potential opportunities within new ideas and insights that team members encounter every day.  Thank you so much for your time! Where can people learn more about Hints and give it a try? You can sign up on our website and follow our journey on Twitter. The post Self-organized knowledge management with George Levin, CEO of Hints appeared first on Ness Labs.
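For readers curious what "self-organizing" capture could look like under the hood, here is a purely hypothetical Python sketch. It is not Hints' actual implementation, and the flow names and heuristics are invented for illustration; it only shows the general idea of routing captured items into flows (articles, videos, meeting notes, tasks) without asking the user to file anything by hand.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch only, not Hints' actual code: each captured item is routed
# into a "flow" (category) by cheap heuristics, so nothing has to be filed manually.

@dataclass
class Hint:
    content: str
    captured_at: datetime = field(default_factory=datetime.now)
    flow: str = "inbox"

def auto_organize(hint: Hint) -> Hint:
    text = hint.content.lower()
    if text.startswith(("http://", "https://")):
        # Links get split into videos vs. articles.
        hint.flow = "videos" if "youtube.com" in text else "articles"
    elif any(word in text for word in ("meeting", "call", "agenda")):
        hint.flow = "meeting notes"
    elif text.startswith(("todo", "remember")) or text.endswith("?"):
        hint.flow = "tasks"
    else:
        hint.flow = "notes"
    return hint

captured = [
    Hint("https://youtube.com/watch?v=abc"),
    Hint("Meeting with Alex: Q3 roadmap"),
    Hint("todo book flights"),
]
for h in map(auto_organize, captured):
    print(h.flow, "<-", h.content)
```

A real system would presumably use richer signals (source app, content type, learned categories), but even this toy version shows why self-organization removes the "where do I put this?" friction described in the interview.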
The dangers of apophenia: not everything happens for a reason
Humans love patterns. Sometimes that’s helpful, but other times… not so much. Apophenia is the common tendency to detect patterns that do not exist. Also known as “patternicity”, apophenia occurs when we try to make predictions, or seek answers, based on unrelated events. Apophenia can lead to poor decision-making. For instance, many people choose their lottery numbers based on the birthdates of family members. As the numbers are picked at random, however, this approach won’t increase their chance of winning. In rare cases, apophenia can even be an indicator of some mental health conditions. Let’s have a look at how apophenia works, and how you can both detect and manage this phenomenon. The science of apophenia Apophenia is the propensity to mistakenly detect patterns or connections between unrelated events, objects, or occurrences. The term was first coined in 1958 by German psychiatrist Klaus Conrad during his study of schizophrenia. However, it is an effect of brain function that is not limited to those with a form of psychosis, and is now commonly recognised in healthy individuals as well. In schizophrenia, Conrad found that those who developed “apophany” started experiencing abnormal meanings in their daily life. For example, an individual might “see” various signs that they interpret as instructions meant only for them. They might be certain that an experience is proof that they are being watched, talked about, followed, or prepared for an event. In reality, these episodes are unconnected, have no pattern, and do not represent any form of sign or instruction. The delusions of schizophrenia can be all-consuming and sometimes terrifying. In healthy individuals, apophenia may not lead to such alarming consequences, but it can still have a significant impact on one’s decision-making processes. For example, you may sail through three green traffic lights in a row and see this as evidence that you are on a lucky streak. Because of this perceived pattern, you might confidently place a substantial bet on a horse race or football match. Your perception of your likely luck might therefore lead you to make a more reckless financial decision than if you had not noticed an auspicious pattern. This over-interpretation of patterns in healthy individuals could be an evolutionary survival instinct. Our ancestors may have benefitted from pattern interpretation as part of everyday life. For example, upon hearing a rustling in the trees behind them, they could either assume that the noise was due to the wind or to a predator. Fleeing because they assumed there was a predator could save their life, and there would be no harm done if the assumption turned out to be wrong. Conversely, assuming the rustling was due to the wind could have put their life at risk. Believing a false positive over a false negative could, therefore, increase our chances of survival. From fun imagery to financial risk Mild apophenia is common and occurs in many domains such as finance, the arts, and politics. Although it is not usually dangerous, apophenia can lead to risky behaviours or wrong beliefs about the meaning of a pattern. Here are some areas where you may encounter apophenia: Visual illusions. Have you ever seen non-existent images in clouds, dirt, toast, or household objects? For example, you might see a phoenix in the clouds, a man in the moon, or a face in your sandwich. Pareidolia is a common form of apophenia that involves imagery.
For some people, these images become signs of something significant, such as a message from a loved one or a sign of something yet to come. The artist Salvador Dali experimented with pareidolia to create paintings in which faces would be recognised, despite the painting breaking the mould of what a face truly looks like. Financial decisions. In 2017, psychologists Zack W. Ellerby and Richard J. Tunney investigated how we make decisions. They reported that those who notice an illusory pattern may start to believe that the outcome of an event is not determined by chance, but instead by previous outcomes or choices. This can lead an individual to make a choice based on probability matching, rather than by selecting the choice with the highest probability of being successful. For instance, gamblers might start to believe that a win is coming because they see a pattern in lottery numbers, on the roulette wheel, or at the races. If they make two small wins in a row, this pattern may create the strong belief that they will certainly have a third win. This could lead one to place a large bet, which would be a risky financial decision based on a perceived pattern. The same can be true of trading decisions or business investments. Political theories. By weaving together various signs or coincidences, an irrational set of beliefs can turn into a conspiracy theory. For example, at the height of the pandemic, some individuals believed that the government had an ulterior motive for locking down the population. Psychologists hypothesised that finding a pattern, and therefore a conspiracy theory, to explain the government’s policies was a coping mechanism for those who felt their power or safety was under threat. Believing a conspiracy theory, however, can lead people to shun scientific evidence and make poor choices. Mental health. Occasionally, apophenia can be a precursor to delusional thoughts. Finding meaning in something random was described by researchers as an important factor in the formation of paranormal and delusional beliefs, and has been found to be implicated in vulnerability to schizophrenia. The balance between embracing and managing apophenia Dali showed that apophenia can be an exciting vehicle for discovering illusory patterns that could feed your creativity. However, it is important to embed strategies that will prevent you from making risky decisions or acting on erroneous beliefs because of apophenia. To avoid the pitfalls of apophenia, you must first pay attention to any biased assumptions you make when faced with false patterns. For example, three green lights in a row will have no connection to your chance of winning the lottery that weekend. Secondly, work on accepting that not everything happens for a reason. Everyone has highs and lows in life, and there may not be any obvious cause for them. You are more likely to be successful in the long run by making rational decisions based on the available evidence, rather than making choices based upon perceived signs from the universe. Finally, perform your own research. If you think a horse might win the Grand National because you saw its name appear several times in unrelated situations, do some fact-finding before you place a bet. Compared to following so-called “signs” from the universe, your own research will give you a far more realistic idea of the risk ratios. Apophenia can help you to think more creatively, but big decisions should be made only when the facts are clear.
If several signs suggest that you should leave your job and start your own business, it can be very exciting. However, be critical of your thought processes, and give yourself time to assess the reality of the patterns you perceive. If, after doing plenty of market research, creating financial projections or even starting the side-business alongside your day job, you find that this new venture shows signs of being successful, then it might be time to embrace it. Although apophenia may have an evolutionary basis, placing belief in a perceived pattern could lead you to make riskier decisions. To protect yourself from the drawbacks of apophenia, pay attention to biased thoughts, accept that not everything happens for a reason, and ensure you fully research your options before you commit to a decision. The post The dangers of apophenia: not everything happens for a reason appeared first on Ness Labs.
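To make the probability-matching point above more concrete, here is a minimal Python simulation. It is illustrative only and not taken from Ellerby and Tunney's paper: an outcome occurs 70% of the time, and a strategy that "matches" that probability, as pattern-seeking gamblers tend to do, is reliably beaten by simply picking the more likely outcome every time.

```python
import random

# Toy simulation (not from Ellerby & Tunney): outcome "A" occurs with probability 0.7.
# A "maximizer" always predicts the likelier outcome; a "probability matcher" predicts
# it only 70% of the time, chasing the apparent pattern of recent draws.
random.seed(42)

P = 0.7
TRIALS = 100_000

def run(strategy: str) -> float:
    correct = 0
    for _ in range(TRIALS):
        outcome_is_a = random.random() < P
        if strategy == "maximize":
            guess_a = True                 # always pick the likelier outcome
        else:                              # "match"
            guess_a = random.random() < P  # guess "A" at the same rate it occurs
        correct += (guess_a == outcome_is_a)
    return correct / TRIALS

print("maximizing accuracy:", run("maximize"))  # ~0.70
print("matching accuracy:  ", run("match"))     # ~0.58, i.e. 0.7*0.7 + 0.3*0.3
```

Running it shows matching accuracy hovering around 58% against 70% for maximizing, which is the sense in which acting on a perceived pattern is worse than simply playing the plain odds.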
Control your time to free your mind with Nunzio Martinello founder of Akiflow
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us achieve more without sacrificing our mental health. Nunzio Martinello is the Founder and CEO at Akiflow, a powerful tool that allows you to consolidate all the apps you use into one place so you can block time for your tasks and see everything you need to get done in your calendar. In this interview, we talked about the power of building a single source of truth for your productivity and time management workflows, how to deal with large amounts of incoming information, how to protect your time and avoid distractions by blocking “focus mode” sessions in your calendar, and more. Enjoy the read! Hi Nunzio, thank you so much for agreeing to this interview. We often waste lots of valuable time on unproductive tasks. Why do you think that is? Being productive has become more and more complicated. With more than ten apps on average, our workspace is getting bigger, and our to-dos are often scattered between project management apps, notes, calendars, etc. Communication apps are often misused and are a constant distraction. Tasks keep coming from multiple sources throughout the day, and prioritizing and planning them properly is a non-stop job that we often fail at. Even with a consolidated task list, it is very tough to be realistic about how much work you can accomplish without a calendar that provides context on how much time we have in a day at work. And I could list a dozen more reasons why it’s getting so hard to sit and focus on the right thing to do. After years of trying all possible tools, methodologies, and automation to be productive, I figured out that no app was actually helping in keeping myself organized. How does Akiflow address these challenges? First of all, Akiflow is a single source of truth. All your tasks from multiple apps and calendars are consolidated via API, in real-time. We built a bunch of features, such as the Command Bar to make capturing a new task blazingly fast. Organizing, prioritizing, and planning activities is much faster with our keyboard shortcuts and the unified tasks and calendars view. We believe in time blocking, so we made tasks and calendars interact in the best possible way. A task can be added to the calendar for visual planning, it can block time and our smart notifications will help keep you on track and focused throughout the day. We then added a lot of features to make repetitive actions faster and easier, like sharing availability or joining calls. So far our users reported at least one hour of time saved per day, and that’s the metric we are most proud of. One hour each day is a lot of saved time! How does Akiflow work, exactly? Most people start their day by checking their outstanding conversations in their email inbox or Slack from mobile or desktop. There are two types of conversations: those that can be answered right away and those that generate a task and can be saved to go into Akiflow. Once they are done, they open Akiflow, where they find all their tasks coming from their conversations, their PM tools, or tasks added from their phone. Sometimes, a user might find interesting articles online, or some ideas come up. They hit opt+space and use the command bar to add them to the Inbox. They can also assign labels, plan, or snooze for later, all of these actions helping them to get organized even further. At this point, they open their “Today” page, where they find their schedule of the day next to their calendar. 
Some tasks might have been added to the calendar and locked to ensure no one can book a meeting during their focused time. This helps make time for tasks, being mindful that time is limited and results in better planning. They can adjust their schedule by considering new “urgent” tasks from their Inbox, and then they are ready to start working. As the day goes on, Akiflow sends notifications on what they should be working on as soon as they need based on the calendar events and tasks. That sounds like a powerful workflow. Can you tell us a bit more about your integrations? Nowadays, tasks come from so many different tools that the only way to be well organized and prioritize them properly is to consolidate them in a single app. Unfortunately this activity is very time-consuming and happens multiple times a day. That’s why we built API integrations to do it automatically. Tasks assigned to you on project management platforms are automatically added to your Inbox. For example, with one click you can turn a Slack message or an email into a task in Akiflow. At the moment, we have built nine native integrations, as well as Zapier which allows our user to import tasks from more than a thousand different apps. With so much incoming information, one of the biggest challenges for knowledge workers is to stay organized… I agree! Just bringing in tons of information would not be a good solution. That’s why we added a lot of features like labels and folders, to organize tasks into projects, priority management, external linked content and more. Akiflow makes it very easy to organize your inbox with flexible sorts and filters. We recently added a powerful search feature to quickly find events, tasks, people, and email addresses.  We also made sure to make it the fastest possible experience. We have a keyboard shortcut for every action and a Command Bar to make the whole experience easier and faster. Another big struggle for knowledge workers is distractibility. How does Akiflow tackle this? First of all, not having to jump between different apps such as calendars and task lists helps to avoid distractions. Every time a user works on “imported” content, we send the user straight back to that specific item, which means that you don’t have to go through your email Inbox or Slack app — the most distracting places in your workspace — to check the messages you saved. We also provide a focus mode, to help commit to a single activity and avoid distractions. I personally believe that locking a task in the calendar is a great way to protect your time and to avoid being distracted by colleagues, who are now informed that you are in the middle of your focused time. What kind of people use Akiflow? Clearly, the way that people work has changed in recent years. In the modern workspace,  everybody feels busy. Everybody is working hard and trying to balance their professional, social, and personal lives. Our user base varies quite a bit but is mostly founders, managers, and autonomous workers who have to juggle operational and administrative projects and keep up with their deadlines. Akiflow is for all those looking to organize their routines and schedules without spending too much time on it. What about you… How do you personally use Akiflow? I use Akiflow to keep up with my personal and professional lives. For example, I like to create events for those habits that I do every day, such as going to the gym and having dinner at a fixed time. 
By doing so, such habits stand out from the other tasks and have their own time blocked on my calendar. As CEO of a startup, my tasks vary between operational and administrative, so I like to set some recurrent tasks for those little things that I have to do constantly but easily forget amidst bigger commitments. Pulling tasks from as many tools as possible also comes in handy, as sometimes someone will tag me on a Notion or Slack comment and I could miss it if not for Akiflow creating tasks about it. And finally… What’s next for Akiflow? We are going to release our mobile apps soon and we’ll add even more integrations! The ability to capture tasks from multiple apps and devices, and always access your to-do list is critical to provide a solid solution. Right after, we’ll work on improving the way people interact and collaborate with each other. Alongside all that, we want to add AI capabilities to the platform to organize and plan tasks and ultimately optimize the user’s to-do list and schedule. Rather than replace the activities of a knowledge worker, we believe that AI and machine learning can help people to accomplish tasks and empower them every day to achieve more. Thank you so much for your time, Nunzio! Where can people learn more about Akiflow and give it a try? You can learn more about Akiflow’s features on our blog and start a free trial on our website. You can also follow us on Instagram and Twitter where we publish content around productivity and the future of work. The post Control your time to free your mind with Nunzio Martinello, founder of Akiflow appeared first on Ness Labs.
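As an aside, the time-blocking idea running through this interview can be sketched in a few lines of Python. This is an illustrative toy, not Akiflow's actual code or API, and the names are invented; it only shows how tasks consolidated from different sources might be slotted into free calendar time so they occupy real hours instead of sitting on a list.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of time blocking (not Akiflow's implementation): consolidate
# tasks from several sources, then place each one into the next free gap in the
# calendar, back to back after existing events.

@dataclass
class Task:
    title: str
    minutes: int
    source: str  # e.g. "slack", "email", "asana"

def block_time(tasks, day_start: datetime, busy_until: datetime):
    """Greedy scheduler: start after existing events, then schedule tasks in order."""
    cursor = max(day_start, busy_until)
    schedule = []
    for task in tasks:
        end = cursor + timedelta(minutes=task.minutes)
        schedule.append((cursor, end, task))
        cursor = end
    return schedule

tasks = [
    Task("Reply to customer thread", 30, "slack"),
    Task("Write launch announcement", 90, "email"),
    Task("Review pull request", 45, "asana"),
]
day = datetime(2022, 5, 2, 9, 0)
for start, end, task in block_time(tasks, day, busy_until=datetime(2022, 5, 2, 10, 0)):
    print(f"{start:%H:%M}-{end:%H:%M}  {task.title}  (from {task.source})")
```

Even this naive version makes the trade-off visible: once tasks take up calendar space, it becomes obvious how little of the day is actually free, which is the planning discipline the interview describes.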
Weak arguments and how to spot them
We consume an inordinate amount of information, whether it’s blog posts, podcasts, social media content, online videos — a constant stream of data and claims we need to process and assess. When you are pressed for time, how can you quickly tell the difference between a strong argument and a weak argument, and why does it matter? Some weak arguments are more obvious than others. Displays of certitude with little substance are often a tell-tale sign. Michel de Montaigne, one of the most prominent philosophers of the French Renaissance, wrote: “He who establishes his argument by noise and command shows that his reason is weak.” But other weak arguments can be disguised behind a cloak of seemingly sound statements. For instance, the progression from one point to another seems logical up to a point, but breaks down before managing to provide sufficient support for the conclusion. Let’s have a look at how you can quickly spot these, especially when you need to make a quick judgment. The nature of a weak argument Not all bad arguments are weak in nature. An argument can be bad because it is invalid. A classic example is solving a mathematical equation: if you made a mistake in the proof, it would not be considered “weak”, it would simply be invalid. Invalid arguments are often easier to spot because you just need to look for logical errors in the deductive process. A bad argument can also be strong, but built on false premises. For instance, “Playing video games leads to violent behavior. This person plays a lot of video games, and therefore, they are likely to exhibit violent behavior.” The argument is strong, but it’s still bad, because the premise that playing video games is linked to violence is not true. So what exactly is a weak argument? You need two ingredients. Inductive reasoning. The argument should move from specific observations to broad generalizations. Uncertain premise. The specific observations used to build the argument should either have a low probability or be based on personal opinions rather than facts. Even if the argument sounds logical, the conclusions follow neither with certainty nor with high probability, and it means you are faced with a weak argument. Here is an example of weak argument: “Charlie is a woman. Some women like poetry. Therefore, Charlie likes poetry.” In this case, the premise “some women like poetry” has a low or unclear probability, so the argument is weak. Or the weak argument can be based on a personal opinion rather than a fact: “Charlie is a woman. Most women hate mathematics. Therefore, Charlie hates mathematics.” You may not always have the time to apply all the mental gymnastics to figure out whether an argument is strong or weak, but luckily there are some mental models you can apply to quickly analyze arguments, especially when consuming longer pieces of content. How to quickly spot weak arguments While philosophers have devised many methods to evaluate the quality of arguments, there are three critical thinking tools you can use to quickly distinguish a weak argument from a strong argument. Look for arguments using the “surely” operator. In his book Intuition Pumps and Other Tools for Thinking, philosopher Daniel C. 
Dennett explains: “The word surely is as good as a blinking light locating a weak point in the argument (…) It marks the very edge of what the author is actually sure about and hopes readers will be sure about.” While it’s not always an indicator of a weak argument, it is still a sign that you need to consider the statement with healthy skepticism. This works with similar words such as obviously, evidently, etc. Compare the conclusion of the argument to a coin toss. If you are better off throwing a coin to know whether the conclusion is true, the argument is weak. For instance: “About 50% of humans I met are female. Charlie is human. Therefore, Charlie is female.” In this case, even if the premise is true, you only have a 50% chance of the conclusion being true — you may as well toss a coin! Any argument based on a premise with a low or uncertain probability would not pass the coin toss test, and can be safely classified as a weak argument. Map the argument onto the hierarchy of disagreement. In his essay How to Disagree, Paul Graham places types of argument into a seven-point hierarchy going from weakest to strongest. The weakest type of argument is name-calling, followed by the ad hominem attack. Graham writes: “An ad hominem attack is not quite as weak as mere name-calling. It might actually carry some weight. For example, if a senator wrote an article saying senators’ salaries should be increased, one could respond: Of course he would say that. He’s a senator. This wouldn’t refute the author’s argument, but it may at least be relevant to the case. It’s still a very weak form of disagreement, though. If there’s something wrong with the senator’s argument, you should say what it is; and if there isn’t, what difference does it make that he’s a senator?” The hierarchy of disagreement can help you spot weak arguments. The “surely” operator, the coin toss test, and the hierarchy of disagreement are three simple tools to add to your thinking toolbox. Use them whenever you are reading a long argumentative essay to quickly spot potential weak arguments, or at least to know that your alarm bells should go off and that you should tread with healthy caution. The post Weak arguments and how to spot them appeared first on Ness Labs.
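As a rough illustration of the coin toss test described above, here is a small Python sketch. The function name and the 50% threshold encoding are my own framing of the article's rule of thumb, not an established formula; the input is your estimate of how likely the conclusion is, given that the premises hold.

```python
def coin_toss_test(conclusion_probability: float) -> str:
    """Toy version of the coin toss test: if the premises give the conclusion no
    better than a 50% chance of being true, the inductive argument is weak."""
    if conclusion_probability <= 0.5:
        return "weak: a coin toss would do just as well"
    return "passes the coin toss test (still check whether the premises are true)"

# "About 50% of humans I met are female. Charlie is human. Therefore, Charlie is female."
print(coin_toss_test(0.50))  # weak: a coin toss would do just as well

# "Roughly 90% of observed ravens are black. This bird is a raven. Therefore, it is black."
print(coin_toss_test(0.90))  # passes the coin toss test (still check whether the premises are true)
```

Note that passing the test only addresses inductive strength; as the article points out, an argument built on a false or bad-faith premise is still a bad argument.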
The psychology of negative thinking
Of course, we all have negative thoughts from time to time. After all, our thought processes are affected by what we experience around us, and it’s normal to experience both good and bad times. However, when negative thinking becomes the norm, it can contribute to mental health problems including social anxiety, low self-esteem, and even depression. To avoid falling into that pattern, let’s explore the science of negative thinking and how you can develop a more mindful relationship to your thoughts. The science of negative thinking Our thought processes are intimately connected to the way we feel. When you’re feeling content, your thoughts tend to reflect this. In times of happiness, you may be more satisfied with your career progress, perceive your personal relationships as more secure, or have a better body image. Conversely, if you’re anxious or unhappy, you may notice that negative thoughts start to emerge. This could include feeling stressed about work, worrying about your appearance, or questioning the loyalty of your friends. In the 1970s, psychologist Aaron Beck theorised that negative thought patterns, which he labelled as “negative schemas”, reinforced negative emotions. In his book Cognitive Therapy, Beck explained: “A central feature of the theory is that the content of a person’s thinking affects their mood.” It’s an endless loop: when you’re already feeling anxious or depressed, succumbing to negative thought patterns is unfortunately likely to worsen the way you feel. Beck’s work has been cited frequently in the last fifty years, including by psychologist Leigh Goggins and colleagues, who stated that “negative interpretative bias” could be a factor in maintaining the continuation of a depressed mood. Furthermore, research suggests that amongst university students, automatic thoughts were strongly correlated with self-esteem. If you regularly experience negative thoughts, this cognitive distortion can sadly worsen an already poor mental health, leading to low mood, poor self-esteem, and anxiety. To make things worse, a bias towards negative thinking will increase the likelihood that you’ll spend time ruminating on mistakes or dwelling on things that didn’t go as well as you had hoped. Negativity bias, or the propensity to focus on negative experiences, can cloud your judgement. Decisions will appear more complex than they truly are, which will make it harder to know how to handle difficult situations. Depression and negative cognitions have a reciprocal link in which one worsens the other, and vice versa. With both factors present, a vicious cycle is set in motion. Learning how to recognise and manage negative thoughts could therefore be the key to breaking this cycle of poor mental health, as well as helping you to avoid the pitfalls of negativity bias. The principles of managing negative thoughts We all have negative thoughts, but certain principles have been shown to be beneficial in managing how often they occur, as well as helping to reduce the impact a negative thought might have. First you need to recognise negative thinking when it arises. Automatic negative thoughts often coexist with poor mental health. In some, they will have been present for many years, and recognising them can take some time. When a situation triggers a thought, pay attention to it. Negative thoughts might include: “I am going to fail at this interview”,  “I will never lose weight”, “No one cares about me”, etc. Did you notice how all of these are all-or-nothing, catastrophizing thoughts? 
Once you are confident in recognising negative thoughts when they arise, you can begin to interrogate your automatic thinking patterns. Rather than allowing a negative thought to control your emotions, ask yourself if the thought is truthful or helpful. If the negative thought provides no value, it’s time to shift your focus by rewiring your thought patterns. It can be tempting to try to force positive thoughts in the hope that they might replace negative ones. However, managing negative thinking involves transmuting our thoughts rather than replacing them. This process requires you to change the way you respond to your negative thoughts, as well as controlling how much impact they have. Let’s have a look at some practical ways to apply these principles. How to transmute your negative thoughts Negative thought patterns can become ingrained. But you can adopt simple strategies to recognise and detach from those negative schemas, making them less influential on your emotions. This in turn may help to break the endless loop of low mood, anxiety and low self-esteem. 1. Create distance from your thoughts. Pay attention to your automatic thoughts and start to label them as subjective thoughts. For example, you may say out loud or internally: “I’m having the thought that I am no good at my job” or “I’m having the thought that I am all alone.” Labelling your thoughts in this way will help you to detach from the critical inner voice that makes a distorted thought seem like the truth. Similar to a meditation practice, this is a way to merely observe the thought, rather than actively engage with it. 2. Start a thought diary. Journaling in a thought diary is a great way to manage negative thinking. Write down the date, the time, the event that triggered an emotion, and the resulting negative thought. In his book, psychiatrist Dr Daniel Siegel explains that you need to “name it to tame it.” Being able to name your emotions and the resulting thought will help you to understand the relationship between external triggers and internal beliefs. 3. Use de-catastrophizing techniques. Negative thinking often leads to catastrophizing. If making a mistake leads you to believe that your worst-case scenario is likely to happen, de-catastrophizing can prevent a spiral of negative thinking. You may find it helpful to ask yourself: What am I worried about? Is it likely that my worry will come true? What is the worst that could happen if my worry did come true? If my worry comes true, what is most likely to happen? Despite my worry, am I likely to be ok in one week (or month, year, and so on)? Once recognised, negative thoughts can be managed to reduce the impact on your emotional wellbeing. This in turn will break the cycle of negative thinking. By paying attention to your thoughts and interrogating their validity you can prevent cognitive distortions from skewing your beliefs and impacting your mental health. The post The psychology of negative thinking appeared first on Ness Labs.
The TEA framework of productivity: managing your time energy and attention
A few weeks ago, I was having dinner with fellow founders, and I learned about a productivity method that’s deceptively simple but incredibly powerful: the TEA framework, which stands for time, energy, and attention. This approach feels appealing because it is rooted in essential human principles, rather than creating the artificial need for a complex productivity system. It may seem obvious that we need time to produce any work, that we need energy to sustain our effort, and that we need attention to focus on the work. But, somehow, we sometimes get so obsessed with systems that we forget about those three fundamental pillars of productivity. While the core tenets of the TEA framework are easy to grasp, it has far-reaching implications for the way you live and work. The three pillars of productivity The TEA acronym was coined by entrepreneur Thanh Pham, host of The Productivity Show podcast. After studying many productivity systems, he saw the need for a simpler, more holistic framework, comprising three key pillars: Time. It all starts with the way you manage your schedule, your priorities, and how you invest your time — not only the quantity of time you devote to certain tasks, but the quality of this time. For instance, some time investment today may save you lots of time tomorrow. Energy. Your mind and your body are tools that need fuel. Deep work requires mental and physical energy. No mental and physical fuel, no meaningful productivity. Attention. To direct your attention, you need to know what your goals are. Then, you have to sustain your attention by staying focused on the goal and by avoiding distractions. If any of the three pillars is missing, your productivity and well-being at work will suffer. If you have energy and attention, but not enough time, you will feel overwhelmed. Lots of time and attention, but not enough energy, and you’ll end up exhausted. Finally, lots of time and energy, but not enough attention, and you’ll be distracted. You need all three pillars to be productive without sacrificing your mental health. Some people have expanded the framework and named it TEAM instead to account for the relationship between motivation and productivity, but I would argue that motivation is a factor in your mental and physical energy. The more motivated you are, the more energy you will have to tackle your goals. Conversely, if you feel demotivated, you are likely to experience low energy levels. How to apply the TEA framework of productivity The TEA framework is simple but has implications for many areas of your life and work. In essence, it boils down to three principles: Don’t spend your time, invest your time. What can your present self do for your future self? Answering this question is a great way to decide how to invest your time. For instance, you’ll find that scrolling on social media and revenge bedtime procrastination are probably not it. Instead, you could automate some tedious tasks, book one full afternoon to record videos in a batch, or plan a trip to visit fellow founders in other cities and learn from them. But don’t overdo it. If you suffer from time anxiety, it may be tempting to try and always invest your time in a directly meaningful way. But idleness can also be a way to invest your time, letting your mind wander so your imagination can run wild and generate new, fresh ideas in the process. Fuel your body and your mind.
Whether it is the food you eat, the amount of sleep you get, or the content you consume, make sure to give your body and your mind enough energy. Cook yourself healthy meals (or buy some healthy ready-made dishes if you’re not into cooking), don’t cut down on sleeping, nourish your mind with thought-provoking content and consolidate your ideas with journaling… There are many ways to sustain your levels of energy. Again, sometimes, it means doing absolutely nothing — which may not feel productive, but will recharge your batteries for later, better, more enjoyable work. Plan for distraction. If you find it hard to stay focused, don’t fret: it’s completely normal. Our mind is designed to be distracted, to keep on scanning the room around us for new information — or potential danger. Instead of beating yourself up, try to plan your work around your goals and triggers. If your goal is to write a report for an upcoming meeting, you will need a few hours of uninterrupted work. What triggers could get in the way of your focus? Is it your phone, chatty colleagues? Adapt your workspace to minimize these distractions, whether it’s leaving your phone in another room, blocking distracting apps, or locking yourself up in a meeting room with a “do not disturb” post-it note. Again, these principles may sound obvious, but it’s easy to get lost in the weeds. Before you start studying complex productivity systems, consider improving the way you manage your time, your energy, and your attention by applying the TEA framework of productivity. As often, self-reflection is a powerful tool to track your progress and make sure you apply these ideas in a thoughtful way. The post The TEA framework of productivity: managing your time, energy, and attention appeared first on Ness Labs.
April 2022 Updates
New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights two recent updates. One of those updates was pretty big, so I will end up copying the entire updated post below, rather than an excerpt. But first, one announcement and one shorter update. Endless Frontier Fellowship First, I wanted to do a quick plug for a new fellowship that’s probably of interest to some readers of this newsletter. It’s a one-year science and tech policy fellowship for talented early career individuals, called the Endless Frontier Fellowship. Fellows spend an immersive year embedded as policy entrepreneurs at EFF’s anchor organizations, the Institute for Progress (New Things Under the Sun’s partner), the Federation of American Scientists, or the Lincoln Network. It’s paid! If you want to apply, the deadline is May 2. More details here. Covid-19 and Innovation Second, the article Medicine and the Limits of Market Driven Innovation has been updated with some discussion of a new paper by Agarwal and Gaule (2022), which describes how the biomedical R&D machine responded to covid-19. It’s a bit hard to excerpt the updates, but two points emphasized are: Agarwal and Gaule provide some additional evidence which confirms work done by other papers using earlier data. Biomedical R&D is responsive to the size of the profit opportunity associated with diseases: they find a 10% increase in the size of the market for a drug is associated with about 4% more clinical trials. Against this benchmark, the response of biomedical R&D to covid-19 was a huge outlier. According to their estimates, the size of the “market” for a covid-19 treatment (based on global mortality from the disease) was bigger than the market for any other disease they considered. Even so the number of new clinical trials was 7-20 times larger than their model would have predicted. Covid-19 was strange in other ways as well. One of the main arguments of Medicine and the Limits of Market Driven Innovation is that private biomedical R&D generally responds to profit opportunity only with projects that do not require much fundamental research. While we have pretty good evidence that this is the case, covid-19 represents a big counter-example. As discussed a bit in the new update, covid-19 did in fact lead to a major shift in the kind of research done throughout science (discussed in more detail here). Data on Combinatorial Innovation Lastly, I’ve written a fairly large update to a post originally called “Innovation as Combination: Data.” That was the fifth New Things Under the Sun I ever wrote, and it wasn’t quite in the style of today’s posts. I now try to make each piece make a specific claim, drawing on a set of related papers, but that piece was more a round up of some related articles. I’ve rewritten it to make a specific claim, which is encapsulated in the new title: “The best new ideas combine disparate old ideas.” It’s about 50% new material, with the set of articles covered going from 4 to 7. Rather than excerpt so much, I reproduce the whole updated post below; enjoy! The Best New Ideas Combine Disparate Old Ideas Where do new ideas and technologies come from? One school of thought says they are born from novel combinations of pre-existing ideas. To some extent that’s true by assumption, since everything can be decomposed into a collection of parts. But this school of thought makes stronger claims. 
One such claim is that new combinations - those pulling together disparate ideas - should be particularly important in the history of ideas. And it turns out we have some pretty good evidence of that, at least from the realms of patents and academic papers (and also computer programming). To get at the notion that new ideas are combinations of older ideas, these papers all need some kind of proxy for the pre-existing ideas that are out there, waiting to be stitched together. They all ultimately rely on classification systems that either put papers in different journals, or assign patents to different technology categories. These journals or technology classifications are then used as stand-ins for different ideas that can be combined. A paper that cites articles from a monetary policy journal and an international trade journal would then be assumed to be combining ideas from these disciplines. Or a patent classified as both a "rocket" and "monorail" technology would be assumed to combine both ideas into a new package technology. New Combinations in Patents and Citations A classic paper here is Fleming (2001), which uses highly specific patent subclasses to proxy for combining technologies. There were more than 100,000 technology subclasses at the time of the paper's analysis, each corresponding to a relatively narrow technological concept. Using a sample of ~17,000 patents granted in May and June 1990, Fleming calculates the number of prior patents assigned the exact same set of subclasses. He shows patents assigned combinations without much precedent tend to receive more citations, which suggests that patents combining rarely combined concepts were indeed more important. For example, as we go from a patent assigned a completely original set of subclasses to a patent with the maximum number of prior patents assigned the same set of subclasses, citations fall off by 62%. This flavor of result holds up pretty well to a variety of differing methods. For example, Arts and Veugelers (2015) track new combinations in a slightly different way than Fleming, and use a different slice of the data. Rather than counting the number of prior patents assigned the exact same set of technology classifications, they look at the share of pairs of subclasses assigned to a patent that have never been previously combined. This differs a bit from Fleming because they are only interested in patents that are the first to be assigned two disparate technology subclasses, and also because a patent might be a new combination and still be assigned no new pairs. For example, given subclasses A, B, and C, if the pairs AB, BC, and AC have each been combined before, but the set of all three (ABC) has not, then Fleming will code a patent assigned ABC as highly novel and Arts and Veugelers will not. Arts and Veugelers (2015) look at ~84,000 US biotechnology patents granted between 1976 and 2001 and look at the citations received within the next five years. About 2.2% of patents that forge a new connection between different technology subclasses go on to be one of the most highly cited biomedical patents of all time, compared to just 0.9% of patents that fail to forge new connections. And patents that don't become these breakthroughs still get more citations if they forge novel links between technology subclasses. Moreover, the direction of this relationship is robust to lots of additional control variables.
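To make those two proxies concrete, here is a minimal Python sketch of the logic, using the A/B/C toy example above. This is an illustration only, not the papers' actual code: real implementations work from full patent classification datasets, and the function and variable names here are invented for the example.

```python
from itertools import combinations

def fleming_precedent_count(focal_subclasses, prior_patents):
    """Fleming (2001)-style proxy: count prior patents assigned exactly the
    same set of subclasses as the focal patent. Zero means the combination
    has no precedent at all."""
    target = frozenset(focal_subclasses)
    return sum(1 for subclasses in prior_patents if frozenset(subclasses) == target)

def new_pair_share(focal_subclasses, prior_patents):
    """Arts & Veugelers (2015)-style proxy: the share of subclass pairs on the
    focal patent that have never appeared together on any prior patent."""
    seen_pairs = set()
    for subclasses in prior_patents:
        seen_pairs.update(frozenset(p) for p in combinations(set(subclasses), 2))
    focal_pairs = [frozenset(p) for p in combinations(set(focal_subclasses), 2)]
    if not focal_pairs:
        return 0.0
    new_pairs = sum(1 for p in focal_pairs if p not in seen_pairs)
    return new_pairs / len(focal_pairs)

# Toy data reproducing the ABC example: AB, BC, and AC each have precedent,
# but no prior patent carries the full set ABC.
prior = [{"A", "B"}, {"B", "C"}, {"A", "C"}]
focal = {"A", "B", "C"}

print(fleming_precedent_count(focal, prior))  # 0 -> novel under the Fleming-style measure
print(new_pair_share(focal, prior))           # 0.0 -> not novel under the pair-share measure
```

Note how the same patent can look maximally novel under one measure and entirely conventional under the other, which is exactly the distinction between the two approaches drawn above.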
As a final example, He and Luo (2017) also establish this result, measuring novel combinations in yet another way, and using an even broader set of data. He and Luo look at ~600,000 US patents granted in the 1990s, and which contain 5 or more citations to other patents. Rather than relying on the technology classifications assigned directly to these patents, they look at the classifications assigned to cited references. They assume a patent combines ideas from the classifications of the patents it cites. They also use a much coarser technology classification system, which has just 630 different technology categories, rather than over 100,000 used in the previous two papers. To measure novel combinations, they look at how frequently a pair of technology classifications are cited together relative to what would be expected by chance. That means they end up with lots of measures of novelty for each patent, one for every possible pair of cited references. To collapse down the set of novelty measures for each patent, they order the pairs of cited reference from the least conventional to most and then grab the median and the 5th percentile. As a measure of the importance of these patents, we can look at the probability that they are a highly cited patent for the year they were granted and for their technology class. In the figure below, they divide patents up into deciles and compute the probability a patent whose novelty measure falls into that decile is a hit patent. Because they are adapting some earlier work, they set these indices up in a kind of confusing way. In the left figure below, moving from left to right we get increasingly conventional patents, while in the right figure, moving from left to right we get increasingly more unconventional patents. From He and Luo (2017) The figure above shows that when you focus on the most unusual combination of cited technologies made by a patent (the right figure), then more atypical patents have a significantly higher chance of being a hit patent. When you focus on the median, you find a more complicated relationship: you don’t want all the combinations made to be totally conventional nor totally unconventional and strange. There’s a sweet spot in the middle. Perhaps patents that are completely stuffed with weird combinations are too weird for future inventors to understand and build on? Addressing some potential problems The link between unusual combinations of technology classifications and future citations received is pretty reliable across these papers. But before taking these results too far, there are a few potential issues we need to look into. The first potential issue is a form of selection bias. One challenge from this literature is we typically only ever look at patents that are ultimately granted. But suppose patent examiners are biased against patent applications that make unusual combinations. If that’s the case, then patents making unusual combinations will only make it through if they are so valuable that their merits overcome this deficit. That would, in t...
April 2022 Updates
Unlocking the power of less with Francesco D'Alessio, creator of Bento
Unlocking the power of less with Francesco D'Alessio, creator of Bento
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us be more productive without sacrificing our mental health. Francesco D'Alessio is the creator of Bento, a methodology that limits you to three tasks per day for better prioritisation. The best way to apply the methodology is to download the app, which is available on iOS and soon on Android. In this interview, we talked about how to design your workflows to balance your energy levels, how limits can help you achieve more, the biggest challenges in personal productivity, why we should be more intentional with our to-do lists, how to go from task overload to true focus, and more. Enjoy the read! Hi Francesco, thank you so much for agreeing to this interview. What do you think are some of the biggest challenges people face when managing their productivity? Thank you to everyone at Ness Labs for having me. Personally, I think one of the biggest challenges with productivity right now is prioritising tasks. For knowledge workers, the tasks we assign ourselves are either overreaching or underwhelming, leaving us feeling unachieved by the end of the day. We add more and more to our lists, naturally without considering the value of each task. This overwhelm can lead to burnout, workplace stress, and a lack of success at the end of your day. This can then compound further, having an impact on your next day's productivity too. It can turn into a really destructive pattern. We hope that introducing better, more systematic ways to select your tasks with Bento will change the approach many knowledge workers use to accomplish their most important tasks. Our goal is to help you select more intentional tasks. Many people have tried to tackle these challenges. What makes Bento different? Bento isn't just an app, but a methodology. Our key objective is to combine healthy, mindful practices to build a framework first, then an app second. You can use Bento anywhere — though, of course, we want you to do it with the Bento app, as we've built it to be the single best place to implement the method. Another big challenge we face day-to-day is balancing energy levels. Many apps that offer a more intentional approach fail to address managing energy alongside workflow elements. With Bento, you can apply simple strategies that order your tasks based on your energy, and these can be tailored with each box you create. Our vision is that limits can help you accomplish your most meaningful tasks. That's why Bento has a "3×7 limit" — it limits you to three tasks (one large, one medium, one small) and seven boxes in total. Seven Bento boxes is plenty to build your own Bento for a week of tasks. Was there an "aha" moment that convinced you to bring Bento into the world? Bento was born from a pain point I saw with many people's experiences through reading comments, speaking to people in offices, and my own love for everything in Japanese culture. It was only in early 2021 that I decided to pop my developer friends Karl and Robin a message to see if they were interested in collaborating to build an app. Many late-night calls over the next year then helped us produce a beautiful application and thoughtful methodology, which was a really fulfilling experience for all of us. Okay, so let's say I have made my Bento for the day. What do I "eat" first? Once you have your three tasks ready, you apply a workflow. A workflow is very simply the order in which tasks are completed, with the goal of balancing your energy levels.
You can choose one of the three workflows: Eat That Frog is taken from the classic productivity book of the same name by Brian Tracy — with this workflow, you focus on your largest task first, move on to a medium task, and finish with your small task. Climb The Summit is a balanced approach to your day. You begin your day with a medium-energy task, moving to the biggest task, and finishing with your small task. Slow Burn is perfect for slow starters to the day. You begin with your smallest task, and then move on to a more demanding task, gradually working your way towards the largest task, which is great for afternoon peaks of energy. We designed Bento to be flexible, so you can assign a workflow to a box each time you create one, perfect for the ever-changing energy levels you might face. That sounds like an incredibly simple and powerful method. But, let's be honest: even with the best of planning and intentions, we often get distracted. Distraction is something everyone faces; sadly, not one of us has escaped it. Whilst it isn't impossible to remove distractions, inside of Bento we designed our focus experience around the concept Cal Newport introduced in "Deep Work" — a classic productivity read about the value of limiting distractions. Bento's one-task focus mode helps you hone in on a single task at a time. The goal behind this is to block the view of every other task and to focus on your one primary goal. A significant challenge with task overload is the element of exposure to what's next. If you eliminate this by removing the other tasks on your list, you can only direct your mind's attention to your true focus target. A subtle distraction that gets overlooked is context switching. One-task focus also reduces the occurrences of context switching that commonly come from just seeing your other tasks on your list. So, should people fully switch to Bento and forget about their current to-do lists? Short answer: No, Bento complements existing applications. We believe Bento is a layer you can add to your existing tools and use Bento as the focus framework for getting less done. In the next few weeks, we'll be introducing the Bento Method course — a framework and guide on how to apply the exact framework to the tools you use every day. This will allow people to use Bento where they see fit, though we still maintain that the Bento app is the best place to apply the Bento Methodology. Looking forward to the course! What kind of people use Bento? Right now, we're seeing a lot of productivity folks using it alongside their existing tools, thanks to the nature of Bento complementing apps. From our beta testing, we actually discovered that Bento can be used in a wide variety of situations. For example, we spoke with a dad who started using Bento with his daughter who has autism, and found that the timer system and focus on three tasks helped her train her focus — this is something we're eager to explore more. The methodology and app are so wide-reaching that we're finding many people who suffer from workplace stress, task overwhelm, or prioritisation struggles getting huge value from Bento, many of whom are knowledge workers. What about you… How do you personally use Bento? Bento is one of those concepts that can be layered over whatever you use. Right now, I'm using Bento daily as I narrow down what matters in my own Sunsama account. Obviously, things are added to my backlog throughout the day, but my Bento box helps me to stay focused on what matters if all else fails.
When I complete my Bento box items, I tend to feel a sense of success by accomplishing those intentional tasks I laid out the night before. And finally… What’s next for Bento? Our next goal for Bento is Android, which is set for very soon. In between, we’ll launch the official Bento course with templates for existing applications like Notion, ClickUp and many more to offer people a way to implement the Bento methodology inside of their existing experiences. After that, our goal is to create Bento on more devices, allow synchronisation, and explore how Bento can be suggestive to working more mindfully and effectively on tasks. Thank you so much for your time, Francesco! Where can people learn more about Bento and give it a try? Thank you for sharing this folks, we can’t wait for people to try Bento. Bento is available to download on iOS, and there’s a waitlist for Android. You can also follow our journey on Twitter. The post Unlocking the power of less with Francesco D’Alessio, creator of Bento appeared first on Ness Labs.
Unlocking the power of less with Francesco D'Alessio, creator of Bento
How to design a sustainable workplace at home and in the office
How to design a sustainable workplace at home and in the office
You are likely to spend around 90,000 hours at work over your lifetime. If that number doesn’t seem big already, that’s ten years of your life. Depending on where you work, you may have little agency over the design of your workplace — hospital workers and flight attendants are rarely consulted when it comes to sustainability practices — but, in many cases, we do have the ability to make our workplace more sustainable. Whether it’s changing your own habits or convincing the people you work with to make more sustainable choices at work, small changes can have a big impact. Let’s have a look at the benefits of a sustainable workplace, and some simple steps you can take at work to be more mindful of our planet. Save money, save the planet First, why would you want to make your workplace more sustainable? Beyond doing what’s right for our planet and for future generations, designing a sustainable workplace has many practical, and often immediate, benefits: Reduced costs. It may sound obvious, but saving energy will reduce your bill, purchasing second-hand furniture will reduce the cost of decorating your office, and taking public transportation or cycling to the office will save you money compared to using a car. Increased creativity. Upcycling an old desk you found at a thrift shop will require a lot more creativity than buying a new one and following the three-step assembling instructions. Whether it’s to reuse materials, increase the energy efficiency of a project, or figure out how to increase the lifespan of the products you use at work, making your workplace more sustainable often requires creative thinking. Better work satisfaction. This is especially true for bigger companies. The HP Workforce Sustainability Survey reports that 61% of office workers say sustainable business practices are a “must-have” for companies, and a paper suggests that improved sustainability standards can reduce annual quit rates. The good news is: anyone can contribute to designing a more sustainable workplace, whether it’s just you working from home, or if you’re working from an office with your team. Three ways to design a sustainable workplace Of course, making your workplace more sustainable is not about applying a few quick fixes. As Andrew Cameron writes in the journal Strategic Direction: “This is not about a one‐off conference or a newsletter, it is about permanently changing the way decisions are made and the way people work to enable the organization to function, in a different and ultimately more relevant way. You will know when you have succeeded when environmental and sustainability considerations are an instinctive part of the decision‐making process at all levels.” That being said, there are some easy wins that can help you get started. If you work as part of a team or in an office, these small changes can help spark conversations around workplace sustainability. And if you work on your own or at home, you may use these as a starter pack of sustainability practices, which can prompt you to research and improve the sustainability of other aspects of your workplace. 1. Use deforestation-free products Avoid printing documents as much as possible, and if you absolutely must, use deforestation-free paper. And no, that doesn’t necessarily mean recycled paper. A study published in Nature Sustainability shows limited benefits of recycled paper, and even indicates that if all paper was recycled, emissions could increase by 10%! 
This is because recycling paper relies more on fossil fuels and electricity from the grid compared to producing virgin paper. Maybe that will change and recycling paper will be increasingly powered by renewable energy, but for now, this is not the best way to make your workplace more sustainable. Instead, make sure the paper you use is FSC certified. FSC stands for Forest Stewardship Council. This is a certification confirming that the forest is being managed according to strict environmental, social and economic standards, preserving biological diversity and benefiting the lives of local people. The FSC certification is also helpful for other workplace products. For instance, you may want to check that your bamboo-based laptop stand comes from sustainably managed crops, instead of areas where the land has been specifically deforested to grow bamboo. 2. Save energy There is a direct connection between the amount of electricity you use at work and the environment. Most electricity generation takes place in thermal power plants, which burn either fossil fuels, biofuels, or nuclear fuel to heat water and produce steam. When you consume less power, you reduce the amount of greenhouse gas emissions released by power plants. Of course, you're not expected to reduce your work hours so your computer uses less electricity — though taking more breaks is always a good idea — but there are small steps you can take that will have a big impact. For instance, LED bulbs use 70 to 90% less energy than incandescent bulbs. They also have a longer lifespan: up to 40 times longer than an incandescent bulb! There are other habits you can develop to save energy in the workplace, such as turning off appliances that are not in use, making sure your office is properly insulated instead of relying on the heater or air conditioner, and turning off the lights whenever you leave a room. 3. Go vintage Whether at home or in the office, a common mistake people make when designing a sustainable workplace is to buy more sustainable versions of items they already own. For instance, a new reusable water bottle, new storage containers, or new bamboo shelves to replace existing plastic shelves. Absolutely all new products require resources to produce and transport, whether they are labeled as sustainable or not. If your workplace already has an item that is working as intended, the most sustainable choice is to keep on using it instead of replacing it. When the item doesn't do the job any more — maybe it's broken and can't be repaired — the second most sustainable choice is to purchase a second-hand replacement. This is of course more easily done at an individual level or for small teams, but if you can, it is worth going to a second-hand store, especially when it comes to purchasing office furniture. And vintage works for electronic devices too! French startup Back Market is valued at $5.7B for its marketplace where people can buy refurbished devices without generating additional waste. It's another good way to save money while making a sustainable choice. Small changes add up Individually, some of these changes may seem like they have a low impact on climate change, but they do add up when everyone chips in. By purchasing workplace products that don't directly harm the environment and that are not made in a socially irresponsible way, we can send a signal to companies manufacturing the products we use every day at work and collectively encourage a shift towards more sustainable practices.
By saving energy, we can reduce our greenhouse gas emissions. And by going vintage, we can avoid generating additional waste. Designing a sustainable workplace is also an opportunity to be more mindful about the way we work, and to have conversations about the impact we want to have and the legacy we want to leave. The post How to design a sustainable workplace at home and in the office appeared first on Ness Labs.
How to design a sustainable workplace at home and in the office
When Extreme Necessity is the Mother of Invention
When Extreme Necessity is the Mother of Invention
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. Audio versions of this and other posts: Substack, Apple, Spotify, Google, Amazon, Stitcher. We all know the proverb "Necessity is the mother of invention." This proverb is overly simplistic, but it gets at something true. One place you can see this really clearly is in global crises, which vividly illustrate the linkage between need and innovation, without the need for any fancy statistical techniques. Let's look at three examples. Crisis #1: Covid-19 Global Pandemic Our first crisis is the one we're all most familiar with: the covid-19 global pandemic. During 2020-2022, the big thing we suddenly needed was medical treatment for covid-19. Agarwal and Gaule (2022) look at what happened to the number of new clinical trials (for all diseases) in the wake of the pandemic. No surprises: the number of new clinical trials shot up as the magnitude of the disease became clear, with essentially all of the increase coming from trials related to covid-19. From Agarwal and Gaule (2022) In the end, these trials succeeded and we got a suite of effective vaccines in record time: necessity was the mother of invention. Covid-19 had other effects too. For one, it forced the world to embark on an unprecedented experiment in remote work. Bloom et al. (2021) is a short paper that looks at the share of patent applications, filed in the USA, that relate to remote work. Bloom and coauthors scan the text of patent applications for words related to remote work, such as "work remotely", "telework", "video chat", and many others. As we can see in the figure below, covid-19 induced a step change in the share of patents related to working remotely. Again, necessity was the mother of invention. Update to Bloom et al. (2021) by Mihai Codreanu Crisis #2: Oil Shocks Our second crisis is the oil price shocks of the 1970s. After a long period of relatively stable and predictable energy prices, the price of oil abruptly shot up due to disruptions to Middle Eastern supply in the 1970s. The energy crisis created an urgent need to pivot away from dependence on suddenly unreliable oil supplies. Suggestive evidence that the US economy managed to do just that comes from the following figure from Hassler, Krusell, and Olovsson (2021). The black line is the share of GDP spent on energy, and the dashed line tracks the price of energy in the USA. From Hassler, Krusell, and Olovsson (2021) Around 1985 the link between the share of GDP spent on energy and the price of energy seems to have changed (in the figure, the black line moved from above the dashed one to below). That suggests the economy got better at getting more GDP out of less energy. But it's still not 100% clear how the timing of this all played out; was this really that closely related to the oil shocks? To more precisely estimate the pace of innovation related to energy, Hassler, Krusell, and Olovsson (2021) use some fairly basic economic modeling. They assume economic output is produced by labor and energy, and that technology comes in two flavors, one for each. If the technology for energy gets twice as good, it's as if you've got twice as much energy to play with (when in fact, better technology allows you to use the energy you've got twice as efficiently). Similarly, if labor technology gets twice as good, it's as if you've got twice as much labor to work with.
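For readers who want to see roughly what that looks like in symbols, here is a hedged sketch assuming a CES aggregate of labor and energy; the notation is illustrative, and the paper's actual specification (which also involves capital and other details) may differ:

$$Y = \left[ (A_L L)^{\frac{\varepsilon - 1}{\varepsilon}} + (A_E E)^{\frac{\varepsilon - 1}{\varepsilon}} \right]^{\frac{\varepsilon}{\varepsilon - 1}}$$

where $L$ is labor, $E$ is energy, $A_L$ and $A_E$ are the two kinds of technology, and $\varepsilon$ is the elasticity of substitution between labor and energy. If energy is paid its marginal product at price $p_E$, the energy share of spending satisfies

$$s_E \equiv \frac{p_E E}{Y} = \left( \frac{A_E E}{Y} \right)^{\frac{\varepsilon - 1}{\varepsilon}} \quad\Longrightarrow\quad A_E = \frac{Y}{E}\, s_E^{\frac{\varepsilon}{\varepsilon - 1}}.$$

Everything on the right-hand side is observable (output, energy use, and the energy spending share) once you commit to a value for $\varepsilon$; and because substitution is assumed to be hard ($\varepsilon < 1$), the exponent on $s_E$ is negative, so a falling energy share maps into a rising energy technology $A_E$.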
The cool thing is that if you accept their pretty simple model, you end up with a way to measure a concept like "technology", which is normally so nebulous, with some very simple and readily available data. If you assume the economy uses labor and energy efficiently, you can do some math, move things around, and show that the productivity of the energy technology can be expressed as a function of GDP per capita, the share of spending on energy in the economy, and our ability to substitute labor for energy and vice-versa. That's almost all stuff we can measure. When Hassler, Krusell, and Olovsson plug data into this equation and make some sensible assumptions about our ability to substitute labor for energy (they assume it's quite hard), you get the following striking chart tracking our ability to convert energy into economic output. Here, the blue line is a measure of how technology multiplies the energy supply, so that having one barrel of oil in 2020 is like having 3 in 1950. Estimated productivity of energy. From Hassler, Krusell, and Olovsson (2021). Now it's crystal clear: the oil shocks knocked the productivity of energy technology out of its stagnation and into a steady upward trend. Necessity was the mother of invention. An aside: sometimes people argue that one reason technological progress slowed in the 1970s is that we moved from technological progress that took abundant energy for granted to technological progress that did not. Hassler, Krusell, and Olovsson's work is broadly supportive of that narrative. This is just three data points, so don't get too excited, but there does seem to be a negative correlation between the pace of progress in technology that converts energy into output and technology that converts labor into output. In other words, when the oil shocks forced us to expend more effort on reducing demand for fossil fuels, that may have come at the expense of other forms of technological progress that we had become accustomed to. From Hassler, Krusell, and Olovsson (2021) Crisis #3: World War II Our last crisis is World War II. We could point to many innovations born out of the exigencies of World War II: radar to defend against attack from the air; penicillin produced at industrial scale; and the Manhattan Project to develop the first atomic bomb. But let's focus on the need to build a lot of airplanes. When President Roosevelt targeted 50,000 planes over the war in 1940, this goal was viewed as simply impossible by many: contemporary economists Robert Nathan and Simon Kuznets believed the US simply didn't have the productive capacity to do it (Ilzetzki 2022). And yet, in reality, the US eventually succeeded in producing 100,000 planes in just one year. During the war, there was a 1,600% increase in the number of aircraft produced, and US spending on aircraft alone reached 10% of 1939 GDP. How did the US manage to do the seemingly impossible? The following figure from Ilzetzki (2022) gives some clues. It shows total US aircraft produced (measured by weight), as well as the capital and labor used to produce aircraft, relative to 1942 levels. From Ilzetzki (2022) Initially, the US made more airplanes by using more labor and more capital. But after 1943, something surprising happened: the increase in capital and labor slowed or even stopped, but we kept on increasing how many planes we made! In order to meet their ambitious targets, airplane manufacturers were forced to discover new efficiencies. And they did! Necessity was the mother of invention.
Ilzetzki actually goes much further, and tracks the productivity of individual airplane manufacturers. He shows that, on average, individual manufacturers became more productive when they received more plane orders, and that this effect was greatest for the manufacturers who were already operating closest to capacity. In other words, the manufacturers who had the least ability to meet their aircraft orders by increasing labor or capital were also the ones who most improved their productivity! Invention Has Two Parents The above examples illustrate how sudden new necessities can indeed drive innovative effort. And I've written elsewhere about evidence that demand for new technologies, even in non-crisis settings, can also spur innovative effort. For example, the private sector tends to do more R&D on treatments for diseases that become more profitable to treat, and automobile manufacturers developed more fuel-efficient vehicles in response to fuel efficiency standards and high energy prices. But we need to be careful not to take this too far. You cannot will technologies into being simply because someone needs them (if so, we wouldn't have waited so long for mRNA vaccines and atomic bombs). Invention has two parents. A truer proverb might be "Necessity and knowledge are the parents of invention." We can also see this in some of the examples just cited. As discussed in a bit more detail here, most of the new clinical trials for covid-19 were not for fundamentally new kinds of drugs. Instead, they were largely attempts to re-deploy existing drugs to a novel use case. In other words, they were attempts to take what was already known to be safe and see if it had beneficial effects on covid-19. Most of these failed. The covid-19 vaccines that eventually succeeded rested on deep foundations of fundamental research that went back decades. Covid-19 was the impetus to transform this knowledge into effective new treatments (though these efforts were already underway before covid-19), but it didn't give us the knowledge that made that possible. Most of the radical technologies developed during World War II, such as radar and the atomic bomb, relied on breakthroughs in fundamental science that preceded the war. In a 2020 review of the activities of the US Office of Scientific Research and Development, which oversaw these and many other technological breakthroughs of the war, Gross and Sampat write "the time for basic research is before a crisis, and since time was of the essence, 'the basic knowledge at hand had to be turned to good account.'" Ilzetzki shows much of the improvement in airplane manufacturing came from adopting techniques that had been shown to be effective in other sectors, rather than inventing new processes out of whole cloth. Specifically, airplane manufacturers that faced capacity constraints were more likely to adopt production line processes (instead o...
When Extreme Necessity is the Mother of Invention
Audio: When Extreme Necessity is the Mother of Invention
Audio: When Extreme Necessity is the Mother of Invention
This is an audio read-through of the initial version of When Extreme Necessity is the Mother of Invention. To read the initial newsletter text version of this piece, click here. Like the rest of New Things Under the Sun, the underlying article upon which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Audio: When Extreme Necessity is the Mother of Invention
Building your digital legacy with Kazuki Nakayashiki, co-founder of Glasp
Building your digital legacy with Kazuki Nakayashiki, co-founder of Glasp
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us think better and achieve their intellectual and creative ambitions. Kazuki Nakayashiki is the co-founder of Glasp, a social web clipper that allows users to share their highlights and notes as they read, without any back-and-forth between the web and a note-taking app. In this interview, we talked about the nature of human legacy, the knowledge isolation problem, serendipitous spaced repetition, social knowledge management and collective intelligence, learning in public, the impact of social accountability on note-taking, and more. Enjoy the read! Hi Kazuki, thank you so much for agreeing to this interview. Glasp stands for “Greatest Legacy Accumulated as Shared Proof” — can you tell us more about what it means? Thank you so much for having me. I am a huge fan of Ness Labs and I am honored to be here today. First of all, we believe that one of the most noble pursuits is for people to learn, experience, and pass their knowledge on to future generations. The present in which we stand today is built on what our predecessors have built in the past. When we talk about legacy, it does not necessarily mean to leave a successful business or a lot of money behind. Of course, it is wonderful to be able to leave these things to future generations, but I don’t believe that these are the greatest legacies, in that not everyone can leave them behind. Instead, I believe that the greatest legacy is to live a courageous life. It is in the attitude of not being daunted by difficulties, not being overly pessimistic, and betting on the possibilities and hopes of humanity. And I believe this means leaving and weaving our knowledge, wisdom, and history for the next generation. However, even though we are standing on the shoulders of our predecessors, we do not know how or by whom all that knowledge was accumulated. Through Glasp, we want to empower people to leave, share, and weave the greatest legacy. Our mission is to democratize access to other people’s learning and experiences that they have collected throughout their lives. By doing so, we may be able to help others who may try to follow a similar path in the future.  This is an ambitious mission. Was there an “aha” moment that inspired you to build Glasp? I had a near-death experience at the age of 20 when I had a sudden subdural hematoma that paralyzed the left side of my body. My doctor at the time told me that I could have a cardiopulmonary arrest at any moment, and I was hospitalized and underwent emergency surgery. I remember the sense of fear and emptiness that welled up from the depths of my body, which words cannot express, as I was confronted with the reality that the normalcy of yesterday was suddenly taken away from me and that my existence might disappear from this world. Since that time, I have wanted to leave behind a legacy that would allow me to feel that I have made even a small contribution to the future of humanity — a legacy that would be the proof and meaning of my existence in this world. I do not know if what I leave behind is really useful to anyone. It is a matter of subjectivity. However, just as someone’s trash is someone else’s treasure, I believe that by leaving my learning and experiences behind, they can become useful to someone somewhere in the future. If you leave your knowledge, wisdom, and insight in a completely personal space, no one will be able to access it after you die. 
Given the fact that collective learning has made us humans smarter across generations, I think keeping knowledge in a silo is a huge loss for humanity as a whole. In other words, the problem we are addressing is the isolation of knowledge. The world is full of countless wonderful personal apps, but my near-death experience and the process of searching for the meaning of life led me to the current idea of Glasp. That’s incredibly inspiring, thank you so much for sharing Glasp’s origin story. How does Glasp work exactly? Glasp is a social web clipper that people can use to highlight and organize quotes and thoughts from the web without having to switch back and forth between screens, and access other like-minded people’s learning at the same time. You can get an idea of what Glasp is all about by looking at what our users are saying on the “Wall of Love” on the website. After breaking our mission into specific components, we decided to focus on the overlap between curation, knowledge management, and community. Some of the advice I recently received from Jeremy Brown on Twitter overlaps with these components, as well as with Michael Simmons’ ideas of public note-taking and learning in public. For curation, we currently offer a Chrome Extension and a Safari Extension, which, when installed, display a small popup like the Kindle’s highlighter when you select text, and will allow you to curate text that resonated with you. It allows for easy highlighting and note-taking without interrupting the reading experience. When reading a particular article, all highlights and notes for that article can be viewed from the right sidebar, and be easily copied and pasted into note-taking apps, markdown style. You can also add tags and comments, and see what others have highlighted on that website directly on the page. In terms of knowledge management, as you can see on my page, Glasp organizes your highlights and notes for you and allows you to filter by topic or full-text search, so you can easily access quotes, thoughts, ideas, and insights that you have found important in the past. The social nature of Glasp also allows you to access other people’s highlights and notes on your page (called “marginalia”), so you can build on others’ perspectives and deepen your knowledge. In the future, we plan to add a feature that will allow you to backlink your findings with your past highlights or those of others. I also believe that the uniqueness and fun of Glasp’s approach to knowledge management lie in its ability to resurface what you have learned. Spaced repetition is one of the most proven methods for remembering what you have read, but the sad reality is that reviewing flashcards is tedious and setting up the system is cumbersome. With Glasp, others interact with your highlights, which provides accidental and automatic opportunities for review. This is unique in that other curators resurface your highlights, and I think it is also interesting that the curators resurface the creator’s work. As for community, Glasp allows you to connect and learn from other people with similar interests through the learning byproduct: highlights and notes. Glasp’s home feed allows you to see what the people you are following are learning and what insights they have gained. You can also search content by topic, so you can see what friends, colleagues, influencers, and other people you trust or who share similar interests are learning. You can check each site’s top highlights and find your favorite authors as well. 
In particular, newsletter writers or content writers can share their learning process with Glasp (called “learning in public”). Deep engagement and direct feedback from audiences and followers can be a great way to get ideas and inspiration for their future content. Having learning partners is very inspiring and fun. Glasp can enhance one’s learning process by making the learning process social. For example, we are collaborating with the Month to Master learning cohort program run by Michael Simmons to help learners weave and share what they learned. When it comes to bookmarking and highlighting, a big challenge is that many people end up building a graveyard of random links they never end up actually learning from. How does Glasp address this challenge? As you say, too often we see the issue of saving random links leading to this dysfunction, and I think this is a problem that is not limited to bookmarking and highlighting, but to our information society in general. One important aspect is the difference between read-it-later apps and Glasp. There are two processes of information selection when we collect information. One is broad and shallow. The other is narrow and deep. The former is an area where the read-it-later apps show their strength as a place to store a vast amount of information that is of some interest, is relevant, or may be useful in the future, in place of your short-term working memory. The latter is an area where Glasp and other highlighter apps can show their strength as a place to store important information that has passed the primary sifting process and that you want to keep for a longer period. If the number of items to be stored is huge and their quality is not checked, it will be difficult to maintain, organize, and manage, and will most likely result in a graveyard of random links. Fortunately, Glasp’s user core action is not bookmarking, but the act of highlighting and leaving notes, so the action threshold for the user is not as light as for the bookmarking apps or read-it-later apps. Furthermore, the possibility that highlights and notes may be seen by others creates social accountability, so the threshold of action for the user is raised even higher. When you hear the word “highlight”, you probably associate it with education. Those familiar with education may know that research shows that highlighting is not a very effective learning technique. However, research also suggests that the probability of effective highlighting increases when moderate incentives and pressure are designed in. In other words, the pressure that someone might see your highlights works as a social accountability function, which can increase the likelihood of saving something better and more valuable to you. While some may argue that the volume of a person’s digital legacy may be reduced by this approach, we place more value on the insight, idea, emotion, and...
Building your digital legacy with Kazuki Nakayashiki, co-founder of Glasp
How developing mental immunity can protect us from bad ideas
How developing mental immunity can protect us from bad ideas
Every day, a new video goes “viral”, and an “infectious” idea starts spreading. Mental immunity is a psychological theory that is also known as cognitive immunology. With origins dating back 70 years, this field of research is based on the premise that not only is there an immune system for the body, but an immune system for the mind as well. People with a healthy mental immune system are more likely to detect misinformation. A strong cognitive immune system can also help spot bad ideas at an earlier stage, so you may avoid wasting time, energy or money. Let’s explore the concept of cognitive immunology, together with a list of strategies you can employ to help strengthen your mental immune system. A mental immune system The concept of mental immunity was formulated by Professor Andy Norman, director of the Humanism Initiative at Carnegie Mellon University. Despite the field of cognitive immunology being in its infancy, mental immunity research has deep roots dating back to the 1950s.  The mental immune system is believed to function in a similar way to the body’s physical immune system. The purpose of the physical system of immune cells is to detect pathogens including bacteria and viruses, so that they can be eradicated from our blood stream and organ systems before they have a chance to cause damage. Similarly, a healthy mental immune system will detect harmful or incorrect information that enters our mind, so that it can be recognised as such and then promptly rejected. In a paper about immunology’s theories of cognition, philosopher Alfred Tauber explained that developing an “immune self” requires us to actively distinguish between the self and the foreign, so that foreign information can be interrogated and potentially defended against. This way, the mental immune system sifts through ideas, information and other forms of external stimuli to identify, and therefore protect us from, the adverse outcomes associated with misinformation. The benefits of mental immunity Both factual information and misinformation have the potential to spread through the population far faster than ever before. We have access to constant, real-time updates from online news platforms, as well as information shared via online magazines, social media, and unregulated websites. While factual information is a great asset, unreliable information and the inability to spot bad ideas may lead to poor decision making. Professor Sander van der Linden from the Department of Psychology at the University of Cambridge published a study which showed that the public could be inoculated against misinformation regarding climate change. In the study, the publics’ cognitive immunity to misinformation was reinforced when they were given a pre-emptive warning about politically motivated attempts to spread misinformation on the human causes of global warming. The results showed that this was an effective way to strengthen their immunity to false information. In his 2021 book Mental Immunity, Professor Andy Norman explains that the immune systems of our minds can be strengthened against ideological corruption and mind parasites, which increases our capacity for critical thinking. This in turn helps us to spot and remove bad ideas before they can cause harm. Furthermore, developing greater cognitive flexibility allows us to change our minds faster when new, better-evidenced information is presented to us. In short, moving away from rigid thinking patterns improves our relationship with information and our resultant actions. 
How to strengthen your mental immune system You can increase your mental immunity by making your mind more resistant to misinformation, which will lead to better cognitive flexibility and decision making. To keep on using the same analogy, these strategies work in a similar way to vaccination: they support your mind in recognising the threat of bad ideas. 1. Build awareness of misinformation. Misinformation is spread for a variety of reasons. It can be passed on innocently, especially when shared from person to person in a general conversation. However, research suggests that the spread of false content can also occur more deviously for political gain or polarisation, to generate income for media outlets, as a personal or industrial form of propaganda, or as a result of social media algorithms. Remember that misinformation is common, that fake news is designed to appear genuine, and train yourself to immediately interrogate the information or data you are presented with. This will help make your mind more resistant to bad ideas. 2. Develop healthy meta-beliefs. A meta-belief is a belief that one holds following a thorough reasoning process or cognitive interrogation to check the validity of the belief. In their 2020 paper, Gordon Pennycook and colleagues explained that “theories of belief should take into account what people believe about when and how beliefs and opinions should change — that is, meta-beliefs.” The team found that people who were politically liberal were more likely to believe that opinions and beliefs should change according to evidence. Those who were religious, or held paranormal or conspiratorial beliefs, were less likely to agree that beliefs should change. Developing meta-beliefs strongly correlates with mental immunity. To strengthen your mental immune system, be prepared to assess and re-adjust previously held beliefs if new evidence comes to light. This way, your opinions are continuously being amended based on the latest evidence. 3. Practise self-reflection. When practising self-reflection, you should start to pay attention to your patterns of consumption. If this process of reflection indicates that you are drawn to the same news sources, or solely rely on influencers or social media platforms for your updates, your information diet may not be varied enough. For greater mental nourishment, diversify your information sources and dig deeper into the underlying research to fully understand whether you are unconsciously being sold misinformation. It can also be helpful to develop a note-making practice so you can capture your thoughts and consciously reflect on the content you consume. Mental immunity is an emerging theory and more research is needed. However, the initial investigations have shown that a strong mental immune system helps filter external information to avoid falling prey to false data or flawed ideas. This cognitive system can be strengthened by building your awareness of the rampant nature of misinformation, developing healthy meta-beliefs, and reflecting on your patterns of information consumption. Once reinforced, stronger mental immunity will allow you to promptly detect misinformation, reject plans that are unlikely to succeed, and increase your cognitive flexibility to quickly adapt when presented with new evidence. Definitely a concept worth experimenting with! The post How developing mental immunity can protect us from bad ideas appeared first on Ness Labs.
How developing mental immunity can protect us from bad ideas
Making time for what matters with the co-founders of Agenda
Making time for what matters with the co-founders of Agenda
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us achieve more without sacrificing our mental health. Drew McCormack and ​​Alexander Griekspoor are the co-founders of Agenda, a date-focused note-taking tool that allows you to seamlessly plan and document your projects. In this interview, we talked about the nature of time, the delicate balance between design and simplicity, their just-in-time approach to resurfacing of notes, how to make sure formatting doesn’t get in the way of taking great notes, and much more. Enjoy the read! Hi Drew and ​​Alex, thank you so much for agreeing to this interview. Let’s start with a bit of a philosophical question. What’s your relationship with time? Trying to predict what will happen is fundamental to science, and for someone like me it is like candy to a toddler. When I was working as a physicist, time was often just a parameter in an equation that I was trying to solve. Dynamical systems like the weather and the stock market have always intrigued me, and I built a whole university career around that. The excitement of building a model or program that attempts to make a prediction, and then testing to see if it works, is hard to match. In my daily life, time feels more like a river, a constant stream. You look forward to something in the future, and before you know it, it belongs to the past, and you almost can’t grasp the anticipation you felt when it hadn’t happened yet. But I don’t look back too often, generally living in the now, with a vague and evolving idea of where I want to go in the future. Despite this complex relationship we have with time and how crucial it is in our daily lives, many tools solely focus either on documenting or planning — how is Agenda different? There are lots of note-taking apps around, and you can ask yourself: do we need another one? We felt there was that room, because none of the offerings have much of a relationship to time. It’s almost like your notes exist in a timeless vacuum, and yet you know yourself that each one has a logical temporal context. The note might be from the beginning of a project, or belong to that meeting on March 3rd when Joan joined the team. Many notes are not timeless, and you miss that context in most note-taking apps. That was the inspiration for Agenda — to add temporal context. Is this a note you are taking in a particular meeting on a particular date? Is it planning for the future? Or is it a record of what happened in the past? Agenda orders your project notes into a timeline, flowing from future to past, giving you that context. Notes usually begin in the future or present, and over time, flow back down the timeline to become breadcrumbs of the past. The timeline is really what makes Agenda unique in the note-taking world. Was there an “aha” moment when you decided to start building Agenda? The idea for Agenda came from my partner in crime, Alex. He was running his own software company, and spent a lot of time in meetings, as well as organizing a team of developers. He found that in his meetings, people would often forget what they wanted to discuss, or would come back after the meeting with “Oh, I meant to ask you…” To make things go smoother, he developed a system of taking notes. Alex would have one text file per project. He would enter new notes at the top of the file, and when he finished with one meeting, he would immediately create the note for the next meeting at the top. 
During the week, he would add anything that he thought needed to be discussed to that future meeting note.  When the meeting finally came around, he would locate the note at the top, and use it as the agenda for the meeting. Anything postponed during the meeting itself, or requiring a followup action, would be copied into a new note for the following meeting, and so forth. Alex has written a detailed recount of this here. This process worked great, but text files also have limitations, and Alex wanted to build a dedicated app. We have been friends and scientific colleagues for many years, and I was intrigued by the idea, and joined the team. I say “team”, but it really is just the two of us, with some intermittent help from others with design and programming tasks. Anyone who has ever tried Agenda will likely recognize the genesis of the idea in the app. Notes are organized into projects, which are equivalent to the plain text files Alex used before. By default, new notes appear at the top of the project timeline, and can be used for future planning.  As the items in the note are completed, the note becomes history, and you add a new note. If you need to check anything, you scroll down — back through history — and see why you decided to do what you did. Can you tell us a bit more about how Agenda works exactly? I mentioned the project timeline already, and that is directly inspired by Alex’s text files, but when you develop an application, you have the flexibility to add features not easily achieved purely with text. In Agenda, notes can have a date, but they can also be linked to an event in your calendar. You can also link tasks from a list to a reminder in Apple’s Reminder app. Our philosophy is to integrate with the existing apps you already use, rather than trying to build a kitchen sink app that does it all. Agenda is very much focussed on note-taking, but integrates with your calendar and other apps. One of the most important features of Agenda — and one that isn’t really possible to achieve with just text files — is an overview we call “On the Agenda”. You can flag any note as being on-the-agenda by simply clicking on an orange dot at the top. Once you do that, the note appears in an On the Agenda overview in the left sidebar. I use On the Agenda myself all the time. I keep notes there that are current, and which I want to access quickly. It might be a note for a meeting that day; a task checklist for a feature I am programming; or a recipe for the evening meal — anything I want to find quickly. And once something is no longer relevant, I take it off the agenda, knowing that if I need it, it is still there in the project. What also goes well beyond plain text files is the Agenda editor. It supports styles, so your notes have structure, as well as lists and checklists. I actually stopped using a todo app for my tasks, because I find it much faster and easier to use Agenda. I typically only add a linked reminder if I want to be reminded at a particular time to do something. You can add attached files and images to your notes too. One of the nice features of the new release, Agenda 14, is that you can now edit files in your notes. In the past, you could drag in an Excel file or PDF, but you couldn’t change it once it was in Agenda. With Agenda 14, you can double click on the attachment to open Excel, edit your spreadsheet, and save straight to the file in Agenda. Same with PDFs: open in Apple’s Preview, add some markup, and save changes directly to Agenda. 
Agenda also goes well beyond many note taking apps in terms of organizing, automation and searching. You can include tags in your notes, and link to outside resources or other notes. Agenda 14 improves upon this with backlinking, tag autocompletion, and a tag manager.  And for the real pro user, there are note templates and actions for inserting the date and other useful information. These allow you to build note content dynamically. You can even automate Agenda using x-callback URLs (macOS and iOS) and Shortcuts (iOS). That’s exciting. Many note-taking tools sacrifice design for the sake of functionality, but Agenda offers a beautiful note-taking experience. Can you tell us about your design process? The two of us have always had a strong focus on design in our apps. Alex has won several Apple Design Awards, which are like the Oscars of the app world, and Agenda won one in its first year. That success was largely thanks to our designer, Marcello Luppi. Marcello is also a long time friend who we knew from the scientific world, but who had transitioned into app design.  We originally began the project with no designer, and after around 6 months of programming, we showed what we had to some friends, and… They hated it! They didn’t understand the concept at all, and it looked like it had been designed by a couple of software engineers — go figure! We called in the help of Marcello, and he completely transformed the app, both visually and functionally, into the award winner it became. Marcello was so successful that Apple poached him away from us a year or so after we received the award. Marcello polished the whole appearance of the app, but one aspect we were determined to get right from the beginning was the text editor. There are lots of great markdown editors around today, where you edit plain text files with markdown formatting such as “# This is a heading”, that type of thing. We loved markdown for its ease of entry, but we thought it was a compromise to have that formatting in the final document. With Agenda, we wanted the best of both worlds. Why can’t you type “# This is a heading”, and have the text change instantly into an elegant bold heading style? Why do I have to see the # in my heading? We wanted Agenda notes to look and feel like real, well formatted documents, and that is what we have tried to achieve. In the current editor, you can type “# This is a heading”, and it will become a well-formatted heading. You can type “[ ] Bananas”, and it will turn into a checklist for your groceries. So you have the ease of entering markdown, but you end up with elegantly styled documents. And it goes much further than headings and lists. You can add tags, tables, links, and preformatted blocks, just by entering text. That sounds like such an elegant way to capture all sorts of notes. Now — a big challenge with many note-taking apps is the retrieval of older notes. How does Agenda...
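To make the idea concrete, here is a toy sketch in Python. It is purely illustrative and not Agenda’s actual implementation: the point is simply that markdown-style shorthand can be recognised the moment a line is entered and replaced by a styled element, so the raw markers never remain in the note.

```python
import re

# Toy illustration only: Agenda's real editor is not built this way.
# The idea: when a line is committed, markdown-like shorthand is mapped
# to a styled element and the marker characters are dropped.

PATTERNS = [
    (re.compile(r"^# (.+)$"), "heading"),
    (re.compile(r"^\[ \] (.+)$"), "checklist-item"),
    (re.compile(r"^- (.+)$"), "list-item"),
]

def style_line(raw: str) -> tuple[str, str]:
    """Return a (style, text) pair for a typed line, stripping the shorthand."""
    for pattern, style in PATTERNS:
        match = pattern.match(raw)
        if match:
            return (style, match.group(1))
    return ("body", raw)

print(style_line("# This is a heading"))  # ('heading', 'This is a heading')
print(style_line("[ ] Bananas"))          # ('checklist-item', 'Bananas')
```

The same input convenience extends to tags, tables, and preformatted blocks: the shorthand is only a way of typing, while what is stored and displayed is the styled element.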
How to increase your creativity by cultivating creative self-efficacy
Do you think of yourself as someone who is not creative? Creative work can be challenging, and many people lack confidence in their own ability. Psychologists have reported that being unsure, anxious or defeatist about your creative potential can become a self-fulfilling prophecy that hinders your performance. Creative self-efficacy is the internal belief that you have the ability to complete creative tasks effectively. If you can learn to leave behind the fixed mindset of “I am not a creative person”, you will be able to make more room for personal growth, exploration, and innovation. Believing in your creative potential The concept of self-efficacy was first coined by Dr Albert Bandura. Bandura closely studied the relationship between performance and belief in oneself. He noted that those who had a strong sense of efficacy, or belief in oneself, approached challenging tasks with the determination to succeed. People with high self-efficacy tend to set goals, to become deeply engrossed in the task, and to continue their efforts despite difficulties or setbacks. Rather than feeling threatened by a challenge, they approach it with the confidence that they are in control and will eventually master the task. Conversely, Bandura noticed that those who tend to back away from difficult tasks do so because of self-doubt and a fear of failure. With little determination to succeed, they are more likely to dwell on their perceived weaknesses. For people with low self-efficacy, obstacles can quickly lead to abandoning the task, compounding an internalised belief that they are incapable of succeeding. Creative self-efficacy is a specific form of self-efficacy that was first investigated by Dr Pamela Tierney and Dr Steven Farmer. The researchers described creative self-efficacy as “the belief one has the ability to produce creative outcomes”. The greater the belief in your own creativity, the more successful you will be in pursuing your creative goals. Tierney and Farmer also reported that creativity can be impacted by your confidence in managing the overall demands of your career. If you feel that you are capable of succeeding at work, then you are also more likely to demonstrate good creative performance within your role. The most interesting part is that although job self-efficacy is a predictor of confidence in your personal creative ability, creative self-efficacy is the greater predictor of your creative performance.  This is corroborated by the results of Dr Gay Lemons, who found that creative success is most greatly influenced by belief in one’s own ability, rather than by actual creative competence. As you can see, creative self-efficacy is a psychological attribute that greatly influences creative performance, with the potential to further what we can achieve. How to cultivate creative self-efficacy Learning how to believe in your own creative ability is as important, if not more so, than developing your creative skills. While it is important to practice and explore new creative skills, cultivating creative self-efficacy can have a great influence on your creativity. Here are some practical ways to cultivate your creative self-efficacy. 1. Develop a creative network. By building a strong professional network of people who are driven to produce excellent creative work, you can start imitating part of their creative self-efficacy to increase your own. Remember that creativity is not restricted to the arts. 
Everyday professional dilemmas can be solved creatively, whether they relate to project management, delivery of information, or organisation of a complex budget. Watch how your peers apply creative thinking to manage everyday tasks, and start emulating some of these patterns. 2. Get creative support. Identify people whose creative efforts are often successful, and ask whether you can work under their guidance. This could be a quick brainstorming session, a creative review, or just sharing some helpful resources. Support should go both ways: consider whether there is scope to offer your co-workers some of your time to help with their creative growth. 3. Cultivate creative autonomy. The professional freedom to expand on your basic duties and responsibilities can increase your creative self-efficacy. As a bonus, perceived autonomy also has a positive impact on our mood. Creative autonomy involves fostering a growth mindset and self-directed ways of working. If you are a manager, take a step back and try to avoid excessively supervising your team. Instead, make your team feel empowered to succeed via their own methods. Remember that your creativity is more closely linked to creative self-efficacy than to your actual creative competence. Beyond its immediate benefits, cultivating creative self-efficacy can help you feel more motivated, productive, and can be an opportunity to build a strong professional network. The post How to increase your creativity by cultivating creative self-efficacy appeared first on Ness Labs.
What is neurodiversity?
People think, learn, behave, and experience the world around them in many different ways. Some of this diversity is due to neurological differences. Neurodiversity refers to those variations in neurocognitive functioning. Let’s have a look at the origin of the term, and its usefulness in research and practice. A short primer on neurodiversity The term “neurodiversity” is relatively new: it was coined by social scientist Judy Singer in the late 1990’s in relation to autism, but has since come to encompass many other neurodevelopmental conditions such as attention deficit hyperactivity disorder (ADHD), dyslexia, dyscalculia, and more. People of standard neurodevelopmental and cognitive functioning are referred to as “neurotypical”, while “neurodivergent” is used to refer to people whose brain functions differ from what is considered standard — sometimes collectively referred to as neurominorities. Central to neurodiversity is the idea that naturally occurring variations in the human brain should be seen as differences rather than deficits. Some people consider neurodiversity to be related to the concept of biodiversity — a term you will mostly see being used for the purpose of advocating for the conservation of species. In the words of Dr Robert Chapman: “Proponents of the neurodiversity movement […] challenge the pathologization of minority cognitive styles and argue that we should reframe neurocognitive diversity as a normal and healthy manifestation of biodiversity.” There is currently no definitive list of neurodevelopmental conditions  that should be included under the umbrella term of neurodiversity, and some researchers even advocate for an entirely different definition that doesn’t rely on contrasting neurocognitive differences between individuals. As Dr Nancy Doyle explains: “A definition has emerged for psychologists and educators which positions neurodiversity within-individuals as opposed to between-individuals.” The spiky cognitive profile of neurodivergence (adapted from Doyle, 2020) She adds: “The psychological definition refers to the diversity within an individual’s cognitive ability, wherein there are large, statistically-significant disparities between peaks and troughs of the profile, known as a ‘spiky’ profile. A neurotypical is thus someone whose cognitive scores fall within one or two standard deviations of each other, forming a relatively ‘flat’ profile, be those scores average, above or below.” It’s important to keep in mind that neurodiversity has no official definition, and the idea does not align with the usually discrete approach to diagnosis used in medical practice — the latest Diagnostic and Statistical Manual of Mental Disorders includes more than 150 discrete diagnoses. However, it doesn’t need to be excessively controversial. Two complementary research models Because the concept of neurodiversity has initially emerged as part of the social sciences, there is currently no consensus within the scientific community as to how to use it in clinical contexts. That’s partly because the clinical model and the social model consider disability from two different perspectives. While the clinical model seeks to cure or manage disabilities, the concept of neurodiversity is based on the social model of disability, which identifies systemic barriers to the social integration of people with functional differences. The two models have their respective critics, but they are not incompatible. 
In different ways, both clinical research and neurodiversity research seek to contribute scientific evidence to reduce impairments experienced by neurodivergent people: clinical research focuses on treatment, and neurodiversity research focuses on adapting environments to the diverse needs of individuals. In a comment published in The Lancet Psychiatry, Dr Edmund Sonuga-Barke and Dr Anita Thapar write: “Rather than a complete reliance on disorder-based concepts and related treatment approaches, we can see many advantages of incorporating the concept of neurodiversity alongside mainstream research and clinical practice.” “Indeed, there is no contradiction between traditional approaches that look to give neurodiverse individuals additional resources through clinical treatment and neurodiverse approaches that look to adapt environments and transform neurotypical attitudes: both approaches are beneficial and together will improve the lives of neurodiverse people.” In addition, there is growing support for a “transdiagnostic” approach that cuts across traditional diagnostic categories. Researchers from the University of Cambridge explain: “Removing the distinctions between proposed psychiatric taxa at the level of classification opens up new ways of classifying mental health problems, suggests alternative conceptualizations of the processes implicated in mental health, and provides a platform for novel ways of thinking about onset, maintenance, and clinical treatment and recovery from experiences of disabling mental distress.” Instead of — often artificially — imposing categories onto a multidimensional and complex space, a transdiagnostic approach allows clinicians to account for the massive heterogeneity within diagnoses and for the common co-occurrence of many conditions, which make a rigid taxonomy too limiting to properly support people. The idea is to consider continuous dimensions within the population, as opposed to distinct categorical entities. Supporting neurodiversity The concept of neurodiversity is particularly useful in environments such as schools and the workplace, where changes can be implemented to foster inclusivity and bolster people’s individual strengths while providing support for their different needs. For instance, adjustments can be made to accommodate diverse physical needs, such as letting people fidget, having a dedicated space for quiet breaks, or offering noise-cancelling headphones. A lot of these adjustments may even be helpful for all employees. Clear communication and documentation, flexible hours, and a school or workplace culture that emphasises kindness — all of these are good practices to implement, regardless of initiatives specifically targeted at supporting neurodiversity. Neurodiversity is still an emerging paradigm which has been described as a “moving target”, but it already offers several practical implications for leaders who want to build more inclusive environments and researchers who want to support people across the multitude of conditions that may escape categorical labels. Hopefully this short primer will make you want to learn more! The post What is neurodiversity? appeared first on Ness Labs.
The emerging theory of authentic leadership
Being “authentic” has become a bit of an overused buzzword, and has lost some of its meaning. However, despite the concept not being fully mature in a theoretical or experimental sense, early research has shown that authentic leadership may improve team performance compared to traditional management. Authentic leadership is an emerging theory that encourages managers to be genuine, self-aware and transparent when guiding their team. Let’s explore ​​the potential benefits of authentic leadership, and the strategies you can employ to authentically support your team in being as successful as possible. The benefits of authentic leadership Authentic leadership is a concept that was first formulated by Harvard professor and former Medtronic CEO Bill George, who was adamant that new laws alone could not help to repair the corporate crisis. Instead, he claimed that new leaders and innovative styles of leadership were required to give corporations a chance of financial recovery. Whereas a traditional leader in a large corporation might value profits above people, an authentic leader carefully balances tough ethical dilemmas with financial optimisation.  Bill George considered that there are five essential dimensions of an authentic leader: purpose, values, heart, relationships and self-discipline. According to him, an authentic leader should work compassionately, valuing both the company and its employees. So, why do teams value an authentic leader? Authentic leadership is seen as an antidote to unethical leadership. Fred Luthans and Bruce Avolio noted that an authentic leader is likely to appear more reliable and trustworthy to those who work with them. Instead of a manager with a “work persona”, people enjoy working with a manager that behaves like their true self — a manager who is self-aware, who has developed a supportive professional relationship with each individual in the team, and who has a good understanding of their thoughts, emotions, or belief systems. Traditional leadership might involve a manager working in a way that does not necessarily align with their own personal values. This can be confusing for colleagues, who might be left second guessing what is expected of them. Researchers reported that this lack of clarity or ambiguity of what is expected can lead to a team working without direction. This is likely not only to reduce job satisfaction, but could also lower overall productivity. In contrast, authentic leadership can make it far easier for co-workers to recognise your values, and predict or follow your instructions. It will require less effort to understand what you expect, helping the team to work in a more constructive and cohesive manner. In a study of 51 teams, authentic leadership improved a teams’ drive to being the very best they could be. In turn, increased virtuousness led to greater team potency — the ability to succeed. The researchers concluded that authentic leadership can foster team motivation, thereby improving overall team performance. Win-win! How to become an authentic leader Most people do not undergo leadership training before becoming a leader, and so are learning to lead on the job. Although research into authentic leadership is in its infancy, some principles can be helpful when leading a team. 1) Define your ideals. Authentic leadership lies in upholding your personal and professional values. Before you can lead authentically, you will need to define your own ethical values and ideals of leadership. 
Although there will usually be a corporate goal in sight, those values should still guide your decisions as a leader. 2) Practise self-reflection. Self-reflection through journaling, self-awareness exercises, or investing in a career coach may help you to identify your strengths, weaknesses, and cognitive patterns such as likely reactions to certain situations. It will also help you to develop emotional intelligence so you can become more aware of how your team is feeling and support them appropriately. 3) Foster relational transparency. People are more likely to enjoy working with you and respect the decisions you make if you are transparent about your thought processes. The line between personal and professional does not have to become overly blurred, but it is important that your colleagues don’t feel like you have a hidden agenda.   It takes courage, but openly sharing your strengths, weaknesses, and thought processes with your team shows them that you have nothing to hide, and that you are — like them — eager to keep on learning and growing. This level of transparency suggests that personal and professional growth is something to be supported and celebrated. Authentic leadership remains an emerging but promising theory. Learning to lead in a new way takes time, but defining your own ideals, practising self-reflection and developing relational transparency with your co-workers is likely to lead to improved cohesion, satisfaction, psychological safety, and performance. Give it a try! The post The emerging theory of authentic leadership appeared first on Ness Labs.
Steering Science with Prizes
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. Audio versions of this and other posts: Substack, Apple, Spotify, Google, Amazon, Stitcher. Finally, as part of the partnership with the Institute for Progress, the fine folks at And Now have designed a new logo for New Things Under the Sun. New scientific research topics can sometimes face a chicken-and-egg problem. Professional success requires a critical mass of scholars to be active in a field, so that they can serve as open-minded peer reviewers and can validate (or at least cite!) new discoveries. Without that critical mass, working on a new topic might be professionally risky. But if everyone thinks this way, then how do new research topics emerge? After all, there is usually no shortage of interesting new things to work on; how do groups of people pick which one to focus on? One way is via coordinating mechanisms: a small number of universally recognized markers of promising research topics. The key ideas are that these markers are: credible, so that seeing one is taken as a genuine signal that a research topic is promising; scarce, so that they do not divide a research community among too many different topics; and public, so that everyone knows that everyone knows about the markers. Prizes, honors, and other forms of recognition can play this role (in addition to other roles). Prestigious prizes and honors tend to be prestigious precisely because the research community agrees that they are bestowed on deserving researchers. They also tend to be comparatively rare, and followed by much of the profession. So they satisfy all the conditions. This isn’t the only goal of prizes and honors in science. But let’s look at some evidence about how well prizes and other honors work at helping steer researchers towards specific research topics. Howard Hughes Medical Institute Investigators We can start with two papers by Pierre Azoulay, Toby Stuart, and various co-authors. Each paper looks at the broader impacts of being named a Howard Hughes Medical Institute (HHMI) investigator, a major honor for a mid-career life scientist that comes bundled with several years of relatively no-strings-attached funding. While the award is given to provide resources to talented researchers, it is also a tacit endorsement of their research topics and could be read by others in the field as a sign that further research along that line is worthwhile. We can then see if the topics elevated in this manner go on to receive more research attention by seeing if they start to receive more citations. In each paper, Azoulay, Stuart, and coauthors focus on the fates of papers published before the HHMI investigatorship has been awarded. That’s because papers written after the appointment might get higher citations for reasons unconnected to the coordinating role of public honors: it could be, for instance, that the increased funding resulted in higher quality papers which resulted in more citations, or that increased prestige allowed the investigator to recruit more talented postdocs, which resulted in higher quality papers and more citations. By restricting our attention to pre-award papers, we don’t have to worry about all that. Among pre-award papers, there are two categories of paper: those written by the (future) HHMI investigator themselves, and those written by their peers working on the same research topic.
Azoulay, Stuart, and coauthors look at each separately. Azoulay, Stuart, and Wang (2014) looks at the fate of papers written by an HHMI investigator before their appointment. The idea is to compare papers of roughly equal quality, but where in one case the author of the paper gets an HHMI investigatorship and in the other case doesn’t. For each pre-award paper by an HHMI winner, they match it with a set of “control” papers of comparable quality. These controls are published in the same year, in the same journal, with the same number of authors, and the same number of citations at the point when the HHMI investigatorship is awarded. Most importantly, the control paper is also written by a talented life scientist, with the same position (for example, first author or last author, which matters in the life sciences), but who did not win an HHMI investigator position. Instead, this life scientist won an early career prize. If people decide what to work on and what to cite simply by reading the literature and evaluating its merits, then whatever happens to the author after the article is published shouldn’t be relevant. But that’s not the case. The figure below shows the extra citations, per year, for the articles of future HHMI investigators, relative to their controls who weren’t so lucky. We can see there is no real difference in the ten years leading up to the award, but then after the award a small but persistent nudge up for the articles written by HHMI winners. [Figure from Azoulay, Stuart, and Wang (2014)] That bump could arise for a number of different reasons. We’ll dig into what exactly is going on in a minute. But one possibility is that the HHMI award steered more people to work on topics similar enough to the HHMI winner that it was appropriate to cite their work. A simple way to test this hypothesis is to see if other papers on the same topic also enjoy a citation bump after the topic is “endorsed” by the HHMI, even though the authors of these articles didn’t get an HHMI appointment themselves. But that’s not what happens! Reschke, Azoulay, and Stuart (2018) looks into the fate of articles written by HHMI losers on the same topic as HHMI winners. For each article authored by a future HHMI winner, Reschke, Azoulay, and Stuart use the PubMed Related Articles algorithm to identify articles that are on similar topics. They then compare the citation trajectory of these articles on HHMI-endorsed topics to control articles that belong to a different topic, but were published in the same journal issue. As the figure below shows, in the five years prior to the award, these articles (published in the same journal issue) have the same citation trajectories. But after the HHMI decides someone else’s research on the topic merits an HHMI investigatorship, papers on the same topic fare worse than papers on different topics! [Figure from Reschke, Azoulay, and Stuart (2018)] Given the contrasting results, it’s hard not to think that the HHMI award has resulted in a redistribution of scientific credit to the HHMI investigator and away from peers working on the same topic. So maybe awards don’t actually redirect research effort. Maybe they just shift who gets credit for ideas? The truth seems to be that it’s a bit of both. To see if both things are going on, we can try to identify cases where the coordination effect of prizes might be expected to be strong, and compare those to cases where we might expect it to be weak.
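As a rough sketch of how this comparison works in practice (hypothetical column names, not the authors’ replication code), the matched-pair design boils down to aligning each treated paper and its control on the award year and averaging their citation difference by event year:

```python
import pandas as pd

# Hypothetical sketch of the matched comparison, not the authors' code.
# Assume `cites` has one row per (paper, year) with columns:
#   match_id   - links a future HHMI winner's paper to its matched control
#   group      - "treated" (future winner's paper) or "control"
#   award_year - year the HHMI investigatorship was awarded
#   year       - calendar year of the citation count
#   citations  - citations received in that year

def citation_gap(cites: pd.DataFrame) -> pd.Series:
    """Average treated-minus-control citations by year relative to the award."""
    df = cites.copy()
    df["event_year"] = df["year"] - df["award_year"]  # align every pair on the award date
    by_pair = (df.groupby(["match_id", "group", "event_year"])["citations"]
                 .mean()
                 .unstack("group"))
    gap = by_pair["treated"] - by_pair["control"]     # difference within each matched pair
    return gap.groupby(level="event_year").mean()     # average across pairs

# A flat gap before the award followed by a post-award jump is the pattern in the first
# figure; pointing the same skeleton at topic peers instead of the winners gives the second.
```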
For example, for research topics where there is already a positive consensus on the merit of the topic, prizes might not do much to induce new researchers to enter the field. Everyone already knew the field was good and it may already be crowded by the time HHMI gives an award. In that case, the main impact of a prize might be to give a winner a greater share of the credit in “birthing” the topic. In contrast, for research topics that have been hitherto overlooked, the coordinating effect of a prize should be stronger. In these cases, a prize may prompt outsiders to take a second look at the field, or novice researchers might decide to work on that topic because they think it has a promising future. It’s possible these positive effects are enough so that everyone working on these hitherto overlooked topics benefits, not just the HHMI winner. Azoulay, Stuart, and coauthors get at this in a few different ways. First, among HHMI winners, the citation premium their earlier work receives is strongest precisely for the work where we would expect the coordinating role of prizes to be more important. It turns out most of the citation premium accrues to more recent work (published the year before getting the HHMI appointment), or more novel work, where novelty is defined as being assigned relatively new biomedical keywords, or relatively unusual combinations of existing ones. HHMI winners also get more citations (after their appointment) for work published in less high-impact journals, or if they are themselves relatively less cited overall at the time of their appointment. And these effects appear to benefit HHMI losers too. The following two figures plot the citation impact of someone else getting an HHMI appointment for work on the same topic. But these figures estimate the effect separately for many different categories of topic. In the left figure below, topics are sorted into ten different categories, based on the number of citations that have collectively been received by papers published in the topic. At left, we have the topics that collectively received the fewest citations, at right the ones that received the most (up until the HHMI appointment). In the right figure below, topics are instead sorted into ten different categories based on the impact factor of the typical journal where the topic is published. At left, topics typically published in journals with a low impact factor (meaning the articles of these journals usually get fewer citations), at right the ones typically published in journals with high impact factors. [Figure from Reschke, Azoulay, and Stuart (2018)] The effect of the HHMI award on other people working on the same topic varies substantially across these categories. For topics that have not been well cited at the time of the HHMI appointment, or which do not typically publish well, the impact of the HHMI appointment is actually positive! That is, if you are working on a topic that isn’t getting cited and isn’t placing in good journals,...
Audio: Steering Science with Prizes
This is an audio read-through of the initial version of Steering Science with Prizes. To read the initial newsletter text version of this piece, click here. Like the rest of New Things Under the Sun, this underlying article upon which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Productivity addiction: when we become obsessed with productivity
The business and productivity app market is worth billions of dollars. Every day, there is a new productivity tool popping up, a book about productivity being published, and millions of people reading and sharing content related to personal productivity. It started as a measure of efficiency for the production of goods and services. Somehow, along the way, many of us have become addicted to productivity. Why are we so obsessed with being productive? At its core, productivity addiction is based on the same reward systems as other addictions. By providing constant reinforcement — for example financial rewards in the form of salary increases, or social rewards in the form of work recognition — productivity can become a goal in and of itself, resulting in compulsive behaviours. This phenomenon is maybe more common than you would think. Two nationally representative studies carried out in Norway and Hungary reported similar results. In Norway, Dr Cecilie Andreassen and her team found that between 7.3% and 8.3% of Norwegians are addicted to work. In Hungary, a team led by Dr Zsolt Demetrovics suggests that 8.2% of Hungarians working at least forty hours a week are at risk for work addiction. Dr Mark Griffiths estimates the prevalence of work addiction in the United States to be around 10%, mentioning some estimates as high as 15% to 25%. It doesn’t help that being addicted to productivity may be a “mixed-blessing addiction” (a term originally used to describe work addiction in the 1980’s), making it more socially acceptable and potentially hiding the negative effects for longer. Similar to someone who is addicted to exercise, a productivity addict may initially be successful in their career, earn a lot of money, and receive encouraging work accolades. But, in the long term, being obsessed with productivity can have unintended consequences, such as burnout, family issues, and health problems. The BBC ran a story about productivity addiction where Dr Sandra Chapman from Center for BrainHealth at the University of Texas explained: “The problem is that just like all addictions, over time a person needs more and more to be satisfied and then it starts to work against you. Withdrawal symptoms include increased anxiety, depression and fear.” Are you addicted to productivity? At least in the Western world, our education has often taught us to tie our self-worth to how much we contribute to society. The more we contribute, the better. “I work, therefore I am.” Being productive feels like a way to improve our self-worth. This positive reinforcement can make it hard to realise we may be falling prey to productivity addiction. However, there are five tell-tale signs you may be addicted to productivity: You don’t want to “waste” any time. Productivity addicts may suffer from time anxiety, an obsession about spending our time in the most meaningful way possible. As Dr. Alex Lickerman described it, time anxiety stems from these recurring questions: “Am I creating the greatest amount of value with my life that I can? Will I feel, when it comes my time to die, that I spent too much of my time frivolously?” Trying to always optimise the way you spend your time and struggling to do nothing may be signs of productivity addiction. You tend to turn hobbies into side projects. Let’s say you become interested in gardening, and really enjoy spending time in the garden, learning about different kinds of flowers and plants, and caring for them. 
You may be tempted to turn this hobby into something more productive, maybe by starting a newsletter about gardening, or a small business selling gardening guides. You feel guilty when you don’t hit your targets. Whether it’s inbox zero or tackling a long to-do list, being addicted to productivity may result in a hard time falling asleep in the evening because you haven’t managed to be as productive as you had hoped to be. Instead of closing your laptop and forgetting about it until the next day, you may struggle properly disconnecting because of the guilt you feel around not hitting these (sometimes artificial) targets. You always make work a priority. Are you rushing to finish dinner with your family so you can get back to work? Cancelling plans with friends so you can finalise a presentation? Cutting short your night of sleep to attend an early meeting hosted in a different timezone? While it happens to most people to have to make concessions from time to time, productivity addicts will tend to always choose work over other important areas of their lives. You constantly feel busy. Dr Brené Brown, a research professor at the University of Houston, describes being “crazy busy” as a numbing strategy that allows us to avoid facing the truth of our lives. She half-jokingly wrote: “I often say that when they start having 12-step meetings for busy-aholics, they’ll need to rent out football stadiums.” This numbing strategy may even give us the illusion of productivity. Luckily, productivity addiction is not a disease, and it is possible to make a few simple changes to avoid falling into its trap for long enough that we start experiencing its negative consequences.  How to manage productivity addiction There is no one-size-fits-all solution to get rid of our obsession with productivity, but practising mindful productivity is a great way to manage productivity addiction. Make space for self-reflection. Recovering from productivity addiction starts from understanding its source and mechanisms. What are the rewards that make you obsess about your productivity? Is it money, recognition, something else? What patterns have you noticed in the way you work that hurt other areas of your life, such as time with your family or sleep? Journaling can be a great way to reflect on your relationship with productivity. Define meaningful priorities. For many, work is an important part of their identity. But it doesn’t have to be the only defining aspect of your worth. What else do you care about? What are areas you would like to explore outside of work? Are your priorities aligned with your values? Instead of automatically creating endless task lists, ask yourself: what would be a meaningful goal I can work towards? Don’t pin the butterfly. Remember that not all hobbies need to become hustles. Try to keep some hobbies that are just that — hobbies. Spaces of self-expression where you can experiment and play whenever you feel like it, outside of the constraints of productivity. Reconsider your relationship with time. Time anxiety can lead to a daily feeling of being rushed that makes us feel overwhelmed and panicky. We think we are making the most of our time, but instead we are rushing through our precious time without savouring every second of it. Take breaks, become comfortable with doing nothing, and most importantly, define what “time well spent” means to you so you can make space for these moments. Create your own system. 
Instead of relying on prescriptive productivity methods that may not work for you and create even more stress, progressively design your own system by experimenting and iterating. Incorporate your meaningful priorities, hobbies, and insights about the way you work best to ensure you can achieve your goals without sacrificing your mental health. Finally, pay attention to your triggers. As a recovering productivity addict, you may need to always be careful about not falling back into old patterns whenever you start a new job, a new hobby, or set a new exciting goal. Practising self-reflection and paying attention to your mental health will ensure the way you work is more enjoyable and more sustainable. The post Productivity addiction: when we become obsessed with productivity appeared first on Ness Labs.
Psychological reactance: how we react to the threat of losing our freedom
You may have noticed that if someone pushes you to do something, it often makes you feel less inclined to do it. This is a phenomenon known as psychological reactance: a reflex reaction to being told what to do, or feeling that your freedom is under threat. It can occur in personal, professional or social settings when you feel that you need to regain a sense of control over your autonomy. Controlling someone else’s sense of freedom can trigger anger, and motivate them to regain it. As a decision-maker, it is important to recognise that if you push people too hard, you may end up prompting them to do the opposite of what you wanted them to do. Understanding psychological reactance, and finding ways to positively impact others’ motivation, is therefore important in both professional and personal settings. Let’s have a look. A fear of losing our personal freedom The concept of psychological reactance was formulated by psychologist Dr. Jack Brehm in 1966. He defined reactance as “the motivation to regain a freedom after it has been lost or threatened.” It causes individuals to rebel against the pressure they are put under.  It is often the thought of someone else exerting control, rather than the request itself, that leads to psychological reactance. As individuals, we want to feel that we have the freedom to do as we please. This means that when a circumstance arises which threatens our sense of freedom, reactance emerges as a form of motivational arousal. For example, being told that you cannot use a mobile phone at school may increase your desire to do so, even if you previously did not have any desire to look at your phone. Being forced to pay fees for something that was previously free may reduce your desire to buy a product for which the cost can easily be justified. You may work diligently and conscientiously to complete tasks at work without complaint. However, when your manager specifically requests a piece of work, you may start to feel your resistance growing. Despite completing similar tasks previously without issue, you may now feel the urge to react against the request simply because it has now been mandated by your manager. The perceived threat to your autonomy makes the work feel unappealing and so you may put it to the bottom of your list, or even argue against doing it at all. This reactance is a direct effort to eradicate the new restrictions imposed upon you. Reactance can occur whenever our emotional freedom is challenged. Research suggests that it can be triggered by external threats, such as being asked to complete a chore, or by internal threats or dialogue. Furthermore, the intensity of reactance experienced may depend on how significant you perceive the threat to be. The greater the threat to your autonomy, the more likely you are to refuse to yield to social or professional influence. Similarly, if more than one freedom is threatened simultaneously, reactance will increase. How to manage psychological reactance Threats to freedom, and the resultant reactance, can occur in all facets of our lives. As a decision-maker, it is likely that your role will involve making requests or attempting to motivate others to work in a certain way. Finding ways to support the autonomy of others to prevent reactance from occurring is therefore vital. 1. Accommodate autonomy. Of course, you will sometimes need to make decisions that others have less input on. However, it’s essential to treat the people you collaborate with as autonomous agents. 
For example, if a new process will be implemented at work, give your team the opportunity to provide their thoughts and suggestions. This way, it will feel less like freedom is being taken away, and more like power is being given. Research even suggests that “threatened individuals who feel powerful free themselves from the threatening situation and manage to reorient themselves.” 2. Set healthy constraints to breed creativity. It has been shown that having too few constraints breeds complacency, while excessive constraints can be detrimental to creativity and innovation: a moderate level of guidance “frames the task as a greater challenge and, in turn, motivates experimentation and risk-taking.” By finding a healthy middle ground between complete freedom and micromanaging, you can maximise creativity and encourage your team to investigate non-traditional solutions. 3. Use reactance as a motivator. In some situations, it may be possible to encourage others to achieve more by restricting their freedom in some way. For instance, a researcher may be driven to attend more conferences when told that they can only enrol in three per year. However, this strategy must be used with caution, as excessive or unfair infringement on freedom could result in resentment rather than motivation. As you have seen, psychological reactance occurs as a response to a perceived restriction on our personal freedom. Being told not to do something, or having requests made of us, can cause us to rebel against the situation. However, it is possible to prevent reactance from occurring, and even to use it as a motivator. By accommodating autonomy, using healthy constraints to encourage imaginative thinking, and applying reasonable restrictions as a stimulus for action, reactance can be directed in a way that improves creativity and productivity in the workplace — as long as leaders ensure that team members do not feel controlled, but instead feel empowered to achieve more. The post Psychological reactance: how we react to the threat of losing our freedom appeared first on Ness Labs.
Using the goal gradient hypothesis to help people cross the finish line
Our perception of progress can impact our overall drive to reach a goal. The goal gradient hypothesis posits that our efforts increase as we get closer to achieving a goal: when the reward is in sight, we feel incentivised to reach the finish line. Designers and decision-makers can effectively use goal gradients as a motivational tool. The concept of a goal gradient The goal gradient hypothesis was first introduced by Clark Leonard Hull in 1932. He tested his theory on rats, noting that the rodents ran faster the closer they got to a food reward. This phenomenon can also be observed in marathon runners of all abilities who, despite exhaustion, find a sudden burst of energy once the finish line is in sight. In 2006, researchers Ran Kivetz, Oleg Urminsky and Yuhuang Zheng followed up on Hull’s work. They investigated the goal gradient hypothesis and its relevance to purchase acceleration and customer retention for businesses. Customers were either given a 12-stamp coffee card which included two stamps to get them started, or an empty 10-stamp card. The study confirmed that those given the 12-stamp card completed it faster than those who were given an empty card, despite both groups needing to collect 10 stamps in total. The research team also noted that the frequency of coffee purchases increased as individuals approached their free coffee reward. Motivation therefore intensifies with proximity to a goal.  The impact of the goal gradient hypothesis Goal gradients do not only impact our motivation. In 2013, it was demonstrated that goal gradients could also impact how helpful or socially minded we might be. Researchers found that people were more likely to donate to charitable campaigns if the fundraiser was already close to reaching its target. Donations made in the late stage were made not only out of kindness or to relieve negative emotions, but because donors found “satisfaction from having personal influence in solving a social problem.” Those who make a charitable donation in the late stage of a campaign may feel that their contribution has a more personal impact on achievement of the fundraising target. The prosocial act of donating becomes an “influential source of satisfaction”. However, research suggests that the impact of a goal gradient can be affected by your power status. Those who perceive themselves to be in a position of low social or professional power are more likely to be motivated by proximity to a goal. For example, if a senior member of the team tells you that you can use examples from a previous job as credits towards your current goal, this can boost your motivation to complete any professional requirements.  Conversely, goal proximity may be of less importance to those who feel more powerful. If you are financially comfortable, two extra stamps on a coffee card may have less impact on your motivation to earn a free coffee than it might for someone who must budget carefully.  How you can motivate others to achieve their goals The great thing about goal gradients is that they can be used as an effective tool to motivate those around you to succeed. Whether you are a manager, designer or decision-maker, certain strategies can help you to encourage your employees, customers, or users to reach a goal. 1. Offer a head start. At the beginning of a project, it can feel like there is a marathon ahead. It can be hard to imagine getting to the finish line, and so giving those around you a head start can increase motivation. 
For example, you could offer a head start by creating pre-filled templates or example answers so that it appears that some of the work has already been completed, while also providing inspiration for the rest of the task, or by acknowledging previous studies and allowing a student to use them as credits for their current training. 2. Track and acknowledge progress. In the depths of a project, it can be hard for someone to see how close they are to achieving their goal. Track your colleagues’ progress manually or using a project management tool, and show them just how close they are to reaching the finish line. Hearing your manager tell you that you are almost there can be the motivation that is needed to finalise a project more quickly than if your progress had not been acknowledged. Consumers may also be encouraged to achieve a goal more quickly if they are made aware of their progress. If you want to encourage customers back into your coffee shop, sending an email update of the points they have accrued on their online loyalty card will not only tempt them back, but could also increase the rate at which they then reach the required points to qualify for a free coffee. This is also why progress bars are so common in mobile apps and online forms. 3. Break down milestones. Someone who perceives that a project is a long way off completion may feel demotivated. Breaking down the project into smaller milestones and celebrating micro-wins can make the goals feel more achievable. Rather than feeling overwhelmed by the volume of work left to do, your team will feel encouraged and motivated by the satisfaction that comes from ticking small victories off each day. Customer loyalty can be encouraged by the insertion of small milestones on the way to the main milestone. For example, a customer might be rewarded with a half-price coffee when they reach 5 stamps, and then a free coffee once all 10 stamps have been collected. Closing the gap between the start and finish line, with small milestones in between, can make the goal feel more attainable. As you have seen, the goal gradient hypothesis suggests that our motivation to cross the finish line increases as we get closer to it. By making projects appear easier, quicker, or simpler to complete, we feel incentivised to strive to reach our goals. But don’t keep this secret to yourself — your team can benefit as well from using goal gradients! To help boost the motivation of those around you, you can offer a head start, acknowledge someone’s progress, and create smaller milestones to help maintain focus and enthusiasm. The post Using the goal gradient hypothesis to help people cross the finish line appeared first on Ness Labs.
How to become a brain myth buster
Did you know that the more you are interested in how the brain works, the more likely you are to believe in neuromyths? Neuromyths are common misconceptions about the brain. Their source can be innocent — people who genuinely believe in those myths — or plain unethical, such as the case of marketers promoting brain fiction so they can sell dubious products to help customers achieve their full potential. Neuromyths are particularly prevalent in education. Researchers from the Department of Educational Neuroscience at Vrije Universiteit Amsterdam explain: “Teachers who read popular science magazines achieved higher scores on general knowledge questions. More general knowledge also predicted an increased belief in neuromyths. These findings suggest that teachers who are enthusiastic about the possible application of neuroscience findings in the classroom find it difficult to distinguish pseudoscience from scientific facts.” As we will see, while the sources of neuromyths can sometimes be innocent, their effects can be harmful, especially in a learning environment. But the good news is: anyone can become a brain myth buster and contribute to dispelling neuromyths, whether in education, at work, or in their daily lives. Brain fact versus brain fiction According to a systematic review of 24 scientific articles investigating the prevalence of neuromyths, some of the most common ones among teachers, educators, and trainers include the beliefs that… People learn better when they receive information in their preferred learning style. The first three years of a child’s life determine whether or not they will grow into a successful person (also known as the 3-year myth). Differences in hemispheric dominance can help explain individual differences among learners, for instance “right-brained” people are thought to be better at artistic expression and creativity, and left-brained people to be more comfortable with logical thoughts and calculations. We only use 10% of our brain capacity. Children are less focused after consuming sugary drinks or snacks. Listening to classical music helps make us smarter (also known as the Mozart Effect) And the list goes on. In a fascinating study about brain myths, researchers asked more than 3,800 people whether they believed in specific statements about the brain. Some of the participants were educators, others were scientists and doctors, and yet others were just members of the general public. The results of the study were striking. Almost 80% of scientists and doctors believed in one of the brain myths, 43% of them believed in the Mozart Effect — which, as we’ve seen, has no basis in scientific evidence — and almost 50% of educators believed that people are either right-brained or left-brained. As you can see, neuromyths are very common. The problem is that they are also very dangerous. The dangers of brain fiction There’s a popular saying that goes: “It’s not so much the things we don’t know that get us in trouble, it’s the things we think we know that aren’t so.” There are lots of things we think we know about the brain that aren’t so. But what kind of trouble are we talking about? Believing in neuromyths may seem harmless, but it really isn’t. Neuromyths can lead to: Wasted potential. If a student is struggling with mathematics and their teacher believes that people are either right-brained or left-brained, that teacher may just stop supporting the student with mathematics — focusing instead on areas where the student is more comfortable. 
Many talented people did not find their craft easy at first, and believing that some brains are just not “designed” for certain skills may prevent some students from exploring less obvious learning paths. Misspending. Brain fiction also makes us waste money — whether it’s corporate money, government money, or personal money. Companies are paying for expensive training based on neuromyths, and governments are heavily investing in pseudoscientific educational programmes (a famous example is Brain Gym in the United States). Discrimination. Finally, brain fiction can be leveraged to support discriminatory practices in education. For example, Leonard Sax, who used to run the National Association for Single Sex Public Education in the U.S., said that boys and girls should be taught differently and separately because of differences in their brains (“girls are using the cerebral cortex while boys are using the hippocampus”). Whether it’s to avoid wasted potential, misspending, or discrimination, dispelling those dangerous misconceptions about the brain is important for the future of education. And anyone — that means you too — can join the fight. Becoming a brain myth buster To become a brain myth buster, we need to ask ourselves: why do we believe in brain fiction? Several factors contribute to the emergence and proliferation of neuromyths. First, these are remarkably appealing ideas. To believe in the 10% myth is to believe that we may have some untapped potential which we could unlock should we use the right techniques or tools. To believe we are right-brained or left-brained offers a practical excuse to focus on our strengths rather than aim for a well-rounded education. Researchers have also blamed the inaccessibility of empirical research, which is often hidden behind paywalls, fostering an increased reliance on media reports rather than the original research, as well as the lack of professionals trained to bridge the disciplinary gap between education and neuroscience. Becoming a brain myth buster requires critical thinking, curiosity, and access to evidence-based sources of information about the brain. Whenever you hear a new claim about the brain, look it up using one of the following resources: BrainFacts.org — in particular, their neuromyths database, which answers questions such as “Can you learn in your sleep?”, “Does using your non-dominant hand make you smarter?” and more. The website is run by a group of global nonprofit organizations (the Kavli Foundation, the Gatsby Charitable Foundation, and the Society for Neuroscience) as a public information initiative, not by marketers trying to sell you a brain-training app. OECD database — The Centre for Educational Research and Innovation at the OECD has published a collection of neuromyths which they thoroughly debunk. These include neuromyths around multilingualism, learning styles, enriched environments, and more. Books about neuromyths — There are two books that are particularly interesting if you want to learn about some of the most common myths. The first one is Great Myths of the Brain, which takes more of a neuroscientific angle, and the second one is 50 Great Myths of Popular Psychology, which is an easier read and includes many myths rooted in psychology. Blogs of brain myth busters — There are many blogs that are excellent resources, such as Neurocritic, Neurobollocks, and Neurobonkers. Dr Christian Jarrett, the author of the Great Myths of the Brain book, has a blog about brain myths.
While not updated anymore, Neuroskeptic offers an amazing collection of articles debunking brain fiction and getting the brain facts straight. Applied neuroscience resources — You could also learn more about applied neuroscience by taking a course from a reputable university, or joining one of the many professional organizations offering training that can help you become a brain myth buster. For example, the Centre for Educational Neuroscience regularly hosts events about neuromyths. After you are done checking a claim about the brain, you can even make a note of it by adding it to your note-taking app and tagging it as "brain fact" or "brain fiction" — after a while you will have your own personal database of information about the brain, which you can use to quickly look up a claim while having conversations with colleagues, friends, or family. Finally, of course, there is Ness Labs! To celebrate Brain Awareness Week, we hosted an interactive session about brain fiction where we dispelled some of the most common myths. You can watch the recording here and download an editable template to host your own brain myth busting game. Have fun becoming a brain myth buster!
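To make the "personal database" idea above concrete, here is a small, hypothetical Python sketch of what such a collection of checked claims might look like. The claims, tags, and sources in it are examples of the workflow, not a reference list.

```python
# Hypothetical sketch of a personal "brain fact / brain fiction" database.
# The entries below are examples of the workflow, not an authoritative reference.
checked_claims = [
    {"claim": "We only use 10% of our brain capacity",
     "tag": "brain fiction", "checked_against": "BrainFacts.org"},
    {"claim": "People are either right-brained or left-brained",
     "tag": "brain fiction", "checked_against": "OECD neuromyths collection"},
]

def look_up(query: str):
    """Return every checked claim whose text mentions the query term."""
    return [c for c in checked_claims if query.lower() in c["claim"].lower()]

for entry in look_up("10%"):
    print(f'{entry["claim"]} -> {entry["tag"]} (checked against {entry["checked_against"]})')
```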
How to become a brain myth buster
Untangling Race From Hair - SAPIENS
Untangling Race From Hair - SAPIENS
One anthropologist has made it her mission to remove racial prejudices from the study of hair and find the evolutionary roots of hair diversity.
Untangling Race From Hair - SAPIENS
Progress in Programming as Evolution
Progress in Programming as Evolution
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. Audio versions of this and other posts: Substack, Apple, Spotify, Google, Amazon, Stitcher. Evolution via natural selection is a really good explanation for how we gradually got successively more complex biological organisms. Perhaps unsurprisingly, there have long been efforts to apply the same general mechanism to the development of ever more complex technologies. One domain where this has been studied a bit is computer programming. Let's take a look at that literature to see how well the framework of biological evolution maps to (one form of) technological progress.
Simulating Technological Evolution
We'll start with Arthur and Polak (2006), who look at how ever more sophisticated logic circuits can, in principle, evolve via a blind process of mutation, selection, and recombination. The paper reports the results of a large number of digital simulations that do precisely that. These simulations have three main components. First, if you're going to simulate evolution, you need your organism, or in this case, your technology. Arthur and Polak start with a very elementary logic gate, in most simulations a Not-And (NAND) gate. This is a circuit with two binary inputs and one binary output. If every input is 1, then it spits out 0; otherwise it spits out 1. From this seed, much more sophisticated circuits will digitally evolve.
From Arthur and Polak (2006)
The second thing you need in order to simulate evolution is a way to modify the organism or technology. We might think the natural way to do this is to allow for slight mutations in these circuits, which is how we often think of biological evolution (single base-pairs being switched from one to another). But Arthur and Polak believe recombination is the essence of technological change, rather than mutation. So their model of digital evolution is much more explicitly combinatorial. In every period, sets of 2-12 technologies are picked and randomly wired together in sequence, though any individual circuit is also allowed to mutate a bit on its own. Third, to model evolution you need a way to evaluate the fitness of your organisms, or circuits in this case. If we're trying to understand technological evolution, then fitness should be related to whether or not humans find technologies to be useful. Arthur and Polak come up with a list of desired functions it is reasonable for people to want circuits to fulfill. These range from very simple to very complex. For example, one simple function is just a NOT gate: it just returns the opposite of its input (1 for 0, 0 for 1). A more complex function is a 15-bit adder: if you put in two 15-bit numbers, it outputs their sum. Arthur and Polak next come up with a way to score circuits based on how close they get to giving the right answer: every time the circuit gives the right answer for a set of inputs, it scores better; every time it gives the wrong answer, it scores worse. And if two circuits perform equally well, the one that does it with fewer components scores better. In every period, the highest scoring circuits and their components get retained. Next period, the simulation draws components from this basket of retained circuits and wires them together to see if any of the resulting combinations do a better job fulfilling the desired tasks.
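To make these mechanics concrete, here is a minimal Python sketch of this kind of blind recombinant circuit evolution. It is not Arthur and Polak's code (the circuit representation, the recombination rule, the goal set, and the parameters are all simplified assumptions), but it runs the same loop: start from a NAND gate, randomly wire retained circuits together with a little mutation, score candidates against desired truth tables with ties broken by component count, and retain the best circuit found for each goal.

```python
# Minimal, illustrative sketch of blind recombinant circuit evolution in the spirit
# of Arthur and Polak (2006). Representation, parameters, and goals are simplified.
import random

N_INPUTS = 2  # primary input bits available to every circuit

def evaluate(circuit, bits):
    """A circuit is a list of NAND gates; each gate is a pair of wire indices.
    Wires 0..N_INPUTS-1 are the primary inputs; wire N_INPUTS+k is gate k's output.
    The circuit's output is the output of its last gate."""
    wires = list(bits)
    for a, b in circuit:
        wires.append(0 if (wires[a] == 1 and wires[b] == 1) else 1)
    return wires[-1]

def fitness(circuit, goal):
    """More correct rows of the truth table is better; ties go to fewer gates."""
    cases = [(a, b) for a in (0, 1) for b in (0, 1)]
    correct = sum(evaluate(circuit, bits) == goal(*bits) for bits in cases)
    return (correct, -len(circuit))

def recombine(parents):
    """Blindly stack a few retained circuits end to end, then mutate one gate."""
    child = []
    for parent in parents:
        offset = len(child)  # re-index internal wires so parents stack after existing gates
        child += [tuple(w if w < N_INPUTS else w + offset for w in gate) for gate in parent]
    k = random.randrange(len(child))
    child[k] = (random.randrange(N_INPUTS + k), random.randrange(N_INPUTS + k))
    return child

GOALS = {  # desired functions, from simple to harder, all buildable from NAND
    "not_a": lambda a, b: 1 - a,
    "and":   lambda a, b: a & b,
    "or":    lambda a, b: a | b,
    "xor":   lambda a, b: a ^ b,
}

retained = {name: [(0, 1)] for name in GOALS}  # every lineage starts as a single NAND gate
for period in range(20_000):
    parents = random.sample(list(retained.values()), k=random.randint(1, len(retained)))
    candidate = recombine(parents)
    for name, goal in GOALS.items():
        if fitness(candidate, goal) > fitness(retained[name], goal):
            retained[name] = candidate  # better circuits supplant obsolete ones

for name, goal in GOALS.items():
    score, neg_size = fitness(retained[name], goal)
    print(f"{name}: {score}/4 correct using {-neg_size} gate(s)")
```

Even in this toy version, simple goals such as NOT tend to be found almost immediately, while a goal like XOR typically only emerges once useful intermediate circuits have been retained, which is the stepping-stone effect discussed below.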
Finally, Arthur and Polak let this system run for 250,000 periods, 20 different times, and watch what happens. We learn a few things from the results of this exercise. First, the experiment is an existence proof that you don't need inventors with reasoning minds to get sophisticated technologies; this blind recombinant evolution can also do the job. In 250,000 periods, these simulations don't discover everything Arthur and Polak define as desirable, but they do go well beyond the simplest circuits. For example, the simulation successfully discovered circuits that can add 4-bit numbers and circuits that can indicate if one (and only one) of 8 inputs is 1. Second, in the experiment, technological advance tends to be lumpy. Desirable circuits tend to be discovered in clusters, after key component pieces are discovered that unlock lots of new functionality. But in between these sprints can be long periods of technological stagnation, even as under the surface the ferment of experimentation and "R&D" goes on, invisible to us. Third, their simulations give a nuanced picture of the importance of path dependency. This is the idea that where our technologies start has a big impact on where they finish. If we start along one technological trajectory, we're more likely to continue on it, and end up with a completely different basket of technologies, than if we started elsewhere. In Arthur and Polak's experiments, one way they can investigate this is to see how different simulations evolve when different circuits are discovered first. For example, most of the time, a "not" circuit is found before an "imply" circuit. But not always. In the rarer cases when "imply" circuits are found first, more subsequent technologies build on the imply circuit than on the "not" circuit. Over time, however, the program still sniffs out the best overall approaches for different functions, and this begins to chip away at the initially atypical dominance of "imply" components. Where you start matters for a time, but its importance then begins to fade. Fourth, technological innovation, like biological innovation, is red in tooth and claw. Better technologies constantly supplant obsolete ones and sometimes this leads to waves of extinction. For example, suppose some technology x is comprised of 12 other circuits, and each of these component circuits is further comprised of 2-12 subcomponents, which are in turn comprised of sub-subcomponents and so on. If technology x is replaced by a superior technology y, then technology x naturally goes "extinct." And if the components, subcomponents, and sub-subcomponents that comprised x are not part of any other technology that is the highest scoring on some function (and therefore retained), then they too can go extinct, leading to the collapse of an entire ecosystem of supportive circuits. Lastly, Arthur and Polak's digital experiment illustrates the importance of intermediate goals in the evolution of technological complexity. In their simulations, if Arthur and Polak remove key desirable circuits of intermediate complexity, the simulations get trapped, unable to advance to more complex designs. Evolution needs stepping stones to get from simple to complex.
Evolution in MATLAB Contests
This is an intriguing experiment, but it doesn't demonstrate that these mechanisms are important in the actual development of technology. For that, I am a big fan of two papers from 2018 and 2020 by Elena Miu, Ned Gully, Kevin Laland, and Luke Rendell.
These papers study 19 online programming competitions operated by MathWorks over 1998-2012. This is still an artificial setting, but we now have real people solving real programming problems, and as we'll see, these contests have some important elements that make them worth studying. In these contests, nearly 2000 participants (average of 136 per contest) competed over the course of a week to write programs in MATLAB that could find the best solution to a problem for which it was impossible to find an exact solution in the time given. For example, in a 2007 contest, participants wrote code to play a kind of peg-jumping game, where there is a grid of pegs (all worth different points) and an empty space, and you can remove a peg by jumping over one peg and into an empty space. A program's score was based on three factors: the number of points it got in the game, how fast it ran, and how complex the code was (with more complex code penalized). Participants could submit their programs at any time and receive a score. They could then modify their code in response to the score they received, and this iterative improvement was an important part of the contest. But there was a catch: programs and their scores were publicly viewable by all participants. So submitting a program and getting feedback on its performance also disclosed your program to all the other contest participants, who were free to borrow/steal your ideas. This is a great setting to study technological evolution for a few reasons. As in the real world, there is robust competition, and inventions can be reverse-engineered and copied. Unlike in Arthur and Polak's simulations, we have reasoning minds designing and improving programs, rather than blind processes of recombination and selection. But perhaps most importantly, for the purposes of studying technological evolution, we can see the complete "genotype" of computer programs by reading their code. And with standard text-analysis packages, Miu and coauthors can see exactly which lines and blocks of code are copied and how similar programs are to each other. Lastly, because programs are explicitly scored (and players care about these scores; they are actively seeking the highest score), Miu and coauthors also know exactly how "good" a program is. The figure below tracks how scores improve for a sample of 4 contests. In the figure, each dot is a program. The horizontal axis is time (each contest runs 7 days) and the vertical is the score (lower is better). Clearly the best programs improve over time, in fits and starts.
From Miu et al. (2018)
When Miu and coauthors peer into the underlying dynamics, in their 2018 paper they see that the most common type of submission is a program very similar to the current leader, but with minor tweaks. In their 2020 follow-up, they also document that when two programs have the same score, people are more likely to copy the one submitted by the participant who tends to score higher in...
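A note on method: Miu and coauthors use standard text-analysis packages to measure which lines and blocks of code are shared between submissions. Their exact similarity measure isn't given here, so the sketch below uses Python's difflib as an illustrative stand-in for scoring how close a new entry sits to the current leader; the two MATLAB-flavoured snippets are invented for the example.

```python
# Illustrative stand-in for the kind of text analysis used to detect copying:
# score how similar a new submission is to the current leader (0 = unrelated, 1 = identical).
from difflib import SequenceMatcher

def similarity(code_a: str, code_b: str) -> float:
    """Compare two program texts, ignoring indentation and blank lines
    so that trivial reformatting does not inflate the score."""
    normalize = lambda code: "\n".join(
        line.strip() for line in code.splitlines() if line.strip()
    )
    return SequenceMatcher(None, normalize(code_a), normalize(code_b)).ratio()

# Two invented MATLAB-style entries: a leader, and a near-copy with a minor tweak.
leader = """
best = -Inf;
for k = 1:numel(moves)
    s = evaluate(applyMove(board, moves(k)));
    if s > best
        best = s; chosen = k;
    end
end
"""

tweak = """
best = -Inf;
for k = 1:numel(moves)
    s = evaluate(applyMove(board, moves(k))) + tieBreak(moves(k));
    if s > best
        best = s; chosen = k;
    end
end
"""

print(f"similarity to leader: {similarity(leader, tweak):.2f}")  # close to 1.0
```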
Progress in Programming as Evolution
Audio: Progress in Programming as Evolution
Audio: Progress in Programming as Evolution
This is an audio read-through of the initial version of Progress in Programming as Evolution. To read the initial newsletter text version of this piece, click here. Like the rest of New Things Under the Sun, the underlying article on which this audio recording is based will be updated as the state of the academic literature evolves; you can read the latest version here.
Audio: Progress in Programming as Evolution
Building your web of knowledge with Scrintal
Building your web of knowledge with Scrintal
Welcome to this edition of our Tools for Thought series, where we interview teams on a mission to help us make the most of our minds. Scrintal is an app that combines mind mapping with the power of networked note-taking. It helps you see your thoughts at a glance so you can convert cluttered ideas into connected information. In this interview, we talked about the power of bi-directional linking, the challenges of setting up a Zettelkasten, the anxiety caused by folder systems, the relationship between design and functionality, how best to connect notes together, and much more. Enjoy the read! Hi Ece, Furkan, and Arda, thank you so much for agreeing to this interview. Knowledge builders often struggle to visualize how their thoughts connect to each other. Can you tell us more about your mission? Ece: Thanks so much for having us here! The biggest challenge for any thinker is finding connections between ideas. We jot down ideas all day long, but how do they link with one another? What are the opportunities that we miss because we can't put the puzzle pieces together? We started with these challenges, thinking that the hardest part of any knowledge management system is organizing and connecting things. We designed Scrintal so that it combines mind mapping with the power of networked note-taking. Scrintal is powered by bi-directional links and provides a fully functional canvas to effortlessly connect ideas. Our mission is to support clear thinking, creative writing, and organized knowledge. I think many research-minded people will love the idea. What inspired you to build Scrintal? Ece: I was doing my PhD on climate change at Stockholm University when I developed an interest in tools for thought. I was reading a tremendous amount and conducting interviews for my research. This was the first time I set up my personal knowledge management system via a Zettelkasten. I was diving deep and then kept switching tools, all in the search for a space where I could both link my ideas and spread them out to have a better overview. After some point, the same thing happened again and again. My notes were not surfaceable anymore. I was using bi-directional links to come across my older notes serendipitously. However, 90% of my time was going into creating these bi-directional links, and only 10% into going back to those links and having those serendipitous encounters. I wasn't getting anything out of the graph views, either. The "aha" moment came when I started talking to my colleagues at the university. Some even said they had considered spreading all their ideas on papers on the floor to find a structure and a flow. Mind maps were not enough: creating them breaks the flow, you focus on the visuals more than you want to, and the text editors are not powerful enough. In networked note-taking, meanwhile, we miss an essential part of the creative flow: working with ideas freely. With my co-founders Arda and Furkan, we conducted more than a hundred interviews before we started building Scrintal. In the end, we decided to combine the easy organization and findability of networked tools with the flexibility of visual tools. And how does Scrintal work exactly? Ece: We solve two challenges: we keep the creative flow of visual tools while solving the searchability and organization problems generally found in them; and we keep networked thoughts while making them more surfaceable, memorable, and recognizable.
How we achieve this: you have a daily visual desk on which you can create mid-size cards with one click. These cards are very flexible: you can make them full-screen to focus on what you are writing, or fold them so they look like post-its or a mind map. There is a fully-fledged text editor in these cards, where you can add video and images, and you can extract them from the cards if you'd like to open them separately and take notes at the same time. To keep the serendipity, and the Zettelkasten philosophy of "you only have one brain", you have one daily desk in Scrintal. You bring what you are working on to your desk through the archive, where all your notes live. You link cards through bi-directional links, rather than creating arrows as in mind maps. This helps you stay in the flow while writing and sets the base of the visual map without extra effort on your part. You can then visually organize these cards in the way that you like; if you don't want to organize them yourself, the built-in graph algorithm does the job for you. When you are happy with the visual organization you create, you can save it on a board and share it with others so they can interact with what you created. Many note-taking tools treat notes as isolated units. Can you tell us more about how Scrintal makes it easier to connect ideas together? Ece: Knowledge work starts with capturing ideas; then you connect them, and then you develop them. The biggest problem usually lies in forgetting our older notes and not knowing what to connect with what. We worked on making it much easier and more fun to see the connections between your ideas. Firstly, you can link notes bi-directionally to indicate a direct connection. We do not have a folder structure, as folders are rigid and break the ability to fluidly connect ideas. You have a "where do I put this?" anxiety in a folder system. The second layer of connecting ideas is done through tags. Tags are for grouping relevant notes. Using tags based on the context in which you'd like to find notes again is the best way to go. Then, in Scrintal, comes connecting notes in visual ways. You can lay out your notes on your visual desk and change their colors, which instantly gives a visual segmentation. In the end, you can have a final piece of work showing the whole plot of your next novel, the strategic plan of your company, research on a specific topic, or all your meeting notes on one screen. This gives you the power of seeing all your connected thoughts at a glance. This actually seems perfect for applying the Zettelkasten method! Ece: Yes! Because you can see the cards on your board, it pushes you to write one idea or topic per card. Scrintal is actually the closest thing to a digital Zettelkasten. You can think of the cards in Scrintal as index cards, and the nice thing is you can just lay them on your digital desk as if you were looking over your index cards. Seeing cards visually next to one another makes creating connections much easier. Not having a folder structure also makes Scrintal an ideal tool for implementing a Zettelkasten. Whenever an idea pops into your mind, you can create a card and tag it if you like, rather than having to think: where shall I put this new note? This bottom-up approach is what makes Scrintal suitable for a Zettelkasten. Knowledge builders sometimes feel reluctant to invest too much into a specific tool for fear of getting trapped in a silo. How does Scrintal integrate within the existing ecosystem? Ece: I totally understand this concern!
In Scrintal, we focus on the shareability of your knowledge. In terms of being able to share anything you create in Scrintal, you can publicly share your whole desk, or single notes within it. This way, anyone can interact with your board without creating a Scrintal account, go deeper into each note, and see the total overview at the same time. We will release the Markdown import and export options soon, which I believe will make our current users very happy. That sounds great. Can you share a bit more about your design philosophy? Furkan: We prioritize a simple and functional design. Building a tool that is visually appealing is extremely important given the number of hours we spend looking at our screens. Our design principles are… Nothing unnecessary. We remove all the fluff. When it comes to design, less is more. We do not follow the trend of adding more features just because it makes us look cooler. Just because we can do something cool doesn't mean we have to do it. Simple is beautiful. We have infinite ways of expressing ourselves — but only a few ways of actually communicating effectively with people. We embrace simplicity in our design and communication because we are here to build trust with our users, and this starts with communicating clearly without any white noise. Functionality first. Functionality precedes design because functionality solves the problem for us and our users — design merely facilitates functionality. Almost always there are multiple ways to express the same functionality in design; what matters is that the functionality works well, rather than how it looks on the outside (as long as it looks good). As an inherent philosophy, anything we add to Scrintal should solve a problem, should be simple, and should not create new problems while solving another. And how do you incorporate your users' perspective in the design process? Arda: Understanding the user's perspective has been an important part of the product development process since day one. We believe user interviews are a great way to understand what people think and feel about our product, as well as their experience with similar products. Since we released Scrintal's first version, we have onboarded several groups of users who have been using Scrintal for different use cases. Once they have had enough time to try the tool in their daily lives, we meet again to listen to their feedback. This user-centric approach gave us insights into what problems people have, what they appreciated, and what crucial features were missing. Once we get those insights, we cluster them into groups and conduct a brainstorming workshop as a team where each member shares their thoughts. We then decide what to prioritize next and start building those features. We're in the process of building a community in which our early users are able to send us immediate feedback and request new features. Community building has been an integral component ...
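For readers who want to picture the data model behind the ideas described in this interview (cards connected by bi-directional links, grouped by tags rather than folders), here is a toy sketch in Python. It is emphatically not Scrintal's implementation; the class, fields, and sample cards are invented purely to illustrate why backlinks and tags make notes easy to resurface.

```python
# Toy model of cards, bi-directional links, and tags (instead of folders).
# Invented for illustration only -- this is not how Scrintal is implemented.
from collections import defaultdict

class NoteGraph:
    def __init__(self):
        self.cards = {}                    # title -> body text
        self.links = defaultdict(set)      # title -> titles it links to
        self.backlinks = defaultdict(set)  # title -> titles that link to it
        self.tags = defaultdict(set)       # tag -> titles carrying that tag

    def add_card(self, title, body, tags=()):
        self.cards[title] = body
        for tag in tags:
            self.tags[tag].add(title)

    def link(self, source, target):
        """Store every link in both directions, so each card automatically
        'knows' which other cards reference it (the backlink)."""
        self.links[source].add(target)
        self.backlinks[target].add(source)

    def related(self, title):
        """Everything one hop away: outgoing links plus incoming backlinks."""
        return self.links[title] | self.backlinks[title]

graph = NoteGraph()
graph.add_card("Zettelkasten", "One idea or topic per card.", tags={"note-taking"})
graph.add_card("Bi-directional links", "Links are visible from both ends.", tags={"note-taking"})
graph.link("Bi-directional links", "Zettelkasten")

print(graph.related("Zettelkasten"))   # {'Bi-directional links'} -- surfaced via the backlink
print(graph.tags["note-taking"])       # both cards, grouped by context without folders
```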
Building your web of knowledge with Scrintal
The danger of emotional reasoning and using our emotions as proof
The danger of emotional reasoning and using our emotions as proof
Cognitive distortions are thought patterns that can affect our perception of reality. One such distortion is emotional reasoning. This is a thought pattern in which our emotional reactions, or our feelings, lead us to believe that something is true even when the empirical evidence tells us otherwise. Emotional reasoning is very common in the workplace. If you have ever found yourself thinking, "I know this project will fail because I feel scared," or "I know my manager must dislike me because I feel unappreciated," or "I know my colleague has been hiding something because I feel suspicious," then you have already experienced emotional reasoning.
Taking our emotions as information
Recent research described emotional reasoning as a mechanism that "can lead people to take their emotions as information about the external world, even when the emotion is not generated by the situation to be evaluated." This leads to inaccurate emotional truths which directly contradict any objective, perceptual truths. The term "emotional reasoning" was coined by the American psychiatrist Aaron Beck in the 1970s. In a career that spanned more than 65 years, Beck studied cognitive theory and therapy, and is considered the founder of cognitive behavioural therapy (CBT). Beck's extensive clinical career, and related research, illuminated the way in which our emotions sway the way we think. For example, Beck found that his depressed patients were plagued by self-criticism and regret, whereas those with anxiety experienced fear-filled thoughts. Beck referred to thought responses to an emotion as "automatic thinking". His research suggested that the content of automatic thoughts was often linked to the diagnosis a patient had. However, it is likely that automatic thoughts will be relevant to your state of mind, even if you do not specifically struggle with your mental health. For example, if you have been feeling anxious about a project at work, your automatic thoughts may be based on that anxiety. When presenting your findings to colleagues, you may assume that they are disappointed with your progress. As a result of emotional reasoning, this automatic thought will occur in the absence of any objective proof to suggest that your colleagues perceive your work negatively. Furthermore, studies have shown that isolated automatic thoughts can result in negative thought cycles. Emotional reasoning such as "I am sure I am doing a bad job at work because I feel anxious about it every day" aggravates your fear or apprehension. Increasing anxiety can begin to negatively impact your performance. You may struggle to focus, make mistakes, or see a decline in your output. This becomes a self-fulfilling prophecy, and a cycle of negative thoughts is set in motion.
How to avoid emotional reasoning
If your beliefs have become founded on emotional reasoning rather than logical facts, it is vital to search for objectivity to manage this cognitive distortion. Taking control of automatic thoughts will help to prevent emotional reasoning from derailing your efforts in your professional and personal life. The process involves challenging your emotional beliefs so that automatic thoughts are thoroughly interrogated before being accepted. There are several ways you can investigate the source of any discouraging thoughts to avoid unnecessary negativity or anxiety.
1. Practice validity testing. Validity testing is key to checking whether you are experiencing emotional reasoning.
If you feel sure that your work has not been of the expected standard, you must search for objective evidence to test whether this belief is true. Ask yourself if anyone has questioned your work, and reflect on any recent appraisals or informal feedback you have received. In the absence of negative feedback or criticism, you may find that your thoughts cannot be upheld and are therefore unlikely to reflect the truth.
2. Write in a journal. Journaling is a great way to pay attention to your thought patterns. Specifically, you should record the difficult situations you face, and which emotions or thoughts a dilemma provokes. If a colleague requests a meeting with you without giving you any context, use your journal to document the automatic thoughts that appear. You might automatically assume that they want to talk because your performance has been below average, or that you are facing redundancy, despite there being no evidence for this. Recording your feelings in this way will allow you to reflect on your natural thought patterns so that you can start to identify when emotional reasoning is affecting you. This provides an opportunity to reject negative thoughts before they take hold.
3. Discuss your emotions. If you feel anxious about work, you may struggle to accurately assess your performance. Talking to a trusted colleague or friend about your concerns could give you a much-needed objective view. It can be illuminating to learn that others speak highly of you or your work, and this can help to dispel cognitive distortions. Emotional reasoning is a form of distorted cognition that can lead to an unwarranted negative opinion of your ability or character. Negative thoughts can set off a downward spiral of anxiety, creating a self-fulfilling prophecy of worsening performance. Learning to probe emotional beliefs to check their validity can help to avoid unnecessary negative thoughts and self-talk. Discussing your beliefs with someone you trust, using a journal to understand your thought patterns, and practising validity testing can all help you to avoid the pitfalls of using your emotions as a form of proof.
The danger of emotional reasoning and using our emotions as proof