Public Health & Medicine

Thinking Beyond the Brain: Why Neuroplasticity is Overhyped
Lists of exercises to rewire your brain, books about the “plastic” brain… Neuroplasticity has been touted as a magical capability anyone can harness for success. As with many neuroscience-based concepts that made it into mainstream media, the hype starts from a fact: it is true that the adult brain is not hard-wired with fixed neuronal circuits. But many how-to guides take the idea much further than most scientists would be comfortable with. So, where does the boundary lie between neuroscience and neurobabble? What exactly is neuroplasticity, and can it be capitalized on in any practical way? And is there a more holistic way to explain learning, habit formation, and human behavior in general? A primer on neuroplasticity Neuroplasticity, short for neural plasticity, is the ability of the nervous system to reorganize its structure, function and connections. These changes can be small, such as a single neuron pathway making a new connection. Or they can be quite big, like when entire cortical areas are remapped following an amputation — which can lead to something called “phantom limb pain”, where amputees feel like their amputated extremity is still there. There are two main types of neuroplasticity: Structural plasticity. These are changes to the structure of the brain, including the creation and destruction of connections between neurons, or changes in the strength of these connections. Structural plasticity can also refer to other anatomical alterations, for example changes in the density of grey matter. It’s often studied using a variety of brain imaging techniques, such as magnetic resonance imaging (MRI). Functional plasticity. This type of neuroplasticity refers to changes in how tasks are organized in the brain, and is most easily observed when parts of the brain are damaged, and other areas then “take over” the task. For instance, the area that normally fills the role of the visual cortex in sighted people can be used to perceive touch in blind people. Contrary to a common misconception, the discovery of neuroplasticity is not new, with research papers on this phenomenon dating as far back as the 1800s. However, the advent of neuroimaging techniques may have fueled the current hype. The main source of confusion is the loose definition of neuroplasticity used in the media, where it has become synonymous with learning new skills, acquiring new habits, or changing one’s behavior. By that account, any experience can be linked to neuroplasticity. As British neuropsychologist Vaughan Bell puts it: “This is the loosest and most problematic use of neuroplasticity. By definition if we learn something, acquire a habit or tendency, good or bad, something has changed in the brain. Without specifying what the brain is doing, we know nothing more.” And yet, we keep on seeing the word “neuroplasticity” pop up in articles about self-development, psychology, and human behavior in general. So, where does the problem come from? It boils down to how common it has become to use neuroscience to explain things that can be explained by other areas of research — a tendency called neuroessentialism. The problem with neuroessentialism In the words of William Schultz from Argosy University: “Neuroessentialism is the view that the definitive way of explaining human psychological experience is by reference to the brain and its activity.” Neuroessentialist thinking makes us consider mental processes and human behavior solely through the lens of brain processes.
For example, we may start treating addictions through medical interventions, without taking into account the many social and environmental factors at play. Neuroessentialism is also used in more malicious ways to market products and give them an aura of scientific authority. In an article about neuroscience in the public sphere, researchers complained that “logically irrelevant neuroscience information imbues an argument with authoritative, scientific credibility.” In the case of neuroplasticity, the idea that the brain can rewire itself can be used to position products as tools to unlock one’s hidden potential. It helps that these brain processes cannot be seen, making them mysterious and exciting — like something new to work on when all other tools have seemingly failed. But Tom Stafford, a lecturer in psychology and cognitive science at the University of Sheffield, explains that “many claims about human psychology are adequately and entirely addressed at the level of behavior with no need to invoke neuroscientific evidence.” It’s not that the concept of neuroplasticity is bogus — in fact, there is lots of interesting research going on in this branch of neuroscience — but, in many cases, we simply don’t need to refer to neuroplasticity to understand and alter human behavior. And when we succumb to neuroessentialism and use neuroplasticity as an explanation for everything from habit formation to learning, we’re either ignoring other important factors besides brain processes, or we’re falling prey to pseudo-authoritative marketing designed to sell products that have little to do with the brain. Thinking beyond the brain That is not to say that we should ignore neuroplasticity as an important phenomenon that plays a role in how the brain works. Rather, we should apply caution when we engage with content that bases its arguments on neuroplasticity. Here are three simple ways you can practice healthy skepticism when it comes to neuroplasticity: Choose the most appropriate level of analysis. If you see the word neuroplasticity when reading an article or listening to a podcast, ask yourself: are brain processes the most relevant level of analysis to study this topic, or would a higher level of analysis make more sense? For example, when building new habits, wouldn’t the mental processes of motivation and attention be more helpful than the strengthening of synaptic connections? Consider other factors. Brain processes are often only a small part of the picture. Of course, neuroplasticity plays an important role in many areas of human behavior. But what about social and environmental factors? Sure, depression has been linked to chemical imbalances in the brain, but many helpful interventions don’t rely on medication. Ask yourself: what are the other factors besides neuroplasticity that can impact changes in human behavior? Question the intent. Be critical of the source of information. Are you reading a research paper from neuroscientists trying to understand brain processes, or a landing page for a product promising to help you rewire your brain? If you see “neuroplasticity” being used to sell a supplement or a brain training app, you can be almost certain that it’s being overhyped for marketing purposes. Again, neuroplasticity is a real and fascinating phenomenon, and much more research needs to be conducted to understand how it works and its role in shaping human behavior.
By thinking critically about the context in which you encounter the word, you can ensure you don’t become an unwitting victim of neurobabble. The post Thinking Beyond the Brain: Why Neuroplasticity is Overhyped appeared first on Ness Labs.
The Science of Self-Compassion
While we try our best to be supportive of our loved ones, many of us struggle with self-compassion. We are often too harsh with ourselves, turning blame inwards and replaying the mistakes we have made on a loop. However, punishing ourselves for our failures and being too tough on ourselves may actually hinder our performance, and treating ourselves with more kindness and compassion is a better way to achieve the results we want. Self-compassion is an emergent area of research with the potential to help us develop a kinder approach to work and life. The three pillars of self-compassion Compassion can be defined as the desire to alleviate someone else’s suffering. It involves being sensitive to how others are feeling or being treated, and it motivates us to help relieve the discomfort of others, including physical and emotional pain. Self-compassion is simply the act of showing this same kindness towards oneself. Dr Kristin Neff is an associate professor in the department of educational psychology at The University of Texas at Austin. Dr Neff has extensively researched self-compassion, describing the ways that self-compassion is closely related to wellbeing, and its influence on healing in psychotherapy. As part of her work, Dr Neff has identified three pillars of self-compassion: self-kindness, common humanity, and mindfulness. Self-kindness exists in contrast to the self-critical approach that many of us are familiar with. When we turn criticism inwards, we might blame ourselves for not being good enough, or form negative thoughts regarding our inability to cope with life’s challenges. In the same way that we would be kind to a friend in distress, Dr Neff claims that we should also aim to comfort ourselves in difficult times by self-soothing and behaving thoughtfully towards our inner self. Common humanity involves recognizing that imperfection is a trait shared by us all. We are not alone in our mistakes and will all struggle at some point in life. Rather than thinking that our failures make us weak, unworthy or isolate us from others, this pillar of self-compassion encourages us to foster a sense of universal belonging. The final pillar, mindfulness, necessitates finding a measured response to difficulty or distress. If you experience uncomfortable emotions, a mindful approach entails striking a balance between ruminating on the distress and stifling your feelings. When facing unpleasant problems, whether from one’s own mistakes or through no fault of one’s own, mindfulness allows for observation of the present moment without evaluation or over-identification of emotions. When put into practice, Dr Neff’s three pillars interact to create a state of mind that favors self-compassion when faced with distressing life experiences, self-perceived inadequacies, and the mistakes we all make. And this approach comes with many evidence-based benefits. The scientific benefits of self-compassion Self-compassion has been shown to provide many benefits. In her book Compassion and Wisdom in Psychotherapy, Dr Neff writes that practicing self-compassion is linked to less anxiety and depression. We might incorrectly assume that this is because those who are compassionate towards themselves naturally have a sunnier personality or have honed the skill of avoiding difficult feelings. However, even when self-criticism and low mood are accounted for, self-compassion remains beneficial for mental wellbeing because those who try the practice learn to recognize when they are struggling. 
This self-awareness allows people to be kind to themselves. In these moments, they can then more effectively deal with any feelings of anxiety provoked by circumstantial difficulties. Self-compassion can also lead to empowerment — the feeling of being strong, competent, and holding the belief that we can succeed. For example, researchers Olivia Stevenson and Ashley Batts conducted a study into the impact of self-compassion for female domestic abuse survivors. They found that, when asked about a previous fight, women who showed more self-compassion reported a significantly better impact on their wellbeing. The researchers concluded that self-compassion led to feelings of empowerment, which was beneficial for processing and recovering from trauma. Developing self-compassion can bolster our inner strength and resilience as well. To be mindful, we observe our feelings without interacting with them. During mindfulness practice, you might recognize that you feel shame or regret over a mistake you made. Observing this feeling without ruminating on it, and then accepting that everyone makes mistakes, can help develop strength and resilience in the face of adversity. Finally, self-compassion is a learning tool. If a work project falls short of expectations, self-criticism will undermine your professional development. If a friend was in a similar situation, you would likely be encouraging and understanding of their mishap. Similarly, by practicing self-compassion, you will avoid falling prey to defeatism. How to practice self-compassion We are often much better at showing compassion for others than we are at directing it inwards. When learning how to practice self-compassion, it is therefore helpful to imagine how you would treat a friend in your situation. Think about what you would say to them, how you might try to help, and the tone of voice that you would adopt. Next, think about how you usually treat yourself in the face of failure. If you treat yourself differently to others, ask yourself which factors or fears lead to this disparity, and how you could close that gap to treat yourself with the same warmth, understanding and compassion that you offer your friends. Over the course of several weeks, be mindful of critical self-talk. Proactively adjust how you talk to yourself to include more kindness, encouragement, forgiveness and acceptance. Writing can be a great metacognitive strategy to practice self-compassion. Writing a letter to yourself from an unconditionally loving imaginary friend is an effective way to demonstrate self-kindness using metacognition. If you’re faced with a challenge or difficult situation, thinking about yourself from an outsider’s point of view can be beneficial in learning to treat yourself in the same gentle way that you would care for a friend. In this way, the distance metacognition offers can help to counteract feeling weak or unworthy of kindness. Most of us treat our friends with compassion when they make a mistake or are facing a difficult situation. However, under similar circumstances, we are often much harsher on ourselves. Developing self-compassion can lead to increased empowerment, strength, and resilience. When it feels difficult to turn compassion inwards, reflecting on how we would treat a friend in the same situation is a simple way to foster self-compassion. The post The Science of Self-Compassion appeared first on Ness Labs.
The Paradox of Goals
Success is commonly defined as reaching one’s goals. Getting accepted into a prestigious program, building a profitable business, becoming a doctor, completing an online course… Whatever the goal may be, success is simply bridging the gap between where we are and where we want to go. The Internet and our bookshelves are filled with exhortations to stay motivated, manage our time more effectively, and stick to our plan. If success is so easy to define, why is it then that we often struggle to establish, pursue, and reach our goals? The way we manage goals is broken, to the point where many people are questioning the very nature of ambition. And it is true that chasing ambitious goals may feel pointless when everything feels so chaotic. But ambiguity and opportunity are two sides of the same coin. Navigating uncertainty is how we learn and how we change. To flourish in our increasingly turbulent world, it is imperative we foster radical change at a realistic scale. Instead of applying rigid, linear models of goal management, we need to create space for our goals to emerge. A Tale of Success and Failure When we talk about goals, we suggest a desired outcome attained through some form of prolonged effort. Goal-setting usually goes like this: we define a target state, and then we map our journey to get there. It all sounds sensible: goal-setting allows you to decide where you want to go, and to define how you will get there. Then, we expect to reach our goal. And this is where things start to go wrong. See, there are only two possible outcomes: either we successfully reach our goal, or we fail. It’s easy to see why failure leads to disappointment. When any outcome other than the expected one is perceived as failure, it’s no wonder we start questioning our self-worth, wondering what went wrong, or blaming external factors – rightly so or not. Our reaction may vary, but the experience feels the same: distress and doubt. What’s more surprising is what happens when we successfully reach our goal. Designed for disenchantment In the process of working toward a goal, we come to imagine what it will feel like to achieve it. For example, we start thinking: “When I graduate, I will feel accomplished” or “When I launch this product, I will have more free time to spend with my family” or “When I get this job, I will feel like my career is on track.” Unfortunately, the happiness we feel when reaching a goal is short-lived. Dr Tal Ben-Shahar calls this the arrival fallacy. We give a big presentation, only to go back to our daily routines. We finish a project, then realize there are two more to work on. We receive a promotion, but still feel unsure about our career path. Life doesn’t seem that different after reaching a goal. It doesn’t help that modern life has created a giant public leaderboard that maintains the artificial need to “keep up” – to keep climbing the ladder as new rungs are incessantly added. Because of social media, we compare ourselves to our peers more than ever before. LinkedIn notifies us of the success of not just our colleagues, but all the people we studied with in school. Instagram is a constant reminder of the supposedly perfect lives of everyone in our network. This proverbial “rat race” feeds into the arrival fallacy: if only we can climb one more step – if only we can get that promotion, give that big presentation, grow our online audience, hire a team, buy that house – then, we’ll finally feel at peace.
But both successful and failed goals seem to let us down, so those expectations are a recipe for disenchantment. The logical solution would be to let go of our ambitions and altogether abandon the idea of goals. In the words of Peter La Fleur, a character played by Vince Vaughn in Dodgeball (2004): “I found that if you have a goal, you might not reach it. But if you don’t have one, then you are never disappointed.” This sounds great in theory, but it goes against our very biological makeup. All living species are goal-oriented in nature. In fact, this is the key difference between living organisms and nonlife: all organisms have goals. These may be very basic goals, such as survival and reproduction, but goals nonetheless. Even sponges collaborate with other species to survive, and plants turn toward the sun to get the most sunlight. As special as we like to think of ourselves, humans are no different. We are goal-oriented creatures. We need a sense of purpose to drive our actions, to survive and to thrive. Even so, goal-setting keeps on failing us. We sense that we need goals, but we know something is terribly wrong with the way we define and pursue them. Here lies the paradox of goals: Setting goals is a guarantee of disillusionment whether we reach the desired state or not, and yet working toward goals is an important part of evolving as a person. How can we resolve this paradox? From Goals to Growth Loops Notice the vocabulary we use to talk about goals. Goals drive us forward, we set out to achieve our goals, we make progress toward a goal. Those are called orientational metaphors. In our collective psyche, goals rely on a sense of movement. And that’s not wrong. But we may be misguided as to the direction of this movement. Instead of a linear scale progressing from a present state to a desired outcome (the classic “up and to the right”), goals should be conceived as cyclical. Let’s break it down. Two key ingredients are required to pursue a goal: the will and the way. The will is our motivation – a reason why we want to achieve the goal, which gives us the energy to push ourselves. The way is our ability to map out the steps to take and the skills to execute the required actions. In simple situations, or when following a default path as prescribed by society, the will and the way are fairly easy to define. For instance, your goal might be to get a promotion, and the steps might even be outlined in a corporate handbook with a clear rating scale. However, life is rarely this simple – and, in fact, you may not want to live such a life where your goals are predefined and the way to achieve them is preprocessed for you. What if we don’t know where we are and where we want to go? What happens when the will and the way are unclear? It is tempting during such liminal moments to cling to a ladder – any ladder – to regain an illusory sense of control and progression. But this temporary hack is rarely sustainable. Soon, we start noticing the cracks. We have the nagging sense of dread that we are on the wrong path, and yet, we are not sure what to do next. In our modern world with its infinity of potential goals to pursue, which ones should we explore? How do we know which one is right for us? As you can see, linear goals are inherently fragile. The solution, inspired by nature itself, is to design growth loops by practicing deliberate experimentation.
As Nassim Taleb puts it: “It is in complex systems, ones in which we have little visibility of the chains of cause-consequences, that tinkering, bricolage, or similar variations of trial and error have been shown to vastly outperform the teleological – it is nature’s modus operandi.” The cycle goes like this: First, we commit to an action. Then, we execute the target behavior. Finally, we learn from our experience and adjust our future actions accordingly. Each cycle adds a layer of learning to how we understand ourselves and the world around us. Instead of an external destination, our aspirations become fuel for transformation. We don’t go in circles, we grow in circles. Goals turn into growth loops. Our ancestors instinctively knew of this circular model of growth. In many cultures, the wheel is a symbol of growth and success. The cyclic ages of Hindu cosmology, the wheel of life in Buddhism… The wheel combines the idea of progress and wholeness: it is complete and yet it keeps on moving. It represents the perpetual change and transitory nature of life. This cyclical model also aligns with the way our mind naturally works. The brain is built on a giant perception-action cycle with a circular flow of information between the self and the environment, and a system constantly conveying whether a signal should be intensified or stopped. This feedback loop is so well established, it is considered the theoretical cornerstone of most modern theories of learning and metacognition. Instead of ignoring ancestral wisdom and modern scientific knowledge by blindly pursuing goals at the expense of our mental health, we should consider going back to a circular model in which goals are continuously discovered and adapted – in conversation with our inner self and the outer world. An Antidote to Uncertainty We cannot think of goals without thinking of space and time. Space: where am I and where do I want to be? Time: how long will it take me to cross that gap? The uncertainty that surrounds the space-time continuum of goals is not just conceptual – it’s deeply emotional. For instance, a big gap may feel scary. Not moving fast enough can give us time anxiety.  By turning goals into growth loops, we can embrace the idea that achievement is simply the continuation of the learning cycle itself. Sure, the future is uncertain, but our personal growth is inevitable. Cycles of deliberate experimentation can help us let go of the chronometer. Growth loops may feel slower and they don’t come with a shiny finish line, but each layer of learning contributes to our ongoing success. And, perhaps paradoxically, we can often progress faster by allowing for the possibility of getting things wrong and facing challenges. This is the archetypal hero’s journey, where the hero embarks on an adventure, equipped with their current knowledge, and returns transformed by their experience. Some of the most successful endeavors are based on growth loops. The scientific method relies on formulating hypotheses, testing them, and imple...
Talent archetypes: What is the shape of your skills?
In the past, workplaces were filled with experts who each knew a lot about one specific area. The changing scope of businesses, with more fluidity between roles and responsibilities, later led to the rise of generalists — individuals who are capable across a lot of areas but do not need in-depth knowledge of any of them. However, in a world driven by rapid changes in technology and industry best practice, being a generalist is no longer enough. To support the fast-moving targets of global businesses, a different profile now prevails: the versatilist. The shape of our skills The way we work is impacted by our talent archetype. The talent archetype framework was devised by Gartner analyst Diane Morello, who stated that simply having technical aptitude was no longer sufficient to meet industry needs. Each archetype can be described in terms of the “shape” of its corresponding skills. Specialists, or “I-shaped” people, are experts at one thing. They may have deep technical skills in a certain field, or only work within a very narrow domain of work. They are considered to be experts in their field by their peers, but their value is often not understood by those working in different areas. On the other hand, generalists — or “hyphen-shaped” people — are capable in many different areas, but do not have expertise in any of them. The phrase “Jack of all trades, but master of none” is a popular way to describe a generalist. As a result of their broad knowledge base, generalists can respond quickly to different situations. However, their competency and confidence are likely to be much lower than those of someone who has deep knowledge in an area relevant to the business. This is where versatilists come in. Also called “T-shaped” people, versatilists are capable in lots of areas, and can become experts in specific fields according to the business needs. Rather than settling in one area and becoming a specialist, they are comfortable exploring any new domain that demands their attention. With the fast pace of technological change, versatilists tend to be popular hires as well as successful founders, as they can more easily recognize new opportunities and make the required changes to quickly adapt to new technology. In fact, Shabnam Hamdi and colleagues argued in 2016 that when technology uncertainty is greater, having individuals with T-shaped skills is beneficial. Two years later, Haluk Demirkan and James Spohrer found that T-shaped digital professionals tend to combine critical thinking and in-depth problem-solving with a “breadth of knowledge, skills, experience, and complex communication abilities.” They concluded that this combination of skills is crucial to building high-performance teams. The rise of the versatilist For fast-moving companies and startups alike, the versatilist has replaced the generalist as the ideal profile. By their very nature, versatilists are adaptable, and have an ability to take relevant information on board while developing and honing new skills. The career equivalent of a serial monogamist, the versatilist throws themselves wholeheartedly into a project or area to develop deep knowledge and competence. By becoming fully immersed in one field at a time, they develop a broad range of experience as well as multiple deep skills. Being adept at spotting new trends, they are ready to move to the next priority area as required. Being consecutively embedded in specific areas gives the versatilist an excellent foundation for multidisciplinary collaboration.
Because of their wide-ranging experience, the versatilist also has greater insight than a specialist would have regarding the role they should take in a specific project. Their insight supports cross-organizational vision across people, processes, and products. How to become a versatilist Businesses thrive on adaptability and innovation. Being a versatilist involves anticipating what you need to learn to meet these demands. To do this, it’s important to cultivate a versatile mindset by consuming content across different disciplines. It’s also vital to practice lifelong learning by keeping up with trends in your industry. The versatilist is adept at anticipating what the next wave will be and upskilling themselves accordingly. Often, you will need to be prepared to study intensely and to quickly develop new skills to be ready to work on the next business initiative without delay. For instance, many versatilists are currently busy studying tools for generative AI so they can use them in their business. If you currently work as a specialist, it can be daunting to exit your comfort zone to deeply immerse yourself in unfamiliar territory. Taking time to practice metacognition is vital for reflecting on the best areas to invest your time and energy in. In addition, developing a deeper understanding of your thought processes will help you expand the versatility of your skill set in areas that are likely to be beneficial to your personal and professional growth. Being capable in a variety of areas, able to spot future trends, and adept at quickly upskilling to meet changing business needs are some of the benefits of the versatilist archetype. A versatile mindset can be cultivated by consuming information widely and practicing metacognition to quickly respond to fast-changing business needs. The post Talent archetypes: What is the “shape” of your skills? appeared first on Ness Labs.
Reinventing the digital assistant with George Levin, founder of Hints
Welcome to this edition of our tools for thought series, where we interview founders on a mission to help us think better and work smarter. George Levin is the founder of Hints, an AI assistant designed to help you save up to one hour every day. You can talk to the Hints bot like you would talk to a human, which allows you to manage your business from your messenger. In this interview, we talked about the role AI can play in improving our productivity, the relationship between context switching and anxiety, the importance of streamlining tasks to increase focus, how knowledge workers can use templates to minimize distractions, and much more. Enjoy the read! Hi George, welcome back! It’s great to follow up on your progress with this second interview. Since we last talked, you pivoted from a knowledge management app to a more holistic assistant. It’s great to be back and share our progress. Initially, we had set out to create a knowledge management app focused on quickly capturing and organizing information. However, through user feedback, we realized a greater need for a more holistic solution to assist professionals in their daily tasks and alleviate the burden of information overload. Our main goal has always been to reduce anxiety caused by the overwhelming amount of information that knowledge workers have to contend with daily. We began to notice a trend in user feedback, with many requesting integration with popular knowledge management tools such as Notion, Obsidian, and others. They loved our quick capturing via messenger, sms, email, and our apps, but they wanted to send notes to their existing systems. It prompted us to start with a Notion integration, and we saw a significant increase in new users to our platform. As we continued to engage with our early adopters, we discovered that they were using our app not just for capturing notes and ideas but also for other tedious tasks, such as updating CRM or project management boards within Notion. This realization was a turning point for us, as we understood that we were addressing the same problem of context switching on a larger scale. We decided to shift our focus to direct integration with CRM and project management tools to ensure that all users can benefit from our app and streamline their workflows. We pivoted to an AI-powered assistant that helps teams increase productivity and minimize distraction, allowing them to focus on what truly matters. It seems like AI is quickly permeating all productivity layers. How does the new version of Hints use AI? AI plays a crucial role in the new version of Hints. Our goal is to create a seamless and intuitive experience for users, allowing them to interact with our assistant as they would with a human. Through advanced natural language processing and machine learning algorithms, our  AI can understand and interpret user requests, even when they may be asking for multiple tasks to be completed in one sentence. For example, our AI can parse text such as “create a deal in my CRM for company Hints, add a new contact George Levin and remind me to call him on Tuesday” and create a company, a deal, a person, and a task in the user’s CRM. When a user wants to update a Notion table, it can identify which parts of the text correspond to different columns. This way, our AI-powered assistant can automate and streamline tasks, allowing users to focus on more important things. That’s amazing. More specifically, how does it work? 
Setting up the AI assistant takes about 30 seconds, and it can be used through SMS, Slack, Telegram, WhatsApp, or Email, with built-in voice-to-text capabilities. Users can communicate directly with the assistant bot to update their personal or work projects, especially from the phone. Additionally, the bot can be added to team group chats or Slack channels, allowing everyone on the team to use it. Currently, we see three prominent use cases for Hints: Sales teams are using Hints to update their CRMs, as they are tired of manual updates. It allows them to focus more on what truly matters to them: selling, which leads to increased commissions. Product and support teams use Hints to submit and update tickets with feature requests and bugs. It’s convenient for them as they can move threads from Slack or other messengers to their ticketing systems. For example, after discussing a feature in Slack, they can create a ticket for it and keep updating it from messengers.  Personal productivity. Some clients use Hints to send calendar invites, update their to-dos in Notion or capture any helpful information to their knowledge management tool. Less context switching means more focus. More focus means less anxiety. Less context switching, more focus, less anxiety — that sounds great. What’s the feedback like so far? The feedback for Hints has been positive. One of the most common comments we receive is that users are happy to use it with their existing systems and messengers without the need for complex rule-based configurations. The integration process is quick and easy, taking only 30 seconds, and anyone on the team can set it up without needing to involve a tech-savvy expert. We’ve received a lot of enthusiasm from in-field sales representatives who can now update their CRMs on the go via text messengers without spending time on data logging at the end of the day.  Product team leads are pleased that everyone on the team can create and update tickets in Slack, reducing the risk of valuable information slipping through the cracks. We’ve also seen some innovative and unexpected use cases for Hints. For example, one hospital reported that they greatly simplified their doctor scheduling process by utilizing Hints’ WhatsApp bot in conjunction with Notion integration. You’ve also added lots of new integrations. Yes, we have been adding new integrations to our platform. Currently, we are integrated with Notion, HubSpot, Pipedrive, Jira, Trello, ClickUp, Google Calendar, and Obsidian. We are in the process of integration with Salesforce, Asana, Airtable, and Google Sheets. Our goal is to integrate with the top 50 productivity software. After that, we plan to open our API and create a marketplace where anyone can integrate with Hints. We are also researching other fields where Hints can be beneficial to users. What about you, how do you use Hints? As someone who values focus and productivity, I strive to minimize distractions and reduce context switching. I use the Hints Telegram bot, easily accessible from my phone or laptop, to streamline my routine tasks. My top integrations are HubSpot, Notion, and Google Calendar. I use HubSpot to track my fundraising process, as well as to keep track of my conversations with clients and related tasks. With Hints, I can create a new deal in my HubSpot pipeline and add quick notes and tasks with a single message, saving me enormous time and energy. I use Notion to capture all my thoughts and ideas, clients’ feedback, testimonials, marketing inspirations, and links. 
Google Calendar integration is handy, as I often need to schedule meetings with people in different time zones. I usually do it from Telegram chat, and I love that our AI understands time zones by cities, so I can message “Discuss SalesForce integration with @Alex at 2 pm Lisbon time” to schedule a meeting. Overall, Hints helps me to stay more focused and productive throughout the day. Also, we have team use cases for capturing feature requests and bugs in the group chat. How do you recommend someone get started? Getting started with Hints is easy. Visit our website, create an account, and set up a quick integration with any of the productivity software we currently offer. If you’re a product or tech team leader and use tools like Notion, Jira, ClickUp, or Trello to track tasks, add our assistant to your team’s Slack or any other messenger and show it to the team. It will ensure that your backlog is up-to-date and relevant issues are quickly addressed. If you’re in sales and tired of manual CRM updates, our HubSpot or Pipedrive integrations can save you up to an hour a day. If you’re any other kind of knowledge worker with a lot on your plate and looking to minimize distractions, try building a simple template in Notion with two columns: one for notes and one for note types (idea, link, to-do, etc.). Then, add our SMS bot and send voice messages whenever you have something on your mind. Our AI will capture your notes and add the relevant parameters to the second column.  By the way, we are building Notion template integrations, so you can pick a template that works for you, and our AI will start filling it out from your notes.  Set up our Google Calendar integration to send invites to your team members by tagging them in Slack and blocking time on your calendar on the go. And finally… What’s next for Hints? Our primary focus for Hints moving forward is to continue to improve the AI’s ability to understand and interpret user requests. We are working on teaching it to ask the right questions when it is unsure or needs help understanding a request. Additionally, we are working on teaching AI to onboard users and showing them how to use integrations effectively. We have a web app that works well on laptops and phones, but we are also developing iOS and Android apps for a more native experience. We are adding more integrations with productivity tools and working on allowing the AI to control them more granularly. Additionally, we plan to expand to other messaging platforms, such as Teams and Discord. Our ultimate vision for Hints is to build an AI assistant that can communicate with you as a human and manage all your productivity tools from one central location. We are constantly working to improve the capabilities of our AI and make it more intuitive for users. Thank you so much for your time, George! Where can people learn more about Hints? Visit our website. You can also fo...
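As a rough illustration of the multi-action parsing George describes earlier in the interview, here is a minimal Python sketch. It is not Hints’ actual implementation, which relies on natural language processing and machine learning; it uses a toy rule-based parser and hypothetical action names purely to show how one free-form message can map to several structured CRM actions:

```python
from dataclasses import dataclass
from typing import Optional
import re

# Hypothetical structured actions that a CRM integration might accept.
# This is a toy, rule-based stand-in for the natural-language
# understanding an assistant like Hints performs with machine learning.

@dataclass
class CRMAction:
    kind: str                  # e.g. "create_company", "create_contact", "create_task"
    name: str
    due: Optional[str] = None  # e.g. "Tuesday"

def parse_message(message: str) -> list[CRMAction]:
    """Map one free-form message to a list of structured CRM actions."""
    actions: list[CRMAction] = []

    company = re.search(r"deal .* for company (\w+)", message, re.IGNORECASE)
    if company:
        actions.append(CRMAction("create_company", company.group(1)))
        actions.append(CRMAction("create_deal", company.group(1)))

    contact = re.search(r"add a new contact ([A-Z]\w+ [A-Z]\w+)", message)
    if contact:
        actions.append(CRMAction("create_contact", contact.group(1)))

    task = re.search(r"remind me to (.+?) on (\w+)", message, re.IGNORECASE)
    if task:
        actions.append(CRMAction("create_task", task.group(1), due=task.group(2)))

    return actions

if __name__ == "__main__":
    msg = ("create a deal in my CRM for company Hints, add a new contact "
           "George Levin and remind me to call him on Tuesday")
    for action in parse_message(msg):
        print(action)
```

Running the sketch on the example sentence from the interview yields a company, a deal, a contact, and a task, which is the kind of structured output the assistant would then push to the connected CRM.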
The Abilene paradox: When not rocking the boat may sink the boat
Have you ever found yourself in a brainstorming session at work, where everyone ends up agreeing on a less-than-ideal course of action? The Abilene paradox describes this unfortunately common situation where a group of people agree to an idea, despite most of them not fully believing that it is the best decision. Although it may seem surprising that several people might pursue something that few of them truly have faith in, the phenomenon has a simple explanation: it’s mainly caused by a fear of challenging the status quo. Learning to identify and manage the Abilene paradox is essential to avoid costly group decisions. A family trip gone wrong The Abilene paradox was first described by Jerry B. Harvey in his 1974 article The Abilene Paradox: The Management of Agreement. Harvey, a professor of management science at the George Washington University in Washington, D.C., was spending time with his in-laws during a heatwave in Texas. When his father-in-law suggested going for dinner in Abilene, 53 miles away, Harvey went along with the plan as his wife and mother-in-law also both agreed to make the trip. Later, all four returned home hot and irritated, with Harvey’s mother-in-law admitting that she always thought Abilene was a terrible idea and would rather have stayed at home. Harvey and his wife then declared that they had not wanted to go either, but had agreed to it to avoid rocking the boat when everyone else had seemed keen. Even Harvey’s father-in-law said he had not really wanted to travel in the unairconditioned car. He explained that he had only suggested the trip as he was worried his guests were getting bored. Harvey went on to name this occurrence the Abilene paradox, in which there is a failure to effectively manage agreement. At the time, most managerial advice was focused on how to better manage conflict. Instead, Harvey argued that in modern organisations, learning how to deal with agreement was more pressing than the management of conflict. The Abilene paradox can have terrible consequences. The 1986 NASA shuttle tragedy, in which all seven crew members lost their lives, is one such example. After several delays and launch cancellations, managers were desperate to launch the shuttle. As a result, the group collectively disregarded warnings from engineers about the risks associated with a launch in cold weather. With millions of viewers watching live on TV, the shuttle broke apart within 73 seconds of launching. The Abilene paradox is commonly confused with groupthink, but the two have different characteristics. Researcher Yoonho Kim explained that in groupthink, a unanimous decision is driven by the “high energy” desire for cohesiveness and group harmony. Conversely, the Abilene paradox occurs in a state of “low energy” in which there is a fear of disturbing the balance. The Abilene paradox is an important topic of research in social psychology. The power of social conformity can persuade us to agree to the perceived general consensus and can lead to extremely poor group decisions. The Abilene paradox at work While numerous studies have examined the management of conflict and disagreement in organisations, far less is understood about managing agreement. In business, multiple decisions need to be made each day, and failure to agree can lead to delays or increasing costs.
However, Vincent Bagire of Makerere University Business School stressed that “a serious gap arises in the agreed decision when team members individually do not agree with the group decision.” Bagire also noted that when decisions are not truly backed or agreed to, individuals and organisations are at risk of making “wasteful, costly and at times disastrous” decisions. The Abilene paradox can lead you to believe that the “rule by committee” is superior to your own opinion on a matter. If all of your colleagues appear to have an opposing view to yours, you might assume that they must all be correct. This can make it difficult to object. Failure to speak up will be even more common if team members feel that they have been disenfranchised. Employees may feel disempowered from speaking up or have concerns that disagreeing will put their position at risk. This can lead to the conviction that they must agree with the group despite the decision going against what they believe to be correct. When individuals feel they cannot put forward an argument, the company is less likely to explore alternative options, which can lead to less creativity. Group mentality can also make people feel absolved of responsibility for a decision. Going along with what the group has voted on may lead some members of a team to feel that the decision had little to do with them. As you can imagine, this lack of accountability can have negative effects on the business. Managing the Abilene paradox The Abilene paradox may occur in your professional or personal life. It’s an insidious phenomenon that can be hard to spot, precisely because it arises from a fear of speaking up. The following strategies may be helpful in both recognising the paradox and limiting its potential for damage: Foster a safe environment. Psychological safety is paramount to avoiding the Abilene paradox. Without it, team members may remain quiet and nominally agree with the rest of the group rather than risk looking like an outsider. However, when people live or work in a setting that is psychologically safe, they will feel more comfortable about speaking up or expressing an opinion that differs from the status quo. Asking team members to create a personal manual is a simple way to foster a safe environment that is conducive to open communication. Make space for honest discussions. Instead of waiting for those conversations to happen, make sure time is set aside for them, which will ensure that the final decision is based on a review of diverse perspectives. Simply booking half an hour for an open forum where all thoughts are fair game can help mitigate the Abilene paradox. Be transparent in addressing feedback. As a manager, there will be times when opinions are voiced that you disagree with. It is helpful to offer feedback to team members whose suggestions or views are not taken on board, to explain the rationale behind the final decision. This should provide confidence that their opinion was still considered, so that they feel able to share their views again in future. As you can see, the Abilene paradox can lead to costly decisions. To promote a culture where people feel able to raise concerns or opinions that differ from those of others, it is crucial to foster a psychologically safe environment, promote honest discussions, and give clear feedback. And don’t forget to lead by example: if you feel safe to do so, speak up next time you disagree with a group decision!
The post The Abilene paradox: When not rocking the boat may sink the boat appeared first on Ness Labs.
Eliminating the productivity paradox with Tariq Rauf, founder and CEO of Qatalog
Wondering how AI can help you be more productive? Welcome to this edition of our interview series with founders on a mission to help us work smarter. Tariq Rauf is the founder and CEO of Qatalog, an intelligent work hub for teams powered by AI. It offers a self-structuring, centralized system to seamlessly manage people, knowledge, and operations. In this interview, we talked about the unnecessary complexity and fragility of patchwork collaboration systems, why we need to simplify our tool stack, what product design can learn from architecture, the power of modular business management, and more. Enjoy the read! Hi Tariq, thanks for agreeing to this interview! Before we get into the details of Qatalog, I wanted to take a step back for a general look at the nature of work and the role of technology within it. In your documentary series, you talk about the concept of the “productivity paradox”, which I thought was fascinating. Can you tell us more? The productivity paradox refers to the idea that the tools that were designed to make us more productive are actually doing the opposite. There are lots of reasons for this, but here are two big ones. First, most of these tools are generic, and don’t account for the nuances of how individual businesses work, each with their own terminology, culture, and processes. Instead, we have been forced to adapt to fit the tools, which makes them hard to use and increases friction. It also means we end up finding new tools to fill the gaps, which adds to the second problem: we’re using far too many of these apps. Data from Okta shows that the average company now uses 89 apps, and for large companies it’s double that. This results in constant context switching, which damages our productivity, as we found in our study with Cornell University. It revealed that knowledge workers waste an hour every day trying to find information hidden in different apps, while 6 in 10 people say it’s hard to know what colleagues are doing. In short, we’ve created an endless sprawl of generic tools, with thousands of iterations of clunky spreadsheets and documents for almost every problem. But there’s been nothing that puts the customer first and considers how all these pieces fit together. So, is that what inspired you to build Qatalog? Yes, but there’s a bit more to the story that explains why we’ve taken such a radically different approach to most other companies. Although I’ve been building software since I was very young, I actually trained as an architect. After graduating, I had the privilege of working under the mentorship of the renowned architect Charles Correa. It was during this time that I learned how to design spaces that lots of people could use simultaneously and navigate seamlessly. The mentorship and the lessons I learned were transformative to how I think about product and scale. Then in 2011, I went back into the world of tech. First as CTO for a startup, then as a Product and Growth Lead at Wise, and then at Amazon, where I worked on projects spanning Prime, Prime Video, Alexa and Twitch. I had hundreds of colleagues in teams stretched across 32 different floors and 17 different countries. The productivity paradox was in full effect — everyone had an array of tools and technologies at their disposal, but work was about as disjointed as it could get. There were inconsistencies in every team and the only solution proposed was to hire more people and implement more coordination activities. The architect in me was screaming.
This was an ergonomics problem, and there had to be a systemic solution. That’s when I saw the potential to create something that tied all of this together into a single source of truth. The day after I got my permanent residency in the UK in 2019, I left Amazon to start Qatalog. That’s amazing. You recently announced Qatalog 2.0, which creates a centralized Work Hub in seconds using AI. This sounds like magic. Can you tell us more? Qatalog 2.0 makes it easy for anyone to build powerful and scalable business software that’s intuitive and enjoyable to use. It’s built on the extensive infrastructure we developed for Qatalog 1.0, but we made the underlying systems accessible to the user, meaning they get a Work Hub that they can configure precisely to their needs. To make it super quick and easy to get started, we combined it with our new Qatalog AI system, which deeply understands how different businesses and industries work and constructs a bespoke Work Hub to match their business requirements, all in about 40 seconds. All the customer has to do is describe their business, press “Build” and Qatalog does the rest. Our goal is to democratize access to custom software and allow every company, everywhere, to work the way they want. A big challenge for knowledge workers is juggling dozens of apps and dealing with the productivity loss from constantly switching tools. Does Qatalog help to address this? Yes, that’s the intention. Qatalog replaces your task managers, company wikis, project managers, people directories, and spreadsheets with a single Work Hub that’s bespoke to your company with all the functionality you need. This radically simplifies the collaboration tool stack and centralizes your people, processes and knowledge in a single, seamlessly connected platform. Everything in your company is just a click away. Now, we’re not the first company to make claims like this, but I think we’re different in a few very important ways. For one, Qatalog takes away the pain of setting up your system. For some tools, you can easily spend hundreds of hours learning how the system works and configuring everything to meet your requirements. Even then, they’re still very fragile. With our AI, we’ll do the vast majority of the work in under a minute and you just have to make the final tweaks so that it’s perfect. The second is that it’s intuitive to use. Because everything molds to your business, the structure and terminology of your Qatalog system reflects the way your team operates. No one needs to learn a brand new set of vocabulary or processes just to retrieve or contribute to the overall system. And the last one is that it’s built to grow with your company. A frequent complaint we hear about tools is that they don’t scale well, because everyone can edit and change the structure of the system. What worked for a team of ten becomes complete chaos with a team of 100, and the problems keep compounding as you scale. Qatalog is different. We set out to solve this problem from day one, with a robust structure and guardrails that ensure consistency as your team expands. What kind of companies use Qatalog? In the little over a month since we launched Qatalog 2.0, we’ve had over 4,000 businesses creating work hubs, and the range of requests and use cases we’ve seen in that time shows there is a clear need for custom software. But there are also some clear clusters, in terms of the types of businesses.
We’ve had tech startups, agencies, consultancies, real estate, galleries, investment firms, manufacturing companies and everything in between. Every version of Qatalog is custom to the customer’s needs, but agencies, for example, typically use it to create a clear system to organize their clients, centralize all the work related to projects and campaigns, track the services they offer, and create flexible and connected teams. Qatalog helps new joiners get up to speed quickly, as they can access all the info they need from one place. How does it work exactly? The foundation of Qatalog’s customizable system is something we call ‘Modules’, which typically reflect the core pillars of your business. Let’s take the design agency example, where you might have ‘Clients’ as one of your Modules. Every time you create a new client in Qatalog, it ensures a consistent format with the same information, as determined by the Module. The Module centralizes everything to do with clients too: documents, data, tasks, workflows, etc. This gives you a dedicated space to store information and make decisions about each client, in context. The beauty of Qatalog is that you can configure every Module exactly as you want. For example, you could add a field to your Client Module to record the account lead for each client, or capture what industry it’s in, or the value of the retainer. Now, every time someone creates a new client, you can ensure this information is captured consistently. This also allows you to quickly filter through all your clients using this info, giving you an overview of your client base at the click of a button. You can also add or remove features from each Module. These include Measurements to track key metrics or goals, Threads for async discussions, Workflows to make repeatable processes easy to assign and complete, and lots more. Our AI will give you a recommended setup, but you can customize these, depending on what you actually need. Where things get fun is when you have relationships between different Modules. Maybe this design agency also has another Module called ‘Contractors’, where the agency records information about the different external contractors they work with, such as their contact details, areas of expertise, location, and hourly rate. This means that when you create a new Project using your Project Module, you can select which Client it’s for, and identify the contractors being used, if any. Now, at the click of a button, anyone looking at that project can easily see what it’s all about and who’s working on it, including contractors. Because everything is connected in one place, if someone in the team needs context on the client, it’s a click away. Or maybe they need some information about the contractor to share with the client; it’s all in Qatalog. It gives you a single system where everyone can easily find the information on their own, without switching tools or asking someone else. And when they do have questi...
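To make the Module idea above more concrete, here is a small Python sketch of how the agency example might be modeled. The classes and field names are invented for illustration; Qatalog’s actual data model is not described in this interview beyond what Tariq outlines:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the design agency example: each Module defines
# a consistent set of fields, and Modules can reference one another so
# related records stay connected in one place.

@dataclass
class Client:
    name: str
    industry: str
    account_lead: str
    retainer_value: float  # value of the retainer, in the agency's currency

@dataclass
class Contractor:
    name: str
    expertise: str
    location: str
    hourly_rate: float

@dataclass
class Project:
    name: str
    client: Client  # relationship to the Clients Module
    contractors: list[Contractor] = field(default_factory=list)  # optional external help

# Creating records through the Modules keeps the format consistent,
# and the related context stays one step away from the project itself.
acme = Client("Acme Co", industry="Retail", account_lead="Dana", retainer_value=5000.0)
ines = Contractor("Ines", expertise="Illustration", location="Lisbon", hourly_rate=80.0)
rebrand = Project("Acme rebrand", client=acme, contractors=[ines])

print(rebrand.client.account_lead)            # context on the client, in place
print([c.name for c in rebrand.contractors])  # who is working on the project
```

The point of the sketch is the relationships: because the Project record holds references to its Client and Contractors, anyone looking at the project can reach that context without switching tools.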
Eliminating the productivity paradox with Tariq Rauf, founder and CEO of Qatalog
Change fatigue: When our brain's adaptive capacity is depleted
Change fatigue: When our brain's adaptive capacity is depleted
All changes, even positive ones, come at a cost. Whether we deal with personal transitions — a new role, a newborn, a new city — or experience the wider societal shifts that impact our daily lives, each change forces our brain to adapt, altering its neural pathways to encode new patterns and to reduce uncertainty. This is why change feels effortful: we don’t simply observe change, we change ourselves in the process, and each change recruits our mental and physical adaptive systems. This is why many of us currently feel so tired: these systems are mostly designed to deal with sudden change, not long, drawn-out periods of change. The resources that allow us to deal with acute stressful situations have been drained by years of turbulence. As psychologists would put it: our “mental surge capacity” is depleted. We are experiencing change fatigue at an unprecedented scale. A hidden driver of burnout Imagine a world where, each morning, you would have to relearn everything you know. How to get out of bed, how to turn on the tap, how to brush your teeth, how to make coffee, how to open a door. It would be impossible to function. Instead, our brain stores all those common patterns, then matches your actions to specific situations. Sometimes, you encounter a new pattern. It could be something mundane — maybe you have bought a new coffee machine which works differently than the previous one — or something more complex, such as a new project at work which requires different skills. In those cases, performing the new action will require more effort. Maybe you’ll figure it out on your own, or maybe you’ll ask someone for help. Once the new pattern is acquired, your brain will match it to the corresponding reaction. The more often you encounter this pattern, the more effortless the process will become, and the less energy your brain will require. This process, which seems simple on the surface, applies to everything we do. Over time, we develop habits and routines, we become more comfortable with the skills we use at work, and we certainly don’t think twice about how to brush our teeth. But what happens when things keep on changing? When we can’t rely on many of the useful patterns we have acquired? Slowly, our ability to cope with change starts eroding. Each new change requires even more effort. Because of the constant cognitive overload, we start feeling a sense of resistance, apathy, or resignation. If this goes on for long enough, we may even burn out. Fortunately, change fatigue doesn’t inevitably lead to burnout. As often when it comes to mental health, being aware of the reason why we may be struggling is an important first step. When constant adaptation starts to feel like it’s becoming too much to deal with, some simple strategies can help to cope with change fatigue. How to manage change fatigue Change fatigue mostly arises when we feel like we’re not in control of the never-ending chaos that keeps on derailing our routines and forces us to constantly adapt. Very often, it is the case that change itself is unavoidable. What we have some control over, however, is how we react to change. Instead of resisting change, adding to the load we put on our adaptive systems, we can strive to accept, embrace, and even foster change in a way that leads to personal growth. Accepting change. The first step is to confront reality. No, the situation may not come back to normal anytime soon, but you must maintain hope that they will at some point — even if it is in the distant future. 
This is known as the Stockdale Paradox. Admiral Jim Stockdale was a military officer who was imprisoned in a prisoner-of-war camp for eight years during the height of the Vietnam War, with no set release date nor certainty as to whether he would ever see his family again. He attributed his resilience to a way of thinking that may seem contradictory: “You must never confuse faith that you will prevail in the end — which you can never afford to lose — with the discipline to confront the most brutal facts of your current reality, whatever they might be.” Accepting change is acknowledging the worst while still hoping for the best. Embracing change. Beyond the mindset shift of accepting that change, good or bad, is an integral part of life, the next step is to welcome the opportunity to learn how to do things differently. Change is a tough teacher, but a teacher nonetheless. An effective way to unlearn old patterns and relearn new patterns is to practice metacognition — thinking about thinking. Each week, write down surprising new patterns you’ve noticed, how your current reaction may not be appropriate anymore, and ways you could adapt. Treat this process as an experiment where your life is a giant laboratory, and where failure is just another data point which you can incorporate in your next iteration. Fostering change. The last step is to become a change agent yourself. You may not be able to alter the course of big societal shifts, but you can induce local change in your community, whether it’s at work, in your neighborhood, or even online. How can you support others through change? What actions can you take to improve the trajectory of projects and people around you? Is there any knowledge you can share with others, so they don’t only have change as a ruthless teacher? “The people who are crazy enough to think they can change the world, are the ones who do.” Most importantly, don’t be hard on yourself. Everyone’s mental surge capacity has been depleted in different ways, and you don’t have to push through all three stages if you don’t have the mental and emotional capacity to do so. Simply accepting change is already an amazing feat of resilience. The post Change fatigue: When our brain’s adaptive capacity is depleted appeared first on Ness Labs.
Change fatigue: When our brain's adaptive capacity is depleted
Age and the Nature of Innovation
Age and the Nature of Innovation
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. You can listen to this post above, or via most podcast apps here. The previous post also now has a podcast available here. Subscribe now Are there some kinds of discoveries that are easier to make when young, and some that are easier to make when older? Obviously yes. At a minimum, innovations that take a very long time basically have to be done by older innovators. So what kinds of innovations might take a long time to complete? Perhaps those that draw on deep wells of specialized knowledge that take a long time to accumulate. Or perhaps those that require grinding away at a question for years and decades, obsessively seeking the answers to riddles invisible to outsiders. What about innovations that are easier when young? Well, we can at least say they shouldn’t be the kinds of innovations that take a long time to achieve. That means discoveries that can be made with years, not decades, of study. But what kinds of innovations that don’t take long study to make are still sitting around, like unclaimed $20 bills on the sidewalk? One obvious kind of unclaimed innovation is the kind that relies on ideas that have only been very recently discovered. If people learn about very new ideas during their initial training (for example, for a PhD), then we might expect young scientists to disproportionately make discoveries relying on frontier knowledge. At the same time, we might look for signs that older scientists build on older ideas, but perhaps from a place of deeper expertise. Indeed, we have some evidence this is the case. Age, Frontier Ideas, and Deepening Expertise Let’s start with Yu et al. (2022), a study of about 7mn biomedical research articles published between 1980 and 2009. Yu and coauthors do not know the age of the scientists who write these articles, but as a proxy they look at the time elapsed since their first publication. Below are several figures, drawn from data in their paper, on what goes into an academic paper at various stages of a research career. In the left column, we have two measures drawn from the text of paper titles and abstracts. Each of these identifies the “concepts” used in a paper’s title/abstract: these are defined to be the one-, two-, and three-word strings of text that lie between punctuation and non-informative words. The right column relies on data from the citations made by an article. In each case, Yu and coauthors separately estimate the impact of the age of the first and last author.1 Moreover, these are the effects that remain after controlling for various other factors, including what a particular scientist does on average (in economics jargon, they include author fixed effects). Together, they generally tell a story of age being associated with an increasing reliance on a narrower set of older ideas. Source: Regression coefficients with author fixed effects in Tables 4 and 5 of Yu et al. (2022) Let’s start in the top left corner - this is the number of concepts that appear in a title or abstract which are both younger than five years and go on to be frequently used in other papers. Measured this way, early career scientists are more likely to use recent and important new ideas. Moving to the top-right figure, we can instead look at the diversity of cited references. We might expect this to rise over a career, as scientists build a larger and larger knowledge base. 
But in fact, the trend is the opposite for first authors, and mixed at best for last authors. At best, the tendency to expand the disciplinary breadth of references as we accumulate more knowledge is offset by rising disciplinary specialization. Turning to the bottom row, on the left we have the average age of the concepts used in a title and abstract (here “age” is the number of years that have elapsed since the concepts were first mentioned in any paper), and on the right the average age of the cited references (that is, the number of years that have elapsed since the citation was published). All measures march up and to the right, indicating a reliance on older ideas as scientists age. This is not a phenomenon peculiar to the life sciences. Cui, Wu, and Evans (2022) compute some similar metrics for a wider range of fields than Yu and coauthors, focusing their attention on scientists with successful careers lasting at least twenty years and once again proxying scientist age by the time elapsed since their first paper was published. On the right, we again have the average age of cited references; these also rise alongside scientist age. Source: Regression coefficients with author fixed effects in Tables 4 and 5 of Yu et al. (2022) On the left, we have a measure based on the keywords the Microsoft Academic Graph assigns to papers (of which there are more than 50,000). Between two subsequent years, Cui and coauthors calculate the share of keywords assigned to a scientist’s papers which recur in the next year. As scientists age, their papers increasingly get assigned the same keywords from year to year (though note the overall effect size is pretty small), suggesting deeper engagement with a consistent set of ideas. Lastly, we can look outside of science to invention. Kalyani (2022) processes the text of patents to identify technical terminology and then looks for patents that have a larger than usual share of technical phrases (think “machine learning” or “neural network”) that are not previously mentioned in patents filed in the preceding five years. When a patent has twice as many of these new technical phrases as the average for its technology type, he calls it a creative patent. He goes on to show these “creative” patents are much more correlated with various metrics of genuine innovation (see the patent section of Innovation (mostly) gets harder for more discussion). Kalyani does not have data on the age of inventors, but he does show that repeat inventors produce increasingly less creative patents as time goes by. From Kalyani (2022) This figure shows, on average, an inventor’s first patent has about 25% more new technical phrases than average, their second has only 5% more, and the third patent has about the same number of new technical phrases as average. Subsequent patents fall below average. This is consistent with a story where older inventors increasingly rely on older ideas. As discussed in more detail in the post Age and the Impact of Innovations, over the first 20 years of a scientist’s career, the impact of a scientist’s best work is pretty stable: citations to the top cited paper published over some multi-year timeframe are pretty consistent. The above suggests that might conceal some changes happening under the hood though. At the outset, perhaps a scientist’s work derives its impact through engagement with the cutting edge. Later, scientists narrow their focus and impact arises from deeper expertise in a more tightly defined domain. 
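Yu et al.'s notion of a "concept", the one-, two-, and three-word strings sitting between punctuation and non-informative words, is easy to approximate in code. Below is a minimal sketch of that idea; the stopword list and tokenization are simplified assumptions, not the authors' exact pipeline. Comparing a paper's concepts against the year each concept first appeared in a corpus is then what lets you compute measures like "concepts younger than five years."

```python
import re

# Simplified stand-in for the "non-informative words" list used in this kind of analysis.
STOPWORDS = {"the", "a", "an", "of", "in", "on", "for", "and", "or", "with", "to", "we", "is"}

def extract_concepts(text: str) -> set[str]:
    """Return 1-, 2-, and 3-word strings lying between punctuation and stopwords."""
    concepts = set()
    # Split on punctuation first, then break each chunk at stopwords.
    for chunk in re.split(r"[.,;:()\[\]?!]", text.lower()):
        run = []
        for word in chunk.split():
            if word in STOPWORDS:
                run = []          # a stopword ends the current run of content words
                continue
            run.append(word)
            # every trailing 1-, 2-, and 3-gram of the current run is a candidate concept
            for n in (1, 2, 3):
                if len(run) >= n:
                    concepts.add(" ".join(run[-n:]))
    return concepts

title = "Deep learning for protein structure prediction, a systematic review"
print(extract_concepts(title))
# e.g. {'deep', 'deep learning', 'protein structure prediction', 'systematic review', ...}
```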
Conceptual and Experimental Innovation So far we’ve seen some evidence that scientific discoveries and inventions are more likely to draw on recent ideas when the innovator is young, and an older, narrower set of ideas (plus deeper expertise?) when the innovator is older. I suspect that’s because young scientists hack their way to the knowledge frontier during their training period. As scientists begin active research in earnest, they certainly invest in keeping up with the research frontier, but it’s hard to do this as well as someone who is in full-on training mode. Over a 20-40 year career, the average age of concepts used and cited goes up by a lot less than 20-40 years; but it does go up (actually, it’s pretty amazing the average age of concepts used only goes up 2 years in Yu et al. 2022). I argued at the outset we might expect this. The young cannot be expected to make discoveries that require a very long time to bring about. But among the set of ideas that don’t take a long time to bring about, they need to focus on innovations that have not already been discovered. One way to do that is to draw on the newest ideas. But this might not be the only way. The economist David Galenson has long studied innovation in the arts, and argues it is useful to think of innovative art as emerging primarily from two approaches. The first approach is "experimental." This is an iterative, feedback-driven process with only vaguely defined goals. You try something, almost at random, you stand back and evaluate, and then you try again. The second approach is “conceptual.” It entails a carefully planned approach that seeks to communicate or embody a specific preconceived idea. Then the project is executed and emerges more or less in its completed form. Both require a mastery of the existing craft, but the experimental approach takes a lot longer. Essentially, it relies on evolutionary processes (with artificial rather than natural selection). Its advantage is that it can take us places we can't envision in advance. But, since it takes so long to walk the wandering path to novelty, Galenson argues that in the arts, experimental innovators tend to be old masters. The Bathers, by Paul Cezanne, one of Galenson’s experimental innovators. Begun when Cezanne was 59. Conceptual approaches can, in principle, be achieved at any point in a lifecycle, but Galenson argues there are forces that ossify our thinking and make conceptual innovation harder to pull off at old ages. For one, making a conceptual jump seems to require trusting in a radically simplified schema (complicated schemas are too hard to plan out in advance) from which you can extrapolate into the unknown. But as time goes on, we add detail and temper our initial simplifications, adding caveats, carveouts and extensions. We no longer trust the simple models to leap into the unknown. Perhaps for these reasons, conceptual innovators tend...
Age and the Nature of Innovation
2022 year in review: wander and wonder
2022 year in review: wander and wonder
This year was not the year I expected. It was a year of darkness and doubt, a year of light and love, a year of self-discovery and community. I usually start my annual reviews with a few bullet points listing my proudest accomplishments, but it feels wrong this time. Instead, I’ll describe some of the ebbs and flows I went through and why this year has been a pivotal one. Renaissance The year started great. A smart team at Ness Labs, two wonderful PhD supervisors, a research project I cared about, a comfortable home in a neighborhood I liked. But it also started the same way every year, every week, and every day of my life had started as far as I could remember: with a sense of emptiness, as if my mind was a dissociated observer watching the movie of my life from the outside. I had become used to the familiar claws of depression. It was like a shadow following me everywhere. Some weeks were worse than others, but I always found enough interesting questions and met enough interesting people to keep on playing the game. As a silver lining, struggling with my own mental health allowed me to bring a more nuanced perspective to conversations around personal growth. Following my curiosity as a way to make a living and to persist on living — that was winning enough. Fortunately, 2022 had some surprises in store for me. Through a series of unexpected events, I experienced what I can only call a renaissance (“rebirth” in French, my native language). The first jolt happened in the Spring. I was visiting a friend in a coliving community in the French countryside, and was about to help prepare lunch for everyone when said friend gave me a piece of chocolate. An hour later, I was cutting vegetables while high on psilocybin, which gave me a newfound appreciation for food as fuel for my body. I’ve always been intellectually interested in nutrition — I even ran a startup in that space — but never before had I felt like I did that afternoon, staring at the dancing patterns on a beet while thanking my luck to have access to such nice food. This moment unlocked a little spark somewhere in me, something that said: life can feel good. A few days later, I went to Italy for the Indie Founders Conference organized by Rand Fishkin and Peldi Guilizzoni. The conference felt more like an intimate retreat, where it was safe to be vulnerable and to openly share our challenges. No facades, just friends. We laughed, we cried, and we bonded. I didn’t know it then, but this would be the second event of the year to significantly affect my path. Look at these happy people Thanks so much to the team at @balsamiq for hosting the inaugural Indie Founders retreat! So much food for thought & many new friendships grazie mille!! pic.twitter.com/uCxJAdJLQZ — Anne-Laure Le Cunff (@anthilemoon) March 25, 2022 There, I met an amazing woman (whose name I won’t share for privacy reasons) who I connected with over many different topics, including neuroscience and neurodiversity research. She told me she had signed up for an Ayahuasca retreat. Ayahuasca is a potent psychedelic brew which originated from the Amazon basin. Reports written by early Christian missionaries described it as “the work of the devil”. Today, researchers around the world are investigating its therapeutic potential as an antidepressant, anxiolytic, and anti-addiction medication. I knew it wasn’t the miracle cure-all some people touted it to be, but it certainly felt worth exploring. 
That night, as soon as I got back to my hotel room, I looked up the retreat center she had mentioned, and I booked my spot for a month later. Working with Ayahuasca was my third life-altering experience of the year. You can read a full account of my journey with Ayahuasca here. If you’re in a rush, here’s the TL;DR. I’m not depressed anymore, I quit drinking… And, for the first time ever, I’m truly happy to be alive. Research I could stop this annual review right here. There was no bigger accomplishment this year than breaking free from the dark companionship of depression. But I write these reviews as a record of my progress, so I can later look back and remember how it felt to be where I was. So, a few more things. While I’ve been reading papers and writing about what I learn for a little while now, this was my first year conducting my own scientific research. As a complete newbie, there was only one milestone I wanted to attain: successfully passing my PhD upgrade viva. Some context: after performing a review of the existing literature and running some initial studies, PhD candidates are required to go through an oral exam where they present their early findings and a detailed plan for the rest of the research project. I thought it would be a terrifying affair, but the examiners at my university were friendly and provided lots of useful suggestions. I passed without any corrections. After the upgrade viva, I spent three weeks at St Andrews University in Scotland to study diverse forms of intelligence across human, animal, plant, and even fungal species; I gave my first academic presentations, wrote a book chapter, and got a paper accepted for publication in a journal. I’m currently typing these words from the Netherlands, where I just completed an intensive eye-tracking training at Utrecht University. Next year, I will teach my first class for the Neuroscience & Psychology BSc students. It will be about neuroscience and the digital world. Academia is such a strange microcosm. I love being surrounded by friendly nerds asking big questions, but I don’t know if I’d enjoy spending 100% of my time there. Things are painfully slow, there’s a lot of admin, and people are overworked. I feel privileged to have one foot in academic research and one foot in entrepreneurship. It makes my work more interesting, and the space between the two is fun to explore. Reach Three years ago, I sent the first edition of my newsletter. I had no idea I was laying the foundations for a sustainable community-based business. Today, the newsletter is read by 55,000 subscribers, and thousands of people have completed one of the online courses we offer in the learning community. In November, I hosted the Mindful Productivity Masterclass, a four-week cohort-based course which received fantastic feedback. Students of all ages and all professions joined from everywhere in the world. This experience was a powerful reminder of how the Internet enables lifelong learning and collective intelligence. I’m grateful for the team at Ness Labs: Joe, Haikal, and Melanie, and all of the writers who contribute fantastic content to share with our readers. You all teach me so much and I could not imagine doing the work I do without you. I’m grateful for my family and for my friends, whether online or offline, whether we talk everyday or once a year. You feed my sense of wonder and support my courage to wander, lose my way, and find myself. 
Next year, I want to reach even more curious minds and spread the message that we don’t need rigid productivity frameworks to succeed. We don’t need to be in control of everything. In any case, the economic, political and humanitarian crises of the past few years were a brutal reminder that we really cannot predict what life will throw at us. Our visibility is limited. Control is overrated. Instead, we need curiosity, consistency, and a community. In the sea of chaos, these act as a discovery engine: they help steer our boat in a direction that maximizes personal growth. Sure, we don’t know where we’re going, but we can have fun while we roam this turbulent planet of ours. We can still be active participants and shape the world around us. That’s why I want to keep on learning, feeling, and exploring everything life has to offer — making friends, connecting ideas, co-creating spaces for play and inquiry. I know things won’t go to plan. I don’t have a map. But I’m excited to play. Who knows, maybe there will be more surprises along the way. Thank you for being part of my journey! I wish you a restful and reflective end of the year. The post 2022 year in review: wander and wonder appeared first on Ness Labs.
2022 year in review: wander and wonder
Age and the Impact of Innovations
Age and the Impact of Innovations
Like the rest of New Things Under the Sun, this article will be updated as the state of the academic literature evolves; you can read the latest version here. No podcast today, as I am sick and can’t talk without coughing: maybe later. Also, there is more to say about age and innovation, so stay tuned! Subscribe now Scientists are getting older. Below is the share of employed US PhD scientists and engineers in three different age ranges: early career (under 40), mid-career (ages 40-55), and late career (ages 55-75). The figure covers the 26 years from 1993-2019. Author calculations. Sources: NSF Survey of Doctorate Recipients (1993-2019), data drawn from age by occupation by age tables Over this period, the share of mid-career scientists fell from about half to just under 40%. Most (but not all) of that decline has been offset by an increase in the share of late career scientists. And within the late career group, the share older than 65 has more than doubled to 27% over this time period.1 This trend is consistent across fields. Cui, Wu, and Evans (2022) look at more than one million scientists with fairly successful academic careers - they publish at least 10 articles over a span of at least 20 years. Cui and coauthors compute the share of these successful scientists who have been actively publishing for more than twenty years. Across all fields, it’s up significantly since 1980 (though, consistent with the previous figure, this trend may have peaked around 2015). From Cui, Wu, and Evans (2022) Alternatively, we can get some idea about the age of people doing active research by looking at the distribution of grants. At the NIH, the share of young principal investigators on R01 grants has dropped from a peak of 18% in 1983 to about 3% by 2010, while the share older than 65 has risen from almost nothing to above 6%. From Rockey (2012) This data ends in 2010, but the trend towards increasing age at receiving the first NIH grant has continued through 2020. Is this a problem? What’s the relationship between age and innovation? Aging and Average Quality This is a big literature, but I’m going to focus on a few papers that use lots of data to get at the experience of more typical scientists and inventors, rather than the experience of the most elite (see Jones, Reedy and Weinberg 2014 for a good overview of an older literature that focuses primarily on elite scientists). Yu et al. (2022) look at about 7mn biomedical research articles published between 1980 and 2009. Yu and coauthors do not know the age of the authors of the scientists who write these articles, but as a proxy they look at the time elapsed since their first publication. They then look at how various qualities of a scientific article change as a scientist gets older. First up, data related to the citations ultimately received by a paper. On the left, we have the relationship between the career age of the first and last authors, and the total number of citations received by a paper.2 On the right, the same thing, but expressed as a measure of the diversity of the fields that cite a paper - the lower the number, the more the citations received are concentrated in a small number of fields. In each case, Yu and coauthors separately estimate the impact of the age of the first and last author.3 Note also, these are the effects that remain after controlling for a variety of other factors. In particular, the charts control for the typical qualities of a given author (i.e., they include author fixed effects). 
See the web appendix for more on this issue. Also, they’re statistical estimates, so they have error bars, which I’ve omitted, but which do not change the overall trends. Source: Regression coefficients with author fixed effects in Table 2 of Yu et al. (2022) The story is a straightforward one. Pick any author at random, and on average the papers they publish earlier in their career, whether as first author or last author, will be more highly cited and cited by a more diverse group of fields, than a paper they publish later in their career. In the figure below, Cui, Wu, and Evans (2022) provide some complementary data that goes beyond the life sciences, focusing their attention on scientists with successful careers lasting at least twenty years and once again proxying scientist age by the time elapsed since their first paper was published. They compute a measure of how disruptive a paper is, based on how often a paper is cited on its own, versus in conjunction with the papers it cites. The intuition of this disruption measure is that when a paper is disruptive, it renders older work obsolete and hence older work is no longer cited by future scientists working in the same area. By this measure, as scientists age their papers get less and less disruptive (also and separately, papers are becoming less and less disruptive over time, as discussed more here).4 From Cui, Wu, and Evans (2022). There is an error in the figure’s legend: the top line corresponds to the 1960s, the one below that to the 1970s, below that is the 1980s, and below that is the 1990s. Last up, we can even extend these findings to inventors. Kaltenberg, Jaffe, and Lachman (2021) study the correlation between age and various patent-related measures for a set of 1.5mn inventors who were granted patents between 1976 and 2018. To estimate the age of inventors, Kaltenberg and coauthors scrape various directory websites that include birthday information for people with similar names as patentees, who also live in the same city as the patentee lists. They then compute the relationship between an inventor’s estimated age and some version of each of the metrics discussed above. Once again, these results pertain to what remains after we adjust for other factors (including inventor fixed effects, discussed below). From Kaltenberg, Jaffe, and Lachman (2021) On the left, we have total citations received by a patent. In the middle, a measure of the diversity of the technologies citing a patent (lower means citations come from a narrower set of technologies). And on the right, our measure of how disruptive a patent is, using the same measure as Cui, Wu, and Evans. It’s a by-now familiar story: as inventors age, the impact of their patented inventions (as measured by citations in various ways), goes down. (The figures are for the patents of solo inventors, but the same trend is there for the average age of a team of inventors.) So in all three studies, we see similar effects: the typical paper/patent of an older scientist or inventor gets fewer citations and the citations it does get come from a smaller range of fields, and are increasingly likely to come bundled with citations to older work. And the magnitudes involved here are quite large. In Yu et al. (2022), the papers published when you begin a career earn 50-65% more citations than those published at the end of a career. The effects are even larger for the citations received by patentees. 
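"Author fixed effects" simply means the regression compares each scientist's papers against that same scientist's average, so the career-age coefficient is not contaminated by the fact that different people have different typical citation counts. Here is a minimal sketch of that idea with statsmodels; the data is simulated and the column names are invented for illustration, this is not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy panel: papers by 50 authors, where citations decline with career age
# and each author has their own baseline level (the "fixed effect").
rows = []
for author in range(50):
    baseline = rng.normal(3.0, 0.5)           # author-specific average
    for age in rng.integers(1, 30, size=20):  # career age at publication
        log_cites = baseline - 0.02 * age + rng.normal(0, 0.3)
        rows.append({"author_id": author, "career_age": age, "log_cites": log_cites})
df = pd.DataFrame(rows)

# C(author_id) adds one dummy per author, so the career_age coefficient
# captures the within-author change rather than differences between authors.
fit = smf.ols("log_cites ~ career_age + C(author_id)", data=df).fit()
print(fit.params["career_age"])   # recovers roughly -0.02
```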
The Hits Keep Coming This seems like pretty depressing news for active scientists and inventors: the average paper/patent gets less and less impactful with time. But in fact, this story is misleading, at least for scientists. Something quite surprising is going on under the surface. Liu et al. (2018) study about 20,000 scientists and compute the probability, over a career, that for any given paper, their personal most highly cited paper lies in the future. The results of the previous section suggest this probability should fall pretty rapidly. At each career stage, your average citations are lower, and it would be natural to assume the best work you can produce will also tend to be lower impact, on average, than it was in earlier career stages. But this is not what Liu and coauthors find! Instead, they find that any paper written, at any stage in your career, has about an equal probability of being your top cited paper! The following figure illustrates their result. Each dot shows the probability that either the top cited paper (blue), second-most cited paper (green), or third-most cited paper (red) lies in the future, as you advance through your career (note it’s actually citations received within 10 years, and normalized by typical citations in your field/year). The vertical axis is this percent. The horizontal one is the stage in your career, measured as the fraction of all papers you will ever publish, that have been published so far. From Liu et al. (2018), extended data figure 1 This number can only go down, because that’s how time works (there can’t be a 50% chance your best work is in the future today, and a 60% chance it’s in the future tomorrow). But the figure shows it goes down in a very surprising way. Assuming each paper you publish has the same probability of being your career best, then when you are 25% of the way through your publishing career, there is a 25% chance your best work is behind you and a 75% chance it’s ahead of you. By the time you are 50% of the way through your publishing career, the probability the best is yet to come will have fallen to 50%. And so on. And that is precisely what the figure appears to show! What’s going on? Well, Yu and coauthors show that the number of publications in a career is not constant. Through the first 20-25 years of a career, the number of publications a scientist attaches their name to seems to rise before falling sharply. Since the average is falling over this period, but the probability of a top cited paper is roughly constant, it must be that the variance is rising (the best get better, the worse get worse), in such a way that the net effect is a falling average. And Yu and coauthors present evidence that is the case. In the figure below, we track the average number of citations that go to hit papers in two different ways. In dark blue, we simply have the additional citations to the top cited paper by career stage. Note, unlike average citations, it does not fall s...
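The "equal odds" pattern Liu and coauthors describe is easy to see in a toy simulation: if every paper in a career were an independent draw from the same citation distribution, then after publishing k of N papers the chance your most-cited paper is still to come is simply (N − k)/N, exactly the straight decline described above. The sketch below is purely illustrative and not their data; the lognormal citation distribution and career length are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scientists, n_papers = 20_000, 30

# Each paper's citation count is an i.i.d. draw (lognormal, chosen only for illustration).
cites = rng.lognormal(mean=1.0, sigma=1.0, size=(n_scientists, n_papers))
best_index = cites.argmax(axis=1)             # position of the career-best paper

for frac in (0.25, 0.5, 0.75):
    k = int(frac * n_papers)
    p_future = (best_index >= k).mean()       # best paper not yet published
    print(f"{frac:.0%} through career: P(best is ahead) = {p_future:.2f}")
# Prints roughly 0.75, 0.50, 0.25: the probability falls only because time passes,
# not because later papers are worse draws.
```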
Age and the Impact of Innovations
Self-Motivation: how to build a reward system for yourself
Self-Motivation: how to build a reward system for yourself
Despite my best intentions, I do not always feel as motivated as I would like to be. Whether it is a work task or a chore at home, if a job doesn’t appeal then I will sometimes ignore it until the last minute. While I always meet a deadline, there is a far more effective – and less stressful – way I could motivate myself to take action sooner. Building a reward system is a powerful way to boost your productivity, reducing the need to rely on intrinsic motivation to complete the work you need (and want) to do. However, the rewards you choose must appeal to you, and for maximum impact, they need to be perfectly timed. The science of self-motivation In her memoir My Beloved World, Sonia Sotomayor, the first Latinx and third woman appointed to the US Supreme Court, wrote that “success is its own reward.” While achieving a goal will naturally bring happiness, this can only be the case if you are driven to hit your target. But intrinsic drive alone is not always enough. It might seem like cheating, but it is becoming more widely accepted that having a separate incentive to reach a goal has many benefits. Far from being frivolous, rewards are considered by researchers to be “the most crucial objects for life.” Rewards are needed to encourage us to eat and drink, and even to mate. In evolutionary terms, the better we are at striving for rewards, the greater our chances of survival. With a treat in mind for reaching a target you may be more likely to commit to working towards the goal, and less inclined to procrastinate. Motivating yourself with a reward also acts as a form of positive reinforcement, increasing the likelihood that the promise of a treat will incentivize you to achieve future goals as well. Part of the brain’s reward system sits within the mesocorticolimbic circuit. Dopamine neurons feed into this reward circuit, and it is understood that the offer of a reward increases the firing of these neurons. The stimulation of the circuit leads to positively motivated behaviors and reinforcement learning. The mesocorticolimbic circuit is also responsible for what researchers call “incentive salience” – the increased firing of dopamine neurons increases our desire for the reward, which in turn creates motivation. Rewards that are related to the task are likely to be more effective. This is known as “proximity to the reward”, and scientists have noted that a related reward can be a particularly salient factor in enhancing motivation. For example, if you want to read more research papers for work, you could motivate yourself by pledging to buy a novel on your wish list if you succeed in reading one academic paper per day for one week. However, it is not only the treat itself that is important for building a successful reward system; timing your rewards correctly is crucial to ensuring self-motivation is maintained over the long-term. Building a reward system It can take time and multiple adjustments to build a reward system that will work for you. For operant conditioning to occur – when an association is made between a behavior and a consequence – the scheduling of rewards must be carefully planned to assist us in establishing new habits. A study conducted in 2018 compared the benefit of receiving frequent rewards for completing small tasks with the promise of a reward for finishing a long project. The researchers, Kaitlin Woolley and Ayelet Fishbach, found that when a small, regular reward was available, participants experienced greater interest and enjoyment in their work than those waiting for the delayed reward. 
Although Woolley and Fishbach demonstrated that regular rewards incentivized individuals to keep going with a project, to build a successful reward system you should first consider trying continuous reinforcement. Continuous reinforcement is often used to begin teaching a dog a new trick. At first, the dog will need a treat every time he sits or offers his paw, because that way he knows he is doing the right thing. Withholding a treat in the early stages of learning will either make him think he has done the trick incorrectly, or disincentivize him next time you give the command.  Once the dog has got the hang of it and sits every time you ask, you can move to intermittent reinforcement. He won’t know if he will get a treat every time, but he will sit anyway, in anticipation of maybe being rewarded. As humans, we also need to start with continuous reinforcement to boost motivation. If you want to learn a programming language for example, you will need to reward yourself every time you sit down to teach yourself. This will reinforce a positive association with the habit, making it easier to maintain regular practice. Only after continuous reinforcement has helped establish your desired habit, you can move to intermittent rewards. To keep dopamine firing in your mesocorticolimbic system, you could for instance create a “self-motivation lottery”. It works like this: first, write down a variety of prizes on pieces of paper, and place them into a cup or jar. The prizes must be things that you will really value, such as a new pen, a fresh journal, or a meal out. Each time you reach a goal, draw out a prize at random and enjoy the rush of rewarding yourself. Treating yourself regularly with those surprise gifts will help you to maintain new positive behaviors, as you will keep performing the task in the hope of a reward coming around again. How to select effective rewards To make an effective reward system that fosters self-motivation, you will need to choose your carrot wisely. Rewards are personal to the individual and will therefore depend on your needs and preferences. Although going for a run is proven to improve physical health and has multiple mental health benefits, if you don’t truly enjoy it then it will not feel like a reward. Think carefully about which rewards will truly appeal to you, and therefore motivate you. The following list of ideas may help you start thinking about which rewards will encourage you to foster self-motivation, based on your own interests: Watching one episode of a TV show without feeling guilty Going out for a delicious lunch, or ordering treats to enjoy at home (try to keep it healthy!) Taking a break to walk in nature Reading a novel Organizing an at-home spa day Splurging on books or new stationery Hosting a game night with friends Trying a new form of exercise or a workout class Going to the movies Enjoying a long, relaxing soak in the bath A reward does not need to be expensive. If it appeals to you, then you will be motivated to reach your goal so that you can treat yourself. Don’t feel discouraged if you set up a reward system and find that it does not seem to work for you. The key is to work out what you need to tweak. Perhaps you need to be rewarding yourself more regularly or for smaller milestones, or you may need to think of a more appealing reward. Once you’ve got the balance of timing and rewards right, the brain’s reward system will take it from there to increase self-motivation, boost productivity, and help you meet your targets. 
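The continuous-then-intermittent schedule and the "self-motivation lottery" described above can be summed up in a few lines of code. This is a playful sketch rather than a prescription: the prize list, the 10-session habit-formation threshold, and the 40% reward odds are arbitrary choices, not figures from the research cited above.

```python
import random
from typing import Optional

PRIZES = ["new pen", "fresh journal", "meal out", "movie night"]  # whatever genuinely appeals to you

def draw_reward(sessions_completed: int, habit_formed_after: int = 10) -> Optional[str]:
    """Continuous reinforcement while the habit forms, then an intermittent lottery."""
    if sessions_completed <= habit_formed_after:
        return random.choice(PRIZES)     # early on, reward every single session
    if random.random() < 0.4:            # later, reward only some of the time
        return random.choice(PRIZES)
    return None                          # no prize this time; the anticipation does the work

for session in range(1, 16):
    prize = draw_reward(session)
    print(f"Session {session}: {prize or 'no reward today, keep going'}")
```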
The post Self-Motivation: how to build a reward system for yourself appeared first on Ness Labs.
Self-Motivation: how to build a reward system for yourself
Digital detoxes don't actually work
Digital detoxes don't actually work
Each Monday, I get a “digital well-being” alert on my phone. It tells me how much time I spend staring at the screen each week, and highlights the apps I use the most. It helps me cut down on unnecessary use. But a more extreme approach to dealing with technology overwhelm has become popular: digital detoxes. A digital detox is a period in which a person voluntarily refrains from using digital devices including smartphones, computers and social media platforms. However, recent research has shown that digital detoxes can negatively impact our overall well-being. Is there a healthier, more sustainable way to improve our relationship with the digital world? The popularity of digital detoxes Today, the search term “digital detox” is three times more popular than it was in 2004. People have concerns about internet addiction, or worry that social media is causing them anxiety. They may also attempt a digital detox to refocus on real-life social interactions. Media hype around the supposed harmful effects of technology has also increased the popularity of digital detoxes. For example, it is common to describe a correlation between mental health problems and overuse of technology as if the latter was the cause of the problem, rather than a co-occurring symptom. It is therefore no surprise that many of us have considered ditching our devices to help us feel more present or to find more time for self-reflection. Why digital detoxes don’t work Although it seems that a digital detox could solve problems including FOMO and comparison anxiety, as well as giving us back the hours we lose to mindless scrolling, research has suggested that digital detoxes may do more harm than good. A collaboration between Oxford University, The Education University of Hong Kong, Reading University and Durham University has found “no evidence to suggest abstaining from social media has a positive effect on an individual’s well-being.” The researchers noted that this contrasts with popular beliefs about the benefits of digital detoxes. Moreover, this international study found that those who took a break from social media didn’t replace online socializing with face-to-face, voice, or email interactions, as the researchers had expected. Taking a break from social media therefore led to reduced overall interaction and increased loneliness, as online socializing was not replaced with other forms of socializing. In 2019, a research paper published in the Perspectives in Psychiatric Care journal showed that individuals who abstained from using social media developed a lower mood and demonstrated reduced life satisfaction. They were also lonelier than the control group. The researchers concluded that while excessive social media use can be associated with negative consequences, abstaining will not necessarily lead to positive results. Crucially, the outcome of detoxing may depend on what you use your devices for. Focusing on Instagram and Facebook, Sarah Manley and colleagues reported that abstaining from the platforms for one week had no impact on passive users. For active users—who share content and participate in conversations—taking a “social media vacation” led to a lower overall mood. She concluded that social media use can be beneficial for active users, but must be balanced with the risk of addiction. We have all become more reliant on our devices. While they sound like a good idea, digital detoxes are unsustainable because they cut us off from the world. 
This can have a negative impact on our overall mood, as well as leading to feelings of isolation and loneliness. Rather than trying to detox, we should strive to develop a better relationship with the digital world. How to cultivate healthy digital practices Much like the difficulties experienced by fad dieters, heavily restricting our online behavior is unsustainable. Rather than starting a detox, we should aim for digital re-enchantment. The following strategies can help to cultivate a healthier, and more realistic, relationship with technology: Become an active participant in the digital world. Multiple research papers have shown that passively consuming information via social media may lead to upward social comparison, depression, and anxiety. Reassuringly, active participation, through comments and conversation has been shown to increase social connection and support, as well as enhancing positive emotion and well-being. Interacting with others on social media, rather than mindlessly scrolling, can therefore support your mental health. Cultivate awareness. A lot of the frustration with social media comes from the feeling of wasting our time. Interstitial journaling is a way to track your time meaningfully. Each time you go on social media, write down what you did. Did you just scroll through your timeline? Did you reply to a friend’s post? Did you learn something new? This will help to acknowledge when you are using your social media sensibly and when you might be getting distracted. Make small changes. The key to cultivating new practices is to implement changes progressively. Whereas going cold turkey will be an unpleasant shock, gradually changing the way you use your phone will make it far easier to maintain a healthier digital lifestyle. Consume a healthy information diet. Choose your sources of information wisely to assist your learning. If you get distracted by the news throughout the day, set aside 30 minutes each morning and evening to catch up on current affairs. Try to consume an information diet that is valuable to you and helps you grow. Foster deeper connections. Harness the power of the internet to connect with like-minded people or to learn about topics that excite you. The internet has made it possible to talk at length with strangers who share your passion for any subject—make the most of it. Digital detoxes are popular, but like a crash diet they are unlikely to boost your well-being or improve the way you consume online information. In fact, those who attempt a detox may notice low mood and feelings of isolation or loneliness. Instead, focus on using technology to your advantage by cultivating genuine connections with others, only consuming information that will help us grow personally and professionally, and reflecting on the way we use our devices. The post Digital detoxes don’t actually work appeared first on Ness Labs.
Digital detoxes don't actually work
Deliberate doubt: the art of questioning our assumptions
Deliberate doubt: the art of questioning our assumptions
Socrates, Galileo, Marie Curie, Einstein… What did these great thinkers have in common? They all practiced deliberate doubt and used it as a tool to improve their thinking and generate creative ideas. Deliberate doubt is the practice of actively questioning our beliefs and assumptions. It is about suspending our certainty and letting go of our preconceived notions in order to explore new ideas and perspectives. By turning doubt into a deliberate process, we open ourselves up to new possibilities and allow our minds to wander in unexpected directions. A thinking tool for systematic curiosity When we’re certain of something, we tend to stop looking for alternative explanations or possibilities. But when we doubt, we’re forced to consider other perspectives and look for evidence to support our beliefs. Of course, doubt can feel uncomfortable, but it can lead to a more nuanced understanding of a topic and can spark new ideas and insights. Let’s say that you’re working on a research project and your intuition tells you that a certain hypothesis is correct. You may become so focused on this hypothesis that you’re blind to other—equally interesting—options. By leaving room for uncertainty, you may find that a different explanation could be supported by the evidence, which might lead to new insights. By doubting your initial assumption, you open yourself up to new possibilities which can improve the quality of your research. When we’re faced with a difficult problem, it can also be tempting to rely on our preconceived notions and try to solve it in the same way that we’ve solved similar problems in the past. But if we consider alternative approaches, we may find that a different solution is actually more effective and will lead to better outcomes. Deliberate doubt can help us to develop a more open-minded and curious approach to the world. It encourages us to consider other perspectives and to seek out new information. This approach has been used by some of the best thinkers to generate new, innovative ideas: Socrates. The Greek philosopher is known for his method of questioning, which he called elenchus, better known today as the Socratic method. He believed that by asking questions and doubting the beliefs and assumptions of others, he could help people to think more deeply and critically about the world around them. Galileo Galilei. Considered the father of modern observational astronomy, Galileo was known for doubting existing theories and beliefs and testing them through observation and experimentation. This method helped him to make many important discoveries, including the fact that the Earth orbits the Sun, which was contrary to the prevailing belief at the time. Marie Curie. The Polish-French physicist and chemist is known for her pioneering work in radioactivity. She was the first woman to win a Nobel Prize and the only person to win Nobel Prizes in two different scientific fields (physics and chemistry). A practitioner of deliberate doubt, Curie was known for her ability to challenge existing theories and beliefs and to seek out new evidence to support her ideas. Albert Einstein. The most famous had an uncanny ability to think outside of the box and challenge existing theories and beliefs. In his own words to a journalist at LIFE Magazine: “The important thing is not to stop questioning. Curiosity has its own reason for existence.” But it doesn’t mean you should use it all the time. 
Deliberate doubt can help us challenge our assumptions, stimulate creative thinking, and improve our problem-solving skills. And the good news is: it’s simple to start implementing its principles in your daily life and work. How to practice deliberate doubt Practicing deliberate doubt requires regularly challenging your own beliefs and assumptions. Ask yourself questions like: What if I’m wrong about this? What evidence do I have to support my belief? What are the alternative explanations? Another way to practice deliberate doubt is to seek out a diverse range of experiences and expertise. Ask yourself: Are there people who have different perspectives on this matter? This way, you can broaden your understanding of the world by exposing yourself to different viewpoints. For instance, you can read books or articles by authors who have different backgrounds or opinions than your own, or you can have conversations with people who have different experiences than you do. The variety of perspectives will help you develop a more nuanced understanding of a topic, and potentially generate more interesting ideas. Finally, test your beliefs with evidence. Let’s say that you’re working on a product launch and you believe that a certain marketing strategy will be the most effective. Instead of treating this assumption as your only option, you can test it by conducting a pilot study or a small-scale experiment to see if it actually produces the desired outcome. Deliberate doubt is incredibly effective if your goal is to open your cone of uncertainty and think more creatively but, like all thinking tools, it shouldn’t be used indiscriminately. When doubt becomes counterproductive While deliberate doubt can be a valuable tool for generating creative ideas and exploring complex problems, it can also be counterproductive if it is not practiced in the right way. It’s important to keep in mind that deliberate doubt is not constant doubt. When practiced all the time, deliberate doubt can lead to inaction. If we’re continuously doubting our own ideas, we’ll be less likely to pursue them and see them through to completion. We can become overly hesitant, which can prevent us from making decisions. We spend so much time doubting everything, we end up not doing anything. Deliberate doubt can also lead to a lack of confidence when we apply it to ourselves. We can become self-critical and unsure of our abilities. In this case, deliberate doubt can undermine our self-esteem. As a result, we may be too afraid to try new things or take risks. To avoid these pitfalls, it’s important to strike a balance between doubt and certainty, and to use doubt as a tool to stimulate creative thinking and exploration, rather than as a means of undermining ourselves or others. Avoiding the pitfalls of deliberate doubt There are a few caveats to keep in mind in order to avoid pitfalls and make the most of this valuable tool. Some of these caveats include: Balance doubt with certainty. It’s important to strike a balance between doubt and certainty. If we doubt everything, we may become overly skeptical and cynical. On the other hand, if we’re certain of everything, we may stop looking for alternative explanations or possibilities, and this can limit our creativity and thinking. Dance with uncertainty: find a balance between doubt and certainty. Use doubt as a tool, not as a weapon. When we use doubt as a weapon, it can lead to a lack of confidence in ourselves and trust in others. 
When practicing deliberate doubt, it is important to use it as a tool to stimulate creative thinking and exploration, rather than as a means of undermining ourselves or others. Seek out diverse perspectives and experiences. By exposing ourselves to different viewpoints, we can broaden our understanding of the world and challenge our assumptions. This can help us to develop a more nuanced understanding of a topic and generate new ideas. By actively questioning our beliefs and assumptions, and by exposing ourselves to diverse perspectives, we can open ourselves up to new possibilities and generate original ideas. As long as you use it as one of the many thinking tools at your disposal, deliberate doubt can be a powerful source of insights and inspiration. The post Deliberate doubt: the art of questioning our assumptions appeared first on Ness Labs.
Deliberate doubt: the art of questioning our assumptions
December 2022 Updates
December 2022 Updates
New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights some recent updates. Subscribe now Science: Trending Less Disruptive The post “Science is getting harder” surveyed four main categories of evidence (Nobel prizes, top cited papers, growth in the number of topics covered by science, and citations to recent work by patents and papers) to argue it has become more challenging to make scientific discoveries of comparable “size” to the past. This post has now been updated to include an additional category of evidence related to a measure of how disruptive academic papers are. From the updated article: …The preceding suggested a decline in the number of new topics under study by looking at the words associated with papers. But we can infer a similar process is under way by turning again to their citations. The Consolidation-Disruption Index (CD index for short) attempts to score papers on the extent to which they overturn received ideas and birth new fields of inquiry. To see the basic idea of the CD index, suppose we want to see how disruptive some particular paper x is. To compute paper x’s CD index, we would identify all the papers that cite paper x or the papers x cites itself. We would then look to see if the papers that cite x also tend to cite x’s citations, or if they cite x alone. If every paper citing paper x also cites x’s own references, paper x has the minimum CD index score of -1. If none of the papers citing paper x cite any of paper x’s references, paper x has the maximum CD index score of +1. The intuition here is that if paper x overturned old ideas and made them obsolete, then we shouldn’t see people continuing to cite older work, at least in the same narrow research area. But if paper x is a mere incremental development, then future papers continue to cite older work alongside it. That’s the idea anyway; does it actually map to our ideas of what a disruptive paper is? It’s a new measure and its properties are still under investigation, but Wu, Wang, and Evans (2019) tried to validate it by identifying sets of papers that we have independent reasons to believe are likely to be more or less disruptive than each other. They then checked to see that the CD index matched predictions. Nobel prize winning papers? We would expect those to be disruptive, and indeed, Wu and coauthors find they tend to have high CD index scores on average. Literature review articles? We would expect those to be less disruptive than original research, and their CD index is indeed lower on average than the CD index of the papers they review. Articles which specifically mention another person in the title? We would expect those to be incremental advances, and they also have lower CD index scores. Lastly, for a sample of 190 papers suggested by a survey of 20 scholars as being distinctively disruptive or not disruptive, the CD index closely tracked which papers were disruptive and which were not. Park, Leahey, and Funk (2022) compute the CD index for a variety of different datasets of academic publications, encompassing many millions of papers. Below is a representative result from 25 million papers drawn from the Web of Science. Across all major fields, the CD index has fallen substantially. Declining Disruption - from Park, Leahey, and Funk (2022) This decline is robust to a lot of different attempts to explain it away. 
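Before getting to those robustness checks, here is a minimal sketch of the scoring rule as just described: each paper citing the focal paper x counts +1 if it ignores x's own references and -1 if it also cites at least one of them, and the score is the average. The published CD index has further refinements (for instance, it also tracks papers that cite x's references without citing x), so treat this as the simplified intuition rather than the exact measure.

```python
def cd_index(focal_refs: set[str], citing_papers: dict[str, set[str]]) -> float:
    """
    Simplified Consolidation-Disruption score for one focal paper.
    focal_refs: the papers the focal paper cites.
    citing_papers: for each paper citing the focal paper, the set of papers it cites.
    Returns +1 if no citer also cites the focal paper's references (disruptive),
    -1 if every citer does (consolidating).
    """
    if not citing_papers:
        return 0.0
    scores = []
    for refs_of_citer in citing_papers.values():
        also_cites_predecessors = bool(refs_of_citer & focal_refs)
        scores.append(-1 if also_cites_predecessors else +1)
    return sum(scores) / len(scores)

# Toy example: paper X cites A and B; three later papers cite X.
focal_refs = {"A", "B"}
citers = {
    "P1": {"X"},          # cites X alone                 -> +1
    "P2": {"X", "A"},     # cites X and one of its refs   -> -1
    "P3": {"X", "C"},     # cites X plus unrelated work   -> +1
}
print(cd_index(focal_refs, citers))   # (1 - 1 + 1) / 3 = 0.33
```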
For example, we might be worried that this is a mechanical outcome of the tendency to cite more papers, and to cite older papers (which we discuss in the next section). For any given paper x, that would increase the probability we cite paper x's references, in addition to x. Park, Leahey, and Funk try to show this isn't solely driving their results in a few different ways. For example, they create placebo citation networks by randomly shuffling the actual citations papers make to other papers. So instead of paper y citing paper x, they redirect the citation so that paper y now cites some other paper z, where z is published in the same year as x. This kind of reshuffling preserves the tendency over time of papers to cite more references and to cite older works. But when you compute the CD index of these placebo citation networks, they exhibit smaller declines than the actual citation networks, suggesting the decline of disruption isn't just a mechanical artifact of the trend towards citing more and older papers.

Lastly, it turns out this decline in the average value of the CD index is not so much driven by a decrease in the number of disruptive papers as by a massive increase in the number of incremental papers. The following figure plots the absolute number of papers published in a given year with a CD index in one of four ranges. In blue, we have the least disruptive papers, in red, the most disruptive, with green and orange in the middle.

Annual # of publications in four CD index ranges. Blue = 0.0-0.25. Orange = 0.25-0.5. Green = 0.5-0.75. Red = 0.75-1.0. From Park, Leahey, and Funk (2022).

While the annual number of the most disruptive papers (in red) grew over 1945-1995 or so, it has fallen since then, so that the number of highly disruptive papers published in 2010 isn't much different from the number published in 1945. But over the same time period, the number of mostly incremental papers (in blue) has grown dramatically, from a few thousand a year to nearly 200,000 per year.

As an aside, the above presents an interesting parallel with the Nobel prize results discussed earlier: Collison and Nielsen find the impact of Nobel prize-winning discoveries is not rated as worse in more recent years (except in physics), but neither is it rated better (as we might expect given the increase in scientific resources). Similarly, we are not producing fewer highly disruptive papers; we simply are not getting more for our extra resources.

The updated article also includes some new discussion of additional text-based evidence for a decline in the number of topics under study in science, relative to the number of papers, again from Park, Leahey, and Funk (2022). It also adds in some evidence that the rise in academic citations to older works does not merely reflect a rise in polite but inconsequential citations - at least in recent times, the citations to older work are just as likely to be rated influential citations as the citations to younger work. Read the whole thing

Creative Patents and the Pace of Technological Progress

The article "Innovation (mostly) gets harder" has a similar conclusion to "Science is getting harder", but applied to the case of technological progress: eking out a given proportional increase along some technological metric seems to require more and more effort.
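Here is a sketch of the placebo-reshuffling idea, under the simplifying assumption that we know each paper's publication year; the function name and data layout are hypothetical, not the authors' code. The placebo graph can then be fed to the same CD-index routine as the real one, and the two trends compared.

```python
import random
from collections import defaultdict
from typing import Dict, Set

def placebo_citations(cites: Dict[str, Set[str]],
                      year_of: Dict[str, int],
                      seed: int = 0) -> Dict[str, Set[str]]:
    """Rewire each citation to a randomly chosen paper published in the
    same year as the paper originally cited. This preserves how many
    references each paper makes and how old those references are, while
    destroying the actual citation structure."""
    rng = random.Random(seed)
    by_year = defaultdict(list)
    for paper, year in year_of.items():
        by_year[year].append(paper)

    placebo = {}
    for paper, refs in cites.items():
        placebo[paper] = {rng.choice(by_year[year_of[ref]]) for ref in refs}
    return placebo
```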
The original article reviewed evidence from a few specific technologies (integrated circuits, machine learning benchmarks, agricultural yields, and healthcare) as well as some broad-based proxies for technological progress (firm-level profit analogues, and total factor productivity). I've now updated this article to include a discussion of patents, derived from a fascinating PhD job market paper by Aakash Kalyani:

…it's desirable to complement the case studies with some broader measures less susceptible to the charge of cherry-picking. One obvious place to turn is patents: in theory, each patent describes a new invention that someone at the patent office thought was useful and not obvious. Following Bloom et al., below I calculate annual US patent grants per effective researcher. As a first pass, this data seems to go against the case study evidence: more R&D effort has been roughly matched by more patenting, and in fact, in recent years, patenting has increased faster than R&D effort! Is innovation, as measured by patents, getting easier?

Author calculations. Annual patent grant data from here. US effective researchers computed by dividing annual R&D spending (see figure RD-1 here) by the median wage for college-educated US workers (spliced data series from Bloom et al., here).

The trouble with the above figure is that patents shouldn't really be thought of as a pure census of new inventions, for a few reasons. First off, the propensity of inventors (and inventive firms) to seek patent protection for their inventions seems to have increased over time. So the observed increase in annual patenting may simply reflect an increase in the share of inventions that are patented, rather than any change in the number of new inventions. Second, patents vary a lot in their value. A small share of patents seems to account for the majority of total patent value. We don't care so much about the total number of patents as the number of valuable patents.

On the second problem at least, Kalyani (2022) shows that one way to separate the patent wheat from the patent chaff is to look at the actual text of the patent document. Specifically, Kalyani processes the text of patents to identify technical terminology and then looks for patents that have a larger than usual share of technical phrases (think "machine learning" or "neural network") that are not previously mentioned in patents filed in the preceding five years. When a patent has twice as many of these new technical phrases as the average for its technology type, he calls it a creative patent. About 15% of patents are creative by this definition.

Kalyani provides a variety of evidence that creative patents really do seem to measure new inventions, in a way that non-creative patents don't. Creative patents are correlated with new product announcements, better stock market returns for the patent-holder, more R&D expenditure, and greater productivity growth. Non-creative patents, in general, are not. And when you look at the number of creative patents (in per capita terms - it's the solid green line below), Kalyani finds they have been on the decline since at least 1990. From Kalyani (20...
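A rough sketch of the "creative patent" classification described above: count the technical phrases in each patent that did not appear in patents from the preceding five years, and flag patents whose count is at least twice the average for their technology class. Phrase extraction itself is assumed away here (the 'phrases' field), and the thresholding is a simplification of Kalyani's actual procedure, not his code.

```python
from collections import defaultdict

def creative_patents(patents, window=5, threshold=2.0):
    """`patents` is a list of dicts with keys 'id', 'year', 'tech_class',
    and 'phrases' (a set of technical terms, however extracted).
    Returns the set of patent ids flagged as creative."""
    patents = sorted(patents, key=lambda p: p["year"])
    phrases_by_year = defaultdict(set)   # year -> phrases appearing that year
    new_counts = {}

    for p in patents:
        prior = set()
        for y in range(p["year"] - window, p["year"]):
            prior |= phrases_by_year[y]
        new_counts[p["id"]] = len(p["phrases"] - prior)   # phrases new to the window
        phrases_by_year[p["year"]] |= p["phrases"]

    # Average new-phrase count per technology class
    totals, counts = defaultdict(float), defaultdict(int)
    for p in patents:
        totals[p["tech_class"]] += new_counts[p["id"]]
        counts[p["tech_class"]] += 1

    return {
        p["id"] for p in patents
        if new_counts[p["id"]] > 0
        and new_counts[p["id"]] >= threshold * (totals[p["tech_class"]] / counts[p["tech_class"]])
    }
```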
December 2022 Updates
Proprioceptive writing: a method for embodied self-reflection
Proprioceptive writing: a method for embodied self-reflection
For the last few years, I have been looking for ways to get to know myself better. An unexpected life event in 2019, followed swiftly by trying to maintain my freelance career while solo parenting through a pandemic, left me feeling I had lost my sense of self. Back on my feet, but, like many parents, still trying to maintain the balance of work and home life, I have been searching for a way to support my current reflective practices. Expensive and time-consuming options are off the table, so it has been refreshing to learn about a free method for boosting self-awareness: proprioceptive writing. This process combines meditation and writing, two of the most effective ways to tap into the inner self. Rediscovering your sense of self Proprioceptive writing was first invented in the mid-1970s by author Linda Trichter Metcalf. Metcalf was working as a professor at Pratt Institute and began researching methods to help students find their writing voice. She developed the proprioceptive writing method as a tool to bring the self into focus and clarify one’s own life. The word proprioception comes from the Latin proprius, meaning “one’s own”. In medical terminology, proprioception is the sense that tells us about the location or movement of our bodies. If healthy proprioception is present, you will know whether someone has moved your finger upwards or downwards even when your eyes are closed. Conditions such as diabetes can disrupt this sense, making it difficult to perceive where your digits, or even limbs, are in space. The same is true of our emotions and imagination. It is easy to lose sense of where we are in our lives right now, and the metaphorical direction we are heading in. With emotional proprioception missing, we start to feel lost or as if we are simply coasting along. If we have ignored our inner voice for some time by not completing any reflective practice, we can become switched off to our own feelings and ideas. Curiosity dwindles, and we may not register our everyday thoughts. Proprioceptive writing can help us to rediscover our dreams and creative energy, while rebuilding our self-trust. Furthermore, it may help us to resolve emotional conflict while dissolving inhibitions. The benefits of embodied self-reflection In their book, Writing the Mind Alive, Linda Trichter Metcalf and Tobin Simon describe the ritual of proprioceptive writing as “utter simplicity”. The writing task only takes around 25 minutes. During this time, one listens to inner thoughts, writing down whatever is heard. This could include feelings, emotions or worries that come to mind. These are explored through a combination of writing and inner hearing. Researchers Jennifer Leigh and Richard Bailey noted that self-focus based purely on the reflection of thoughts can lead to rumination, anxiety and neuroticism. Conversely, they found that embodied reflective practices such as proprioceptive writing reduced the likelihood of unhealthy rumination. Furthermore, this practice was found to be helpful for both personal and professional development. The combination of writing and reflection serves as a method to connect physical sensations with thoughts that an individual might otherwise remain unaware of. By learning to listen to one’s own thoughts in a supportive, empathetic manner, it is possible to develop a stronger connection to our emotions. Writing in the Journal of Vocational Behavior, Reinekke Lengelle and colleagues reported that proprioceptive writing demonstrated increased vulnerability. 
Students who completed career questionnaires submitted answers that showed openness and depth of understanding, with richer material than would usually be expected of similar reflective exercises. Lengelle concluded that proprioceptive writing increased the development of students' career identities and narratives. This could, in turn, "enable them to contribute usefully to society in a way that is personally meaningful to them." By connecting with physical sensations through practising proprioceptive writing, you are likely to experience better internal and external emotional connections. This can lead to greater empathy for both oneself and others, as well as improved confidence levels, providing the right environment for personal and professional growth.

How to practise proprioceptive writing

This self-reflection method involves writing for 25 minutes while listening to music. Professor Metcalf recommends Baroque music to aid creativity. The only equipment required is a pen and a pad of plain paper. You should then follow these three steps for each session:

Write down what you hear. It takes practice to recognise your thoughts and convert them to words, so take your time writing down each feeling as it comes. It can be helpful to think of your thoughts as voices. Perhaps a voice says, "I still need to enrol in that online course", while another says, "I can't take on anything else right now." In the first stage, write it all down without any judgement.

Hear what you write. Now that you have written down your thoughts, it will be easier to listen to what your mind is saying. Take time to explore each thought before you move on to another feeling or concern. For example, if you find yourself worrying that your income is lower than you would like, dig deeper into where this thought comes from and what your mind is trying to say. The process will help you listen to the story you are telling yourself.

Go deeper for each thought. With every thought you wrote down, ask yourself: "What do I mean by…?" In the salary example above, you would explore your income worries further to understand whether your concerns are related to financial difficulty, your perceived status, personal expectations, self-image, self-esteem, self-worth or another issue.

Keep your thinking slow to fully explore every thought in the above three steps, rather than letting your mind race ahead. Then, at the end of the 25 minutes, stop writing and ask yourself four review questions: Which thoughts were heard but not written down? How do I feel now? What story am I telling? Do I have any direction for future proprioceptive writing sessions? These review questions should help to clarify the thoughts you have had, as well as providing prompts to help you get started on your next session.

Proprioceptive writing is a simple technique that combines writing and meditation to support embodied self-reflection. This method of self-reflection can reduce rumination and support both personal and professional growth. Practising hearing intelligence can be enlightening, as it helps you not only discover what is on your mind, but the meaning and significance behind these thoughts, too. By putting your thoughts into words, you can pay closer attention to feelings that might otherwise go unnoticed. It's a way to make time to really listen to your inner self. The post Proprioceptive writing: a method for embodied self-reflection appeared first on Ness Labs.
Proprioceptive writing: a method for embodied self-reflection
Reopening the mind: how cognitive closure kills creative thinking
Reopening the mind: how cognitive closure kills creative thinking
Finding answers is a highly valued skill in today's world, where more than ever knowledge is power. We pride ourselves on quickly resolving issues and creating consensus. In job descriptions, companies clearly state that they are looking for problem solvers. But what if this single-mindedness blinds us to more creative answers? What would happen if we became more comfortable with unsolved problems? The need for cognitive closure is the motivation to find an answer to ambiguous situations — any answer that aligns with our existing knowledge. Not only can it lead us to make mistakes based on erroneous assumptions, but it can also obscure the path to innovation.

The psychology of cognitive closure

Ideally, we should seek knowledge to resolve questions regardless of whether that new knowledge points to an answer that aligns with what we believe or what we want ("I don't like this answer, but it is the most logical answer"). We should also accept the ambiguous nature of a situation for as long as we don't have enough knowledge to resolve it ("I currently don't know enough to answer that question"). That's what we would do if we were rational agents. But dealing with uncertainty feels uncomfortable, so we try to get to an answer as fast as possible, sometimes irrationally, as long as it seems to neatly close the open loops we've been struggling with — thus providing us with a sense of closure. That's why our need for cognitive closure is related to our aversion toward ambiguity.

According to Professor Arie Kruglanski and his team at the University of Maryland, the need for cognitive closure manifests itself via two main tendencies:

The urgency tendency: our inclination to attain closure as fast as possible.
The permanence tendency: our inclination to maintain closure for as long as possible.

When we find ourselves in an uncertain situation, urgency and permanence act as irrational sources of motivation that push us to try our hardest to eliminate ambiguity and to arrive at a definite conclusion. We are compelled to find an answer, irrespective of its actual validity.

Some people feel more comfortable than others in ambiguous situations. Professor Arie Kruglanski and his team designed the Need for Closure Scale (NFCS), which, in their own words, "was introduced to assess the extent to which a person, faced with a decision or judgment, desires any answer, as compared with confusion and ambiguity." Items such as "I think that having clear rules and order at work is essential to success" and "When dining out, I like to go to places where I've been before so that I know what to expect" will make you score higher. Items such as "Even after I've made up my mind about something, I am always eager to consider a different opinion" and "I enjoy the uncertainty of going into a new situation without knowing what might happen" are reverse coded.

People who score high on the NFCS are more likely to make stereotypical judgments and to distort new information so it aligns with their existing beliefs. Conversely, people who score low on the scale will display more fluid, creative thinking, and will be more open to new ideas and exploring new environments. While our individual need for cognitive closure is mostly stable throughout our lives, it can sometimes be affected by specific circumstances. For instance, experiments show that under high time pressure, we'll tend to use shortcuts to process information and get to a solution faster.
Just as heuristics can often be helpful, our need for cognitive closure can be beneficial in simple situations that require a quick answer. However, when faced with more complex problems that demand creative thinking, our need for cognitive closure can get in the way by motivating us to accept any answer that fits our existing knowledge, whether explicitly or tacitly.

Cognitive closure and creative thinking

A high need for cognitive closure may lead us to select only information that matches our current knowledge, which may result in faster resolution. We may also analyze that information in ways that produce simple, quick solutions — but not always the best solution. Another way cognitive closure impacts the way we think is by making us cling to our current ideas to maintain our sense of expertise. Instead of expending cognitive resources towards learning new information and dealing with the discomfort of uncertainty, we hold on to the reassuring perception of solid knowledge. Preserving the stability of our web of knowledge becomes more important than expanding it.

In contrast, a lower need for cognitive closure means we are more comfortable playing with many shades of gray and remaining in a situation where we don't have an answer yet — and may never get to a satisfactory resolution. Of course, you don't want your need for cognitive closure to be too low, as in many situations we do need to make a decision at some point, even if we don't have all the information. But more often than not, a high need for cognitive closure can be blamed for rushed, unimaginative decisions. Fortunately, our need for cognitive closure can be reduced by being intentional about the way we navigate ambiguous situations and by making space for productive mistakes.

Embracing ambiguity to unlock creativity

The first step is to know where you sit on the scale. The more you know about how you tend to react in uncertain and complex situations, the better you will be able to manage your relative need for cognitive closure. Researchers Arne Roets and Alain Van Hiel from Ghent University created a short version of the questionnaire, with only 15 items. Here are the questions; rate each from 1 ("strongly disagree") to 6 ("strongly agree"):

1. I don't like situations that are uncertain.
2. I dislike questions which could be answered in many different ways.
3. I find that a well-ordered life with regular hours suits my temperament.
4. I feel uncomfortable when I don't understand the reason why an event occurred in my life.
5. I feel irritated when one person disagrees with what everyone else in a group believes.
6. I don't like to go into a situation without knowing what I can expect from it.
7. When I have made a decision, I feel relieved.
8. When I am confronted with a problem, I'm dying to reach a solution very quickly.
9. I would quickly become impatient and irritated if I would not find a solution to a problem immediately.
10. I don't like to be with people who are capable of unexpected actions.
11. I dislike it when a person's statement could mean many different things.
12. I find that establishing a consistent routine enables me to enjoy life more.
13. I enjoy having a clear and structured mode of life.
14. I do not usually consult many different opinions before forming my own view.
15. I dislike unpredictable situations.

Then, add up all your answers. Scores up to 30 mean a low need for closure, and scores between 75 and 90 mean a high need for closure (a minimal scoring sketch follows at the end of this piece). If you find you have a high need for closure, here are some simple strategies you can apply to keep your mind open to competing possibilities when facing uncertain situations, and to avoid making decisions too fast.

Design a psychologically safe environment. Our need for closure goes up when we feel threatened, and it goes down when we feel safe to make mistakes. By fostering psychological safety and encouraging creative experimentation, you and the people you work with are more likely to open your minds to the power of uncertainty.

Fall in love with problems. Instead of trying to find answers as quickly as possible, train yourself to become comfortable with open issues that you know are unsolved. Richard Feynman recommended keeping a dozen of your favorite problems constantly present in your mind. He said: "Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps."

Practice mind gardening. In French, my native language, we talk of ideas as seeds that need to sprout ("faire germer une idée"). Keeping your mind open doesn't mean you should passively wait for an answer. When you find yourself in an uncertain situation, collect nuggets of information and grow your tree of knowledge by connecting ideas together. You may not get to a definite answer, but you will still generate interesting insights. Instead of building a prison of convergence, cultivate a garden of emergence.

Learn in public. Similarly, don't wait until you have an answer to share it with the world, as this may lead you to rush to a clear solution. Instead, publish your early ideas, especially if they feel half-baked. You can do this on your blog, on social media, or in a public digital garden.

Decide when to decide. While reducing your need for cognitive closure will allow you to explore more innovative answers to complex problems, there will be times when you need to make a decision, whether it is because of time pressure or other imperatives. Know when questions can remain open, and when you should move forward, even if you wish you had more information. The DECIDE framework can be a useful tool to make a decision and then evaluate the result.

Liminal states can be uncomfortable, but they offer an unparalleled time for creativity. Some people are more comfortable than others in these moments of ambiguity, and the way we handle uncertainty greatly impacts our ability to think creatively under pressure. Knowing your own level of need for cognitive closure can help you better navigate those unfamiliar spaces and ensure you don't constrain your imagination by rushing to make a decisio...
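Here is a minimal sketch of the scoring just described (sum the 15 items, each rated from 1 to 6, for a total between 15 and 90). The cut-offs follow the article's low and high bands; the "moderate" label for everything in between is my own assumption.

```python
def nfcs_short_score(responses):
    """Score the 15-item short Need for Closure Scale.
    `responses` is a list of 15 integers, each between 1 and 6."""
    if len(responses) != 15 or not all(1 <= r <= 6 for r in responses):
        raise ValueError("expected 15 answers, each rated 1-6")
    total = sum(responses)  # possible range: 15-90
    if total <= 30:
        band = "low need for closure"
    elif total >= 75:
        band = "high need for closure"
    else:
        band = "moderate need for closure"  # assumption: not spelled out in the article
    return total, band

print(nfcs_short_score([4, 5, 3, 4, 2, 5, 4, 3, 2, 4, 5, 4, 3, 2, 4]))  # (54, 'moderate need for closure')
```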
Reopening the mind: how cognitive closure kills creative thinking
The psychology of prestige: why we play the social status game
The psychology of prestige: why we play the social status game
With social media at our fingertips, we are regularly alerted to the news of a friend's new car, an ex-colleague being awarded yet another promotion, or the lavish holiday our neighbors have somehow managed to afford. It's hard not to get swept up in the pursuit of social status. Far from being a modern phenomenon, this craving for status dates back to our primate ancestors, for whom it already offered advantages within hierarchical micro-societies. However, now that status is not so closely linked to our survival, pursuing goals based on the assumed prestige our success will confer can be a bad idea. For instance, those who choose to study medicine based on the future status of being a doctor could later find themselves unfulfilled in a career they are not truly interested or invested in. Rather than striving for status, we need to find more sustainable incentives for success.

Our natural desire for prestige

The importance we confer to prestige makes sense from an evolutionary perspective. For our ancestors, being more popular was a survival advantage. Social status offered greater group protection and longevity, which meant that they were more likely to reproduce. Similarly, as modern individuals, we seek out and follow paths that will maximize our social status and capital, even if we do not realize we are doing it. Professor Cameron Anderson explained that status influences how we behave and think. For example, wearing designer clothing or driving a sports car may be part of our inbuilt desire for prestige. Such status symbols can help maintain social hierarchies. Dr Sabina Siebert from the University of Glasgow found that when faced with competition from other professions, barristers protected their prestige with the use of status symbols including professional dress, ceremonies and rituals. She concluded that this allowed "elite professionals to maintain their superior status."

Modern society has exacerbated our natural desire for high-ranking status, with social media acting as a giant leaderboard where we compete with each other to gain the most prestige points. Eugene Wei, who has worked in media, technology and for consumer internet companies, wrote that social media is built on the idea that it offers an efficient way to accumulate social capital. Likes, retweets, and comments are felt to increase reach and boost the perception of one's own value. It's a "world of artificial prestige". But farming prestige points is not without a cost.

The impact of status anxiety

When you focus on how successful you appear to others, status anxiety can occur. Your fear of not being valued by society may lead to harmful long-term decisions. If you study law to claim the associated status of working as a lawyer, rather than because you are drawn to the career itself, you may later find yourself dissatisfied, stressed or unhappy at work. The desire to achieve status may mean you did not consider other career options, and may have turned down more suitable opportunities because of your drive to appear prestigious. In his book Status Anxiety, philosopher Alain de Botton writes that the anxiety about what others think of us, and about whether we are judged a success or a failure, can lead us to make decisions that are self-defeating, lower our self-worth, or are at odds with our values.
Status symbols such as a large house in a desirable area, multiple holidays each year, or being able to flash a Rolex on your wrist may all be ways that you feel you demonstrate your significance and value in society. However, when your drive to be outwardly successful supersedes all else, you may ignore exciting vocational work opportunities, put too little energy into personal relationships, or fail to make time for rest. If you decline opportunities for personal growth or self-discovery while striving for status, you could progress fast, but not in the right direction. In situations in which status, rather than the achievement itself, is the goal, we will find that even when acquired, we will likely remain dissatisfied. So what’s the alternative? Breaking free from the social status game It is possible to replace irrational status-seeking behaviors with healthier alternatives in which the value is found in the act itself rather than by the aimless collection of status symbols. Here are a few strategies to help you replace empty prestige with playful exploration: Practice metacognition to reflect on long-term goals. By becoming more aware of your thought processes, it is possible to observe patterns regarding your motivations. If you notice that you are instinctively drawn to actions based on the potential for increased status, note this down in a journal. Take time to consider whether the goal or motivation is truly aligned to your values, or if you are being coerced by a desire for prestige. Surround yourself with explorers. If your colleagues, friends or family are all driven by status, it is difficult not to get sucked into the pursuit of outward signs of success. Even worse, you may find yourself playing a game of one-upmanship and in a vicious cycle of trying to appear better than one’s peers. To avoid this trap, find friends online and in real life who are not playing the status game. This will help to avoid feelings of inadequacy and the desire to keep up with others. Explore unconventional paths. Many people have achieved success in pursuing their interests. Reserve time to read memoirs and biographies of those who have achieved their dreams not by striving for wealth or status, but by reflecting on what is important to them and following their own path. Focus on learning new skills. Rather than collecting status symbols, try to acquire skills that could help you grow and develop as an individual. This could include working on your communication skills, self-confidence, or problem-solving capabilities. It may even involve considering a career change. The psychology of prestige has its roots in evolution. However, in the modern world, we have the ability to reflect on the motivations behind our pursuit of status. It is important to distinguish between wanting to achieve a goal that is aligned to our values and will truly make us feel good, and a goal we want to meet purely for its associated status in our society. If we’re aware that we’re playing the social status game, then we can reflect on whether there is an intrinsically motivated path that could provide opportunities for growth and greater overall satisfaction. The post The psychology of prestige: why we play the social status game appeared first on Ness Labs.
The psychology of prestige: why we play the social status game
Connect all your workflows with Michael Dubakov CEO of Fibery
Connect all your workflows with Michael Dubakov CEO of Fibery
Welcome to this edition of our Tools for Thought series, where we interview founders on a mission to help us work better and happier. Michael Dubakov is the CEO of Fibery, an all-in-one workspace allowing the whole company to do everything together, whether it’s research, product development, marketing, customer management, and more. In this interview, we talked about the proper metrics for productivity, how to augment organizational intelligence, what we can learn from hypertext tools from the 80’s, the benefits of combining work management and knowledge management, how to work with both structured and unstructured information, and much more. Enjoy the read! Hi Michael, thank you so much for agreeing to this interview. Let’s start with a bit of a controversial question: what do you think is the problem with most productivity tools? I’ll speak about teams’ productivity most of the time here. Productivity tools should increase productivity, right? But productivity of knowledge workers is extremely hard to measure. There is no good metric yet. Working hours, lines of code, or any similar metrics measure effort, but not results. We need a better metric.  I think the proper metric is the quality and quantity of insights. What is an insight? It’s a piece of new knowledge. It can take many forms: a new question, a new answer to an existing question, a new theory, a new proof, a new experiment, etc. The more insights a knowledge worker generates in a given timeframe, the more productive she is.  Most tools promote values like “save time”, “work faster”, etc. However, in the knowledge economy, we compete with knowledge, not efficiency. Our productivity tools should become knowledge management tools as well, thus making companies more intelligent.  The second problem is that productivity tools create silos. Wiki, Spreadsheets, CRM, Project management tools create many walls and barriers inside a company. As a result, it is much harder to extract and connect information. Connections are really important here, this is how we discover novelty. Data silos impede connections and impede insights generation.  These problems are extremely hard to solve and there is no tool on the market that does it, but at least we should embrace them and move the new generation of productivity tools into the right direction. You are on a mission to augment organizational intelligence — what does that mean exactly? Well, intelligence is hard to define. It is easy to understand de-augmentation though, Engelbart demonstrated it with a brick attached to a pen. But what is intelligence augmentation? Engelbart defined it as “more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insolvable.” Beautiful!  I define intelligence as quantity and quality of insights. This definition is shorter and includes everything Doug said. We need a tool that increases the probability of insights and quality of insights.  Conceptually, what would that look like? There are many things here, but let me try to nail some important traits. First, I think that the knowledge management and work management dichotomy is false, we have to unite these spaces. A dream tool should combine work management and knowledge management processes together. 
It should work very well with unstructured information that has poor meta-data (notes, chat, text, documents, diagrams) and with structured information that has rich meta-data (a task, a product, a protein formula). Second, this tool should be a single point of truth about anything important happening in a company. It should break information silos, replace many tools, and fetch data from those tools it can't replace. As an example, a team usually uses different software for chat, task management and document management. A dream tool should have all these things as features that are tightly coupled and work together with a single database.

Third, this tool should support connectivity. All information should be connected via all kinds of links (bi-directional links, relations, transclusions). It should be possible to build ontologies and easily transform unstructured information into structured information. Interestingly, most organizations don't try to connect data. However, true intelligence lives in connections; this is how we invent new things. Finally, this tool should support information and process evolution. Teams and organizations evolve and processes change. However, most productivity tools are relatively rigid.

To summarize, we need a tool that accumulates, mixes, connects, and visualizes structured and unstructured information in a single space.

That sounds like a simple yet ambitious vision. Can you tell us how you turned these principles into an actual tool when designing Fibery?

Fibery is my second company. My first company was Targetprocess, which I started in 2004. It was software focused on agile project management practices, and it was acquired by Apptio two years ago. So we learned a lot about companies' processes and problems. The most important problems to me were processes' connectivity and evolution. We wanted to create a tool that connects many processes in a company and evolves with the company, but, to be honest, we completely missed the knowledge management part.

About two years ago I started to dig into the past and discovered many beautiful ideas. Surprisingly, hypertext tools from the 80's were very powerful. They provided a unique environment to create, connect and share knowledge. For example, the Intermedia tool was created in 1985, and it had bi-directional links, various visualizations and features we have been reinventing over the last decade. The Internet killed all these systems, but now we have a renaissance of hypertext tools. That is how we discovered that knowledge management is super important, and that a mix of unstructured and structured information is paramount for a real productivity tool for a knowledge economy. Fibery is five years old already, but we nailed the current vision only a year and a half ago. The deeper we dig into it, the deeper we believe in it.

That sounds amazing. So, how does Fibery work concretely?

Fibery's core is what we call a "flexible domain". You can create your own structures and hierarchies that represent how your company operates. Basically, you can design your database, but the database itself is well hidden from the creator. It means that Fibery supports structured information really well. Here is the very basic map of four processes:

Then you have all kinds of visualizations. You can visualize data using several Views: Timeline, Board, Table, Hierarchical List, Calendar, Graphical Report. Then we have tools to work with unstructured information (Documents and Whiteboards).
Our documents are kinda tricky, we combine them with databases in an unusual way, so you have a rich edit document in every entity in a database. Whiteboard View mixes databases and free form diagrams, you can include entities from the database and do cool things. And we pay much attention to links. Connections and linking information is where Fibery really shines. You can select a part of text anywhere and connect it to any entity via a bi-directional link. You can connect databases via strong relations and build deep hierarchies and complex data structures. It all helps people to discover new things. Fibery has a relatively unique panel navigation, so you can quickly explore these links and get back without losing focus. Then you want to bring the data from external systems. Fibery power is that you can replicate any domain. You can fetch data from dozens of systems (Intercom, GitLab, GitHub, Airtable, Braintree, Zendesk, etc) and connect this data to other databases. For example, you can fetch Pull Requests from GitLab and connect them to Features, or you can fetch Subscriptions from Braintree and connect them to Accounts.  Finally, you can automate things in Fibery, it has automation rules and buttons. It helps to keep data consistent and, well, save time.  These sound like powerful workflows! What kind of people use Fibery? Fibery is a horizontal product, but we are mostly focusing on product development companies and startups now. Our largest customer has 500 people in Fibery and uses it for all kinds of processes, from product management to legal.  Our typical customer is a product company or a startup below 100 people that uses Fibery for everything: product development, CRM, feedback accumulation, HR, strategic plans. We have more than 250 paid customers already. And how do you personally use Fibery? As you can imagine, we use Fibery for all our processes. In fact we have only two major tools: Fibery + Slack. Eventually we want to get rid of Slack and add sync communication in Fibery. My favorite use case is feedback accumulation and prioritization. We have several channels of feedback: Intercom, customers’ calls, community forum, and some random suggestions in other places.  Fibery integrates with Intercom and Discourse, and fetches all communication. Thus we can easily highlight a part of text and link it to some Feature, Bug or Insight in Fibery. We write notes for every call and do the linking afterwards, here is how it looks: The best thing is that these links are bi-directional. When you navigate to some feature, you will find all the feedback inside linked to it. Eventually feedback accumulates and you can create a list that shows what features or insights are requested by customers and leads more often. It helps to decide what to take next into development. From my experience, feature prioritization is one of the hardest processes for product managers, and Fibery solves it. Another cool use case is that we use Fibery as a CRM. All registered accounts are added into Fibery, we also have a...
Connect all your workflows with Michael Dubakov CEO of Fibery
The Uncertain Mind: How the Brain Handles the Unknown
The Uncertain Mind: How the Brain Handles the Unknown
Our brain is wired to reduce uncertainty. The unknown is synonymous with threats that pose risks to our survival. The more we know, the more we can make accurate predictions and shape our future. The path forward feels more dangerous when we can sense essential gaps in our knowledge. In fact, fear of the unknown has been theorized to be the "one fear to rule them all"—the fear that gives rise to all other fears. Unfamiliar spaces and potential blind spots make us uncomfortable. This fear makes sense from an evolutionary perspective, but can be unnecessarily nerve-wracking—and sometimes paralyzing—in our modern world. Fortunately, we have also evolved an ability that's deeply human: metacognition, or thinking about thinking. Metacognitive strategies can help us think better and manage the anxiety that arises from the unknown.

How the brain reacts to uncertainty

Humans react strongly to uncertainty. A study from researchers at the University of Wisconsin–Madison shows that uncertainty disrupts many of the automatic cognitive processes that govern routine action. To ensure our survival, we become hypervigilant to potential threats. And this heightened state of worry creates conflict in the brain.

First, uncertainty impacts our attention. The sense of threat degrades our ability to focus. When we feel uncertain about the future, doubt takes over our mind, making it difficult to think about anything else. Our mind is scattered and distracted. We feel like we're all over the place. The underlying biology is still poorly understood, but research in primates conducted by Dr Jacqueline Gottlieb and her team at Columbia University's Zuckerman Institute reveals that uncertainty leads to major shifts in brain activity, both at the micro-level of individual cells and at the macro-level of signals sent across the brain. Put simply, their results suggest that our brain redirects its energy towards resolving uncertainty, at the expense of other cognitive tasks.

Uncertainty also affects our working memory. You can think of your working memory as a mental scratch space where you jot down temporary information. Working memory is attention's best buddy. It's what helps you visualize the route to a new place when you drive and keep several ideas in mind as you write down a sentence. Our working memory capacity is limited. Cognitive load is the amount of working memory resources we use at any given time. A high cognitive load means that we're using a lot of our working memory resources. And uncertain situations force us to use additional working memory resources. In the words of Samuli Laato, a researcher at the University of Turku: "Uncertainty always increases cognitive load. Stressors such as health threat, fear of unemployment and fear of consumer market disruptions all [cause] cognitive load." When we experience cognitive overload, it becomes harder to keep crucial information in mind when making decisions, or to think creatively by connecting ideas together.

Because it has such a big impact on our cognitive functioning—decreasing our attention and using up more of our working memory resources—uncertainty often leads to anxiety and overwhelm. The good news is: the heavy load of uncertainty is not inevitable. Studies suggest that responding to uncertainty is resource-intensive, but metacognitive strategies can help us reduce the impact of uncertainty.
By using thinking tools, we can offload some of the burden uncertainty puts on our mind, so we can regain control of our attention and free our working memory resources—and, ultimately, think more clearly in times of uncertainty.

A thinking tool for dealing with uncertainty

Uncertainty is not a binary concept—"I am certain or I am uncertain." Rather, uncertainty is multifaceted, with many flavors that should be treated differently. Former United States Secretary of Defense Donald Rumsfeld famously said: "…there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know."

The Uncertainty Matrix, sometimes called the Rumsfeld Matrix, is a tool that can be used to help make decisions when facing an uncertain situation. It can be used to differentiate between different types of uncertainties, and to come up with possible solutions for each. The matrix consists of four quadrants: Known-Knowns, Known-Unknowns, Unknown-Knowns, and Unknown-Unknowns. Each quadrant represents a different type of uncertainty, and each has its own set of possible solutions.

Known-Knowns are uncertainties that are known to us, and that we can plan for. For example, if we know that there is a high possibility of a layoff at our company, we can make a plan for how to deal with it.

Known-Unknowns are uncertainties that we know exist, but where we don't have enough knowledge to make a plan. For example, we may not know if our company will be acquired by another in the future. Or, you may be aware of the inherent uncertainties of leaving your job to work on a venture of your own, but you can't make a step-by-step plan of what to do because you don't have enough data yet.

Unknown-Knowns are uncertainties that we're not aware of but that we tacitly understand (hidden facts), which may lead to biases and assumptions in our decisions.

Unknown-Unknowns are uncertainties that we don't know about. For example, a new technology may be developed that makes our product obsolete. In other words, "unknown unknowns are risks that come from situations that are so unexpected that they would not be considered."

Once we know what type of uncertainty we're dealing with, we can come up with possible solutions. For example, if we're dealing with a Known-Known, we can exploit the factual data at our disposal to make a contingency plan, which will allow us to mitigate known risks. When dealing with a Known-Unknown, we can conduct experiments to gather more information, so we can close some of our knowledge gaps and turn those Known-Unknowns into Known-Knowns. For Unknown-Knowns, we can explore our assumptions—the things we don't know we know—and identify biases in those assumptions, so we can potentially replace them with factual data. Finally, in the case of Unknown-Unknowns, we can conduct market research and use strategic intelligence to try to uncover blind spots. It's a good practice to have in place, but it should be noted that there is no guarantee we will be able to turn Unknown-Unknowns into Known-Unknowns. There will always be events we could not have predicted.

The Uncertainty Matrix is a useful tool for dealing with uncertainty, and can help us make better decisions when faced with it. It's even better when used as part of a team, as different people may have different perspectives on the same uncertainties.
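Purely as an illustration of how the matrix maps each quadrant to a default response, here is a small sketch. The two yes/no questions used to pick a quadrant, and the wording of the suggested actions, are my own simplification of the article, not an established implementation.

```python
# Quadrant picked from two questions: are we aware of the issue,
# and do we actually hold knowledge about it (explicitly or tacitly)?
RESPONSES = {
    ("aware", "known"): "Known-Known: use the facts at hand to build a contingency plan.",
    ("aware", "unknown"): "Known-Unknown: run experiments to close the knowledge gap.",
    ("unaware", "known"): "Unknown-Known: surface tacit assumptions and check them for bias.",
    ("unaware", "unknown"): "Unknown-Unknown: scan broadly (market research, strategic intelligence) for blind spots.",
}

def suggest_response(aware: bool, knowledge_exists: bool) -> str:
    key = ("aware" if aware else "unaware",
           "known" if knowledge_exists else "unknown")
    return RESPONSES[key]

# e.g. we know a risk exists but lack the data to plan for it:
print(suggest_response(aware=True, knowledge_exists=False))
```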
“The oldest and strongest emotion of humankind is fear, and the oldest and strongest kind of fear is fear of the unknown,” wrote H.P. Lovecraft. While the fear of the unknown is deeply rooted in our biology, it is possible to elevate ourselves above our automatic reactions so we can make the most of uncertainty. Metacognition can be a great ally in reducing anxiety, freeing our working memory resources, and making better decisions when navigating unfamiliar spaces. The post The Uncertain Mind: How the Brain Handles the Unknown appeared first on Ness Labs.
The Uncertain Mind: How the Brain Handles the Unknown
Answering Your Questions
Answering Your Questions
To celebrate passing 10,000 subscribers, last week I asked for questions from readers. There were too many to answer, but here's an initial 10. If I missed yours, or you want to submit another question, I'm going to add a reader questions section to the bottom of my future updates posts, so feel free to ask a question using this form and I'll try to get to it in the future. Otherwise, back to normal posting next time.

One more piece of news: I've joined Open Philanthropy as a Research Fellow! I will continue to write New Things Under the Sun while there, but among other things I'll also be trying to expand the New Things Under the Sun model to more writers and more academic fields. More details will be coming down the road, but if you are an academic who wants to write the definitive living literature review for your passion topic, drop me an email (matt.clancy@openphilanthropy.org) and I'll keep you in the loop! I'm sad to leave the Institute for Progress, which continues to do outstanding work I really believe in, but I will remain a senior fellow with them. On to your questions!

What is the most critical dataset that you would like to do research on but currently does not exist or is not available? - Antoine Blanchard

I'm going to dream big here: I would love to see a better measure of technological progress than total factor productivity or patents. One particularly interesting idea for this was suggested to me by Jeff Alstott. Imagine we collected the technical specifications of thousands (millions?) of different kinds of individual technologies that seek to give a representative cross-section of human capabilities: solar panels, power drills, semiconductors, etc. There is some precedent for trying to collect technical specifications for lots of technologies, but it has typically been pretty labor-intensive. However, gathering and organizing this data at a huge scale seems to be entering the realm of possibility, with the digitization of so much data and better data scraping technology. For example, we now have some inflation indices based on scraping price data from the web at a very large scale.

Once you have all this data, for each class of technology, you can map out the tradeoffs among these specifications to trace the set of available technologies. How those tradeoffs evolve over time is quite a direct and tangible measure of technological progress. This kind of technique has been used, for example, to model technological progress in the automobile industry (see image below). You then need a way to normalize the rate of progress across very different domains, and to weight progress across different goods so we can aggregate them up to a meaningful measure of overall progress. Lastly, to be most useful for research, you would want to link all this data up to other datasets, such as data on firm financials, or underlying academic research and patents.

Adapted from Knittel (2011)

It would be a huge undertaking, but with modern computing power, I'm not sure it's much worse than computing many other economic statistics, from inflation to GDP. And it would help remove some serious measurement issues from research to understand what drives innovation.

Can we quantify the impact of information and knowledge storage/sharing innovations on the progress of innovation? Things like libraries, and more modern knowledge management systems. And obviously things like movable type and the printing press etc. What is the value of knowledge commons?
- Gianni Giacomelli

Let's start with the assumption that most good inventions draw on the accumulated knowledge of human history. If we couldn't accumulate knowledge, I think most innovation would proceed at a glacial pace. Tinkering would still occasionally result in an improvement, but the pace of change would be evolutionary and rarely revolutionary. So if it's a question of having access to accumulated knowledge or not having access, the value of having access is probably close to the value of R&D.

But our ability to store and access knowledge is itself a technology that can be improved via the means you suggest. What we want to study is the incremental return on improvements to this knowledge management system. Some papers have looked at this for public libraries, patent libraries, and Wikipedia (see the post Free Knowledge and Innovation). Having a public or patent library nearby appears to have helped boost the local rate of innovation by 10-20%. One way to interpret this is that an improvement in the quality of the knowledge commons equivalent to the difference between a local and a distant library could buy you a 10-20% increase in the rate of innovation. Nagaraj, Shears, and de Vaan (2020) find significantly larger impacts from making satellite imagery data available, in terms of the number of new scientific papers this enabled. And other papers have documented how access to a knowledge commons changes what kinds of works are cited: Zheng and Wang (2020) looks at what happened to Chinese innovation when the Great Firewall cut off access to Google; Bryan and Ozcan (2020) show that requirements to make NIH-funded research open access increased people's citations of it. In each case, it's clear access had a measurable impact, but it's tough to value.

As an aside, my own belief is that improving the knowledge commons gives you a lot of bang for your buck, especially from the perspective of what an individual researcher can accomplish. But of course, I'm biased.

I was wondering if there has been a significant long-term impact of the internet on economic growth and if there is any evidence to suggest that any of the economic growth in the last 2 decades can be attributed to the rise of the internet - Daniyal from Pakistan

There are at least two different ways the internet affects economic growth. First and most obviously, it directly creates new kinds of economic activity - think Uber, Netflix, and Amazon. Unsurprisingly, this digital economy has been growing a lot faster than the non-digital economy (6.3% per year, compared to 1.5% per year for the whole economy, over 2012-2020 in the USA), but since it only accounts for about 10% of the US economy, the impact on headline growth can't have been too big yet. So, sure, the internet has contributed to faster economic growth, though the effect isn't particularly large.

Second, and more closely related to the themes of this newsletter, the internet can also affect the overall rate of innovation (including innovation in non-internet domains). It allows researchers to collaborate more easily at a distance and democratizes access to frontier ideas. These impacts of the internet have been a big theme of my writing - see the post Remote work and the future of innovation for a summary of that work, and more specifically the post The internet, the postal service, and access to distant ideas.
I think on the whole, the internet has likely been good for the overall rate of innovation; we know, for example, that it seems to help regions that are geographically far from where innovation is happening keep up. It also helps enable new kinds of collaboration which, though possibly less disruptive than their more traditional counterparts, might simply not exist at all otherwise. It does seem a bit surprising the effect is not much larger though; why doesn’t having easy access to all the world’s written information multiply innovation by a factor of 10 or 100? The fact that it doesn’t suggests we should think of innovation as being comprised of lots of factors that matter (see this overview for some of those factors) and it’s hard to substitute one for the other. We get bottle-necked by the factors that are in short supply. To take a concrete example, it may be that the world’s written information is now at our fingertips, but the overall number of people interested in using it to innovate hasn’t increased much. Or that written information is rarely enough to take an R&D project across the finish line, so that we’re bottlenecked by the availability of tacit knowledge. Research in developing countries is both cheaper and of lower perceived quality than that which is carried out in developed countries. To what extent are these two outcomes separable? Do you think it's conceivable that the former can improve to the extent that a large share of technologically sophisticated R&D will be outsourced in the future? - Aditya I take it as a given that talent is equally distributed around the world, but I think developing countries face at least two main disadvantages in producing research that is perceived to be high quality. First, research can be expensive and rich countries can provide more support to researchers - not only salary support, but also all the other non-labor inputs to research.  Second, rich countries like the USA have tended to attract a disproportionate share of top scientific talent. As I’ve argued, while academic work is increasingly performed by teams collaborating at a distance, most of the team members seem to initially get to know each other during periods of physical colocation (conferences, postdocs, etc). Compared to a researcher physically based in a rich country on the scientific frontier, it will be harder for a researcher based in a developing country to form these relationships. Compounding this challenge, researchers in developing countries may face additional challenges to developing long-distance relationships: possibly linguistic differences, internet connectivity issues, distant time zones, lack of shared cultural context, etc. Moreover, we have some evidence that in science, the citations a paper receives are better predicted by the typical citations of the team member who tends to get the least citations on their own work. That means the returns to having access to a large pool of collaborators is especially high - you can’t rely on having a superstar, you need a whole team of high performers. Lastly, the...
Answering Your Questions
Tana: the all-in-one tool for thought?
Tana: the all-in-one tool for thought?
Notion, Evernote, and Roam have long been the gold standard for online collaboration and note-taking. However, a new player has emerged that promises to be the all-in-one tool for thought everyone has been waiting for. This tool is called Tana. The end of context switching Knowledge work often requires us to switch between tools for thought, and this can make the process of thinking and learning tedious. Tana's vision is to create a tool that ends this context switching. Tana combines the best features from Notion, Roam, and Airtable, and it allows you to easily transition between free-flowing thoughts, collaboration, and structured data. Tana could be the perfect tool for people who feel like they are at the intersection of the note-taking styles of architect, gardener, and librarian. Tana requires a mindset shift in order to use it effectively: from thinking about information in terms of files to thinking of it in terms of nodes. In Tana, everything is a node. This means that every piece of information — whether it be a task, a note, a file, or even a person — is represented as a node in a graph. Those nodes are connected through bi-directional links, which means that you can link to any piece of information from any other piece of information. This makes it easy to see the relationships between different pieces of information and to quickly find what you are looking for, without having to switch between different applications. Because everything is connected in a graph, you no longer need to think of information in terms of files and folders. Bi-directional links are powerful, but they are not new. A feature that is truly unique to Tana is Supertags. Supertags are like templates for your nodes. They allow you to create a template once and use it in multiple places. This makes it easy to keep track of information and find it when you need it, and it creates a database that you can search and view on any page. Updated productivity workflows Tana is still in early access, but it's already showing a lot of promise. It's easy to use and has a lot of potential to change the way we think about and work with information. Let's go through some productivity workflows that feel like magic with Tana. Task management Task management in Tana can be as simple as using checkboxes on any node. For explicit to-dos, you can add the tag #todo to the node. You can add tasks anywhere: in the node you are currently working on, in your daily note, or via the quick add feature to capture your thoughts from any node. You do not need to worry about remembering where you kept these tasks: you can view all your to-dos in the sidebar and filter them further using live search, which ensures that these nodes will not fall through the cracks. This makes it frictionless to capture and organize your tasks. The #todo tag also has a due date field to indicate when a task needs to be done, and the task will show up as a reference at the bottom of the day it is due. Tana's live search makes it a great option for task management. For example, you can find all of your outstanding tasks with these steps: go to your home node and run the command "Find todos…" with the shortcut Cmd/Ctrl + K; hit enter and open the "Find todos…" node; in the search parameters, add a new node and write "NOT DONE". Doing this will create a node containing all your outstanding tasks.
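If a mental model helps, here is a loose sketch of the idea in Python. This is not Tana's actual data model or API, and the node structure and field names below are hypothetical; it only illustrates the concept that every item is a node carrying tags and fields, and that a live search such as "Find todos… NOT DONE" is essentially a saved filter over those nodes.

```python
# Hypothetical sketch of "everything is a node" plus a live search for open todos.
# Purely illustrative; not Tana's real implementation.
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    tags: set[str] = field(default_factory=set)
    fields: dict[str, str] = field(default_factory=dict)
    done: bool = False

# A workspace is just a collection of nodes, created anywhere (daily notes, projects...)
workspace = [
    Node("Draft the quarterly report", tags={"todo"}, fields={"Due Date": "2022-11-18"}),
    Node("Book flights", tags={"todo"}, done=True),
    Node("Reading notes on a new book", tags={"book notes"}),
]

def find_todos_not_done(nodes: list[Node]) -> list[Node]:
    """Rough equivalent of a saved 'Find todos…' search with a NOT DONE parameter."""
    return [n for n in nodes if "todo" in n.tags and not n.done]

for node in find_todos_not_done(workspace):
    print(node.text, node.fields.get("Due Date", ""))
```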
This is just one way you can manage your tasks more efficiently, but Tana is so flexible that you can practically design any task management workflow that suits your needs. Building a knowledge base As we discussed earlier, Supertags are a feature that’s unique to Tana. With Supertags, you can easily add templates to your nodes. Let’s see how it works by building a database for all the books you have read. List down the books you have read as a new node, and add a tag to it. Let’s add the “book notes” tag to our nodes. Click on the tag, where you will see a configure option. This allows you to add fields, similar to Notion’s databases. You can add fields such as date, number, user, URL, and even create your own custom fields. Once you are done configuring your Supertag, try it out by clicking on the nodes with the book notes tag. Here, you can add values to each field. To create a dashboard of all the books you’ve read, click on the tag and go to the list of #book notes. This will create a database and show a list of all the nodes you tagged with book notes. You can then sort, filter, group, and view the database from different perspectives. You can open this list from anywhere in Tana by using the command menu and typing “Find nodes with tag #book notes” This is only one of the many ways you can use Supertags in Tana. This feature is incredibly powerful and can unlock productivity workflows that were previously not possible without cobbling together several tools. Interstitial journaling Interstitial journaling is a journaling technique where you write down a few thoughts when taking breaks from your tasks, and note the time you took these notes. With interstitial journaling, you combine note-taking, task management, and time tracking into one unique workflow. It can make your breaks more mindful, where you reflect on your previous task, plan your next steps, and jot down your thoughts so you can focus on the work at hand. It can also keep you accountable when working, as you have a record of the time you spent working and time spent resting. While you can do interstitial journaling with any tool, it is greatly enhanced with Tana. Let’s see how you can use Supertags to enhance your journaling. Go to your daily note and create a new node. Write down the current time and type whatever you are thinking about. If you are working on a task, you can mention the task by using @ and typing the name of the task. This will link your interstitial journaling node with the initial todo node. Add the tag “Interstitial Journaling” to the node, and configure the tag to add fields for each of your journaling nodes. Add goals, self review, next plans, and anything you want to jot down into your fields. By finding nodes with the tag #Interstitial Journaling, you will have a log of all the work you have done. This is useful for doing a weekly review, or to look back at the progress you have made on your tasks and projects. Limitations of Tana Although Tana is an incredibly powerful tool, there are some limitations — which is to be expected considering that it is still in early access and is relatively new in the tools for thought space. First, Tana is a cloud based web app, so you also might find it a bit slower compared to tools for thought that are local-based. As we just mentioned, Tana is still in early access and there are some features that are still being developed. You might find some features buggy, such as the panels feature, where you might find it difficult to place and resize panels. 
Another limitation is that the learning curve for Tana is relatively steep. It might come easily to power users who are used to the other tools for thought that Tana draws inspiration from, but for the majority of users, the concepts and principles that Tana uses are not intuitive and may take some time to get used to. Concepts such as "everything is a node" and multiple-view databases will take some time to digest before they become second nature. However, there are many good videos from the team on how to use Tana. We also have several easy-to-follow Tana tutorials here at Ness Labs to help you get started. Finally, there are some features that are unavailable in Tana, which may be a dealbreaker for some depending on their use cases. For example, Tana is not yet available on mobile devices, making it unsuitable for people who frequently need to access their notes while away from their computer. In terms of task management, users who like to timeblock may be disappointed that there is no calendar view for scheduling their tasks. There is also no API, which limits integration with the workflows you currently use. However, it is still early days and the team may address these gaps in the future. The good news is that the Tana team is very responsive to feedback and is working hard to improve the platform. In addition, Tana has a very active community on Slack where members help each other out and share their tips and tricks. Some useful resources include the Tana Pattern Library, which is a shared workspace containing patterns from the community that you can import into your own database with one click. However, before jumping into Tana, beware of shiny toy syndrome. It's common to want to jump ship to the latest toy everyone is talking about, but think about your current use cases for your tools for thought, ask whether there is some important feature missing, and consider if switching to Tana is worth the time and effort. Overall, Tana is a great tool for thought with a lot of potential. It's well worth checking out if you are looking for an all-in-one tool to manage your tasks, notes, and projects, and to collaborate with your team members. Tana is still in early access, but you can sign up for the waitlist here. The post Tana: the all-in-one tool for thought? appeared first on Ness Labs.
Tana: the all-in-one tool for thought?
Single-tasking: the power of focusing on one task at a time
Single-tasking: the power of focusing on one task at a time
We are all juggling multiple obligations, roles, and responsibilities across our personal and professional lives. Multitasking seems like it should be the perfect solution when faced with multiple demands and limited time. Doing two things at the same time is faster than doing them one after the other… Right? I’m a freelance medical copywriter. When I sit down to work, I get email notifications, have multiple tabs open, my phone nearby, and other distractions including a never-ending personal to-do list. While I like to kid myself that quickly answering an email and then returning to writing an article is a great feat of multitasking, my output, as well as the scientific evidence, tells me otherwise. In fact, psychiatrist Edward Hallowell defined multitasking as a “mythical activity in which people believe they can perform two or more tasks simultaneously as effectively as one”. Trying to multitask can not only hurt our productivity, but also our ability to learn. Fortunately, there is an alternative way to boost your efficiency: single-tasking. Illustration by DALL·E The dangers of multitasking Despite being an established word in the English language, when multitasking was first coined in the 1960s it was not with human productivity in mind. Rather, its meaning was related to computers performing more than one task at once. As humans, although it might seem that we’re performing multiple tasks at the same time, the reality is that we only work on one task at a time. The multitasking illusion is achieved by opening an email, saving a document and streaming an audiobook one after the other so quickly that it appears simultaneous. Performing multiple tasks in series, rather than parallel, is also how we attempt to multitask as humans. As Canadian author Michael Harris puts it: “When we think we’re multitasking we’re actually multi-switching”. Multitasking makes us feel busy, but rather than being productive, we are lowering our efficiency. Researchers Kevin Madore and Anthony Wagner investigated what happens to the brain when trying to handle more than one task at a time. They found that “the human mind and brain lack the architecture to perform two or more tasks simultaneously.” That’s why multitasking leads to decrements in performance when compared to performing tasks one at a time. Furthermore, it is worrying that those who multitask often inaccurately consider their efforts to be effective, as studies have demonstrated that multitasking leads to an over-inflated belief in one’s own ability to do so. Not only are we bad at multitasking, but we can’t seem to be able to see it. While micro-level multitasking, such as responding to an online work chat while producing a report, will lead to lost efficiency, it’s important to note that macro-level multitasking can be achieved when you are balancing several projects at once. However, in most cases, research shows that single-tasking is the most efficient way of working, as it avoids switching costs and conserves energy that would be expended by mentally juggling multiple competing tasks. Single-tasking boosts more than just productivity To single-task, we must relearn how to focus our attention on one task, rather than becoming drawn into another project or social distraction. In 2016, an analysis of 49 studies found that multitasking negatively impacted cognitive outcomes. 
For young adults in education, multitasking, such as studying while texting, was found to reduce educational achievement and increase the amount of time it took to complete homework. Students who multitasked in class failed to offset the damage done to their final grades, even if they put in additional hours of study at home to try to make up for it. It is therefore difficult to combat the damage caused by multitasking. In contrast, single-tasking can help you meet your targets more efficiently. By consciously blocking out distractions, you counteract the stop-start nature of task-switching and instead reach a flow state. This ensures you can focus solely on the current brief without interruption, leading to increased productivity in a shorter space of time. Focusing on one task can, surprisingly, boost creativity. Whereas multitasking creates a constant stream of distraction, the tedium of focusing on a single task gives your brain the space it needs to explore new paths that you might otherwise not have considered. By focusing on one workstream, inspiration and creativity can bloom because you are not trying to split your focus in multiple directions at once. By dedicating yourself to one task, you will complete tasks more effectively and therefore feel more confident about your capabilities at work, and less stressed about keeping up with deadlines or targets. How to single-task With studies demonstrating that multitasking drains your energy and diminishes your productivity, those of us trying to multitask are at risk of falling behind. Failing to complete tasks, having to work overtime, or feeling exhausted by a never-ending to-do list will likely lead to stress or anxiety. Fortunately, there are three strategies that will help you implement a single-tasking approach to work: Design a distraction-free environment. Both your digital and physical environment should be free of distractions to enable you to focus solely on one task. Turn off email notifications, and instead only check your emails when you start work, at lunchtime, and an hour before you finish. Put your phone in your bag or leave it in a different room to reduce the urge to check it. Close any tabs or browsers that are not relevant to your current task to avoid the temptation to get sucked into the latest sale or any breaking news. Use the Pomodoro technique. The Pomodoro technique involves working for 25 minutes and then taking a 5-minute break. During the 25 minutes of work, you must be completely focused on the task. Breaking your time down in this way offers certainty that you will be able to focus solely on one task for a relatively short amount of time, rather than setting a more overwhelming time target, such as a whole morning. Using a timer is beneficial for keeping you on track and ensuring you take breaks. For maximum productivity, be sure to return to work as soon as the break is over. Take regular breaks. In addition to the 5-minute Pomodoro breaks, you need to take regular, meaningful breaks to refresh and recharge. Leave your screens behind and go for a walk at lunchtime, or commit to reading a novel for thirty minutes. Focused work requires energy, so you will need to make sure you factor in respite to reduce the risk of burnout. Many of us think we can multitask, but an unfortunate risk of multitasking is that we develop an over-inflated perception of just how effectively we juggle multiple tasks.
For micro-tasks, single-tasking is a far more effective way to complete projects, boost creativity, and even reduce stress levels. As we have become accustomed to so-called multitasking, learning to focus on one thing takes time, but it is worth the effort. By creating an environment free from distractions, using techniques to boost your focus and incorporating regular breaks, you are likely to become more efficient and ultimately more successful. The post Single-tasking: the power of focusing on one task at a time appeared first on Ness Labs.
Single-tasking: the power of focusing on one task at a time
AI and I: The Age of Artificial Creativity
AI and I: The Age of Artificial Creativity
A new generation of AI tools is taking the world by storm. These tools can help you write better, code faster, and generate unique imagery at scale. People are using AI tools to produce entire blog posts, create content for their company's social media channels, and craft enticing sales emails. The advent of such powerful AI tools raises the question: what does it mean to be a creator or knowledge worker in the age of artificial creativity? Artificial creativity is a new liminal space between machine and human, between productivity and creativity, which will affect the lives of billions of workers in the coming years. Some jobs will be replaced, others will be augmented, and many others will be reinvented in an unrecognizable way. If your work involves creative thinking or knowledge management, read on for a primer on what's going on with the latest generation of AI tools, and what it means to be a creator or a knowledge worker in the age of artificial creativity. Illustration by DALL·E The advent of Generative AI Artificial creativity, also known as computational creativity, is a multidisciplinary field of research that aims to design programs capable of human-level creativity. The field is not new. Already in the 19th century, scientists were debating whether artificial creativity was possible. Ada Lovelace formulated what is probably the most famous objection to machine intelligence: if computers can only do what they are programmed to do, how can their behavior ever be called creative? In her view, independent learning is an essential feature of creativity. But recent advances in unsupervised machine learning do raise the question of whether the creativity exhibited by some AI software is still the result of simply executing instructions from a human engineer. It's hard not to wonder what Ada would have thought had she seen what computers have become capable of creating. In the words of Sonya Huang, Partner at Sequoia: "As the models get bigger and bigger, they begin to deliver human-level, and then superhuman results." To understand what's going on and why it matters for the very future of knowledge work and creative work, we need to understand the difference between Discriminative AI and Generative AI. There are two main classes of statistical models used by AI. The first one, which has been used the longest and is what you'll find in classical AI, is called "discriminative": it discriminates between different kinds of data instances. The second class of model, much more recent, is called "generative": it can generate new data instances. It's a bit easier to understand with an analogy. Let's say you have two friends: Lee and Lexi. They're both brilliant and they're doing great at school, but the way they study is very different. When preparing for an exam, Lee learns everything about the topic and researches every single detail. It takes a lot of time, but once he knows it, he never forgets it. On the other hand, Lexi creates a mind map of the topic, trying to understand the connections between ideas in that problem space. It's less systematic, but a lot more flexible. In this story, Lee uses a discriminative approach whereas Lexi uses a generative approach. Both approaches work very well, and it's hard to tell the difference from the outside, especially when the goal is to perform well on a specific exam. But, as you can imagine, Lexi is likely to do much better with her generative approach in situations where coming up with novel ideas is required.
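To make the distinction between the two classes of models a bit more concrete, here is a minimal, self-contained toy sketch. This is my own illustration, not something from the article: the discriminative "model" only learns a boundary for telling two classes apart, while the generative one fits a simple distribution for each class, which lets it both classify and produce brand-new examples.

```python
# Toy illustration of discriminative vs. generative modeling (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Two classes of 1-D data points
class_a = rng.normal(loc=0.0, scale=1.0, size=200)
class_b = rng.normal(loc=3.0, scale=1.0, size=200)

# Generative approach: explicitly model each class's distribution...
mu_a, sigma_a = class_a.mean(), class_a.std()
mu_b, sigma_b = class_b.mean(), class_b.std()

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def classify_generative(x):
    # Compare how likely x is under each fitted class distribution (equal priors assumed)
    return "A" if gaussian_pdf(x, mu_a, sigma_a) > gaussian_pdf(x, mu_b, sigma_b) else "B"

# ...which also lets us generate brand-new instances of class B:
new_samples = rng.normal(mu_b, sigma_b, size=5)

# Discriminative approach: only learn the boundary between the classes.
# It can label points, but it has no way of producing new ones.
threshold = (mu_a + mu_b) / 2

def classify_discriminative(x):
    return "A" if x < threshold else "B"

print(classify_generative(1.0), classify_discriminative(1.0))
print("Generated class-B samples:", np.round(new_samples, 2))
```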
That's why discriminative models are often used in supervised machine learning (which is great for analytical tasks like image recognition), while generative models are preferred in unsupervised machine learning (which is better for creative tasks like image generation). For many years, Generative AI was constrained by a number of factors. Those models were difficult and expensive to run, requiring elaborate workload orchestration to manage compute resources and avoid bottlenecks, and only organizations with deep pockets could afford the exorbitant cost of using cloud computing. But things are changing fast. New techniques, more data, cheaper computing power—we've come to a point where any developer can now build an AI application from their living room. For an affordable cost, these applications can solve problems, come up with new ideas, and transform the way we work. The growing landscape of AI applications The artificial creativity space is moving so quickly that it would be impossible to map the entire landscape without missing some of the new applications that are launched every day. However, this map with more than 180 AI tools gives you an idea of the thriving ecosystem as of 2022: DOWNLOAD THE CLICKABLE PDF This map was initially created using the three classical categories of artificial creativity: linguistic, visual, and musical creativity. However, the range of creative tasks AI applications can perform has widely expanded in recent years, so the map also includes an additional category (scientific creativity) and a catchall category for all the weird and original ways generative models are used to augment human creativity. 1. Linguistic creativity. Have you ever found yourself staring at a blank page, unsure where to start? AI applications may mean the end of writer's block as we know it. And there is a huge market for those AI writing tools, as evidenced by the exponential search volumes: Some tools like Jasper, Lex, and Rytr position themselves as general-purpose writing assistants. You just need to feed them a prompt or a paragraph, and they can complete those initial thoughts with original content. This is one of the most promising categories of AI tools: Jasper, which was founded in 2021, recently announced a record $125 million fundraising round at a $1.5 billion valuation. Others are specialized, addressing specific pain points. Lavender will write your sales emails, Surfer will generate SEO-optimized blog posts, Copy.ai will produce high-conversion marketing copy for your website, and Nyle will create product descriptions at scale. Code is another area of linguistic creativity where AI can change the way we work. Replit's Ghostwriter promises to become your "partner in code", using AI to help you write better code, faster. It generates boilerplate functions and files, provides suggestions, and refactors code—all thanks to AI. GitHub has a similar solution called Copilot, which they've dubbed "your AI pair programmer". Other tools allow you to use AI to code websites in a couple of clicks. These writing and coding tools are evolving extremely fast. Soon, typing everything manually will feel outdated and inefficient. 2. Visual and artistic creativity Long gone are the days when AI was mostly used for image recognition. AI-generated art is everywhere. Tools like Midjourney, Deep Dream Generator, and Stability AI allow anyone to type a few words and get back an image. Not sure what to type?
Websites like Lexica offer massive libraries of pre-tested prompts you just have to copy and paste. Many services such as Astria, Avatar AI, and AI Profile Picture allow you to train a model on photos of yourself, so you can create a series of AI-generated avatars to use on social media. You can ask Tattoos AI to design a unique tattoo for you, or Interior AI to create interior design mockups based on photos you upload. The output of visual creativity tools doesn't have to be static. Video generation has also come a long way. Recently, Sundar Pichai, the CEO of Google, shared a long, coherent, high-resolution video that was created by AI just from text prompts. 1/ From today's AI@ event: we announced our Imagen text-to-image model is coming soon to AI Test Kitchen. And for the 1st time, we shared an AI-generated super-resolution video using Phenaki to generate long, coherent videos from text prompts and Imagen Video to increase quality. pic.twitter.com/WofU5J5eZV — Sundar Pichai (@sundarpichai) November 2, 2022 Opus lets you turn text into movies. Tavus allows you to record one video and generate thousands, automatically changing some words. You could record one video sales pitch and change the name of the person it's addressed to in one click. And Colossyan provides you with AI actors ready to deliver the lines you provide. 3. Audio and musical creativity Will we still need to reach out to potential guests to invite them on our podcast? Maybe not. Podcast.ai is a podcast that is entirely generated by AI. Listeners are invited to suggest topics or even guests and hosts for future episodes. Powerful text-to-speech applications have also hit the market. Using advanced machine translation and generative AI, Dubverse automates dubbing so you can quickly produce multilingual videos. You can generate entire songs with AI apps like Soundful or Boomy. Melobytes allows you to transform your audio files so you can become a rapper. Innovative apps like Endel (whose founder we have interviewed here) use AI to create personalized soundscapes to help their users focus, relax, and even sleep. The possibilities are endless. 4. Scientific creativity Scientific research requires rigor and creativity to solve complex problems and invent innovative solutions. In that realm too, AI is coming to the rescue. Elicit uses language models like GPT-3 to automate parts of researchers' workflows, allowing researchers to ask a research question and get answers from 175 million papers. Genei automatically summarizes background reading and produces reports. In biochemistry, Cradle uses AI to predict a protein's 3D structure and generate new sequences, saving days of work for scientists who use it. Wizdom continuously monitors billions of data points about the global research ecosystem to automatically provide actionable insights to their users. By unlocking data and making it accessible and digestible, all these AI applications are making research fas...
AI and I: The Age of Artificial Creativity
Taking Your Questions
Taking Your Questions
Dear reader, This newsletter recently got its 10,000th subscriber. To celebrate, I thought I would try something new and take reader questions. So: ask me anything by using this Google form. I'll try to get through as many questions as I can in the next post, which will hopefully come out the week of November 14th. Cheers everyone and thanks for your interest in this project, Matt
Taking Your Questions
Use timeboxing to regain calmness and control with Marie Ng founder of Llama Life
Use timeboxing to regain calmness and control with Marie Ng founder of Llama Life
Welcome to this edition of our interview series, where we meet with founders on a mission to help us work better and happier. Marie Ng is a long-time Ness Labs reader and the founder of Llama Life, a uniquely designed tool to manage timeboxed working sessions. The quirky branding, the attention to details, the simple features… Everything has been crafted to help you whiz through your to-do list. In this interview, we talked about the concept of time boxing, how it may be particularly useful to people who suffer from time blindness but can help everyone, the power of whimsical effects to maintain motivation, and how to set reasonable expectations to avoid overloading yourself with work. We also talked about time management and ADHD, and what Marie and her team have in mind for the future of Llama Life. Enjoy the read! Hi Marie, thank you so much for agreeing to this interview. First, can you tell us more about timeboxing? For sure, thank you for having me! I was actually doing timeboxing before I knew it was called timeboxing! I’ve also heard it referred to as “time blocking”. Essentially, it’s about being more mindful and purposeful in how you’re spending your time — so you set aside a fixed amount of time to do a particular task, or to do a particular piece of work. By setting a fixed amount of time, you’re creating a positive constraint, and also a bit of pressure to encourage focus to get things done. Timeboxing uses a principle called Parkinson’s Law. Parkinson’s Law states that the work you have to get done fills the time allotted to it. If you’ve ever noticed yourself procrastinating because you think “oh it’s not due till next week”, and then scrambling to get it done at the last minute, then that’s Parkinson’s Law. You knew you had over a week to do the task, so that’s how long it took to get it done. If you had the same task, but a shorter deadline, you would often increase your focus, waste less time, and get it done in that shorter time. Timeboxing plays on this principle, at a micro-level — it’s like creating many little deadlines throughout your day. I use timeboxing for every aspect of my life: getting ready in the morning, doing household chores, doing work tasks… Everything. I suffer from something called “time blindness”, which may sound a little strange, but it just means I have a hard time keeping track of time — how long things might take, how long I spend on something, and generally just knowing where the hours in the day go. This, combined with the challenge of having ADHD, makes timeboxing an essential method for me. Do you think everyone should try it, even if they don’t suffer from time blindness? I think we can all benefit from being more purposeful and efficient with our time. There’s a lot of distractions these days. A lot of us are working remotely, trying to juggle home life with work, with family, all in the same space. There’s also social media, which is really designed to catch your attention and hold it. Now this may be ok if that’s something you’re intending to spend time on and enjoying it, but not ok if it becomes a distraction and is pulling you away from other things which you need to do. And if we’re all more purposeful with our time, it helps to create more time to spend on other things which we may want to do or experience. Above all, I think this helps to reduce stress, because no one likes feeling that they’re behind on what they need to get done, or feeling that they’re missing out on things that they want to do. 
So, this is why you decided to build Llama Life. There are two main reasons why I made Llama Life. The first is, I was teaching myself how to code. When Covid first hit, everyone was learning a new skill, so I decided it was about time I took the plunge and started to learn web development. So Llama Life started as a project to practice what I was learning by actually building something. The second reason is that I really wanted to create a product to help myself. I had been doing timeboxing for a long time just using timers, but I wanted a way to be able to quickly and easily attach those timers to specific tasks, and I just couldn’t find a product that worked the way I wanted it to. It makes such a big difference to me in terms of how I manage my day, and importantly how I feel at the end of the day, that I thought it was worth bringing the product to life and sharing it with others. So now Llama Life’s mission is about helping people achieve calm, focused productivity. The kind of productivity where it feels immersive, enjoyable, fun and effortless. I have to ask… Why is Llama Life called this way? Pre-Covid, I went on a soul-searching journey with one of my best friends. We traveled to Peru, did a lot of hiking, and did the trek to Machu Picchu. As part of the trip, we also ended up visiting a small village to get to know the people and see how they live. There were around twenty people in this village. They had no modern conveniences – no running water, electricity, no internet… But they had a lot of llamas! The llamas were their livelihood. So they would use the wool from the llama’s backs to make sweaters, scarfs, beanies etc to sell.  And what struck me about these people was, although they had no modern conveniences, they were very happy, calm and content with their way of life. And that feeling stayed with me.  So years later when it came to naming Llama Life, I was trying to think of a name that would embody our mission of “calm, focused productivity”. And the name “Llama” just came to mind almost immediately. The interesting thing is, over time, customers started calling it “Llama Life”. Previously it was just “Llama”. Customers started to say stuff like “I want to live the llama life” and they were posting this on Twitter. And it occurred to me that this was a much better name for the product, because it helped people aspire to a certain lifestyle — a llama life — which is much more powerful than aspiring to be just an animal! That’s very true! I love the story behind the name. More specifically, how does Llama Life work? Llama Life is all about helping you work through your todo-list, rather than just making never-ending lists. It does this by letting you set a fixed amount of time to each task (with a countdown timer) and by making it super fun and rewarding to use. For example, when you complete a task, you get a confetti animation! It’s also full of whimsical little sound effects, all of which are designed to help boost your motivation, encourage focus, and get stuff done. A lot of other apps let you set 25-minute timers, but 25-minute timers never worked for me. I find it much easier to start with short timers, and work my way up. Therefore, Llama Life is flexible and allows you to set a timer of any duration, depending on what works best for you. This means you can set a timeboxed session starting with something very achievable, for instance five minutes. And this helps our users transform big overwhelming tasks into more manageable bite-sized chunks of productivity. 
I imagine this can be used for many different use cases, like study sessions for instance. Yes, there are so many use cases! We do have students using it for study sessions, and also a lot of indie hackers and startup founders using it to make the most of their time. It makes a lot of sense for founders because you're often trying to juggle several different roles at once, and there's only so much time you have in the day.  To help with this, Llama Life shows you the total time it would take to complete all the tasks on your list, as well as an estimated finishing time for the day. This helps with planning — it makes sure you don't overload yourself (guilty!) and encourages you to set reasonable expectations for what you can achieve in a given time slot. We also have executives using it to plan and keep to meeting agendas so they don't run over time. Actually… 'Overtime' is a feature Llama Life has, and it was hotly requested. Essentially, when your task timer runs out, it starts counting the extra time, so you can get a sense of how long things actually take versus how long you had planned. And it's being used for non-work tasks too! We had a customer the other day who used it to keep herself on track to clean different rooms of her house before guests showed up for dinner that evening! I think the most interesting thing is that people are using Llama Life as part of a workflow. So we're not trying to compete with Todoist, Notion, Asana, Trello, etc. Llama Life is not meant to be a place to store your master list of todos, or to manage and collaborate on a project. It's designed to be the "tip" of the workflow. Most of our customers store their todos and projects somewhere else and then transfer them to Llama Life for their focus session during the day. Llama Life is very much a tool to help you get through today. As such, we're also focusing on integrations, to make the transfer of tasks as frictionless and easy as possible. There's indeed something powerful about the idea of simply getting through today. So, what kind of people use Llama Life? I think the thing that ties all of our customers together is that everyone shares the goal of wanting to increase their focus and make the best use of their time. Sometimes those are people who are already productive and are looking to 'level up'. But very often it's people who are struggling with focus, for example people with ADHD. We don't specifically ask people if they have ADHD, but we know they make up a large part of our customer base, because they take time to email me and explain the challenges they have (which are always very relatable to me, being someone who was diagnosed with ADHD much later in life). What about you? How do you use Llama Life? I use Llama Life's "Preset Lists" a lot. A Preset List is a template list of tasks that you can create, save and then re-use as many t...
Use timeboxing to regain calmness and control with Marie Ng founder of Llama Life
The default effect: why we renounce our ability to choose
The default effect: why we renounce our ability to choose
Why is it that we like having choices, but we don’t like choosing? Being able to decide between several options makes us feel in control. Yet, we tend to exhibit a preference for the default option when presented with a selection of choices. This is called the default effect, and it rules many aspects of our lives from the products we buy to the career we build. Choosing the default option The default effect is our tendency to go with the status quo, even when a different option would be better for us. Many studies show that we tend to generally accept the default option—the one that was preselected for us—and that making an option a default increases the likelihood that such an option is chosen. One of the theories behind the default effect is that humans are hardwired to avoid loss. We feel a strong aversion to any kind of loss. This aversion is so strong that it can override our logical thinking and lead us to stick to what seems like the safest path. Another theory relates to the cognitive effort needed to consider alternative options. It’s much easier to go with what’s right in front of us, compared to researching and evaluating other potential choices. Opting for the safest path may seem like a good idea but it can often lead to suboptimal decisions. For example, we might choose the default health insurance plan, even though there are better options available. Or we might stay in our current job, even though we’re unhappy, because the idea of starting over is too daunting. In each of these cases, we’re letting the default effect guide our decision-making, and as a result, we’re not reaching our full potential. Then, many years later, we look back, surprised to find ourselves in a less-than-ideal situation. There’s a saying, often misattributed to Lao Tzu, that goes: “If you do not change direction, you might end up where you are heading.” In other words, you cannot be surprised about finding yourself in a certain position if you decided to stick to the default path. We shouldn’t blame ourselves for falling prey to the default effect. It’s a powerful evolutionary force that’s hard to resist. Our survival instinct tells us to avoid risky situations and potential losses. But we can learn to recognize when the default effect is influencing our decisions and take steps to overcome it. Breaking free from the default effect While there are some situations in which the default effect can be beneficial—for example, by only having healthy options in your fridge—letting it guide all of your daily decisions can lead you to live a life you have not chosen. Make space for metacognition. We’re often so busy thinking about how to get things done, we forget to think about why we want to get these things done. Metacognition is “thinking about thinking”, it’s an awareness of your own thoughts, an examination of the underlying patterns that guide your decision. Block some time in your calendar to reflect on your recent choices and what led you to them. Journaling is a great metacognitive strategy, but you can also think out loud with a friend or colleague. Practice intentional decision-making. It doesn’t have to be about big decisions. Next time you notice yourself grabbing the exact same snack between two meetings, ask yourself: is there another option? Or, when you’re about to walk into a meeting room to have your weekly chat with your employee, ask yourself: can we have this chat somewhere else? These little acts of intentionality will train your mind to not always stick to the default routine. 
Project yourself into the future. While it’s great to live in the present, it can also be helpful to imagine the path forward. To avoid blindly taking one step at a time and “ending up where you are heading”, consider where you want to go. It doesn’t have to be very precise. You can start by describing a perfect day. Is the default option leading to your ideal destination? Or should you change direction? Your annual review at the end of the year can be a good time for such an exercise. In Robert Frost’s famous words: “Two roads diverged in a wood, and I— I took the one less traveled by, And that has made all the difference.” Breaking free from the default effect so you can choose your own path is not easy, but it can make all the difference. The post The default effect: why we renounce our ability to choose appeared first on Ness Labs.
The default effect: why we renounce our ability to choose
Tutorial: Collaborative task management in Tana
Tutorial: Collaborative task management in Tana
Tana is a powerful tool for thought that allows you to easily turn raw notes into tasks. Its goal is to end context switching and copy-pasting, so you can accomplish all of your goals from an all-in-one workspace. You can start experiencing the power of Tana by creating a simple solo workflow for task management. But Tana also makes it simple to collaborate in teams. Follow this tutorial to learn how to manage your tasks as a team with Tana. How to manage tasks as a team with Tana Managing your tasks as a team with Tana is as easy as one, two, three. You only need to create a new workspace and decide which workspace tags you'll use, create shared tags, and set up a team workflow.

Step 1. Create a new workspace and decide which workspace tags you'll use. You can accept invitations to other workspaces by clicking the plus symbol at the bottom of the sidebar, or you can create your own workspace. By navigating to "Options" and clicking on the "Allow content from…" section, you can choose which workspace tags you want to utilize.

Step 2. Create shared tags. To work across different workspaces successfully, it's best to create the supertags in the workspace where you want the data to live. If you create a tag in your personal space and attempt to use it in a shared workspace, only you will be able to view it. You can still utilize tags from other workspaces. After creating shared tags, you can add nodes that use them either in your personal workspace or in the shared workspace itself. This is really useful, since you can write everything out first and then submit it to the workspace when you're ready to share it: as you'll see, Tana will suggest moving the node to the shared workspace. Nodes can also be moved to different workspaces with the "Move to" command.

Step 3. Set up a team workflow. Set up calendar tags. Since a new workspace doesn't come with a built-in calendar, you must create calendar tags to make this feature available. On the today page of your new workspace, enter #day, #week, and #year. Then set #week as the child supertag of #year, and #day as the child supertag of #week. Set up task and project databases. This is comparable to what we discussed earlier about managing your own tasks. Because this is for a team and the tasks and projects will come from the team workspace rather than your own workspace, there is an additional "Assignee" user field. Set up people and organizations. This will serve as your team's and customers' database. Make a "person" tag and configure it to become a supertag. Make sure the supertag contains the following fields:
- "Organization" as an instance field, setting #organization as the source supertag
- "Email" address as an email field
- "Phone" number as a number field
- "Twitter" or other preferred social networking sites
Make a tag called "organization" and configure it to become a supertag. Make sure the supertag contains the following fields:
- Shortcode URL as a link field
- "Employees" as a search node (type in #person, make a field called Organization, and set the value as PARENT)
Make a tag called "team member" and configure it to become a supertag. To obtain the person fields, remember to "extend an existing tag" to #person in the supertag's advanced settings. Make databases for "people" and "organizations" with the names "People" and "Organizations" and include them in the sidebar. Set up work logs.
This is similar to the time log that was previously explained, except that, since this is a collaborative process, the individual who performs the task is identified. Make a tag called "work log" and configure it to become a supertag. Make sure the supertag contains the following fields:
- Start Time
- End Time
- "Task" as a dynamic options field that searches nodes with the tag 'task' that are not done yet
- "Date" as a date field: open the advanced section, go to the initialize expression, click fx, type in 'formatDate', and type in 'CREATED' and 'DATE_REFERENCE' as its child nodes, then switch back to edit mode by clicking fx again. Set the hide field conditions to "always" using Cmd/Ctrl + K.
- "Who" as an instance field: type in #person as the source supertag. Open the advanced section, go to the initialize expression, click fx, type in 'filter', type in 'childrenOf' as its child node, and type in the name of your company as the child node of 'childrenOf'. Then type in a new 'Email' field under 'filter' and type in 'CURRENT_USER'. Set the hide field conditions to "always" using Cmd/Ctrl + K.
- Check 'build title from fields' in the advanced section and type in '${Tasks}: ${Who} ${Start Time} – ${End Time}' in the expression field.
By setting the child supertag to #worklog, you can have the tag "work log" appear automatically when you press enter inside the "Work Logs" section of your daily template. That's it! Tana makes it easy to set up a simple yet powerful team management system. Not only is the user interface attractive and the layout simple, but collaboration is also made easier by the ability to see what others have accomplished. Have fun with Tana, and feel free to join the Ness Labs Learning Community to discuss Tana and other tools for thought! The post Tutorial: Collaborative task management in Tana appeared first on Ness Labs.
Tutorial: Collaborative task management in Tana
Tutorial: How to manage your tasks with Tana
Tutorial: How to manage your tasks with Tana
Tana is a brand-new tool for thought that claims to put a stop to context-switching. It enables you to begin by entering data and then readily find it using searches rather than figuring out where to place it before you write it. Benefitting from both database-based note-taking like Notion and block-based note-taking like Roam Research and Obsidian, Tana perfectly balances spontaneous and structured data. It is performing so admirably that it attracted many Personal Knowledge Management experts in a short amount of time. The core of Tana includes powerful features such as fields, supertags, live queries, and views, which enable users to create extremely complex workflows without having to install any further plugins. In this tutorial, you will learn how to manage your tasks with Tana so you can increase your productivity and remove any unnecessary friction from your daily workflow. Primer on Tana First, you need to log in to Tana. Before we get started, let’s have a quick look at some of the core design principles that govern the way Tana works. It will be much easier to design a task management system with Tana once you understand these ideas.  Workspace. Your private workspace is located at the top of your sidebar. This is your exclusive workplace, which no one else may use. But you can also use Tana in collaboration with other people. Each workspace allows you to manage access rights, add tags specifically for that workspace, and decide whether or not to accept tags from other workspaces. There is a library specific to each workspace, and you can export nearby structures for the entire workspace.  Nodes. Tana’s nodes are similar to Roam’s blocks. They make up the core of Tana’s network-based structure and outliner functionality. Supertag. Tana’s superpower is called a supertag. By letting a tag contain more data pieces, they elevate a basic tag to a higher level. You can specify the attributes of a supertag or add nodes to it, and all instances where you use a specific supertag will utilize these values as default metadata or schema. Fields. In Tana, fields are similar to properties in Notion. These fields can be configured any way you like, and if you turn a tag into a supertag, they will be accessible to you whenever you’re ready to fill them in. Inheritance. Tana’s inheritance feature enables you to create a supertag that inherits the fields of another supertag while maintaining its own distinctive fields. It may sound complex, so let’s use an example. For instance, the inheritance feature can be used with related tags like “person” and “customer”. Because a customer is a person, the fields you create for the person’s supertag can be inherited by the customer’s supertag. Emergence. When both supertags are used in the same node, Tana merges the fields you set up for each supertag. This feature is called “emergence”. The fields from the “task” supertag and the “work” supertag will both emerge underneath the node, allowing you to capture those fields in one note. As you can see, Tana is an effective task management tool because it combines the two complex realms of databases and bidirectional connections. Though it’s useful to have an overall idea of how Tana works, you don’t need to fully understand these principles to start managing your tasks with Tana. Next, I’ll share a basic approach to get you started. How to manage your own tasks with Tana With Tana, managing your own tasks is easy. 
Simply create your own "Tasks and Projects" database, choose how to automate your days using a template, and handle your accumulated tasks at the end of the day.

Step 1. Set up a "Tasks and Projects" database. Create tags called "task" and "project". Configure the "task" tag to make it a supertag, and include the following fields:
- "Do Date" as a date field
- "Due Date" as a date field
- "Status" as a fixed options field ("To Do", "Doing", and "Done" options)
- "Related Project" as an instance field, setting #project as the source supertag
Then, make the "project" tag a supertag by configuring it as well, and fill out the following fields as needed:
- "Due Date" as a date field
- "Status" as a fixed options field ("To Do", "Doing", and "Done" options)
- "Tasks" as a search node (type in #task, make a field called "Related Project" connected to the task database, and set the value as PARENT)
Create search nodes for your task and project databases: enter #task for the task database and #project for the project database. Both databases can be viewed as cards and grouped based on status. Give them whatever names you like, then pin them both to your sidebar.

Step 2. Decide what your day will look like by automating your days using a template. If your private workspace doesn't already have a "day" tag, create one. Configure your "day" tag to make it a supertag, and include the following fields:
- Agenda: you can create a reference to the tasks you want to do on a particular day in this section. Choose tasks from your task database.
- Time Log: you can do your interstitial journaling here. Create a 'time log' supertag with fields like Start Time, End Time, Task as a dynamic options field (create a search node on it and type in #task and NOT DONE), and Notes. Use the 'build title from fields' feature under the advanced section to automatically set the name of your time logs, and type in '${Task}: ${Start Time} – ${End Time}' in the title expression field.

Step 3. Handle your accumulated tasks at the end of the day. You can use Tana's "Quick Add" feature if you enjoy journaling and writing down your thoughts as they come to you. However, I highly recommend the practice of interstitial journaling. It's the most straightforward method to incorporate note-taking, tasks, and time tracking, and it works great with Tana. To manage the tasks you accumulated through interstitial journaling, all you need is a simple habit. Just add a search node that looks for tasks that were created within the previous 24 hours, and go through these tasks at the end of each day. And you're done! This is a simple three-step process to set up a task management system in Tana. It enables you to incorporate both your planned and spontaneous ideas throughout the day. Have fun with Tana, and feel free to join the Ness Labs Learning Community to discuss Tana and other tools for thought. The post Tutorial: How to manage your tasks with Tana appeared first on Ness Labs.
Tutorial: How to manage your tasks with Tana
Are Technologies Inevitable?
Are Technologies Inevitable?
Dear reader, This week’s post is not the usual thing. I designed New Things Under the Sun to feature two kinds of articles: claims and arguments. Almost everything I write is a claim article (or an update to them). Today’s post is the other kind of article, an argument. The usual goal of a claim article is to synthesize several academic papers in service of assessing a specific narrow claim about innovation. Argument articles live one level up the chain of abstraction: the goal is to synthesize many claim articles (referenced mostly in footnotes) in service of presenting a bigger picture argument. That means in this post you won’t see me talk much about specific papers; instead, I’ll talk about various literatures and how I think they interact with each other. Also, this article is really long; probably about twice as long as anything else I’ve written. Rather than send you the whole thing in email, I’m sending along the introduction below, an outline, and a link to the rest of the article, which lives on the NewThingsUnderTheSun.com. Alternatively, you can listen to a podcast of the whole thing here. Cheers everyone and thanks for your interest, Matt Subscribe now Take me straight to the whole article Are Technologies Inevitable? Introduction In a 1989 book, the biologist Stephen Jay Gould posed a thought experiment: I call this experiment “replaying life’s tape.” You press the rewind button and, making sure you thoroughly erase everything that actually happened, go back to any time and place in the past… then let the tape run again and see if the repetition looks at all like the original.” p48, Wonderful Life Gould’s main argument is: …any replay of the tape would lead evolution down a pathway radically different from the road actually taken… Alter any early event, ever so slightly and without apparent importance at the time, and evolution cascades into a radically different channel. p51, Wonderful Life Gould is interested in the role of contingency in the history of life. But we can ask the same question about technology. Suppose in some parallel universe history proceeded down a quite different path from our own, shortly after Homo sapiens evolved. If we fast forward to 2022 of that universe, how different would the technological stratum of that parallel universe be from our own? Would they have invented the wheel? Steam engines? Railroads? Cars? Computers? Internet? Social media? Or would their technologies rely on principles entirely alien to us? In other words, once humans find themselves in a place where technological improvement is the rule (hardly a given!), is the form of the technology they create inevitable? Or is it the stuff of contingency and accident? In academic lingo, this is a question about path dependency. How much path dependency is there in technology? If path dependency is strong, where you start has a big effect on where you end up: contingency is also strong. But if path dependency is weak, all roads lead to the same place, so to speak. Contingency is weak. Some people find this kind of thing inherently fun to speculate about. It’s also an interesting way to think through the drivers of innovation more generally. But at the same time, I don’t think this is a purely speculative exercise. My original motivation for writing it was actually related to a policy question. How well should we expect policies that try to affect the direction of innovation to work? How much can we really direct and steer technological progress? 
As we'll see, the question of contingency in our technological history is also related to the question of how much remains to be discovered. Do we have much scope to increase the space of scientific and technological ideas we explore? Or do we just about have everything covered, and further investigation would mostly be duplicating work that is already underway? I'll argue in the following that path dependency is probably quite strong, but not without limits. We can probably have a big impact on the timing, sequence, and details of technologies, but I suspect major technological paradigms will tend to show up eventually, in one way or another. Rerun history and I doubt you'll find the technological stratum operating on principles entirely foreign to us. But that still leaves enormous scope for technology policy to matter; policies to steer technology probably can exert a big influence on the direction of our society's technological substrate. The rest of the post is divided into two main parts. First, I present a set of arguments that cumulatively make the case for very strong path dependency. By the end of this section, readers may be close to adopting Gould's view: any change in our history might lead to radically different trajectories. I think this actually goes too far. In the second part of the essay, I rein things in a bit by presenting a few arguments for limits to strong path dependency. The rest of the piece goes on to make the following argument:

Part One: The Case for Strong Path Dependency
- Small-scale versions of replaying the technology tape point to path dependency being at least big enough to notice
- The landscape of possible technologies is probably very big, because:
  - Combinatorial landscapes are very big
  - Technology seems to have an important combinatorial element
- Our exploration of this space seems a bit haphazard and incomplete
- From the constrained set of research and invention options actually discovered, an even smaller set gets an early lead, often for highly contingent reasons, and then enjoys persistent rich-get-richer effects

Part Two: The Limits of Path Dependence
- It may not matter that the landscape of technological possibility is large, if the useful bits of it are small. This may be plausible because:
  - This might be the case for biology
  - It is probably possible to discover the small set of universal regularities in nature via many paths
- Human inventors can survey the space of technological possibility to a much greater degree than in biological evolution
- A shrinking share of better technologies, combined with our ability to survey the growing combinatorial landscape, can yield exponential growth in some models

Read the whole thing here As always, if you want to chat about this post or innovation in general, let's grab a virtual coffee. Send me an email at mattclancy at hey dot com and we'll put something in the calendar. New Things Under the Sun is produced in partnership with the Institute for Progress, a Washington, DC-based think tank. You can learn more about their work by visiting their website.
Are Technologies Inevitable?