Public

451 bookmarks
Oxytocin Increases Generosity in Humans
Human beings routinely help strangers at costs to themselves. Sometimes the help offered is generous—offering more than the other expects. The proximate mechanisms supporting generosity are not well understood, but several lines of research suggest a role for empathy. In this study, participants were infused with 40 IU oxytocin (OT) or placebo and engaged in a blinded, one-shot decision on how to split a sum of money with a stranger that could be rejected. Those on OT were 80% more generous than those given a placebo. OT had no effect on a unilateral monetary transfer task dissociating generosity from altruism. OT and altruism together predicted almost half the interpersonal variation in generosity. Notably, OT had a twofold larger impact on generosity than altruism did. This indicates that generosity is associated with both altruism and an emotional identification with another person.
Generosity may be part of the human repertoire to sustain cooperative relationships [19]. Several neural mechanisms likely support generosity. OT can induce dopamine release in ventromedial regions associated with reward [52], reinforcing generosity. A recent fMRI study of donations to charities [28] showed increased activation in the subgenual region of the cingulate cortex (Brodmann area 25) when making a charitable donation compared to receiving a monetary reward.
·ncbi.nlm.nih.gov·
Anterior cingulate reflects susceptibility to framing... : NeuroReport
We hypothesized, therefore, that the anterior cingulate cortex is also involved in the ‘framing effect’. Our hypothesis was tested by using a binary attractiveness judgment task (‘liking’ versus ‘nonliking’) during functional magnetic resonance imaging. We found that the framing-related anterior cingulate cortex activity predicted how strongly susceptible an individual was to a biased response. Our results support the hypothesis that paralimbic processes are crucial for predicting an individual's susceptibility to framing.
·journals.lww.com·
Social Decision-Making: Insights from Game Theory and Neuroscience
By combining the models and tasks of Game Theory with modern psychological and neuroscientific methods, the neuroeconomic approach to the study of social decision-making has the potential to extend our knowledge of brain mechanisms involved in ...
Research has already begun to illustrate how social exchange can act directly on the brain's reward system, how affective factors play an important role in bargaining and competitive games, and how the ability to assess another's intentions is related to strategic play. These findings provide a fruitful starting point for improved models of social decision-making, informed by the formal mathematical approach of economics and constrained by known neural mechanisms.
·science.org·
Neuroeconomics - Wikipedia
Regarding the choice of sexual partner, research studies have been conducted on humans and on nonhuman primates. Notably, Cheney & Seyfarth 1990, Deaner et al. 2005, and Hayden et al. 2007 suggest a persistent willingness to accept fewer physical goods or higher prices in return for access to socially high-ranking individuals, including physically attractive individuals, whereas increasingly high rewards are demanded if asked to relate to low-ranking individuals
While most research on decision making tends to focus on individuals making choices outside of a social context, it is also important to consider decisions that involve social interactions. The types of behavior that decision theorists study are as diverse as altruism, cooperation, punishment, and retribution. One of the most frequently utilized tasks in social decision making is the prisoner's dilemma.
·en.wikipedia.org·
Neuroeconomics - Wikipedia
Neuroeconomics is an interdisciplinary field that seeks to explain human decision making, the ability to process multiple alternatives and to follow through on a plan of action. It studies how economic behavior can shape our understanding of the brain, and how neuroscientific discoveries can guide models of economics
·en.wikipedia.org·
The intuition behind Shannon’s Entropy
[WARNING: TOO EASY!]
Shannon proposed that the information content of anything can be measured in bits. To write a number N in binary, we need about log base 2 of N bits.
Note that thermodynamic “entropy” and the “entropy” in information theory both capture increasing randomness.
·towardsdatascience.com·
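As a quick sketch of the highlighted idea, here is a minimal Python version (the function name is my own, illustrative choice): entropy measured in bits, where a uniform choice among N outcomes carries log2(N) bits.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Writing one of N equally likely values takes log2(N) bits:
# a fair 8-sided die carries log2(8) = 3 bits per roll.
uniform8 = [1 / 8] * 8
print(shannon_entropy(uniform8))        # 3.0

# A biased coin is more predictable, so it carries less information.
print(shannon_entropy([0.9, 0.1]))      # ~0.469 bits
```

Both thermodynamic and information-theoretic entropy peak when outcomes are maximally random, which is the connection the highlight points at.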
Perplexity Intuition (and Derivation)
Never be perplexed again by perplexity.
Less entropy (a less disordered system) is preferable to more entropy, because predictable results are preferred over randomness. This is why people say low perplexity is good and high perplexity is bad: perplexity is the exponentiation of the entropy (and you can safely think of perplexity in terms of entropy).
Why do we use perplexity instead of entropy? If we think of perplexity as a branching factor (the weighted average number of choices a random variable has), then that number is easier to understand than the entropy
·towardsdatascience.com·
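A minimal sketch of the branching-factor intuition above (function names are my own): perplexity is 2 raised to the entropy, so a uniform 4-way choice has perplexity exactly 4.

```python
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def perplexity(probs):
    """Perplexity = 2^H: the weighted-average branching factor."""
    return 2 ** entropy_bits(probs)

# A uniform choice among 4 outcomes: entropy 2 bits, perplexity 4 --
# the model is "as confused" as a fair 4-way choice.
print(perplexity([0.25] * 4))              # 4.0

# A skewed distribution has the same support but a smaller
# effective branching factor.
print(perplexity([0.7, 0.1, 0.1, 0.1]))    # ~2.56
```

The second result is why the branching-factor reading is easier to grasp than raw entropy: "about 2.56 effective choices" is more concrete than "1.36 bits".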
Perplexity - Wikipedia
In information theory, perplexity is a measurement of how well a probability distribution or probability model predicts a sample. It may be used to compare probability models. A low perplexity indicates the probability distribution is good at predicting the sample
·en.wikipedia.org·
Phylogenetic tree - Wikipedia
A phylogenetic tree (also phylogeny or evolutionary tree [3]) is a branching diagram or a tree showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. All life on Earth is part of a single phylogenetic tree, indicating common ancestry.
·en.wikipedia.org·
Inverse gambler's fallacy - Wikipedia
The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is an inverse of the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random process, that the process is likely to have occurred many times before
The argument from design asserts, first, that the universe is fine tuned to support life, and second, that this fine tuning points to the existence of an intelligent designer. The rebuttal attacked by Hacking consists of accepting the first premise, but rejecting the second on the grounds that our (big bang) universe is just one in a long sequence of universes, and that the fine tuning merely shows that there have been many other (poorly tuned) universes preceding this one
·en.wikipedia.org·
Iatrogenesis - Wikipedia
Iatrogenesis is the causation of a disease, a harmful complication, or other ill effect by any medical activity, including diagnosis, intervention, error, or negligence
·en.wikipedia.org·
How to WIN the fight Against AGING | Aubrey de Grey on Health Theory
This episode is sponsored by Thryve. Get 50% off your at-home gut health test when you go to https://trythryve.com/impacttheory

Scientific breakthroughs are happening all around you. As technology advances, biologists such as the Chief Science Officer of the SENS Research Foundation, Aubrey de Grey, are leading the way to pioneering tech that will allow you to choose how long you want to live. For years scientists have been trying to find a way to slow down the aging process, but Aubrey introduces the idea of repairing the damage that aging does to the body to theoretically restore the body’s biological age to maybe 30 years younger. If you are familiar with the obsession I’ve had with living forever, you already know how much this excites me.

SHOW NOTES:
Quality of Life | Aubrey on why a long life depends on the quality and being invested in choice [0:58]
Reverse Aging | Why damage repair could be easier than slowing aging & the push back met [4:40]
Indefinite Life | Aubrey on why the result of expected life for reversed aging could be indefinite [8:08]
Structure Repair | Landing on the idea of repairing structure to restore cellular level function [9:32]
Pushback | Aubrey on pushback among scientists about reverse aging in biology [11:17]
Body Damage | Aubrey on self-inflicted damage being reversed to same level 30 years prior [14:13]
Longevity Escape Velocity | Aubrey on his theory of how to reverse biological age 30 years [16:26]
Types of Damage | 7 categories of damage that correspond to therapeutic methods of repair [21:18]
Stem Cell Therapy | Aubrey explains how stem cells could treat loss-of-cell problems [22:17]
Cancer Treatments | Aubrey gives the category of damage due to too many cells [23:26]
Senescent Cells | Aubrey explains how these cells can promote cancer [26:53]
Mitochondrial Mutations | Aubrey explains problems at the molecular level inside cells [28:53]
Cellular Waste | Aubrey breaks down how cellular waste over years impacts old age [33:35]
Macular Degeneration | Aubrey explains specific enzymes that could prevent blindness [35:24]
Excretion | Aubrey explains diseases that could be resolved by breaking down waste [37:50]
Alzheimer’s | Breaking down amyloid as extracellular waste and modest benefit [39:22]
Advancing Therapies | Aubrey gives a sobering guess at how close effective therapies are [44:07]

QUOTES:
“I want to make sure that my choice about how long to live, and, of course, how high quality that life will be, is not progressively taken away from me by aging.” [2:24]
“Old age is something that evolution doesn’t care about at all. Evolution only cares about the propagation of genetic information.” [34:41]
“You’ve got to fix them all. You haven’t got to fix any of them perfectly. But you’ve got to fix them all pretty well.” [43:28]
“What excites me is typically the breakthroughs that would take me half an hour of background to describe why it’s even important.” [46:16]

Guest Bio:
Dr. de Grey is the biomedical gerontologist who devised the SENS platform and established SENS Research Foundation to implement it. He received his BA in Computer Science and Ph.D. in Biology from the University of Cambridge in 1985 and 2000, respectively. Dr. de Grey is Editor-in-Chief of Rejuvenation Research and a Fellow of both the Gerontological Society of America and the American Aging Association.

Follow Aubrey de Grey:
Website: https://www.sens.org/
Twitter: https://twitter.com/aubreydegrey
LinkedIn: https://www.linkedin.com/in/aubrey-de-grey-24260b/

Dive Deeper On Related Episodes:
Reset Your Age with David Sinclair https://youtu.be/IEz1P4i1P7s
Lifestyle For Longevity with Kellyann Petrucci https://youtu.be/l9QO0JlnU8w
Secrets To Longevity https://youtu.be/Ulm01gzU8rU
·youtube.com·
Alexis Courbet | Towards Computational Design of Self Assembling & Genetically Encodable Nanomachines - Foresight Institute
Alexis’s research proposes to investigate computational design rules to rationally install biochemical-energy-driven dynamic and mechanical behavior within de novo protein nanostructures, by tailoring the energy landscape to capture favorable thermal fluctuations, allowing it to perform work (i.e., rationally designing a Brownian ratchet mechanism that uses energy from the catalysis of a biologically orthogonal small molecule to break symmetry). As a proof of concept, he is focusing on the de novo design of protein rotary motors, in which symmetric energy minima along an interface between multiple components couple rotation to a catalytic event, thereby converting the biochemical energy of a fuel molecule into work.
There has recently been a surge of progress with computational protein folding. Alexis is working on designing atomically precise machines using proteins as structural and mechanical elements. One of the first designs is an axle and rotor assembly. He started by designing the components separately, then computing the interface between components. He then simulated the motion and degrees of freedom to calculate whether the machine would perform the intended function.
·foresight.org·
Topological sorting - Wikipedia
In computer science, a topological sort or topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge uv from vertex u to vertex v, u comes before v in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. Precisely, a topological sort is a graph traversal in which each node v is visited only after all its dependencies are visited. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and algorithms are known for constructing a topological ordering of any DAG in linear time. Topological sorting has many applications, especially in ranking problems such as feedback arc set. Topological sorting is possible even when the DAG has disconnected components.
·en.wikipedia.org·
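The definition above can be made concrete with Kahn's algorithm, a standard linear-time topological sort; this is an illustrative sketch, not code from the bookmarked page.

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: repeatedly emit a vertex with no remaining
    incoming edges. Returns None if the graph has a directed cycle."""
    adj = [[] for _ in range(n)]
    indegree = [0] * n
    for u, v in edges:          # edge u -> v: u must come before v
        adj[u].append(v)
        indegree[v] += 1
    queue = deque(v for v in range(n) if indegree[v] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return order if len(order) == n else None   # None => cycle found

# Tasks: 0 -> 1 -> 3, 0 -> 2 -> 3 (a diamond of dependencies)
print(topological_sort(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]
```

The cycle check falls directly out of the "possible if and only if the graph is a DAG" condition: if some vertex never reaches indegree 0, the graph has a directed cycle.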
Topological sorting - Wikipedia
On a parallel random-access machine, a topological ordering can be constructed in O(log² n) time using a polynomial number of processors, putting the problem into the complexity class NC².[5] One method for doing this is to repeatedly square the adjacency matrix of the given graph, logarithmically many times, using min-plus matrix multiplication with maximization in place of minimization. The resulting matrix describes the longest path distances in the graph. Sorting the vertices by the lengths of their longest incoming paths produces a topological ordering.
·en.wikipedia.org·
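A sequential sketch of the same idea (the PRAM version would perform each matrix squaring in parallel; function names here are my own): square the adjacency matrix under (max, +) logarithmically many times to get longest-path distances, then sort vertices by their longest incoming path.

```python
NEG = float("-inf")

def longest_path_matrix(n, edges):
    """Repeatedly square the adjacency matrix under (max, +) to get
    longest-path distances between all pairs of vertices of a DAG."""
    d = [[0 if i == j else NEG for j in range(n)] for i in range(n)]
    for u, v in edges:
        d[u][v] = 1
    steps = 1
    while steps < n:            # ceil(log2(n)) squarings suffice
        d = [[max(d[i][k] + d[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
        steps *= 2
    return d

def topological_order(n, edges):
    d = longest_path_matrix(n, edges)
    # Sort vertices by the length of their longest incoming path:
    # any edge u -> v forces longest_in[v] > longest_in[u].
    longest_in = [max(d[u][v] for u in range(n)) for v in range(n)]
    return sorted(range(n), key=lambda v: longest_in[v])

print(topological_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))  # [0, 1, 2, 3]
```

Each squaring doubles the maximum path length captured, which is where the logarithmic number of rounds comes from.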
Fuzzing - Wikipedia
In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.
·en.wikipedia.org·
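A toy illustration of the "semi-valid input" idea above: start from a valid seed, corrupt a few bytes, and record which exceptions the target raises. This is an illustrative sketch, not a real fuzzing tool.

```python
import json
import random

def mutate(seed: bytes, n_flips: int = 3) -> bytes:
    """Produce a semi-valid input by randomly corrupting a valid seed."""
    data = bytearray(seed)
    for _ in range(n_flips):
        pos = random.randrange(len(data))
        data[pos] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, trials: int = 1000):
    """Feed mutated inputs to `target`; collect the exception types seen."""
    seen = set()
    for _ in range(trials):
        try:
            target(mutate(seed))
        except Exception as exc:        # crash / rejection observed
            seen.add(type(exc).__name__)
    return seen

# Fuzz the JSON parser with corrupted copies of a valid document.
seed = b'{"name": "fuzz", "values": [1, 2, 3]}'
print(fuzz(lambda b: json.loads(b), seed))
```

Because the mutants are mostly valid JSON syntax with a few corrupted bytes, many survive the first parsing stages and exercise deeper code paths, which is exactly the behavior the excerpt describes.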
Kirchhoff's circuit laws - Wikipedia
Kirchhoff's first law, or Kirchhoff's junction rule, states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently: The algebraic sum of currents in a network of conductors meeting at a point is zero
·en.wikipedia.org·
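The junction rule is easy to state as code; a trivial illustrative check (function name and tolerance are my own):

```python
def junction_balanced(currents_in, currents_out, tol=1e-9):
    """Kirchhoff's junction rule: current into a node equals current out,
    i.e. the algebraic sum of currents at the node is zero."""
    return abs(sum(currents_in) - sum(currents_out)) < tol

# 3 A and 2 A flow into a node; 5 A flows out.
print(junction_balanced([3.0, 2.0], [5.0]))   # True
# A node that "loses" current violates the rule.
print(junction_balanced([3.0, 2.0], [4.0]))   # False
```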
Ohm's law - Wikipedia
Ohm's law states that the current through a conductor between two points is directly proportional to the voltage across the two points
·en.wikipedia.org·
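A small worked example of the proportionality above (the component values are made up for illustration): Ohm's law applied to two resistors in series.

```python
def current(voltage, resistance):
    """Ohm's law: I = V / R."""
    return voltage / resistance

# 12 V across two series resistors (4 ohm + 2 ohm): one current everywhere.
V, R1, R2 = 12.0, 4.0, 2.0
I = current(V, R1 + R2)
print(I)                    # 2.0 (amperes)

# The same law gives the voltage drop across each resistor;
# the drops sum back to the supply voltage.
print(I * R1, I * R2)       # 8.0 and 4.0 volts
```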
Open-Ended Learning Leads to Generally Capable Agents
Artificial agents have achieved great success in individual challenging simulated environments, mastering the particular tasks they were trained for, with their behaviour even generalising to maps and opponents that were never encountered in training. In this work we create agents that can perform well beyond a single, individual task, that exhibit much wider generalisation of behaviour to a massive, rich space of challenges. We define a universe of tasks within an environment domain and demonstrate the ability to train agents that are generally capable across this vast space and beyond. The environment is natively multi-agent, spanning the continuum of competitive, cooperative, and independent games, which are situated within procedurally generated physical 3D worlds. The resulting space is exceptionally diverse in terms of the challenges posed to agents, and as such, even measuring the learning progress of an agent is an open research problem. We propose an iterative notion of improvement between successive generations of agents, rather than seeking to maximise a singular objective, allowing us to quantify progress despite tasks being incomparable in terms of achievable rewards. Training an agent that is performant across such a vast space of tasks is a central challenge, one we find that pure reinforcement learning on a fixed distribution of training tasks does not succeed in.
We show that through constructing an open-ended learning process, which dynamically changes the training task distributions and training objectives such that the agent never stops learning, we achieve consistent learning of new behaviours. The resulting agent is able to score reward in every one of our humanly solvable evaluation levels, with behaviour generalising to many held-out points in the universe of tasks
·deepmind.com·
Monoid - Wikipedia
In abstract algebra, a branch of mathematics, a monoid is a set equipped with an associative binary operation and an identity element. For example, the nonnegative integers with addition form a monoid, the identity element being 0. Monoids are semigroups with identity. Such algebraic structures occur in several branches of mathematics. The functions from a set into itself form a monoid with respect to function composition. More generally, in category theory, the morphisms of an object to itself form a monoid, and, conversely, a monoid may be viewed as a category with a single object. In computer science and computer programming, the set of strings built from a given set of characters is a free monoid. Transition monoids and syntactic monoids are used in describing finite-state machines. Trace monoids and history monoids provide a foundation for process calculi and concurrent computing. In theoretical computer science, the study of monoids is fundamental for automata theory (Krohn–Rhodes theory), and formal language theory (star height problem).
In computer science, many abstract data types can be endowed with a monoid structure. In a common pattern, a sequence of elements of a monoid is "folded" or "accumulated" to produce a final value. For instance, many iterative algorithms need to update some kind of "running total" at each iteration; this pattern may be elegantly expressed by a monoid operation. Alternatively, the associativity of monoid operations ensures that the operation can be parallelized by employing a prefix sum or similar algorithm, in order to utilize multiple cores or processors efficiently.
An application of monoids in computer science is the so-called MapReduce programming model (see Encoding Map-Reduce As A Monoid With Left Folding). MapReduce, in computing, consists of two or three operations. Given a dataset, "Map" consists of mapping arbitrary data to elements of a specific monoid. "Reduce" consists of folding those elements, so that in the end we produce just one element
·en.wikipedia.org·
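The fold/MapReduce pattern above can be sketched as a word count over the monoid of count dictionaries under key-wise addition (the helper names are my own):

```python
from functools import reduce

def merge(a, b):
    """Associative monoid operation: combine two count dicts key-wise.
    The identity element is the empty dict."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

docs = ["the cat sat", "the cat ran", "a dog sat"]

# Map: each word becomes an element of the monoid.
mapped = [{w: 1} for doc in docs for w in doc.split()]

# Reduce: fold the elements with the monoid operation. Associativity
# is what would let this fold be split across many machines and the
# partial results combined in any grouping.
counts = reduce(merge, mapped, {})
print(counts["the"], counts["cat"], counts["sat"])   # 2 2 2
```

This is the "running total" pattern from the excerpt: because `merge` is associative with an identity, the fold can equally be computed by a prefix-sum-style parallel reduction.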
Combinatory logic - Wikipedia
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel[1] and Haskell Curry,[2] and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators, which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments.
·en.wikipedia.org·
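The S and K combinators can be sketched directly as curried functions; this illustrative snippet shows the classic fact that the identity combinator needs no variables of its own, since I = S K K.

```python
# The two basic combinators as curried Python functions.
K = lambda x: lambda y: x                      # K x y = x
S = lambda x: lambda y: lambda z: x(z)(y(z))   # S x y z = x z (y z)

# I = S K K: applying it to z gives K z (K z) = z,
# so identity emerges purely from function application.
I = S(K)(K)

print(I(42))               # 42
print(K("keep")("drop"))   # keep
```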