Found 451 bookmarks
The intuition behind Shannon’s Entropy
[WARNING: TOO EASY!]
Shannon proposed that the information content of anything can be measured in bits. To write a number N in binary, we need roughly log base 2 of N bits.
Note that thermodynamic “entropy” and the “entropy” in information theory both capture increasing randomness.
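A minimal sketch of the excerpt's claim (the function name is my own): entropy in bits is the expected value of −log₂ p, and for a uniform distribution it reduces to the log₂ N count mentioned above.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)), measured in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair 8-sided die has 8 equally likely outcomes, so naming one
# outcome takes log2(8) = 3 bits:
print(entropy_bits([1/8] * 8))  # 3.0
```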
·towardsdatascience.com·
Perplexity Intuition (and Derivation)
Never be perplexed again by perplexity.
Less entropy (a less disordered system) is preferable to more entropy, because predictable results are preferred over randomness. This is why people say low perplexity is good and high perplexity is bad: perplexity is the exponentiation of the entropy, so you can safely think of perplexity and entropy as the same concept on different scales.
Why do we use perplexity instead of entropy? If we think of perplexity as a branching factor (the weighted average number of choices a random variable has), then that number is easier to understand than the entropy.
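A small sketch of that relationship (helper names are mine): perplexity is 2 raised to the entropy, and for a uniform distribution over k outcomes it equals k, matching the branching-factor intuition.

```python
import math

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def perplexity(probs):
    """Perplexity = 2**entropy: the effective number of equally likely choices."""
    return 2 ** entropy_bits(probs)

# Uniform over 4 outcomes: entropy is 2 bits, perplexity is 4 "choices".
print(perplexity([0.25] * 4))  # 4.0
# A skewed distribution has fewer effective choices than its 3 outcomes suggest.
print(perplexity([0.9, 0.05, 0.05]))
```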
·towardsdatascience.com·
Perplexity - Wikipedia
In information theory, perplexity is a measurement of how well a probability distribution or probability model predicts a sample. It may be used to compare probability models. A low perplexity indicates the probability distribution is good at predicting the sample.
·en.wikipedia.org·
Phylogenetic tree - Wikipedia
A phylogenetic tree (also phylogeny or evolutionary tree [3]) is a branching diagram or a tree showing the evolutionary relationships among various biological species or other entities based upon similarities and differences in their physical or genetic characteristics. All life on Earth is part of a single phylogenetic tree, indicating common ancestry.
·en.wikipedia.org·
Inverse gambler's fallacy - Wikipedia
The inverse gambler's fallacy, named by philosopher Ian Hacking, is a formal fallacy of Bayesian inference which is an inverse of the better known gambler's fallacy. It is the fallacy of concluding, on the basis of an unlikely outcome of a random process, that the process is likely to have occurred many times before.
The argument from design asserts, first, that the universe is fine tuned to support life, and second, that this fine tuning points to the existence of an intelligent designer. The rebuttal attacked by Hacking consists of accepting the first premise, but rejecting the second on the grounds that our (big bang) universe is just one in a long sequence of universes, and that the fine tuning merely shows that there have been many other (poorly tuned) universes preceding this one.
·en.wikipedia.org·
Iatrogenesis - Wikipedia
Iatrogenesis is the causation of a disease, a harmful complication, or other ill effect by any medical activity, including diagnosis, intervention, error, or negligence.
·en.wikipedia.org·
Second partial derivative test - Wikipedia
In mathematics, the second partial derivative test is a method in multivariable calculus used to determine if a critical point of a function is a local minimum, maximum or saddle point.
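The two-variable version of the test is short enough to sketch (the function name is mine; D is the determinant of the 2×2 Hessian):

```python
def classify_critical_point(fxx, fyy, fxy):
    """Second partial derivative test at a critical point of f(x, y):
    D = fxx*fyy - fxy^2 decides between min, max, and saddle."""
    D = fxx * fyy - fxy ** 2
    if D > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if D < 0:
        return "saddle point"
    return "inconclusive"  # D == 0: the test gives no answer

# f(x, y) = x^2 - y^2 has fxx = 2, fyy = -2, fxy = 0 at the origin:
print(classify_critical_point(2, -2, 0))  # saddle point
```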
·en.wikipedia.org·
Null hypothesis - Wikipedia
In inferential statistics, the null hypothesis (often denoted H0)[1] is the claim that two possibilities are the same: the observed difference is due to chance alone. Statistical tests calculate how likely the observed data would be if the null hypothesis were true; note that this is not the probability that the null hypothesis itself is true.
·en.wikipedia.org·
p-value - Wikipedia
In null-hypothesis significance testing, the p-value[note 1] is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[2][3] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields. Since the precise meaning of p-value is hard to grasp, misuse is widespread and has been a major topic in metascience.
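As a concrete illustration (the function is my own sketch, not from the article): an exact two-sided binomial p-value sums the probabilities of all outcomes at least as extreme, i.e. no more likely under the null, than the one observed.

```python
from math import comb

def binom_two_sided_p(k, n, p=0.5):
    """Exact two-sided binomial p-value: total probability of outcomes
    no more likely than the observed count k under the null hypothesis."""
    probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(q for q in probs if q <= probs[k] + 1e-12)

# 16 heads in 20 flips of a fair coin: p is about 0.012, so at the usual
# 0.05 threshold we would reject the null hypothesis of fairness.
print(binom_two_sided_p(16, 20))
```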
·en.wikipedia.org·
Statistical hypothesis testing - Wikipedia
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.
·en.wikipedia.org·
Multinomial logistic regression - Wikipedia
In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.[1] That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
·en.wikipedia.org·
Softmax function - Wikipedia
The softmax function, also known as softargmax[1]: 184  or normalized exponential function,[2]: 198  converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
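A minimal sketch (names are mine; subtracting the max before exponentiating is the standard trick to avoid overflow):

```python
import numpy as np

def softmax(z):
    """Map K real numbers to a probability distribution over K outcomes."""
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
p = softmax(logits)
print(p.sum())  # probabilities sum to 1 (up to floating point)
```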
·en.wikipedia.org·
How to WIN the fight Against AGING | Aubrey de Grey on Health Theory
This episode is sponsored by Thryve. Get 50% off your at-home gut health test when you go to https://trythryve.com/impacttheory

Scientific breakthroughs are happening all around you. As technology advances, biologists such as Aubrey de Grey, Chief Science Officer of the SENS Research Foundation, are leading the way to pioneering tech that will allow you to choose how long you want to live. For years scientists have been trying to find a way to slow down the aging process, but Aubrey introduces the idea of repairing the damage that aging does to the body to theoretically restore the body's biological age to maybe 30 years younger. If you are familiar with the obsession I've had with living forever, you already know how much this excites me.

SHOW NOTES:
Quality of Life | Aubrey on why a long life depends on quality and being invested in choice [0:58]
Reverse Aging | Why damage repair could be easier than slowing aging & the pushback met [4:40]
Indefinite Life | Aubrey on why the expected lifespan with reversed aging could be indefinite [8:08]
Structure Repair | Landing on the idea of repairing structure to restore cellular-level function [9:32]
Pushback | Aubrey on pushback among scientists about reverse aging in biology [11:17]
Body Damage | Aubrey on self-inflicted damage being reversed to the same level as 30 years prior [14:13]
Longevity Escape Velocity | Aubrey on his theory of how to reverse biological age by 30 years [16:26]
Types of Damage | 7 categories of damage that correspond to therapeutic methods of repair [21:18]
Stem Cell Therapy | Aubrey explains how stem cells could treat loss-of-cell problems [22:17]
Cancer Treatments | Aubrey gives the category of damage due to too many cells [23:26]
Senescent Cells | Aubrey explains how these cells can promote cancer [26:53]
Mitochondrial Mutations | Aubrey explains problems at the molecular level inside cells [28:53]
Cellular Waste | Aubrey breaks down how cellular waste accumulated over years impacts old age [33:35]
Macular Degeneration | Aubrey explains specific enzymes that could prevent blindness [35:24]
Excretion | Aubrey explains diseases that could be resolved by breaking down waste [37:50]
Alzheimer's | Breaking down amyloid as extracellular waste and modest benefit [39:22]
Advancing Therapies | Aubrey gives a sobering guess at how close effective therapies are [44:07]

QUOTES:
"I want to make sure that my choice about how long to live, and, of course, how high quality that life will be, is not progressively taken away from me by aging." [2:24]
"Old age is something that evolution doesn't care about at all. Evolution only cares about the propagation of genetic information." [34:41]
"You've got to fix them all. You haven't got to fix any of them perfectly. But you've got to fix them all pretty well." [43:28]
"What excites me is typically the breakthroughs that would take me half an hour of background to describe why it's even important." [46:16]

Guest Bio: Dr. de Grey is the biomedical gerontologist who devised the SENS platform and established SENS Research Foundation to implement it. He received his BA in Computer Science and Ph.D. in Biology from the University of Cambridge in 1985 and 2000, respectively. Dr. de Grey is Editor-in-Chief of Rejuvenation Research and a Fellow of both the Gerontological Society of America and the American Aging Association.

Follow Aubrey de Grey:
Website: https://www.sens.org/
Twitter: https://twitter.com/aubreydegrey
LinkedIn: https://www.linkedin.com/in/aubrey-de-grey-24260b/

Dive Deeper On Related Episodes:
Reset Your Age with David Sinclair https://youtu.be/IEz1P4i1P7s
Lifestyle For Longevity with Kellyann Petrucci https://youtu.be/l9QO0JlnU8w
Secrets To Longevity https://youtu.be/Ulm01gzU8rU
·youtube.com·
Shear matrix - Wikipedia
In mathematics, a shear matrix or transvection is an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value. The name shear reflects the fact that the matrix represents a shear transformation. Geometrically, such a transformation takes pairs of points in a vector space that are purely axially separated along the axis whose row in the matrix contains the shear element, and effectively replaces those pairs by pairs whose separation is no longer purely axial but has two vector components. Thus, the shear axis is always an eigenvector of S.
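A quick numeric check of that description (example values are mine):

```python
import numpy as np

# Identity matrix with one zero entry replaced by a non-zero value:
S = np.array([[1.0, 2.0],
              [0.0, 1.0]])  # shears x by 2*y

print(S @ np.array([3.0, 1.0]))  # [5. 1.]: x picks up 2*y, y unchanged
# Points on the shear axis (y = 0) are fixed, so the axis is an eigenvector:
print(S @ np.array([1.0, 0.0]))  # [1. 0.]
```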
·en.wikipedia.org·
Elementary matrix - Wikipedia
In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GLn(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form.
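A small sketch of the left- vs right-multiplication point (example values are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Elementary matrix for the row operation "add 2 * row 0 to row 1":
E = np.eye(2)
E[1, 0] = 2.0

print(E @ A)  # left multiplication acts on rows: [[1, 2], [5, 8]]
print(A @ E)  # right multiplication by the same E acts on columns instead
```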
·en.wikipedia.org·
Orthogonality (mathematics) - Wikipedia
In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain nonzero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form a basis. The concept has been used in the context of orthogonal functions, orthogonal polynomials, and combinatorics.
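Both points in the excerpt can be checked numerically (a sketch; here B is the bilinear form u^T M v):

```python
import numpy as np

def B(u, v, M):
    """Bilinear form B(u, v) = u^T M v."""
    return u @ M @ v

# Standard dot product (M = I): perpendicular vectors are orthogonal.
I = np.eye(2)
print(B(np.array([1.0, 0.0]), np.array([0.0, 1.0]), I))  # 0.0

# Indefinite form diag(1, -1): the nonzero vector (1, 1) is self-orthogonal.
M = np.diag([1.0, -1.0])
v = np.array([1.0, 1.0])
print(B(v, v, M))  # 0.0
```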
·en.wikipedia.org·
Basis (linear algebra) - Wikipedia
In mathematics, a set B of vectors in a vector space V is called a basis if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors. Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B.[1] In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the dimension of the vector space.
·en.wikipedia.org·
Alexis Courbet | Towards Computational Design of Self Assembling & Genetically Encodable Nanomachines - Foresight Institute
Alexis’s research proposes to investigate computational design rules to rationally install biochemical-energy-driven dynamic and mechanical behavior within de novo protein nanostructures, by tailoring the energy landscape to capture favorable thermal fluctuations and so perform work (i.e. rationally designing a Brownian ratchet mechanism that uses energy from the catalysis of a biologically orthogonal small molecule to break symmetry). As a proof of concept, he is focusing on the de novo design of protein rotary motors, in which symmetric energy minima along an interface between multiple components couple rotation to a catalytic event, thereby converting the biochemical energy of a fuel molecule into work.
There has recently been a surge of progress in computational protein folding. Alexis is working on designing atomically precise machines using proteins as structural and mechanical elements. One of the first designs is an axle and rotor assembly. He started by designing the components separately, then computing the interface between them. He then simulated the motion and degrees of freedom to calculate whether the machine would perform the intended function.
·foresight.org·
Fuzzing - Wikipedia
In programming and software development, fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.
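A toy mutation fuzzer illustrating the idea (the parser and all names are invented for this example): start from a valid seed input, flip random bits to produce semi-valid inputs, and count how often the parser's error path fires.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Naive mutation fuzzer: flip a few random bits of a valid seed input."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

def parse_length_prefixed(data: bytes) -> bytes:
    """Toy parser under test: first byte is a length, rest is the payload."""
    n = data[0]
    payload = data[1:1 + n]
    if len(payload) != n:
        raise ValueError("truncated payload")
    return payload

rng = random.Random(0)
seed = bytes([5]) + b"hello"   # valid: length 5, payload "hello"
crashes = 0
for _ in range(1000):
    try:
        parse_length_prefixed(mutate(seed, rng))
    except ValueError:
        crashes += 1   # a semi-valid input exposed the corner case
print("inputs that triggered the error path:", crashes)
```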
·en.wikipedia.org·
Topological sorting - Wikipedia
On a parallel random-access machine, a topological ordering can be constructed in O(log² n) time using a polynomial number of processors, putting the problem into the complexity class NC².[5] One method for doing this is to repeatedly square the adjacency matrix of the given graph, logarithmically many times, using min-plus matrix multiplication with maximization in place of minimization. The resulting matrix describes the longest path distances in the graph. Sorting the vertices by the lengths of their longest incoming paths produces a topological ordering.
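A sketch of that matrix-squaring method (a sequential numpy stand-in for the parallel algorithm; names are mine): repeated max-plus products yield longest-path distances, and sorting by longest incoming path gives a topological order.

```python
import numpy as np

def longest_paths(adj):
    """Longest path lengths in a DAG via repeated max-plus 'squaring'."""
    n = len(adj)
    D = np.full((n, n), -np.inf)
    np.fill_diagonal(D, 0.0)
    D[adj > 0] = 1.0  # every edge has length 1
    for _ in range(int(np.ceil(np.log2(n)))):
        # max-plus product: D[i, j] = max_k D[i, k] + D[k, j]
        D = np.max(D[:, :, None] + D[None, :, :], axis=1)
    return D

def topo_order(adj):
    D = longest_paths(adj)
    longest_in = np.max(D, axis=0)  # D[v, v] = 0, so sources score 0
    return [int(v) for v in np.argsort(longest_in, kind="stable")]

# DAG with edges 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 3:
adj = np.array([[0, 1, 1, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])
print(topo_order(adj))  # [0, 1, 2, 3]
```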
·en.wikipedia.org·