Found 53 bookmarks
Procrustes transformation - Wikipedia
A Procrustes transformation is a geometric transformation that involves only translation, rotation, uniform scaling, or a combination of these transformations. Hence, it may change the size or position, but not the shape of a geometric object. It is named after the mythical Greek robber Procrustes, who made his victims fit his bed either by stretching their limbs or cutting them off.
·en.wikipedia.org·
Procrustes analysis - Wikipedia
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes (Greek: Προκρούστης) refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off. In mathematics, the orthogonal Procrustes problem is a method for finding the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes superimposition (PS) of an object with respect to another. The constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is a rotation matrix), determines the optimal rotation for the PS of an object with respect to another, with reflection not allowed. In some contexts, this constrained method is called the Kabsch algorithm.
·en.wikipedia.org·
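The orthogonal Procrustes problem mentioned in the excerpt has a closed-form solution via the singular value decomposition. A minimal NumPy sketch (the function name and random test setup are illustrative, not from the article):

```python
import numpy as np

def orthogonal_procrustes(A, B):
    # Find the orthogonal R minimizing ||A @ R - B||_F.
    # Classic SVD solution: R = U @ Vt, where U, s, Vt = svd(A.T @ B).
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Rotate a random point set by a known rotation, then recover it.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 2))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = A @ R_true
R_est = orthogonal_procrustes(A, B)
print(np.allclose(R_est, R_true))  # True
```

For the constrained det(R) = 1 version (the Kabsch algorithm), one additionally flips the sign of the last column of U whenever det(U @ Vt) < 0.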
Isomorphism - Wikipedia
In mathematics, an isomorphism is a structure-preserving mapping between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word isomorphism is derived from the Ancient Greek: ἴσος isos "equal", and μορφή morphe "form" or "shape".
·en.wikipedia.org·
Penrose tiling - Wikipedia
A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and aperiodic means that shifting any tiling with these shapes by any finite distance, without rotation, cannot produce the same tiling. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s.
·en.wikipedia.org·
Tessellation - Wikipedia
A tessellation or tiling is the covering of a surface, often a plane, using one or more geometric shapes, called tiles, with no overlaps and no gaps. In mathematics, tessellation can be generalized to higher dimensions and a variety of geometries. A periodic tiling has a repeating pattern. Some special kinds include regular tilings with regular polygonal tiles all of the same shape, and semiregular tilings with regular tiles of more than one shape and with every corner identically arranged. The patterns formed by periodic tilings can be categorized into 17 wallpaper groups. A tiling that lacks a repeating pattern is called "non-periodic". An aperiodic tiling uses a small set of tile shapes that cannot form a repeating pattern. A tessellation of space, also known as a space filling or honeycomb, can be defined in the geometry of higher dimensions.
·en.wikipedia.org·
Antiderivative - Wikipedia
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
·en.wikipedia.org·
Fundamental theorem of calculus - Wikipedia
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at each time) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). The two operations are inverses of each other apart from a constant value which depends on where one starts to compute area.
·en.wikipedia.org·
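The theorem can be checked numerically: build the integral of f from 0 to x with a midpoint sum, differentiate it with a finite difference, and compare against f(x). A small sketch (the choice of f, step sizes, and tolerance are arbitrary):

```python
import math

def f(t):
    return math.cos(t)

def integral_0_to(x, n=10_000):
    # Midpoint-rule approximation of the integral of f from 0 to x.
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

# Central-difference derivative of F(x) = integral_0_to(x):
x, h = 1.2, 1e-5
F_prime = (integral_0_to(x + h) - integral_0_to(x - h)) / (2 * h)
print(F_prime, f(x))  # nearly equal, as the theorem predicts
```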
Riemann sum - Wikipedia
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after the nineteenth-century German mathematician Bernhard Riemann. One very common application is approximating the area under the graph of a function; Riemann sums can also approximate the length of curves and other quantities.
·en.wikipedia.org·
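The idea can be made concrete with the three standard sampling rules. A short sketch (the integrand x squared on [0, 1], with exact integral 1/3, is an arbitrary example):

```python
# Left, right, and midpoint Riemann sums for the integral of x**2
# on [0, 1], whose exact value is 1/3.
def riemann_sum(f, a, b, n, rule="midpoint"):
    h = (b - a) / n
    if rule == "left":
        xs = (a + i * h for i in range(n))
    elif rule == "right":
        xs = (a + (i + 1) * h for i in range(n))
    else:
        xs = (a + (i + 0.5) * h for i in range(n))
    return sum(f(x) for x in xs) * h

f = lambda x: x * x
for rule in ("left", "right", "midpoint"):
    print(rule, riemann_sum(f, 0.0, 1.0, 1000, rule))
# All three approach 1/3 as n grows; the midpoint rule is most accurate here.
```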
Stochastic variance reduction - Wikipedia
(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite-sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting. Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines,[1] as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
·en.wikipedia.org·
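One well-known instance is SVRG (stochastic variance reduced gradient), which corrects each stochastic gradient with a periodically recomputed full gradient. A rough least-squares sketch (the step size, epoch count, and problem sizes are arbitrary choices, not prescriptions):

```python
import numpy as np

# Minimize (1/n) * sum_i 0.5 * (a_i . w - b_i)**2 with SVRG.
rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true  # consistent system, so w_true is the minimizer

def grad_i(w, i):
    return (A[i] @ w - b[i]) * A[i]

def full_grad(w):
    return A.T @ (A @ w - b) / n

w, lr = np.zeros(d), 0.01
for epoch in range(100):
    w_snap = w.copy()
    mu = full_grad(w_snap)  # full gradient at the snapshot point
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced gradient estimate: unbiased, and its variance
        # shrinks to zero as w and w_snap approach the optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g
print(np.linalg.norm(w - w_true))  # near zero
```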
Stochastic approximation - Wikipedia
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
·en.wikipedia.org·
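The classic Robbins-Monro scheme finds a root of a function from noisy evaluations alone, using step sizes a_n whose sum diverges while the sum of their squares converges. A toy sketch (the target function and noise level are made up for illustration):

```python
import random

random.seed(0)

def noisy_f(x):
    # We can only observe f(x) = x - 2 corrupted by Gaussian noise.
    return (x - 2.0) + random.gauss(0.0, 0.1)

x = 10.0
for n in range(20_000):
    a_n = 1.0 / (n + 1)  # satisfies the Robbins-Monro step-size conditions
    x -= a_n * noisy_f(x)
print(x)  # converges to the root at 2 despite the noise
```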
Stochastic gradient descent - Wikipedia
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems this reduces the very high computational burden, achieving faster iterations at the cost of a lower convergence rate.
·en.wikipedia.org·
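The "estimate from a randomly selected subset" idea fits in a few lines: here each step uses a single random sample. A minimal sketch (the synthetic line y = 3x + 1, learning rate, and step count are arbitrary):

```python
import random

# Fit y = 3x + 1 by SGD on squared error, one random sample per step.
random.seed(0)
data = [(i / 100.0, 3.0 * (i / 100.0) + 1.0) for i in range(100)]

w, b, lr = 0.0, 0.0, 0.05
for _ in range(20_000):
    x, y = random.choice(data)
    err = (w * x + b) - y  # residual; gradient of 0.5 * err**2 in prediction
    w -= lr * err * x
    b -= lr * err
print(w, b)  # close to 3 and 1
```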
Category theory - Wikipedia
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Nowadays, category theory is used in almost all areas of mathematics, and in some areas of computer science. In particular, many constructions of new mathematical objects from previous ones, that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.
·en.wikipedia.org·
Applied category theory - Wikipedia
Applied category theory is an academic discipline in which methods from category theory are used to study other fields[1][2][3] including but not limited to computer science,[4][5] physics (in particular quantum mechanics[6][7][8][9]), natural language processing,[10][11][12] control theory,[13][14] probability theory and causality. The application of category theory in these domains can take different forms. In some cases the formalization of the domain into the language of category theory is the goal, the idea here being that this would elucidate the important structure and properties of the domain. In other cases the formalization is used to leverage the power of abstraction in order to prove new results about the field.
·en.wikipedia.org·
Bessel's correction - Wikipedia
In statistics, Bessel's correction is the use of n − 1 instead of n in the formula for the sample variance and sample standard deviation,[1] where n is the number of observations in a sample. This method corrects the bias in the estimation of the population variance. It also partially corrects the bias in the estimation of the population standard deviation. However, the correction often increases the mean squared error in these estimations. This technique is named after Friedrich Bessel.
·en.wikipedia.org·
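The bias is easy to see by simulation: draw many small samples from a population with known variance 1 and average the two estimators. A sketch (sample size and trial count are arbitrary):

```python
import random

# Dividing by n is biased low; dividing by n - 1 is unbiased.
random.seed(0)
n, trials = 5, 20_000
biased_avg = unbiased_avg = 0.0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)
    biased_avg += ss / n
    unbiased_avg += ss / (n - 1)
biased_avg /= trials
unbiased_avg /= trials
print(biased_avg, unbiased_avg)  # roughly 0.8 and 1.0
```

The biased estimator's expectation is (n - 1)/n times the true variance, which is 0.8 here with n = 5.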
Differentiable function - Wikipedia
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
·en.wikipedia.org·
Second partial derivative test - Wikipedia
In mathematics, the second partial derivative test is a method in multivariable calculus used to determine if a critical point of a function is a local minimum, maximum or saddle point.
·en.wikipedia.org·
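The test itself is a short rule on the second partials, sketched here with two textbook examples (the function name `classify` is illustrative):

```python
# Classify a critical point of f(x, y) from its second partials via
# D = f_xx * f_yy - f_xy**2:
#   D > 0, f_xx > 0 -> local minimum
#   D > 0, f_xx < 0 -> local maximum
#   D < 0           -> saddle point
#   D = 0           -> inconclusive
def classify(fxx, fyy, fxy):
    D = fxx * fyy - fxy ** 2
    if D > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    return "saddle point" if D < 0 else "inconclusive"

# f(x, y) = x**2 + y**2 at (0, 0): f_xx = 2, f_yy = 2, f_xy = 0
print(classify(2, 2, 0))   # local minimum
# f(x, y) = x**2 - y**2 at (0, 0): f_xx = 2, f_yy = -2, f_xy = 0
print(classify(2, -2, 0))  # saddle point
```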
Null hypothesis - Wikipedia
In inferential statistics, the null hypothesis (often denoted H0)[1] is the claim that two possibilities are the same: that the observed difference is due to chance alone. Using statistical tests, it is possible to calculate how likely the observed difference would be if the null hypothesis were true.
·en.wikipedia.org·
p-value - Wikipedia
In null-hypothesis significance testing, the p-value[note 1] is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[2][3] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields. Since the precise meaning of p-value is hard to grasp, misuse is widespread and has been a major topic in metascience.
·en.wikipedia.org·
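For a concrete case, an exact one-sided p-value for a coin-flip experiment can be computed directly from the binomial distribution (the scenario of 60 heads in 100 flips is an illustrative example):

```python
from math import comb

def binom_p_value(k, n):
    # One-sided exact p-value: probability of k or more heads in n flips
    # of a fair coin, under the null hypothesis that the coin is fair.
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

p = binom_p_value(60, 100)
print(p)  # about 0.028: 60+ heads would be unusual for a fair coin
```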
Statistical hypothesis testing - Wikipedia
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.
·en.wikipedia.org·
Multinomial logistic regression - Wikipedia
In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.[1] That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
·en.wikipedia.org·
Softmax function - Wikipedia
The softmax function, also known as softargmax[1]: 184  or normalized exponential function,[2]: 198  converts a vector of K real numbers into a probability distribution of K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of a network to a probability distribution over predicted output classes, based on Luce's choice axiom.
·en.wikipedia.org·
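A plain-Python sketch of the function, using the standard max-subtraction trick for numerical stability (the example inputs are arbitrary):

```python
import math

def softmax(z):
    # Subtract the max before exponentiating for numerical stability;
    # softmax is unchanged by adding a constant to every input.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)                      # increasing probabilities that sum to 1
print(softmax([1000.0, 1001.0])) # no overflow thanks to the max shift
```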
Shear matrix - Wikipedia
In mathematics, a shear matrix or transvection is an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value. The name shear reflects the fact that the matrix represents a shear transformation. Geometrically, such a transformation takes pairs of points in a vector space that are purely axially separated along the axis whose row in the matrix contains the shear element, and effectively replaces those pairs by pairs whose separation is no longer purely axial but has two vector components. Thus, the shear axis is always an eigenvector of S.
·en.wikipedia.org·
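A 2-D example makes the definition and the eigenvector remark concrete (the shear factor 2 is arbitrary):

```python
import numpy as np

# A 2-D shear matrix: the identity with one zero entry replaced.
S = np.array([[1.0, 2.0],
              [0.0, 1.0]])

# Points move parallel to the x-axis by 2 * y:
print(S @ np.array([1.0, 1.0]))  # [3. 1.]

# The shear axis (here the x-axis) is an eigenvector of S:
print(S @ np.array([1.0, 0.0]))  # [1. 0.], unchanged
```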
Elementary matrix - Wikipedia
In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GLn(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form.
·en.wikipedia.org·
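The left-multiplication claim is easy to verify: build the elementary matrix for one row operation and apply it (the specific matrices are arbitrary examples):

```python
import numpy as np

# Elementary matrix for "add 3 times row 0 to row 1": start from the
# identity and set entry (1, 0) to 3.
E = np.eye(3)
E[1, 0] = 3.0

A = np.array([[1.0, 2.0, 0.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Left multiplication applies the row operation:
print(E @ A)  # row 1 becomes [7., 11., 6.]; the other rows are unchanged
```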
Orthogonality (mathematics) - Wikipedia
In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain nonzero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form a basis. The concept has been used in the context of orthogonal functions, orthogonal polynomials, and combinatorics.
·en.wikipedia.org·
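Both points in the excerpt, orthogonality as B(u, v) = 0 and the existence of nonzero self-orthogonal vectors under some bilinear forms, can be shown in a few lines (the indefinite form chosen here is a Minkowski-style example):

```python
# Orthogonality under the standard bilinear form (the dot product):
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

print(dot([1, 2], [-2, 1]))  # 0, so the vectors are orthogonal

# Under an indefinite form such as B(u, v) = u0*v0 - u1*v1, a nonzero
# vector can be orthogonal to itself:
def indefinite(u, v):
    return u[0] * v[0] - u[1] * v[1]

print(indefinite([1, 1], [1, 1]))  # 0: nonzero yet self-orthogonal
```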