Procrustes transformation - Wikipedia
A Procrustes transformation is a geometric transformation that involves only translation, rotation, uniform scaling, or a combination of these transformations. Hence, it may change the size or position, but not the shape, of a geometric object. It is named after the mythical Greek robber Procrustes, who made his victims fit his bed either by stretching their limbs or cutting them off.
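A minimal numpy sketch of such a transformation; the scale, angle, and translation below are illustrative choices, not from the excerpt:

```python
import numpy as np

# Procrustes (similarity) transformation: uniform scale s, rotation R, translation t.
theta = np.pi / 4
s = 2.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, -3.0])

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # a right triangle
Y = s * X @ R.T + t                                 # transformed points

# Shape is preserved: pairwise distances scale uniformly by s.
d = lambda P: np.linalg.norm(P[1] - P[0])
print(d(Y) / d(X))  # 2.0
```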
Procrustes analysis - Wikipedia
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shapes. The name Procrustes (Greek: Προκρούστης) refers to a bandit from Greek mythology who made his victims fit his bed either by stretching their limbs or cutting them off. In mathematics, the orthogonal Procrustes problem is a method for finding the optimal rotation and/or reflection (i.e., the optimal orthogonal linear transformation) for the Procrustes Superimposition (PS) of an object with respect to another; the constrained orthogonal Procrustes problem, subject to det(R) = 1 (where R is a rotation matrix), determines the optimal rotation for the PS of an object with respect to another, with reflection not allowed. In some contexts, this constrained method is called the Kabsch algorithm.
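A minimal numpy sketch of the SVD solution to both problems, assuming the common convention of minimizing ||A R - B||_F over orthogonal R; the function names and test data are mine, not from the excerpt:

```python
import numpy as np

def orthogonal_procrustes(A, B):
    """Orthogonal R minimizing ||A @ R - B||_F: SVD of A.T @ B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def kabsch(A, B):
    """Constrained variant: force det(R) = 1, so R is a proper rotation."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    D = np.diag([1.0] * (A.shape[1] - 1) + [np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
B = A @ R_true
print(np.allclose(orthogonal_procrustes(A, B), R_true))  # True
print(np.allclose(kabsch(A, B), R_true))                 # True
```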
Antiderivative - Wikipedia
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral[Note 1] of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F' = f.[1][2] The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as F and G.
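A quick check with sympy; the integrand is an arbitrary example:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x) + 3 * x**2           # an illustrative integrand

F = sp.integrate(f, x)             # one antiderivative; sympy omits the constant
print(F)                           # x**3 + sin(x)
print(sp.simplify(sp.diff(F, x) - f) == 0)  # True: F' = f
```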
Fundamental theorem of calculus - Wikipedia
The fundamental theorem of calculus is a theorem that links the concept of differentiating a function (calculating its slopes, or rate of change at each time) with the concept of integrating a function (calculating the area under its graph, or the cumulative effect of small contributions). The two operations are inverses of each other apart from a constant value which depends on where one starts to compute area.
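A numerical illustration with scipy, assuming f(t) = 3t^2 so that F(t) = t^3:

```python
from scipy.integrate import quad

f = lambda t: 3 * t**2      # derivative of F(t) = t**3
F = lambda t: t**3

a, b = 1.0, 4.0
area, _ = quad(f, a, b)     # area under f from a to b
print(area, F(b) - F(a))    # both 63.0: the integral of F' recovers F(b) - F(a)
```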
Stochastic variance reduction - Wikipedia
(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite-sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting. Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines,[1] as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
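A minimal SVRG-style sketch (one common variance-reduction method); the hyperparameters and the least-squares toy problem are illustrative assumptions:

```python
import numpy as np

def svrg(grad_i, x0, n, epochs=30, inner=100, lr=0.02, seed=0):
    """Minimal SVRG: a full-gradient snapshot per epoch plus corrected steps.
    grad_i(x, i) returns the gradient of the i-th summand."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(epochs):
        snap = x.copy()
        full = np.mean([grad_i(snap, i) for i in range(n)], axis=0)
        for _ in range(inner):
            i = rng.integers(n)
            # unbiased estimate whose variance shrinks as x approaches snap
            x -= lr * (grad_i(x, i) - grad_i(snap, i) + full)
    return x

# toy finite sum: f(x) = (1/n) * sum_i 0.5 * (a_i . x - b_i)^2
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
print(np.linalg.norm(svrg(grad_i, np.zeros(5), 50) - x_true))  # near zero
```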
Stochastic approximation - Wikipedia
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
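A Robbins-Monro-style sketch; the target function, noise level, and step schedule a_n = 1/n are illustrative assumptions:

```python
import numpy as np

# Find the root of g(x) = x - 2 when g can only be evaluated with noise.
rng = np.random.default_rng(0)
g_noisy = lambda x: (x - 2.0) + rng.normal(scale=0.1)

x = 0.0
for n in range(1, 10_000):
    x -= (1.0 / n) * g_noisy(x)   # x_{n+1} = x_n - a_n * g_noisy(x_n)
print(x)                          # close to the root x = 2
```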
Stochastic gradient descent - Wikipedia
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) by an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems, this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.
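A minibatch SGD sketch on least squares; the data, batch size, and learning rate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(1000)

x = np.zeros(10)
for step in range(2000):
    idx = rng.integers(0, 1000, size=32)           # a random minibatch
    grad = A[idx].T @ (A[idx] @ x - b[idx]) / 32   # gradient estimate, not the full gradient
    x -= 0.01 * grad
print(np.linalg.norm(x - x_true))                  # small
```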
Differentiable function - Wikipedia
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp.
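A numerical illustration: one-sided difference quotients agree for a differentiable function and disagree at a corner (the functions are chosen purely for illustration):

```python
import numpy as np

h = 1e-6
for f, name in [(np.sin, "sin (differentiable at 0)"),
                (np.abs, "abs (corner at 0)")]:
    left = (f(-h) - f(0)) / -h     # slope approaching from the left
    right = (f(h) - f(0)) / h      # slope approaching from the right
    print(name, left, right)
# sin: both ~ 1.0; abs: -1.0 vs 1.0, so no derivative exists at 0
```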
Statistical hypothesis testing - Wikipedia
A statistical hypothesis test is a method of statistical inference used to decide whether the data at hand sufficiently support a particular hypothesis. Hypothesis testing allows us to make probabilistic statements about population parameters.
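A sketch with scipy; the two-sample t-test and the simulated data are illustrative choices, as the excerpt names no particular test:

```python
from scipy import stats
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=100)
b = rng.normal(loc=0.5, scale=1.0, size=100)

t_stat, p_value = stats.ttest_ind(a, b)   # test the hypothesis of equal means
print(t_stat, p_value)
# A small p-value is evidence against the hypothesis that the means are equal.
```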
p-value - Wikipedia
In null-hypothesis significance testing, the p-value[note 1] is the probability of obtaining test results at least as extreme as the result actually observed, under the assumption that the null hypothesis is correct.[2][3] A very small p-value means that such an extreme observed outcome would be very unlikely under the null hypothesis. Reporting p-values of statistical tests is common practice in academic publications of many quantitative fields. Since the precise meaning of p-value is hard to grasp, misuse is widespread and has been a major topic in metascience.
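"At least as extreme under the null" made concrete with a permutation test; the data and the resample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.4, 1.0, size=50)
b = rng.normal(0.0, 1.0, size=50)
observed = a.mean() - b.mean()

pooled = np.concatenate([a, b])
count = 0
for _ in range(10_000):
    rng.shuffle(pooled)                  # relabel groups at random (the null)
    diff = pooled[:50].mean() - pooled[50:].mean()
    count += abs(diff) >= abs(observed)  # at least as extreme, two-sided
print(count / 10_000)                    # the permutation p-value
```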
Null hypothesis - Wikipedia
In inferential statistics, the null hypothesis (often denoted H0)[1] is the hypothesis that two possibilities are the same, i.e. that an observed difference is due to chance alone. Using statistical tests, one can calculate how likely the observed data would be if the null hypothesis were true.
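A tiny illustration, assuming a fair-coin null hypothesis and scipy's binomtest; the counts are made up:

```python
from scipy.stats import binomtest

# H0: the coin is fair (p = 0.5), so any excess of heads is due to chance alone.
result = binomtest(k=60, n=100, p=0.5)
print(result.pvalue)  # probability of a result at least this extreme under H0
```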
Second partial derivative test - Wikipedia
In mathematics, the second partial derivative test is a method in multivariable calculus used to determine if a critical point of a function is a local minimum, maximum or saddle point.
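A sympy sketch on an arbitrary example function (the inconclusive case D = 0 is not handled here):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x + y**2                 # illustrative: two critical points

fx, fy = sp.diff(f, x), sp.diff(f, y)
for pt in sp.solve([fx, fy], [x, y], dict=True):
    H = sp.hessian(f, (x, y)).subs(pt)
    D = H.det()                       # D = f_xx * f_yy - f_xy**2 at the point
    fxx = H[0, 0]
    kind = ("saddle" if D < 0 else
            "local min" if fxx > 0 else "local max")
    print(pt, kind)
# (1, 0): D = 12 > 0, f_xx = 6 > 0 -> local min; (-1, 0): D = -12 < 0 -> saddle
```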
Softmax function - Wikipedia
The softmax function, also known as softargmax[1] or the normalized exponential function,[2] converts a vector of K real numbers into a probability distribution over K possible outcomes. It is a generalization of the logistic function to multiple dimensions, and is used in multinomial logistic regression. The softmax function is often used as the last activation function of a neural network to normalize the output of the network to a probability distribution over predicted output classes, based on Luce's choice axiom.
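A minimal, numerically stable implementation in numpy:

```python
import numpy as np

def softmax(z):
    """exp(z_i) / sum_j exp(z_j), computed stably."""
    z = z - np.max(z)     # shift invariance of softmax avoids overflow
    e = np.exp(z)
    return e / e.sum()

print(softmax(np.array([1.0, 2.0, 3.0])))  # sums to 1, preserves order
```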
Multinomial logistic regression - Wikipedia
In statistics, multinomial logistic regression is a classification method that generalizes logistic regression to multiclass problems, i.e. with more than two possible discrete outcomes.[1] That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
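A sketch using scikit-learn (an assumed library choice; the excerpt names none) with simulated three-class data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Three classes, two real-valued features, well-separated simulated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(50, 2)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 50)

model = LogisticRegression().fit(X, y)  # multinomial for >2 classes
print(model.predict_proba(X[:1]))       # probabilities over the 3 outcomes
```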
Elementary matrix - Wikipedia
In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. The elementary matrices generate the general linear group GLn(F) when F is a field. Left multiplication (pre-multiplication) by an elementary matrix represents elementary row operations, while right multiplication (post-multiplication) represents elementary column operations. Elementary row operations are used in Gaussian elimination to reduce a matrix to row echelon form. They are also used in Gauss–Jordan elimination to further reduce the matrix to reduced row echelon form.
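A numpy illustration of the three elementary row operations and the row/column duality; the matrices are arbitrary examples:

```python
import numpy as np

I = np.eye(3)

swap = I[[1, 0, 2]]               # swap rows 0 and 1
scale = np.diag([1.0, 5.0, 1.0])  # multiply row 1 by 5
add = I.copy()
add[2, 0] = -2.0                  # add -2 times row 0 to row 2

A = np.arange(9.0).reshape(3, 3)
print(swap @ A)    # left multiplication performs the row operation
print(scale @ A)   # row 1 scaled by 5
print(add @ A)     # row 2 becomes row 2 - 2 * row 0
print(A @ add)     # right multiplication acts on columns: col 0 - 2 * col 2
```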
Shear matrix - Wikipedia
In mathematics, a shear matrix or transvection is an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value. The name shear reflects the fact that the matrix represents a shear transformation. Geometrically, such a transformation takes pairs of points in a vector space that are purely axially separated along the axis whose row in the matrix contains the shear element, and effectively replaces those pairs by pairs whose separation is no longer purely axial but has two vector components. Thus, the shear axis is always an eigenvector of the shear matrix S.
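A 2-D numpy illustration; the shear factor k is an arbitrary choice:

```python
import numpy as np

k = 1.5                          # illustrative shear factor
S = np.array([[1.0, k],
              [0.0, 1.0]])       # identity with one off-diagonal entry non-zero

p = np.array([2.0, 3.0])
print(S @ p)                     # x picks up k * y; y is unchanged

# The shear axis (here the x-axis) is an eigenvector of S with eigenvalue 1.
print(S @ np.array([1.0, 0.0]))  # [1., 0.]
```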
Orthogonality (mathematics) - Wikipedia
In mathematics, orthogonality is the generalization of the geometric notion of perpendicularity to the linear algebra of bilinear forms. Two elements u and v of a vector space with bilinear form B are orthogonal when B(u, v) = 0. Depending on the bilinear form, the vector space may contain nonzero self-orthogonal vectors. In the case of function spaces, families of orthogonal functions are used to form a basis. The concept has been used in the context of orthogonal functions, orthogonal polynomials, and combinatorics.
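A small numpy illustration; the two bilinear forms below (the dot product and an indefinite diagonal form) are illustrative choices:

```python
import numpy as np

# Orthogonality with respect to a bilinear form B(u, v) = u^T M v.
B = lambda u, v, M: u @ M @ v

u, v = np.array([1.0, 1.0]), np.array([1.0, -1.0])
print(B(u, v, np.eye(2)))    # 0.0: orthogonal in the usual (dot product) sense

M = np.diag([1.0, -1.0])     # an indefinite bilinear form
w = np.array([1.0, 1.0])
print(B(w, w, M))            # 0.0: a nonzero self-orthogonal vector
```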
Basis (linear algebra) - Wikipedia
In mathematics, a set B of vectors in a vector space V is called a basis if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors. Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B.[1] In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however, all bases have the same number of elements, called the dimension of the vector space.
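A numpy sketch of computing coordinates with respect to a basis; the basis and vector are illustrative:

```python
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # columns are the basis vectors b1, b2
v = np.array([3.0, 2.0])

c = np.linalg.solve(B, v)    # unique because the columns are independent
print(c)                     # [1., 2.]: v = 1*b1 + 2*b2
```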
Topology - Wikipedia
In mathematics, topology (from the Greek words τόπος, 'place, location', and λόγος, 'study') is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself.
Linear span - Wikipedia
In mathematics, the linear span (also called the linear hull[1] or just span) of a set S of vectors (from a vector space), denoted span(S),[2] is the smallest linear subspace that contains the set.[3] It can be characterized either as the intersection of all linear subspaces that contain S, or as the set of linear combinations of elements of S. The linear span of a set of vectors is therefore a vector space. Spans can be generalized to matroids and modules.
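A numpy sketch of a span-membership test via ranks; the spanning set and test vectors are illustrative:

```python
import numpy as np

# v is in span(S) exactly when appending v to the columns of S does not
# increase the rank.
S = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])   # two columns spanning a plane in R^3

def in_span(v):
    return (np.linalg.matrix_rank(np.column_stack([S, v]))
            == np.linalg.matrix_rank(S))

print(in_span(np.array([1.0, 1.0, 2.0])))   # True: the sum of the columns
print(in_span(np.array([0.0, 0.0, 1.0])))   # False: not in the plane
```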
Linear subspace - Wikipedia
In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace,[1][note 1] is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
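A small illustration that a null space is a subspace (closed under linear combinations); the matrix is an arbitrary example:

```python
import numpy as np

# The solution set of A x = 0 is a linear subspace of R^3.
A = np.array([[1.0, 2.0, 3.0]])

x1 = np.array([-2.0, 1.0, 0.0])   # two solutions of A x = 0
x2 = np.array([-3.0, 0.0, 1.0])
combo = 2.5 * x1 - 4.0 * x2       # any linear combination...
print(A @ combo)                  # [0.]: ...is again a solution
```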
Determinant - Wikipedia
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It allows characterizing some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted det(A), det A, or |A|.
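A numpy check of two of these properties on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Multiplicativity: det(AB) = det(A) det(B).
print(np.allclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B)))   # True

# Nonzero determinant <=> invertible; a singular matrix has determinant 0.
S = A.copy()
S[3] = S[0] + S[1]                        # make one row a combination of others
print(np.isclose(np.linalg.det(S), 0.0))  # True (up to rounding)
```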
Jacobian matrix and determinant - Wikipedia
In vector calculus, the Jacobian matrix (/dʒəˈkoʊbiən/,[1][2][3] /dʒɪ-, jɪ-/) of a vector-valued function of several variables is the matrix of all its first-order partial derivatives. When this matrix is square, that is, when the function takes the same number of variables as input as the number of vector components of its output, its determinant is referred to as the Jacobian determinant. Both the matrix and (if applicable) the determinant are often referred to simply as the Jacobian in the literature.
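A sympy sketch; the map (a polar-coordinates-style example) is an arbitrary choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Matrix([x * sp.cos(y), x * sp.sin(y)])   # f : R^2 -> R^2

J = f.jacobian([x, y])        # matrix of all first-order partial derivatives
print(J)
print(sp.simplify(J.det()))   # Jacobian determinant: x
```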
Derivative - Wikipedia
In mathematics, the derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances.
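A numerical illustration of the position/velocity example; the model s(t) = 4.9 t^2 is an assumed toy:

```python
# Position of a falling object: s(t) = 4.9 * t^2.
s = lambda t: 4.9 * t**2

# The derivative as a difference quotient with small h gives the velocity.
t, h = 2.0, 1e-6
velocity = (s(t + h) - s(t)) / h
print(velocity)   # ~19.6, i.e. s'(t) = 9.8 * t at t = 2
```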
Recursive definition - Wikipedia
In mathematics and computer science, a recursive definition, or inductive definition, is used to define the elements in a set in terms of other elements in the set (Aczel 1977:740ff). Some examples of recursively-definable objects include factorials, natural numbers, Fibonacci numbers, and the Cantor ternary set.
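Two of the excerpt's examples, defined recursively in Python (base case plus recursive step):

```python
from functools import lru_cache

def factorial(n: int) -> int:
    return 1 if n == 0 else n * factorial(n - 1)

@lru_cache(maxsize=None)      # memoize the overlapping subproblems
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(factorial(5), fib(10))  # 120 55
```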