Found 451 bookmarks
Riemann sum - Wikipedia
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after the nineteenth-century German mathematician Bernhard Riemann. One very common application is approximating the area under the graph of a function, but it is also used to approximate the length of curves and other quantities.
·en.wikipedia.org·
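As a quick illustration of the idea (mine, not from the bookmarked page): a minimal Python sketch of a left Riemann sum approximating the integral of x^2 over [0, 1], whose exact value is 1/3.

# Left Riemann sum: approximate the integral of f over [a, b]
# by summing f(x_i) * dx over n equal-width subintervals.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# The approximation improves as n grows; exact value is 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0, 1000))  # ~0.3328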
Naturalness (physics) - Wikipedia
In physics, naturalness is the property that the dimensionless ratios between free parameters or physical constants appearing in a physical theory should take values "of order 1" and that free parameters are not fine-tuned. That is, a natural theory would have parameter ratios with values like 2.34 rather than 234000 or 0.000234.
·en.wikipedia.org·
ChatGPT: Optimizing Language Models for Dialogue
We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. ChatGPT is a sibling model to InstructGPT, which is trained to follow an instruction in a prompt and provide a detailed response.
We trained this model using Reinforcement Learning from Human Feedback (RLHF), using the same methods as InstructGPT, but with slight differences in the data collection setup. We trained an initial model using supervised fine-tuning: human AI trainers provided conversations in which they played both sides—the user and an AI assistant. We gave the trainers access to model-written suggestions to help them compose their responses. We mixed this new dialogue dataset with the InstructGPT dataset, which we transformed into a dialogue format.
To collect comparison data for reward models, we randomly selected a model-written message, sampled several alternative completions, and had AI trainers rank them. Using these reward models, we can fine-tune the model using Proximal Policy Optimization.
·openai.com·
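A hedged sketch of one piece of the pipeline described above: the reward model is typically trained with a pairwise ranking loss that scores the trainer-preferred completion above the rejected one. This is the standard objective for learning from rankings, not OpenAI's actual code; all names and numbers here are illustrative.

import numpy as np

# Pairwise ranking loss for a reward model: for each (chosen, rejected)
# pair of completions, push the reward of the chosen one above the
# reward of the rejected one.
def ranking_loss(reward_chosen, reward_rejected):
    # -log sigmoid(r_chosen - r_rejected), averaged over pairs
    diff = np.asarray(reward_chosen) - np.asarray(reward_rejected)
    return float(np.mean(np.log1p(np.exp(-diff))))

# Toy rewards for three comparison pairs; the loss falls as the model
# learns to score chosen completions higher than rejected ones.
print(ranking_loss([2.0, 1.5, 0.3], [0.5, 1.0, 0.9]))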
Neurogenesis - Wikipedia
Neurogenesis is the process by which nervous system cells, the neurons, are produced by neural stem cells (NSCs). It occurs in all species of animals except the porifera (sponges) and placozoans.[1] Types of NSCs include neuroepithelial cells (NECs), radial glial cells (RGCs), basal progenitors (BPs), intermediate neuronal precursors (INPs), subventricular zone astrocytes, and subgranular zone radial astrocytes, among others.[1] Neurogenesis is most active during embryonic development and is responsible for producing all the various types of neurons of the organism, but it continues throughout adult life in a variety of organisms.[1] Once born, neurons do not divide (see mitosis), and many will live the lifespan of the animal.
·en.wikipedia.org·
Binding problem - Wikipedia
The consciousness and binding problem is the problem of how objects, background, and abstract or emotional features are combined into a single experience.[1] The binding problem refers to the overall encoding of our brain circuits for the combination of decisions, actions, and perception. It encompasses a wide range of different circuits and can be divided into subsections. The binding problem is considered a "problem" because no complete model exists. It can be subdivided into four problems of perception, used in neuroscience, cognitive science, and philosophy of mind, including general considerations on coordination, the subjective unity of perception, and variable binding.
·en.wikipedia.org·
Prisoner's dilemma - Wikipedia
The prisoner's dilemma is an example of a game analyzed in game theory. It is also a thought experiment that challenges two completely rational agents with a dilemma: cooperate with their partner for mutual reward, or betray their partner ("defect") for individual reward. The dilemma was originally framed by Merrill Flood and Melvin Dresher while working at RAND in 1950. Albert W. Tucker appropriated the game and formalized it by structuring the rewards in terms of prison sentences, naming it the "prisoner's dilemma".[1] William Poundstone, in his 1993 book Prisoner's Dilemma, writes the following version: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. The possible outcomes are:
A: If A and B each betray the other, each of them serves 5 years in prison.
B: If A betrays B but B remains silent, A will be set free and B will serve 10 years in prison.
C: If A remains silent but B betrays A, A will serve 10 years in prison and B will be set free.
D: If A and B both remain silent, both of them will serve 2 years in prison (on the lesser charge).
·en.wikipedia.org·
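A small illustration (mine, not from the article): encoding the sentence matrix above and checking that, whatever the other prisoner does, betraying strictly shortens one's own sentence, which is why mutual defection is the equilibrium despite mutual silence being better for both.

# Years in prison for (A's choice, B's choice), from the outcomes above.
SENTENCES = {
    ("betray", "betray"): (5, 5),
    ("betray", "silent"): (0, 10),
    ("silent", "betray"): (10, 0),
    ("silent", "silent"): (2, 2),
}

# Whatever B does, A serves fewer years by betraying: defection dominates.
for b_choice in ("betray", "silent"):
    betray_years = SENTENCES[("betray", b_choice)][0]
    silent_years = SENTENCES[("silent", b_choice)][0]
    print(b_choice, betray_years < silent_years)  # True, True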
Rapid eye movement sleep - Wikipedia
After the deprivation is complete, mild psychological disturbances, such as anxiety, irritability, hallucinations, and difficulty concentrating, may develop, and appetite may increase. There are also positive consequences of REM deprivation: some symptoms of depression are found to be suppressed by REM deprivation, though aggression may increase and eating behavior may get disrupted.
Sleep in general aids memory. REM sleep may favor the preservation of certain types of memories: specifically, procedural memory, spatial memory, and emotional memory. In rats, REM sleep increases following intensive learning, especially several hours after, and sometimes for multiple nights. Experimental REM sleep deprivation has sometimes inhibited memory consolidation, especially regarding complex processes (e.g., how to escape from an elaborate maze).[103] In humans, the best evidence for REM's improvement of memory pertains to learning of procedures—new ways of moving the body (such as trampoline jumping), and new techniques of problem solving. REM deprivation seemed to impair declarative (i.e., factual) memory only in more complex cases, such as memories of longer stories.[104] REM sleep apparently counteracts attempts to suppress certain thoughts.
After waking from REM sleep, the mind seems "hyperassociative"—more receptive to semantic priming effects. People awakened from REM have performed better on tasks like anagrams and creative problem solving.[73] Sleep aids the process by which creativity forms associative elements into new combinations that are useful or meet some requirement.[74] This occurs in REM sleep rather than in NREM sleep.
·en.wikipedia.org·
Rapid eye movement sleep - Wikipedia
Rapid eye movement sleep (REM sleep or REMS) is a unique phase of sleep in mammals and birds, characterized by random rapid movement of the eyes, accompanied by low muscle tone throughout the body, and the propensity of the sleeper to dream vividly. The REM phase is also known as paradoxical sleep (PS) and sometimes desynchronized sleep or dreamy sleep,[1] because of physiological similarities to waking states, including rapid, low-voltage desynchronized brain waves. Electrical and chemical activity regulating this phase seems to originate in the brain stem, and is characterized most notably by an abundance of the neurotransmitter acetylcholine, combined with a nearly complete absence of the monoamine neurotransmitters histamine, serotonin, and norepinephrine.[2] Experiences of REM sleep are not transferred to permanent memory due to the absence of norepinephrine.
·en.wikipedia.org·
A Gentle Introduction to Pooling Layers for Convolutional Neural Networks - MachineLearningMastery.com
Convolutional layers in a convolutional neural network summarize the presence of features in an input image. A problem with the output feature maps is that they are sensitive to the location of the features in the input. One approach to address this sensitivity is to downsample the feature maps. This has the effect of […]
Pooling layers provide an approach to downsampling feature maps by summarizing the presence of features in patches of the feature map. Two common pooling methods are average pooling and max pooling, which summarize the average presence of a feature and the most activated presence of a feature, respectively.
·machinelearningmastery.com·
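A minimal numpy sketch of the two pooling operations described above (my illustration, not the article's code): non-overlapping 2x2 max and average pooling with stride 2 on a small feature map.

import numpy as np

# Downsample a 2D feature map with non-overlapping 2x2 windows.
def pool2x2(fmap, reduce_fn):
    h, w = fmap.shape
    # Group pixels into 2x2 patches, then reduce each patch to one value.
    patches = fmap.reshape(h // 2, 2, w // 2, 2)
    return reduce_fn(patches, axis=(1, 3))

fmap = np.array([[1, 2, 0, 0],
                 [3, 4, 0, 1],
                 [0, 0, 5, 6],
                 [0, 2, 7, 8]], dtype=float)

print(pool2x2(fmap, np.max))   # [[4. 1.] [2. 8.]]
print(pool2x2(fmap, np.mean))  # [[2.5 0.25] [0.5 6.5]]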
Service level indicator - Wikipedia
In information technology, a service level indicator (SLI) is a measure of the service level provided by a service provider to a customer. SLIs form the basis of service level objectives (SLOs), which in turn form the basis of service level agreements (SLAs);[1] an SLI is thus also called an SLA metric.
·en.wikipedia.org·
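A concrete illustration (mine, with made-up numbers): a common availability SLI is the fraction of requests served successfully over a window, compared against an SLO target.

# Availability SLI: good events / total events, over some window.
good_requests = 999_832
total_requests = 1_000_000
sli = good_requests / total_requests  # 0.999832

slo_target = 0.999  # e.g. "99.9% of requests succeed"
print(f"SLI = {sli:.4%}, SLO met: {sli >= slo_target}")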
Service-level objective - Wikipedia
A service-level objective (SLO) is a key element of a service-level agreement (SLA) between a service provider and a customer. SLOs are agreed upon as a means of measuring the performance of the Service Provider and are outlined as a way of avoiding disputes between the two parties based on misunderstanding.
·en.wikipedia.org·
Theory of Basic Human Values - Wikipedia
The Theory of Basic Human Values recognizes ten universal values, which can be organized into four higher-order groups. Each of the ten universal values has a central goal that is the underlying motivator.[1][5]
Openness to change:
- Self-Direction: independent thought and action—choosing, creating, exploring.
- Stimulation: excitement, novelty, and challenge in life.
Self-enhancement:
- Hedonism: pleasure or sensuous gratification for oneself.
- Achievement: personal success through demonstrating competence according to social standards.
- Power: social status and prestige, control or dominance over people and resources.
Conservation:
- Security: safety, harmony, and stability of society, of relationships, and of self.
- Conformity: restraint of actions, inclinations, and impulses likely to upset or harm others and violate social expectations or norms.
- Tradition: respect, commitment, and acceptance of the customs and ideas that one's culture or religion provides.
Self-transcendence:
- Benevolence: preserving and enhancing the welfare of those with whom one is in frequent personal contact (the 'in-group').
- Universalism: understanding, appreciation, tolerance, and protection for the welfare of all people and for nature.
Other:
- Spirituality was considered as an additional, eleventh value, but it was found not to exist in all cultures.
·en.wikipedia.org·
Intrinsic value (ethics) - Wikipedia
In ethics, intrinsic value is a property of anything that is valuable on its own. Intrinsic value is in contrast to instrumental value (also known as extrinsic value), which is a property of anything that derives its value from a relation to another intrinsically valuable thing.[1] Intrinsic value is always something that an object has "in itself" or "for its own sake", and is an intrinsic property. An object with intrinsic value may be regarded as an end, or in Kantian terminology, as an end-in-itself.
·en.wikipedia.org·
GCI Framework – Complice Workshops
- One-time tasks (eg calling someone to set up a meeting)
- Gathering information (eg researching how to solve some aspect of your goal)
- Practicing a skill (eg practicing guitar for doing recordings)
- Reinforcing a habit (eg eating healthy meals instead of snacking)
- Working a pump (eg making sales calls for your business)
- Testing an assumption (eg doing market validation for your business)
- Deep work (working directly on some core output of the goal, eg writing for a book goal or programming for a software startup goal)
·workshops.complice.co·
Shadow (psychology) - Wikipedia
In analytical psychology, the shadow (also known as the ego-dystonic complex, repressed id, shadow aspect, or shadow archetype) is an unconscious aspect of the personality that does not correspond with the ego ideal, leading the ego to resist and project the shadow. In short, the shadow is the self's emotional blind spot, projected (as archetypes, or metaphorical sense-image complexes, personified within the collective unconscious), e.g. the trickster.
·en.wikipedia.org·
Self-concept - Wikipedia
In the psychology of self, one's self-concept (also called self-construction, self-identity, self-perspective or self-structure) is a collection of beliefs about oneself.[1][2] Generally, self-concept embodies the answer to the question "Who am I?".[3] Self-concept is distinguishable from self-awareness, which is the extent to which self-knowledge is defined, consistent, and currently applicable to one's attitudes and dispositions.[4] Self-concept also differs from self-esteem: self-concept is a cognitive or descriptive component of one's self (e.g. "I am a fast runner"), while self-esteem is evaluative and opinionated (e.g. "I feel good about being a fast runner"). Self-concept is made up of one's self-schemas, and interacts with self-esteem, self-knowledge, and the social self to form the self as a whole. It includes the past, present, and future selves, where future selves (or possible selves) represent individuals' ideas of what they might become, what they would like to become, or what they are afraid of becoming. Possible selves may function as incentives for certain behaviour.[3][5] The perception people have about their past or future selves relates to their perception of their current selves. The temporal self-appraisal theory[6] argues that people have a tendency to maintain a positive self-evaluation by distancing themselves from their negative self and paying more attention to their positive one. In addition, people have a tendency to perceive the past self less favourably[7] (e.g. "I'm better than I used to be") and the future self more positively[8] (e.g. "I will be better than I am now").
·en.wikipedia.org·
Stochastic variance reduction - Wikipedia
(Stochastic) variance reduction is an algorithmic approach to minimizing functions that can be decomposed into finite sums. By exploiting the finite-sum structure, variance reduction techniques are able to achieve convergence rates that are impossible to achieve with methods that treat the objective as an infinite sum, as in the classical stochastic approximation setting. Variance reduction approaches are widely used for training machine learning models such as logistic regression and support vector machines,[1] as these problems have finite-sum structure and uniform conditioning that make them ideal candidates for variance reduction.
·en.wikipedia.org·
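A hedged sketch of one variance reduction method, SVRG, on a toy finite-sum least-squares problem (my illustration, with illustrative step size and problem sizes): each stochastic gradient is corrected with a periodically refreshed full gradient, so the variance of the gradient estimate shrinks to zero as the iterates approach the minimizer.

import numpy as np

rng = np.random.default_rng(0)

# Finite-sum least squares: f(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad_i(w, i):
    # Gradient of the i-th summand.
    return X[i] * (X[i] @ w - y[i])

# SVRG outer loop: snapshot the iterate and its full gradient, then run
# inner steps with variance-corrected stochastic gradients.
w = np.zeros(d)
lr = 0.01
for epoch in range(50):
    w_snap = w.copy()
    full_grad = (X * (X @ w_snap - y)[:, None]).mean(axis=0)
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
        w -= lr * g

print(np.linalg.norm(w - w_true))  # shrinks toward 0 over the epochs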
Stochastic approximation - Wikipedia
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating extreme values of functions which cannot be computed directly, but only estimated via noisy observations.
·en.wikipedia.org·
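A minimal sketch of the classic Robbins-Monro scheme in this family (my example): find the root of g(x) = x - 2 when g can only be observed through noisy measurements, using decaying step sizes a_n = 1/n that average out the noise while still reaching the root.

import numpy as np

rng = np.random.default_rng(1)

# Noisy oracle for g(x) = x - 2, whose root is x* = 2.
def noisy_g(x):
    return (x - 2.0) + rng.normal(scale=0.5)

# Robbins-Monro iteration: x_{n+1} = x_n - a_n * g(x_n) with a_n = 1/n,
# which satisfies the standard conditions sum(a_n) = inf, sum(a_n^2) < inf.
x = 0.0
for n in range(1, 10_001):
    x -= (1.0 / n) * noisy_g(x)

print(x)  # close to 2.0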
Stochastic gradient descent - Wikipedia
Stochastic gradient descent (often abbreviated SGD) is an iterative method for optimizing an objective function with suitable smoothness properties (e.g. differentiable or subdifferentiable). It can be regarded as a stochastic approximation of gradient descent optimization, since it replaces the actual gradient (calculated from the entire data set) with an estimate thereof (calculated from a randomly selected subset of the data). Especially in high-dimensional optimization problems, this reduces the very high computational burden, achieving faster iterations in exchange for a lower convergence rate.
·en.wikipedia.org·
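A bare-bones illustration (mine, with made-up data and step size): SGD for least-squares linear regression, where each step uses the gradient on a single randomly drawn example instead of the full data set.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear data: y = X w_true + noise.
n, d = 1_000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)

# SGD on squared error: each step descends the gradient estimated
# from one randomly selected example rather than the whole data set.
w = np.zeros(d)
lr = 0.01
for step in range(20_000):
    i = rng.integers(n)
    grad = X[i] * (X[i] @ w - y[i])  # gradient of 0.5 * (x_i . w - y_i)^2
    w -= lr * grad

print(w)  # approaches w_true up to the noise level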