In statistics and applications of statistics, normalization can have a range of meanings. In the simplest cases, normalization of ratings means adjusting values measured on different scales to a notionally common scale, often prior to averaging. In more complicated cases, normalization may refer to more sophisticated adjustments where the intention is to bring the entire probability distributions of adjusted values into alignment. In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution. A different approach to normalization of probability distributions is quantile normalization, where the quantiles of the different measures are brought into alignment. In another usage in statistics, normalization refers to the creation of shifted and scaled versions of statistics, where the intention is that these normalized values allow the comparison of corresponding normalized values for different datasets in a way that eliminates the effects of certain gross influences, as in an anomaly time series. Some types of normalization involve only a rescaling, to arrive at values relative to some size variable. In terms of levels of measurement, such ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). In theoretical statistics, parametric normalization can often lead to pivotal quantities – functions whose sampling distribution does not depend on the parameters – and to ancillary statistics – pivotal quantities that can be computed from observations, without knowing parameters. ## History ### Standard score (Z-score) The concept of normalization emerged alongside the study of the normal distribution by Abraham de Moivre, Pierre-Simon Laplace, and Carl Friedrich Gauss in the 18th and 19th centuries. As the name “standard” refers to the particular normal distribution with expectation zero and standard deviation one – the standard normal distribution – normalization, in this case “standardization”, came to refer to the rescaling of any distribution or data set to have mean zero and standard deviation one. While the study of the normal distribution structured the process of standardization, the result of this process, known as the Z-score – the difference between the sample value and the population mean divided by the population standard deviation, which measures the number of standard deviations of a value from its population mean – was not formalized and popularized until Ronald Fisher and Karl Pearson elaborated the concept as part of the broader framework of statistical inference and hypothesis testing in the early 20th century. ### Student’s t-Statistic William Sealy Gosset initiated the adjustment of the normal distribution and the standard score for small sample sizes. Educated in chemistry and mathematics at Winchester and Oxford, Gosset was employed by the Guinness Brewery, then the largest brewer in Ireland, and was tasked with precise quality control. It was through small-sample experiments that Gosset discovered that the distribution of means computed from small samples deviated slightly from the distribution of means computed from large samples – the normal distribution – and appeared “taller and narrower” in comparison.
This finding was later published in a Guinness internal report titled The application of the “Law of Error” to the work of the brewery and was sent to Karl Pearson for further discussion, which later yielded a formal publication, The probable error of a mean, in 1908. Under Guinness Brewery’s privacy restrictions, Gosset published the paper under the pseudonym “Student”. Gosset’s work was later refined by Ronald Fisher into the form used today and was popularized, alongside the names “Student’s t-distribution” – referring to the adjusted normal distribution Gosset proposed – and “Student’s t-statistic” – referring to the test statistic measuring the departure of the estimated value of a parameter from its hypothesized value divided by its standard error – through Fisher’s publication Applications of “Student’s” distribution. ### Feature Scaling The rise of computers and multivariate statistics in the mid-20th century necessitated normalization to process data with different units, giving rise to feature scaling – a method used to rescale data to a fixed range – such as min-max scaling and robust scaling. This modern normalization process, especially targeting large-scale data, became more formalized in fields including machine learning, pattern recognition, and neural networks in the late 20th century. ### Batch Normalization Batch normalization was proposed by Sergey Ioffe and Christian Szegedy in 2015 to enhance the efficiency of training in neural networks. ## Examples There are different types of normalizations in statistics – nondimensional ratios of errors, residuals, means and standard deviations, which are hence scale invariant – some of which may be summarized as follows. Note that in terms of levels of measurement, these ratios only make sense for ratio measurements (where ratios of measurements are meaningful), not interval measurements (where only distances are meaningful, but not ratios). See also Category:Statistical ratios. - Standard score, $$ \frac{X - \mu}{\sigma} $$: normalizing errors when population parameters are known; works well for populations that are normally distributed. - Student's t-statistic, $$ \frac{\hat\beta - \beta_0}{\operatorname{s.e.}(\hat\beta)} $$: the departure of the estimated value of a parameter from its hypothesized value, normalized by its standard error. - Studentized residual, $$ \frac{\hat\varepsilon_i}{\hat\sigma_i} $$: normalizing residuals when parameters are estimated, particularly across different data points in regression analysis. - Standardized moment, $$ \frac{\mu_k}{\sigma^k} $$: normalizing moments, using the standard deviation as a measure of scale. - Coefficient of variation, $$ \frac{\sigma}{\mu} $$: normalizing dispersion, using the mean as a measure of scale, particularly for positive distributions such as the exponential distribution and Poisson distribution. - Min-max feature scaling, $$ X' = \frac{X - X_{\min}}{X_{\max} - X_{\min}} $$: feature scaling used to bring all values into the range [0,1]; this is also called unity-based normalization. It can be generalized to restrict the range of values in the dataset to arbitrary points $$ a $$ and $$ b $$, using for example $$ X' = a + \frac{(X - X_{\min})(b - a)}{X_{\max} - X_{\min}} $$. Note that some other ratios, such as the variance-to-mean ratio $$ \left(\frac{\sigma^2}{\mu}\right) $$, are also used for normalization, but are not nondimensional: the units do not cancel, so the ratio has units and is not scale-invariant. ## Other types Other non-dimensional normalizations that can be used with no assumptions on the distribution include: - Assignment of percentiles. This is common on standardized tests. See also quantile normalization. - Normalization by adding and/or multiplying by constants so values fall between 0 and 1.
This is used for probability density functions, with applications in fields such as quantum mechanics in assigning probabilities to .
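To make the rescalings above concrete, the following minimal sketch (assuming NumPy is installed; the array `x` and the helper names are purely illustrative) computes the standard score and min-max feature scaling of a small data set.

```python
import numpy as np

def z_score(x):
    """Standard score: subtract the mean and divide by the standard deviation."""
    return (x - x.mean()) / x.std()

def min_max(x, a=0.0, b=1.0):
    """Min-max feature scaling: map the smallest value to a and the largest to b."""
    return a + (x - x.min()) * (b - a) / (x.max() - x.min())

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(z_score(x))        # mean 0, standard deviation 1
print(min_max(x))        # values rescaled into [0, 1]
```

Both transformations are nondimensional in the sense discussed above: the units of the original measurements cancel out.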
https://en.wikipedia.org/wiki/Normalization_%28statistics%29
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in $$ \hat{\mathbf{v}} $$ (pronounced "v-hat"). The term normalized vector is sometimes used as a synonym for unit vector. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., $$ \mathbf{\hat{u}} = \frac{\mathbf{u}}{\|\mathbf{u}\|}=\left(\frac{u_1}{\|\mathbf{u}\|}, \frac{u_2}{\|\mathbf{u}\|}, \dots , \frac{u_n}{\|\mathbf{u}\|}\right) $$ where ‖u‖ is the norm (or length) of u and $$ \mathbf{u} = (u_1, u_2, \dots, u_n) $$ . The proof is the following: $$ \|\mathbf{\hat{u}}\|=\sqrt{\left(\frac{u_1}{\sqrt{u_1^2+\dots+u_n^2}}\right)^2+\dots+\left(\frac{u_n}{\sqrt{u_1^2+\dots+u_n^2}}\right)^2}=\sqrt{\frac{u_1^2+\dots+u_n^2}{u_1^2+\dots+u_n^2}}=\sqrt{1}=1 $$ A unit vector is often used to represent directions, such as normal directions. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors. ## Orthogonal coordinates ### Cartesian coordinates Unit vectors may be used to represent the axes of a Cartesian coordinate system. For instance, the standard unit vectors in the direction of the x, y, and z axes of a three dimensional Cartesian coordinate system are $$ \mathbf{\hat{x}} = \begin{bmatrix}1\\0\\0\end{bmatrix}, \,\, \mathbf{\hat{y}} = \begin{bmatrix}0\\1\\0\end{bmatrix}, \,\, \mathbf{\hat{z}} = \begin{bmatrix}0\\0\\1\end{bmatrix} $$ They form a set of mutually orthogonal unit vectors, typically referred to as a standard basis in linear algebra. They are often denoted using common vector notation (e.g., x or $$ \vec{x} $$ ) rather than standard unit vector notation (e.g., x̂). In most contexts it can be assumed that x, y, and z (or $$ \vec{x}, $$ $$ \vec{y}, $$ and $$ \vec{z} $$ ) are versors of a 3-D Cartesian coordinate system. The notations (î, ĵ, k̂), (x̂1, x̂2, x̂3), (êx, êy, êz), or (ê1, ê2, ê3), with or without hat, are also used, particularly in contexts where i, j, k might lead to confusion with another quantity (for instance with index symbols such as i, j, k, which are used to identify an element of a set or array or sequence of variables). When a unit vector in space is expressed in Cartesian notation as a linear combination of x, y, z, its three scalar components can be referred to as direction cosines. The value of each component is equal to the cosine of the angle formed by the unit vector with the respective basis vector. This is one of the methods used to describe the orientation (angular position) of a straight line, segment of straight line, oriented axis, or segment of oriented axis (vector).
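A normalization routine follows directly from the definition of û above; the following is a minimal sketch assuming NumPy (the function name `normalize` is illustrative).

```python
import numpy as np

def normalize(u):
    """Return u / ||u||, the unit vector in the direction of u."""
    norm = np.linalg.norm(u)
    if norm == 0:
        raise ValueError("the zero vector has no direction and cannot be normalized")
    return u / norm

u = np.array([3.0, 4.0, 0.0])
u_hat = normalize(u)
print(u_hat)                    # [0.6 0.8 0. ]
print(np.linalg.norm(u_hat))    # 1.0, consistent with the proof above
```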
### Cylindrical coordinates The three orthogonal unit vectors appropriate to cylindrical symmetry are: - $$ \boldsymbol{\hat{\rho}} $$ (also designated $$ \mathbf{\hat{e}} $$ or $$ \boldsymbol{\hat s} $$ ), representing the direction along which the distance of the point from the axis of symmetry is measured; - $$ \boldsymbol{\hat \varphi} $$ , representing the direction of the motion that would be observed if the point were rotating counterclockwise about the symmetry axis; - $$ \mathbf{\hat{z}} $$ , representing the direction of the symmetry axis; They are related to the Cartesian basis $$ \hat{x} $$ , $$ \hat{y} $$ , $$ \hat{z} $$ by: $$ \boldsymbol{\hat{\rho}} = \cos(\varphi)\mathbf{\hat{x}} + \sin(\varphi)\mathbf{\hat{y}} $$ $$ \boldsymbol{\hat \varphi} = -\sin(\varphi) \mathbf{\hat{x}} + \cos(\varphi) \mathbf{\hat{y}} $$ $$ \mathbf{\hat{z}} = \mathbf{\hat{z}}. $$ The vectors $$ \boldsymbol{\hat{\rho}} $$ and $$ \boldsymbol{\hat \varphi} $$ are functions of $$ \varphi, $$ and are not constant in direction. When differentiating or integrating in cylindrical coordinates, these unit vectors themselves must also be operated on. The derivatives with respect to $$ \varphi $$ are: $$ \frac{\partial \boldsymbol{\hat{\rho}}} {\partial \varphi} = -\sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} = \boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat \varphi}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\boldsymbol{\hat{\rho}} $$ $$ \frac{\partial \mathbf{\hat{z}}} {\partial \varphi} = \mathbf{0}. $$ ### Spherical coordinates The unit vectors appropriate to spherical symmetry are: $$ \mathbf{\hat{r}} $$ , the direction in which the radial distance from the origin increases; $$ \boldsymbol{\hat{\varphi}} $$ , the direction in which the angle in the x-y plane counterclockwise from the positive x-axis is increasing; and $$ \boldsymbol{\hat \theta} $$ , the direction in which the angle from the positive z axis is increasing. To minimize redundancy of representations, the polar angle $$ \theta $$ is usually taken to lie between zero and 180 degrees. It is especially important to note the context of any ordered triplet written in spherical coordinates, as the roles of $$ \boldsymbol{\hat \varphi} $$ and $$ \boldsymbol{\hat \theta} $$ are often reversed. Here, the American "physics" convention is used. This leaves the azimuthal angle $$ \varphi $$ defined the same as in cylindrical coordinates. The Cartesian relations are: $$ \mathbf{\hat{r}} = \sin \theta \cos \varphi\mathbf{\hat{x}} + \sin \theta \sin \varphi\mathbf{\hat{y}} + \cos \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \theta} = \cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}} $$ $$ \boldsymbol{\hat \varphi} = - \sin \varphi\mathbf{\hat{x}} + \cos \varphi\mathbf{\hat{y}} $$ The spherical unit vectors depend on both $$ \varphi $$ and $$ \theta $$ , and hence there are 5 possible non-zero derivatives. For a more complete description, see Jacobian matrix and determinant. 
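The Cartesian relations above can be checked numerically. The sketch below (assuming NumPy; the helper `spherical_basis` is illustrative, not a standard library function) builds the spherical unit vectors for one choice of angles, confirms that they are orthonormal, and verifies one of the derivatives listed next by finite differences.

```python
import numpy as np

def spherical_basis(theta, phi):
    """Spherical unit vectors (physics convention) expressed in the Cartesian basis."""
    r_hat = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    theta_hat = np.array([np.cos(theta) * np.cos(phi), np.cos(theta) * np.sin(phi), -np.sin(theta)])
    phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return r_hat, theta_hat, phi_hat

theta, phi = 0.8, 1.9
basis = spherical_basis(theta, phi)
print(np.round([[np.dot(a, b) for b in basis] for a in basis], 12))  # identity matrix: orthonormal

# Finite-difference check of d(r_hat)/d(theta) = theta_hat, one of the derivatives listed below.
h = 1e-6
d_r = (spherical_basis(theta + h, phi)[0] - spherical_basis(theta - h, phi)[0]) / (2 * h)
print(np.allclose(d_r, basis[1]))  # True
```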
The non-zero derivatives are: $$ \frac{\partial \mathbf{\hat{r}}} {\partial \varphi} = -\sin \theta \sin \varphi\mathbf{\hat{x}} + \sin \theta \cos \varphi\mathbf{\hat{y}} = \sin \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \mathbf{\hat{r}}} {\partial \theta} =\cos \theta \cos \varphi\mathbf{\hat{x}} + \cos \theta \sin \varphi\mathbf{\hat{y}} - \sin \theta\mathbf{\hat{z}}= \boldsymbol{\hat \theta} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \varphi} =-\cos \theta \sin \varphi\mathbf{\hat{x}} + \cos \theta \cos \varphi\mathbf{\hat{y}} = \cos \theta\boldsymbol{\hat \varphi} $$ $$ \frac{\partial \boldsymbol{\hat{\theta}}} {\partial \theta} = -\sin \theta \cos \varphi\mathbf{\hat{x}} - \sin \theta \sin \varphi\mathbf{\hat{y}} - \cos \theta\mathbf{\hat{z}} = -\mathbf{\hat{r}} $$ $$ \frac{\partial \boldsymbol{\hat{\varphi}}} {\partial \varphi} = -\cos \varphi\mathbf{\hat{x}} - \sin \varphi\mathbf{\hat{y}} = -\sin \theta\mathbf{\hat{r}} -\cos \theta\boldsymbol{\hat{\theta}} $$ ### General unit vectors Common themes of unit vectors occur throughout physics and geometry: - Tangent vector to a curve/flux line; a normal vector to the plane containing and defined by the radial position vector and the angular tangential direction of rotation is necessary so that the vector equations of angular motion hold. - Normal to a surface tangent plane (the plane containing the radial position component and the angular tangential component), in terms of polar coordinates. - Binormal vector to the tangent and normal. - Parallel to some axis/line: one unit vector aligned parallel to a principal direction, with a perpendicular unit vector in any radial direction relative to the principal line. - Perpendicular to some axis/line in some radial direction. - Possible angular deviation relative to some axis/line: a unit vector at an acute deviation angle φ (including 0 or π/2 rad) relative to a principal direction. ## Curvilinear coordinates In general, a coordinate system may be uniquely specified using a number of linearly independent unit vectors $$ \mathbf{\hat{e}}_n $$ (the actual number being equal to the degrees of freedom of the space). For ordinary 3-space, these vectors may be denoted $$ \mathbf{\hat{e}}_1, \mathbf{\hat{e}}_2, \mathbf{\hat{e}}_3 $$ . It is nearly always convenient to define the system to be orthonormal and right-handed: $$ \mathbf{\hat{e}}_i \cdot \mathbf{\hat{e}}_j = \delta_{ij} $$ $$ \mathbf{\hat{e}}_i \cdot (\mathbf{\hat{e}}_j \times \mathbf{\hat{e}}_k) = \varepsilon_{ijk} $$ where $$ \delta_{ij} $$ is the Kronecker delta (which is 1 for i = j, and 0 otherwise) and $$ \varepsilon_{ijk} $$ is the Levi-Civita symbol (which is 1 for permutations ordered as ijk, and −1 for permutations ordered as kji). ## Right versor A unit vector in $$ \mathbb{R}^3 $$ was called a right versor by W. R. Hamilton, as he developed his quaternions $$ \mathbb{H} \subset \mathbb{R}^4 $$ . In fact, he was the originator of the term vector, as every quaternion $$ q = s + v $$ has a scalar part s and a vector part v. If v is a unit vector in $$ \mathbb{R}^3 $$ , then the square of v in quaternions is −1. Thus by Euler's formula, $$ \exp (\theta v) = \cos \theta + v \sin \theta $$ is a versor in the 3-sphere. When θ is a right angle, the versor is a right versor: its scalar part is zero and its vector part v is a unit vector in $$ \mathbb{R}^3 $$ .
Thus the right versors extend the notion of imaginary units found in the complex plane, where the right versors now range over the 2-sphere $$ \mathbb{S}^2 \subset \mathbb{R}^3 \subset \mathbb{H} $$ rather than the pair } in the complex plane. By extension, a right quaternion is a real multiple of a right versor.
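The algebra of right versors can be illustrated with a small quaternion product. The sketch below (plain NumPy, with quaternions represented as (scalar, 3-vector) pairs, a representation chosen here purely for illustration) verifies that a unit vector squares to −1 and that exp(θv) lies on the 3-sphere.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions given as (scalar, 3-vector) pairs."""
    s1, v1 = p
    s2, v2 = q
    return (s1 * s2 - np.dot(v1, v2), s1 * v2 + s2 * v1 + np.cross(v1, v2))

v = np.array([0.0, 0.6, 0.8])            # a unit vector in R^3, i.e. a right versor
print(qmul((0.0, v), (0.0, v)))          # (-1.0, [0, 0, 0]): v squared is -1

theta = 0.5
s, w = np.cos(theta), np.sin(theta) * v  # exp(theta*v) = cos(theta) + v*sin(theta)
print(s**2 + np.dot(w, w))               # 1.0: the versor lies on the 3-sphere
```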
https://en.wikipedia.org/wiki/Unit_vector
In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics is a collection of closely related formulations of classical mechanics. Analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Analytical mechanics was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Newtonian mechanics considers vector quantities of motion, particularly accelerations, momenta, forces, of the constituents of the system; it can also be called vectorial mechanics. A scalar is a quantity, whereas a vector is represented by quantity and direction. The results of these two different approaches are equivalent, but the analytical mechanics approach has many advantages for complex problems. Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up, thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics. Two dominant branches of analytical mechanics are ## Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and ## Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, ## Routhian mechanics , and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries. Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory. Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory. The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics. ## Motivation The goal of mechanical theory is to solve mechanical problems, such as arise in physics and engineering. 
Starting from a physical system—such as a mechanism or a star system—a mathematical model is developed in the form of a differential equation. The model can be solved numerically or analytically to determine the motion of the system. Newton's vectorial approach to mechanics describes motion with the help of vector quantities such as force, velocity, acceleration. These quantities characterise the motion of a body idealised as a "mass point" or a "particle" understood as a single point to which a mass is attached. Newton's method has been successfully applied to a wide range of physical problems, including the motion of a particle in Earth's gravitational field and the motion of planets around the Sun. In this approach, Newton's laws describe the motion by a differential equation and then the problem is reduced to the solving of that equation. When a mechanical system contains many particles, however (such as a complex mechanism or a fluid), Newton's approach is difficult to apply. Using a Newtonian approach is possible, under proper precautions, namely isolating each single particle from the others, and determining all the forces acting on it. Such analysis is cumbersome even in relatively simple systems. Newton thought that his third law "action equals reaction" would take care of all complications. This is false even for such simple system as rotations of a solid body. In more complicated systems, the vectorial approach cannot give an adequate description. The analytical approach simplifies problems by treating mechanical systems as ensembles of particles that interact with each other, rather considering each particle as an isolated unit. In the vectorial approach, forces must be determined individually for each particle, whereas in the analytical approach it is enough to know one single function which contains implicitly all the forces acting on and in the system. Such simplification is often done using certain kinematic conditions which are stated a priori. However, the analytical treatment does not require the knowledge of these forces and takes these kinematic conditions for granted. Still, deriving the equations of motion of a complicated mechanical system requires a unifying basis from which they follow. This is provided by various variational principles: behind each set of equations there is a principle that expresses the meaning of the entire set. Given a fundamental and universal quantity called action, the principle that this action be stationary under small variation of some other mechanical quantity generates the required set of differential equations. The statement of the principle does not require any special coordinate system, and all results are expressed in generalized coordinates. This means that the analytical equations of motion do not change upon a coordinate transformation, an invariance property that is lacking in the vectorial equations of motion. It is not altogether clear what is meant by 'solving' a set of differential equations. A problem is regarded as solved when the particles coordinates at time t are expressed as simple functions of t and of parameters defining the initial positions and velocities. However, 'simple function' is not a well-defined concept: nowadays, a function f(t) is not regarded as a formal expression in t (elementary function) as in the time of Newton but most generally as a quantity determined by t, and it is not possible to draw a sharp line between 'simple' and 'not simple' functions. 
If one speaks merely of 'functions', then every mechanical problem is solved as soon as it has been well stated in differential equations, because given the initial conditions and t determine the coordinates at t. This is a fact especially at present with the modern methods of computer modelling which provide arithmetical solutions to mechanical problems to any desired degree of accuracy, the differential equations being replaced by difference equations. Still, though lacking precise definitions, it is obvious that the two-body problem has a simple solution, whereas the three-body problem has not. The two-body problem is solved by formulas involving parameters; their values can be changed to study the class of all solutions, that is, the mathematical structure of the problem. Moreover, an accurate mental or drawn picture can be made for the motion of two bodies, and it can be as real and accurate as the real bodies moving and interacting. In the three-body problem, parameters can also be assigned specific values; however, the solution at these assigned values or a collection of such solutions does not reveal the mathematical structure of the problem. As in many other problems, the mathematical structure can be elucidated only by examining the differential equations themselves. Analytical mechanics aims at even more: not at understanding the mathematical structure of a single mechanical problem, but that of a class of problems so wide that they encompass most of mechanics. It concentrates on systems to which Lagrangian or Hamiltonian equations of motion are applicable and that include a very wide range of problems indeed. Development of analytical mechanics has two objectives: (i) increase the range of solvable problems by developing standard techniques with a wide range of applicability, and (ii) understand the mathematical structure of mechanics. In the long run, however, (ii) can help (i) more than a concentration on specific problems for which methods have already been designed. ## Intrinsic motion ### Generalized coordinates and constraints In Newtonian mechanics, one customarily uses all three Cartesian coordinates, or other 3D coordinate system, to refer to a body's position during its motion. In physical systems, however, some structure or other system usually constrains the body's motion from taking certain directions and pathways. So a full set of Cartesian coordinates is often unneeded, as the constraints determine the evolving relations among the coordinates, which relations can be modeled by equations corresponding to the constraints. In the Lagrangian and Hamiltonian formalisms, the constraints are incorporated into the motion's geometry, reducing the number of coordinates to the minimum needed to model the motion. These are known as generalized coordinates, denoted qi (i = 1, 2, 3...). ### Difference between curvillinear and generalized coordinates Generalized coordinates incorporate constraints on the system. There is one generalized coordinate qi for each degree of freedom (for convenience labelled by an index i = 1, 2...N), i.e. each way the system can change its configuration; as curvilinear lengths or angles of rotation. Generalized coordinates are not the same as curvilinear coordinates. 
The number of curvilinear coordinates equals the dimension of the position space in question (usually 3 for 3d space), while the number of generalized coordinates is not necessarily equal to this dimension; constraints can reduce the number of degrees of freedom (hence the number of generalized coordinates required to define the configuration of the system), following the general rule: For a system with N degrees of freedom, the generalized coordinates can be collected into an N-tuple: $$ \mathbf{q} = (q_1, q_2, \dots, q_N) $$ and the time derivative (here denoted by an overdot) of this tuple give the generalized velocities: $$ \frac{d\mathbf{q}}{dt} = \left(\frac{dq_1}{dt}, \frac{dq_2}{dt}, \dots, \frac{dq_N}{dt}\right) \equiv \mathbf{\dot{q}} = (\dot{q}_1, \dot{q}_2, \dots, \dot{q}_N) . $$ ### D'Alembert's principle of virtual work D'Alembert's principle states that infinitesimal virtual work done by a force across reversible displacements is zero, which is the work done by a force consistent with ideal constraints of the system. The idea of a constraint is useful – since this limits what the system can do, and can provide steps to solving for the motion of the system. The equation for D'Alembert's principle is: $$ \delta W = \boldsymbol{\mathcal{Q}} \cdot \delta\mathbf{q} = 0 \,, $$ where $$ \boldsymbol\mathcal{Q} = (\mathcal{Q}_1, \mathcal{Q}_2, \dots, \mathcal{Q}_N) $$ are the generalized forces (script Q instead of ordinary Q is used here to prevent conflict with canonical transformations below) and are the generalized coordinates. This leads to the generalized form of Newton's laws in the language of analytical mechanics: $$ \boldsymbol\mathcal{Q} = \frac{d}{dt} \left ( \frac {\partial T}{\partial \mathbf{\dot{q}}} \right ) - \frac {\partial T}{\partial \mathbf{q}}\,, $$ where T is the total kinetic energy of the system, and the notation $$ \frac {\partial}{\partial \mathbf{q}} = \left(\frac{\partial }{\partial q_1}, \frac{\partial }{\partial q_2}, \dots, \frac{\partial }{\partial q_N}\right) $$ is a useful shorthand (see matrix calculus for this notation). ### Constraints If the curvilinear coordinate system is defined by the standard position vector , and if the position vector can be written in terms of the generalized coordinates and time in the form: $$ \mathbf{r} = \mathbf{r}(\mathbf{q}(t),t) $$ and this relation holds for all times , then are called holonomic constraints. Vector is explicitly dependent on in cases when the constraints vary with time, not just because of . For time-independent situations, the constraints are also called scleronomic, for time-dependent cases they are called rheonomic. Lagrangian mechanics The introduction of generalized coordinates and the fundamental Lagrangian function: $$ L(\mathbf{q},\mathbf{\dot{q}},t) = T(\mathbf{q},\mathbf{\dot{q}},t) - V(\mathbf{q},\mathbf{\dot{q}},t) $$ where T is the total kinetic energy and V is the total potential energy of the entire system, then either following the calculus of variations or using the above formula – lead to the Euler–Lagrange equations; $$ \frac{d}{dt}\left(\frac{\partial L}{\partial \mathbf{\dot{q}}}\right) = \frac{\partial L}{\partial \mathbf{q}} \,, $$ which are a set of N second-order ordinary differential equations, one for each qi(t). This formulation identifies the actual path followed by the motion as a selection of the path over which the time integral of kinetic energy is least, assuming the total energy to be fixed, and imposing no conditions on the time of transit. 
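As a concrete illustration of the Euler–Lagrange equations, consider a simple pendulum with the constraint already absorbed into a single generalized coordinate q = θ. The sketch below (parameter values and step size are illustrative) states the Lagrangian, derives the equation of motion, and integrates it numerically.

```python
import numpy as np

# Simple pendulum of length l in gravity g; generalized coordinate q = angle from the vertical.
# L = T - V = 0.5*m*l**2*q_dot**2 + m*g*l*cos(q). The Euler-Lagrange equation
# d/dt(dL/dq_dot) - dL/dq = 0 gives m*l**2*q_ddot = -m*g*l*sin(q), i.e. q_ddot = -(g/l)*sin(q).
g, l = 9.81, 1.0

def q_ddot(q):
    return -(g / l) * np.sin(q)

q, q_dot, dt = 0.5, 0.0, 1e-3
for _ in range(2000):                                  # two seconds of motion, explicit Euler steps
    q, q_dot = q + dt * q_dot, q_dot + dt * q_ddot(q)
print(q, q_dot)
```

Note that the mass cancels from the equation of motion, which is why it never appears in the code.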
The Lagrangian formulation uses the configuration space of the system, the set of all possible generalized coordinates: $$ \mathcal{C} = \{ \mathbf{q} \in \mathbb{R}^N \}\,, $$ where $$ \mathbb{R}^N $$ is N-dimensional real space (see also set-builder notation). The particular solution to the Euler–Lagrange equations is called a (configuration) path or trajectory, i.e. one particular q(t) subject to the required initial conditions. The general solutions form a set of possible configurations as functions of time: $$ \{ \mathbf{q}(t) \in \mathbb{R}^N \,:\,t\ge 0,t\in \mathbb{R}\}\subseteq\mathcal{C}\,, $$ The configuration space can be defined more generally, and indeed more deeply, in terms of topological manifolds and the tangent bundle. Hamiltonian mechanics The Legendre transformation of the Lagrangian replaces the generalized coordinates and velocities (q, q̇) with (q, p); the generalized coordinates and the generalized momenta conjugate to the generalized coordinates: $$ \mathbf{p} = \frac{\partial L}{\partial \mathbf{\dot{q}}} = \left(\frac{\partial L}{\partial \dot{q}_1},\frac{\partial L}{\partial \dot{q}_2},\cdots \frac{\partial L}{\partial \dot{q}_N}\right) = (p_1, p_2\cdots p_N)\,, $$ and introduces the Hamiltonian (which is in terms of generalized coordinates and momenta): $$ H(\mathbf{q},\mathbf{p},t) = \mathbf{p}\cdot\mathbf{\dot{q}} - L(\mathbf{q},\mathbf{\dot{q}},t) $$ where $$ \cdot $$ denotes the dot product, also leading to Hamilton's equations: $$ \mathbf{\dot{p}} = - \frac{\partial H}{\partial \mathbf{q}}\,,\quad \mathbf{\dot{q}} = + \frac{\partial H}{\partial \mathbf{p}} \,, $$ which are now a set of 2N first-order ordinary differential equations, one for each qi(t) and pi(t). Another result from the Legendre transformation relates the time derivatives of the Lagrangian and Hamiltonian: $$ \frac{dH}{dt}=-\frac{\partial L}{\partial t}\,, $$ which is often considered one of Hamilton's equations of motion additionally to the others. The generalized momenta can be written in terms of the generalized forces in the same way as Newton's second law: $$ \mathbf{\dot{p}} = \boldsymbol{\mathcal{Q}}\,. $$ Analogous to the configuration space, the set of all momenta is the generalized momentum space: $$ \mathcal{M} = \{ \mathbf{p}\in\mathbb{R}^N \}\,. $$ ("Momentum space" also refers to "k-space"; the set of all wave vectors (given by De Broglie relations) as used in quantum mechanics and theory of waves) The set of all positions and momenta form the phase space: $$ \mathcal{P} = \mathcal{C}\times\mathcal{M} = \{ (\mathbf{q},\mathbf{p})\in\mathbb{R}^{2N} \} \,, $$ that is, the Cartesian product of the configuration space and generalized momentum space. A particular solution to Hamilton's equations is called a phase path, a particular curve (q(t),p(t)) subject to the required initial conditions. The set of all phase paths, the general solution to the differential equations, is the phase portrait: $$ \{ (\mathbf{q}(t),\mathbf{p}(t))\in\mathbb{R}^{2N}\,:\,t\ge0, t\in\mathbb{R} \} \subseteq \mathcal{P}\,, $$ ### The Poisson bracket All dynamical variables can be derived from position q, momentum p, and time t, and written as a function of these: A = A(q, p, t). 
If A(q, p, t) and B(q, p, t) are two scalar valued dynamical variables, the Poisson bracket is defined by the generalized coordinates and momenta: $$ \begin{align} \{A,B\} \equiv \{A,B\}_{\mathbf{q},\mathbf{p}} & = \frac{\partial A}{\partial \mathbf{q}}\cdot\frac{\partial B}{\partial \mathbf{p}} - \frac{\partial A}{\partial \mathbf{p}}\cdot\frac{\partial B}{\partial \mathbf{q}}\\ & \equiv \sum_k \frac{\partial A}{\partial q_k}\frac{\partial B}{\partial p_k} - \frac{\partial A}{\partial p_k}\frac{\partial B}{\partial q_k}\,, \end{align} $$ Calculating the total derivative of one of these, say A, and substituting Hamilton's equations into the result leads to the time evolution of A: $$ \frac{dA}{dt} = \{A,H\} + \frac{\partial A}{\partial t}\,. $$ This equation in A is closely related to the equation of motion in the Heisenberg picture of quantum mechanics, in which classical dynamical variables become quantum operators (indicated by hats (^)), and the Poisson bracket is replaced by the commutator of operators via Dirac's canonical quantization: $$ \{A,B\} \rightarrow \frac{1}{i\hbar}[\hat{A},\hat{B}]\,. $$ ## Properties of the Lagrangian and the Hamiltonian Following are overlapping properties between the Lagrangian and Hamiltonian functions.Classical Mechanics, T.W.B. Kibble, European Physics Series, McGraw-Hill (UK), 1973, - All the individual generalized coordinates qi(t), velocities q̇i(t) and momenta pi(t) for every degree of freedom are mutually independent. Explicit time-dependence of a function means the function actually includes time t as a variable in addition to the q(t), p(t), not simply as a parameter through q(t) and p(t), which would mean explicit time-independence. - The Lagrangian is invariant under addition of the total time derivative of any function of q and t, that is: $$ L' = L +\frac{d}{dt}F(\mathbf{q},t) \,, $$ so each Lagrangian L and L describe exactly the same motion. In other words, the Lagrangian of a system is not unique. - Analogously, the Hamiltonian is invariant under addition of the partial time derivative of any function of q, p and t, that is: (K is a frequently used letter in this case). This property is used in canonical transformations (see below). - If the Lagrangian is independent of some generalized coordinates, then the generalized momenta conjugate to those coordinates are constants of the motion, i.e. are conserved, this immediately follows from Lagrange's equations: Such coordinates are "cyclic" or "ignorable". It can be shown that the Hamiltonian is also cyclic in exactly the same generalized coordinates. - If the Lagrangian is time-independent the Hamiltonian is also time-independent (i.e. both are constant in time). - If the kinetic energy is a homogeneous function of degree 2 of the generalized velocities, and the Lagrangian is explicitly time-independent, then: where λ is a constant, then the Hamiltonian will be the total conserved energy, equal to the total kinetic and potential energies of the system: This is the basis for the Schrödinger equation, inserting quantum operators directly obtains it. ## Principle of least action Action is another quantity in analytical mechanics defined as a functional of the Lagrangian: A general way to find the equations of motion from the action is the principle of least action:Encyclopaedia of Physics (2nd Edition), R.G. Lerner, G.L. Trigg, VHC publishers, 1991, ISBN (Verlagsgesellschaft) 3-527-26954-1, ISBN (VHC Inc.) 0-89573-752-3 where the departure t1 and arrival t2 times are fixed. 
The term "path" or "trajectory" refers to the time evolution of the system as a path through configuration space , in other words q(t) tracing out a path in . The path for which action is least is the path taken by the system. From this principle, all equations of motion in classical mechanics can be derived. This approach can be extended to fields rather than a system of particles (see below), and underlies the path integral formulation of quantum mechanics,Quantum Mechanics, E. Abers, Pearson Ed., Addison Wesley, Prentice Hall Inc, 2004, Quantum Field Theory, D. McMahon, Mc Graw Hill (US), 2008, and is used for calculating geodesic motion in general relativity.Relativity, Gravitation, and Cosmology, R.J.A. Lambourne, Open University, Cambridge University Press, 2010, ## Hamiltonian-Jacobi mechanics Canonical transformations The invariance of the Hamiltonian (under addition of the partial time derivative of an arbitrary function of p, q, and t) allows the Hamiltonian in one set of coordinates q and momenta p to be transformed into a new set Q = Q(q, p, t) and P = P(q, p, t), in four possible ways: With the restriction on P and Q such that the transformed Hamiltonian system is: the above transformations are called canonical transformations, each function Gn is called a generating function of the "nth kind" or "type-n". The transformation of coordinates and momenta can allow simplification for solving Hamilton's equations for a given problem. The choice of Q and P is completely arbitrary, but not every choice leads to a canonical transformation. One simple criterion for a transformation q → Q and p → P to be canonical is the Poisson bracket be unity, for all i = 1, 2,...N. If this does not hold then the transformation is not canonical. The Hamilton–Jacobi equation By setting the canonically transformed Hamiltonian K = 0, and the type-2 generating function equal to Hamilton's principal function (also the action ) plus an arbitrary constant C: the generalized momenta become: and P is constant, then the Hamiltonian-Jacobi equation (HJE) can be derived from the type-2 canonical transformation: where H is the Hamiltonian as before: Another related function is Hamilton's characteristic functionused to solve the HJE by additive separation of variables for a time-independent Hamiltonian H. The study of the solutions of the Hamilton–Jacobi equations leads naturally to the study of symplectic manifolds and symplectic topology. In this formulation, the solutions of the Hamilton–Jacobi equations are the integral curves of Hamiltonian vector fields. Routhian mechanics Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, not often used but especially useful for removing cyclic coordinates. If the Lagrangian of a system has s cyclic coordinates q = q1, q2, ... qs with conjugate momenta p = p1, p2, ... ps, with the rest of the coordinates non-cyclic and denoted ζ = ζ1, ζ1, ..., ζN − s, they can be removed by introducing the Routhian: which leads to a set of 2s Hamiltonian equations for the cyclic coordinates q, and N − s Lagrangian equations in the non cyclic coordinates ζ. Set up in this way, although the Routhian has the form of the Hamiltonian, it can be thought of a Lagrangian with N − s degrees of freedom. The coordinates q do not have to be cyclic, the partition between which coordinates enter the Hamiltonian equations and those which enter the Lagrangian equations is arbitrary. 
It is simply convenient to let the Hamiltonian equations remove the cyclic coordinates, leaving the non cyclic coordinates to the Lagrangian equations of motion. ## Appellian mechanics Appell's equation of motion involve generalized accelerations, the second time derivatives of the generalized coordinates: as well as generalized forces mentioned above in D'Alembert's principle. The equations are where is the acceleration of the k particle, the second time derivative of its position vector. Each acceleration ak is expressed in terms of the generalized accelerations αr, likewise each rk are expressed in terms the generalized coordinates qr. ## Classical field theory ### Lagrangian field theory Generalized coordinates apply to discrete particles. For N scalar fields φi(r, t) where i = 1, 2, ... N, the Lagrangian density is a function of these fields and their space and time derivatives, and possibly the space and time coordinates themselves: and the Euler–Lagrange equations have an analogue for fields: where ∂μ denotes the 4-gradient and the summation convention has been used. For N scalar fields, these Lagrangian field equations are a set of N second order partial differential equations in the fields, which in general will be coupled and nonlinear. This scalar field formulation can be extended to vector fields, tensor fields, and spinor fields. The Lagrangian is the volume integral of the Lagrangian density:Gravitation, J.A. Wheeler, C. Misner, K.S. Thorne, W.H. Freeman & Co, 1973, Originally developed for classical fields, the above formulation is applicable to all physical fields in classical, quantum, and relativistic situations: such as Newtonian gravity, classical electromagnetism, general relativity, and quantum field theory. It is a question of determining the correct Lagrangian density to generate the correct field equation. ### Hamiltonian field theory The corresponding "momentum" field densities conjugate to the N scalar fields φi(r, t) are: where in this context the overdot denotes a partial time derivative, not a total time derivative. The Hamiltonian density is defined by analogy with mechanics: The equations of motion are: where the variational derivative must be used instead of merely partial derivatives. For N fields, these Hamiltonian field equations are a set of 2N first order partial differential equations, which in general will be coupled and nonlinear. Again, the volume integral of the Hamiltonian density is the Hamiltonian ## Symmetry, conservation, and Noether's theorem Symmetry transformations in classical space and time Each transformation can be described by an operator (i.e. function acting on the position r or momentum p variables to change them). The following are the cases when the operator does not change r or p, i.e. symmetries. Transformation Operator Position Momentum Translational symmetry Time translation Rotational invariance Galilean transformations Parity T-symmetry where R(n̂, θ) is the rotation matrix about an axis defined by the unit vector n̂''' and angle θ. Noether's theorem Noether's theorem states that a continuous symmetry transformation of the action corresponds to a conservation law, i.e. the action (and hence the Lagrangian) does not change under a transformation parameterized by a parameter s: the Lagrangian describes the same motion independent of s, which can be length, angle of rotation, or time. The corresponding momenta to q'' will be conserved.
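To connect the Hamiltonian formalism and the Poisson bracket described above to an explicit computation, the following sketch treats a one-dimensional harmonic oscillator (the finite-difference `poisson` helper and all parameter values are illustrative only, not part of any standard library).

```python
# Harmonic oscillator: H(q, p) = p**2/(2*m) + 0.5*k*q**2.
# Hamilton's equations: q_dot = dH/dp = p/m, p_dot = -dH/dq = -k*q.
m, k = 1.0, 4.0

def H(q, p):
    return p**2 / (2 * m) + 0.5 * k * q**2

q, p, dt = 1.0, 0.0, 1e-3
for _ in range(5000):          # semi-implicit (symplectic) Euler keeps the energy bounded
    p = p - dt * k * q
    q = q + dt * p / m
print(q, p, H(q, p))           # H stays close to its initial value 2.0

# Poisson bracket {A, B} = dA/dq * dB/dp - dA/dp * dB/dq via centred finite differences.
def poisson(A, B, q, p, h=1e-5):
    dAq = (A(q + h, p) - A(q - h, p)) / (2 * h)
    dAp = (A(q, p + h) - A(q, p - h)) / (2 * h)
    dBq = (B(q + h, p) - B(q - h, p)) / (2 * h)
    dBp = (B(q, p + h) - B(q, p - h)) / (2 * h)
    return dAq * dBp - dAp * dBq

print(poisson(lambda q, p: q, lambda q, p: p, 0.3, 0.7))  # ~1.0: the canonical bracket {q, p} = 1
```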
https://en.wikipedia.org/wiki/Analytical_mechanics
In database theory, the PACELC design principle is an extension to the CAP theorem. It states that in case of network partitioning (P) in a distributed computer system, one has to choose between availability (A) and consistency (C) (as per the CAP theorem), but else (E), even when the system is running normally in the absence of partitions, one has to choose between latency (L) and loss of consistency (C). ## Overview The CAP theorem can be phrased as "PAC", the impossibility theorem that no distributed data store can be both consistent and available in executions that contains partitions. This can be proved by examining latency: if a system ensures consistency, then operation latencies grow with message delays, and hence operations cannot terminate eventually if the network is partitioned, i.e. the system cannot ensure availability. In the absence of partitions, both consistency and availability can be satisfied. PACELC therefore goes further and examines how the system replicates data. Specifically, in the absence of partitions, an additional trade-off (ELC) exists between latency and consistency. If the store is atomically consistent, then the sum of the read and write delay is at least the message delay. In practice, most systems rely on explicit acknowledgments rather than timed delays to ensure delivery, requiring a full network round trip and therefore message delay on both reads and writes to ensure consistency. In low latency systems, in contrast, consistency is relaxed in order to reduce latency. There are four configurations or tradeoffs in the PACELC space: - PA/EL - prioritize availability and latency over consistency - PA/EC - when there is a partition, choose availability; else, choose consistency - PC/EL - when there is a partition, choose consistency; else, choose latency - PC/EC - choose consistency at all times PC/EC and PA/EL provide natural cognitive models for an application developer. A PC/EC system provides a firm guarantee of atomic consistency, as in ACID, while PA/EL provides high availability and low latency with a more complex consistency model. In contrast, PA/EC and PC/EL systems only make conditional guarantees of consistency. The developer still has to write code to handle the cases where the guarantee is not upheld. PA/EC systems are rare outside of the in-memory data grid industry, where systems are localized to geographic regions and the latency vs. consistency tradeoff is not significant. PC/EL is even more tricky to understand. PC does not indicate that the system is fully consistent; rather it indicates that the system does not reduce consistency beyond the baseline consistency level when a network partition occurs—instead, it reduces availability. Some experts like Marc Brooker argue that the CAP theorem is particularly relevant in intermittently connected environments, such as those related to the Internet of Things (IoT) and mobile applications. In these contexts, devices may become partitioned due to challenging physical conditions, such as power outages or when entering confined spaces like elevators. For distributed systems, such as cloud applications, it is more appropriate to use the PACELC theorem, which is more comprehensive and considers trade-offs such as latency and consistency even in the absence of network partitions. ## History The PACELC theorem was first described by Daniel Abadi from Yale University in 2010 in a blog post, which he later clarified in a paper in 2012. 
The purpose of PACELC is to address his thesis that "Ignoring the consistency/latency trade-off of replicated systems is a major oversight [in CAP], as it is present at all times during system operation, whereas CAP is only relevant in the arguably rare case of a network partition." The PACELC theorem was proved formally in 2018 in a SIGACT News article. ## Database PACELC ratings The original database PACELC ratings are from the PACELC paper; subsequent updates were contributed by the Wikipedia community. - The default versions of Amazon's early (internal) Dynamo, Cassandra, Riak, and Cosmos DB are PA/EL systems: if a partition occurs, they give up consistency for availability, and under normal operation they give up consistency for lower latency. - Fully ACID systems such as VoltDB/H-Store, Megastore, MySQL Cluster, and PostgreSQL are PC/EC: they refuse to give up consistency, and will pay the availability and latency costs to achieve it. Bigtable and related systems such as HBase are also PC/EC. - Amazon DynamoDB (launched January 2012) is quite different from the early (Amazon-internal) Dynamo which was considered for the PACELC paper. DynamoDB follows a strong leader model, where every write is strictly serialized (and conditional writes carry no penalty), and supports read-after-write consistency. This guarantee does not apply to "Global Tables" across regions. The DynamoDB SDKs use eventually consistent reads by default (improved availability and throughput), but when a consistent read is requested the service will return either a current view of the item or an error. - Couchbase provides a range of consistency and availability options during a partition, and equally a range of latency and consistency options with no partition. Unlike most other databases, Couchbase doesn't have a single API set, nor does it scale/replicate all data services homogeneously. For writes, Couchbase favors consistency over availability, making it formally CP, but on read there is more user-controlled variability depending on index replication, desired consistency level, and type of access (single-document lookup vs. range scan vs. full-text search, etc.). On top of that, there is further variability depending on cross-datacenter replication (XDCR), which takes multiple CP clusters and connects them with asynchronous replication, and Couchbase Lite, an embedded database that creates a fully multi-master (with revision tracking) distributed topology. - Cosmos DB supports five tunable consistency levels that allow for tradeoffs between C/A during P, and L/C during E. Cosmos DB never violates the specified consistency level, so it's formally CP. - MongoDB can be classified as a PA/EC system. In the baseline case, the system guarantees reads and writes to be consistent. - PNUTS is a PC/EL system. - Hazelcast IMDG and indeed most in-memory data grids are an implementation of a PA/EC system; Hazelcast can be configured to be EL rather than EC. Concurrency primitives (Lock, AtomicReference, CountDownLatch, etc.) can be either PC/EC or PA/EC. - FaunaDB implements Calvin, a transaction protocol created by Dr. Daniel Abadi, the author of the PACELC theorem, and offers users adjustable controls for the LC tradeoff. It is PC/EC for strictly serializable transactions, and EL for serializable reads. A summary table in the original rates each of these systems, along with Aerospike and SpiceDB, on the P+A/P+C and E+L/E+C dimensions.
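As a small illustration of the latency-versus-consistency (ELC) choice described above for DynamoDB, the sketch below (assuming the AWS boto3 SDK, valid credentials, and a hypothetical table named "users") issues the same read twice: the default eventually consistent read favours latency, while a read with ConsistentRead=True favours consistency.

```python
import boto3

# Table name, region, and key are hypothetical, purely for illustration.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")
key = {"user_id": {"S": "42"}}

# Default: eventually consistent read (the EL choice - lower latency, possibly stale data).
fast = dynamodb.get_item(TableName="users", Key=key)

# Strongly consistent read (the EC choice - reflects all prior successful writes, higher latency).
fresh = dynamodb.get_item(TableName="users", Key=key, ConsistentRead=True)

print(fast.get("Item"), fresh.get("Item"))
```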
https://en.wikipedia.org/wiki/PACELC_design_principle
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree. The time complexity of operations on the binary search tree is linear with respect to the height of the tree. Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, the lookup performance is proportional to the binary logarithm of the number of nodes. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler. The performance of a binary search tree is dependent on the order of insertion of the nodes into the tree, since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time. The complexity analysis of BSTs shows that, on average, insert, delete and search take $$ O(\log n) $$ time for $$ n $$ nodes. In the worst case, they degrade to that of a singly linked list: $$ O(n) $$ . To address the boundless increase of the tree height with arbitrary insertions and deletions, self-balancing variants of BSTs were introduced to bound the worst-case lookup complexity to that of the binary logarithm. AVL trees were the first self-balancing binary search trees, invented in 1962 by Georgy Adelson-Velsky and Evgenii Landis. Binary search trees can be used to implement abstract data types such as dynamic sets, lookup tables and priority queues, and are used in sorting algorithms such as tree sort. ## History The binary search tree algorithm was discovered independently by several researchers, including P.F. Windley, Andrew Donald Booth, Andrew Colin, and Thomas N. Hibbard. The algorithm is attributed to Conway Berners-Lee and David Wheeler, who used it for storing labeled data in magnetic tapes in 1960. One of the earliest and most popular binary search tree algorithms is that of Hibbard. The time complexity of a binary search tree increases boundlessly with the tree height if the nodes are inserted in an arbitrary order, therefore self-balancing binary search trees were introduced to bound the height of the tree to $$ O(\log n) $$ . Various height-balanced binary search trees were introduced to confine the tree height, such as AVL trees, treaps, and red–black trees. The AVL tree was invented by Georgy Adelson-Velsky and Evgenii Landis in 1962 for the efficient organization of information (English translation by Myron J. Ricci in Soviet Mathematics – Doklady, 3:1259–1263, 1962). It was the first self-balancing binary search tree to be invented. ## Overview A binary search tree is a rooted binary tree in which nodes are arranged in a strict total order: the nodes with keys greater than any particular node A are stored in the right subtree of that node A, and the nodes with keys equal to or less than A are stored in the left subtree of A, satisfying the binary search property. Binary search trees are also effective in sorting and search algorithms.
However, the search complexity of a BST depends upon the order in which the nodes are inserted and deleted, since in the worst case successive operations in the binary search tree may lead to degeneracy and form a singly-linked-list-like ("unbalanced tree") structure, which has the same worst-case complexity as a linked list. Binary search trees are also a fundamental data structure used in the construction of abstract data structures such as sets, multisets, and associative arrays. ## Operations ### Searching Searching in a binary search tree for a specific key can be programmed recursively or iteratively. Searching begins by examining the root node. If the tree is $$ \text{nil} $$ (empty), the key being searched for does not exist in the tree. Otherwise, if the key equals that of the root, the search is successful and the node is returned. If the key is less than that of the root, the search proceeds by examining the left subtree. Similarly, if the key is greater than that of the root, the search proceeds by examining the right subtree. This process is repeated until the key is found or the remaining subtree is $$ \text{nil} $$ . If the searched key is not found after a $$ \text{nil} $$ subtree is reached, then the key is not present in the tree. #### Recursive search The following pseudocode implements the BST search procedure through recursion.

Recursive-Tree-Search(x, key)
  if x = NIL or key = x.key then
    return x
  if key < x.key then
    return Recursive-Tree-Search(x.left, key)
  else
    return Recursive-Tree-Search(x.right, key)
  end if

The recursive procedure continues until a $$ \text{nil} $$ node or the $$ \text{key} $$ being searched for is encountered. #### Iterative search The recursive version of the search can be "unrolled" into a while loop. On most machines, the iterative version is found to be more efficient.

Iterative-Tree-Search(x, key)
  while x ≠ NIL and key ≠ x.key do
    if key < x.key then
      x := x.left
    else
      x := x.right
    end if
  repeat
  return x

Since the search may proceed till some leaf node, the running time complexity of BST search is $$ O(h) $$ where $$ h $$ is the height of the tree. However, the worst case for BST search is $$ O(n) $$ where $$ n $$ is the total number of nodes in the BST, because an unbalanced BST may degenerate to a linked list. However, if the BST is height-balanced the height is $$ O(\log n) $$ . #### Successor and predecessor For certain operations, given a node $$ \text{x} $$ , finding the successor or predecessor of $$ \text{x} $$ is crucial. Assuming all the keys of a BST are distinct, the successor of a node $$ \text{x} $$ in a BST is the node with the smallest key greater than $$ \text{x} $$ 's key. On the other hand, the predecessor of a node $$ \text{x} $$ in a BST is the node with the largest key smaller than $$ \text{x} $$ 's key. The following pseudocode finds the successor and predecessor of a node $$ \text{x} $$ in a BST.

BST-Successor(x)
  if x.right ≠ NIL then
    return BST-Minimum(x.right)
  end if
  y := x.parent
  while y ≠ NIL and x = y.right do
    x := y
    y := y.parent
  repeat
  return y

BST-Predecessor(x)
  if x.left ≠ NIL then
    return BST-Maximum(x.left)
  end if
  y := x.parent
  while y ≠ NIL and x = y.left do
    x := y
    y := y.parent
  repeat
  return y

Operations such as finding a node in a BST whose key is the maximum or minimum are critical in certain operations, such as determining the successor and predecessor of nodes. Following is the pseudocode for the operations.
BST-Maximum(x) while x.right ≠ NIL do x := x.right repeat return x BST-Minimum(x) while x.left ≠ NIL do x := x.left repeat return x ### Insertion Operations such as insertion and deletion cause the BST representation to change dynamically. The data structure must be modified in such a way that the properties of a BST continue to hold. New nodes are inserted as leaf nodes in the BST. Following is an iterative implementation of the insertion operation. 1 BST-Insert(T, z) 2 y := NIL 3 x := T.root 4 while x ≠ NIL do 5 y := x 6 if z.key < x.key then 7 x := x.left 8 else 9 x := x.right 10 end if 11 repeat 12 z.parent := y 13 if y = NIL then 14 T.root := z 15 else if z.key < y.key then 16 y.left := z 17 else 18 y.right := z 19 end if The procedure maintains a "trailing pointer" $$ \text{y} $$ as the parent of $$ \text{x} $$ . After initialization on line 2, the while loop along lines 4-11 updates the pointers. If $$ \text{y} $$ is $$ \text{nil} $$ , the BST is empty, so $$ \text{z} $$ is inserted as the root node of the binary search tree $$ \text{T} $$ . If it is not $$ \text{nil} $$ , insertion proceeds by comparing $$ \text{z} $$ 's key to that of $$ \text{y} $$ on lines 15-19, and the node is inserted accordingly. ### Deletion The deletion of a node, say $$ \text{Z} $$ , from the binary search tree $$ \text{BST} $$ has three cases: 1. If $$ \text{Z} $$ is a leaf node, it is replaced by $$ \text{NIL} $$ as shown in (a). 1. If $$ \text{Z} $$ has only one child, the child node of $$ \text{Z} $$ gets elevated by modifying the parent node of $$ \text{Z} $$ to point to the child node, consequently taking $$ \text{Z} $$ 's position in the tree, as shown in (b) and (c). 1. If $$ \text{Z} $$ has both left and right children, the in-order successor of $$ \text{Z} $$ , say $$ \text{Y} $$ , displaces $$ \text{Z} $$ by following the two cases: 1. If $$ \text{Y} $$ is $$ \text{Z} $$ 's right child, as shown in (d), $$ \text{Y} $$ displaces $$ \text{Z} $$ and $$ \text{Y} $$ 's right child remains unchanged. 1. If $$ \text{Y} $$ lies within $$ \text{Z} $$ 's right subtree but is not $$ \text{Z} $$ 's right child, as shown in (e), $$ \text{Y} $$ first gets replaced by its own right child, and then it displaces $$ \text{Z} $$ 's position in the tree. 1. Alternatively, the in-order predecessor can also be used. The following pseudocode implements the deletion operation in a binary search tree. 1 BST-Delete(BST, z) 2 if z.left = NIL then 3 Shift-Nodes(BST, z, z.right) 4 else if z.right = NIL then 5 Shift-Nodes(BST, z, z.left) 6 else 7 y := BST-Successor(z) 8 if y.parent ≠ z then 9 Shift-Nodes(BST, y, y.right) 10 y.right := z.right 11 y.right.parent := y 12 end if 13 Shift-Nodes(BST, z, y) 14 y.left := z.left 15 y.left.parent := y 16 end if 1 Shift-Nodes(BST, u, v) 2 if u.parent = NIL then 3 BST.root := v 4 else if u = u.parent.left then 5 u.parent.left := v 6 else 7 u.parent.right := v 8 end if 9 if v ≠ NIL then 10 v.parent := u.parent 11 end if The $$ \text{BST-Delete} $$ procedure deals with the 3 cases mentioned above. Lines 2-3 deal with case 1; lines 4-5 deal with case 2; and lines 6-16 deal with case 3. The helper function $$ \text{Shift-Nodes} $$ is used within the deletion algorithm to replace the node $$ \text{u} $$ with $$ \text{v} $$ in the binary search tree $$ \text{BST} $$ . This procedure handles the deletion (and substitution) of $$ \text{u} $$ from $$ \text{BST} $$ . ## Traversal A BST can be traversed through three basic algorithms: inorder, preorder, and postorder tree walks. 
- Inorder tree walk: Nodes from the left subtree get visited first, followed by the root node and right subtree. Such a traversal visits all the nodes in non-decreasing key order. - Preorder tree walk: The root node gets visited first, followed by the left and right subtrees. - Postorder tree walk: Nodes from the left subtree get visited first, followed by the right subtree, and finally, the root. Following is a recursive implementation of the tree walks. Inorder-Tree-Walk(x) if x ≠ NIL then Inorder-Tree-Walk(x.left) visit node Inorder-Tree-Walk(x.right) end if Preorder-Tree-Walk(x) if x ≠ NIL then visit node Preorder-Tree-Walk(x.left) Preorder-Tree-Walk(x.right) end if Postorder-Tree-Walk(x) if x ≠ NIL then Postorder-Tree-Walk(x.left) Postorder-Tree-Walk(x.right) visit node end if ## Balanced binary search trees Without rebalancing, insertions or deletions in a binary search tree may lead to degeneration, resulting in a tree height of $$ n $$ (where $$ n $$ is the number of items in the tree), so that lookup performance deteriorates to that of a linear search. Keeping the search tree balanced and its height bounded by $$ O(\log n) $$ is key to the usefulness of the binary search tree. This can be achieved by "self-balancing" mechanisms applied during update operations, designed to keep the tree height within binary logarithmic complexity. ### Height-balanced trees A tree is height-balanced if the heights of the left sub-tree and right sub-tree are guaranteed to be related by a constant factor. This property was introduced by the AVL tree and continued by the red–black tree. The heights of all the nodes on the path from the root to the modified leaf node have to be observed and possibly corrected on every insert and delete operation on the tree. ### Weight-balanced trees In a weight-balanced tree, the criterion for balance is the number of leaves of the subtrees. Ideally the weights of the left and right subtrees would differ by at most $$ 1 $$ , but such a strong balance condition cannot be maintained with $$ O(\log n) $$ rebalancing work during insert and delete operations, so instead the difference is bounded by a ratio $$ \alpha $$ of the weights. The $$ \alpha $$ -weight-balanced trees give an entire family of balance conditions, where the left and right subtrees each have at least a fraction $$ \alpha $$ of the total weight of the subtree. ### Types There are several self-balancing binary search trees, including the T-tree, treap, red–black tree, B-tree, 2–3 tree, and splay tree. ## Examples of applications ### Sort Binary search trees are used in sorting algorithms such as tree sort, where all the elements are inserted at once and the tree is traversed in an in-order fashion. BSTs are also used in quicksort. ### Priority queue operations Binary search trees are used in implementing priority queues, using the nodes' keys as priorities. Adding new elements to the queue follows the regular BST insertion operation, but the removal operation depends on the type of priority queue: - If it is an ascending order priority queue, removal of an element with the lowest priority is done through leftward traversal of the BST. - If it is a descending order priority queue, removal of an element with the highest priority is done through rightward traversal of the BST.
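To make the preceding pseudocode concrete, here is a minimal Python sketch of an unbalanced binary search tree supporting search, insertion, deletion, and in-order traversal as described above. The class and function names are illustrative only, and deletion copies the in-order successor's key rather than splicing parent pointers as Shift-Nodes does, but it implements the same three-case analysis.

```python
class Node:
    """A single node of an (unbalanced) binary search tree."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def search(root, key):
    # Walk down the tree, going left or right by key comparison.
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root  # None if the key is absent

def insert(root, key):
    # New keys are always attached as leaves; equal keys go right here.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def minimum(root):
    while root.left is not None:
        root = root.left
    return root

def delete(root, key):
    # The three cases described above: leaf, one child, two children.
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:
            return root.right              # leaf or only a right child
        if root.right is None:
            return root.left               # only a left child
        succ = minimum(root.right)         # in-order successor
        root.key = succ.key                # copy successor's key, then remove it
        root.right = delete(root.right, succ.key)
    return root

def inorder(root):
    # Yields keys in non-decreasing order (tree sort).
    if root is not None:
        yield from inorder(root.left)
        yield root.key
        yield from inorder(root.right)

root = None
for k in [8, 3, 10, 1, 6, 14]:
    root = insert(root, k)
root = delete(root, 3)
print(list(inorder(root)))  # [1, 6, 8, 10, 14]
```

Copying the successor's key is a common simplification when parent pointers are not stored; a parent-pointer implementation would follow the BST-Delete and Shift-Nodes pseudocode more literally.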
https://en.wikipedia.org/wiki/Binary_search_tree
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function on the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be verified in polynomial time is "NP", standing for "nondeterministic polynomial time". An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time. The problem has been called the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution. ## #### Example Consider the following yes/no problem: given an incomplete Sudoku grid of size $$ n^2 \times n^2 $$ , is there at least one legal solution where every row, column, and $$ n \times n $$ square contains the integers 1 through $$ n^2 $$ ? It is straightforward to verify "yes" instances of this generalized Sudoku problem given a candidate solution. However, it is not known whether there is a polynomial-time algorithm that can correctly answer "yes" or "no" to all instances of this problem. Therefore, generalized Sudoku is in NP (quickly verifiable), but may or may not be in P (quickly solvable). (It is necessary to consider a generalized version of Sudoku, as any fixed size Sudoku has only a finite number of possible grids. In this case the problem is in P, as the answer can be found by table lookup.) ## History The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973). Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, speculating that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical), this would imply what is now called P ≠ NP, since a proposed key can be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. 
Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences: that if so, then the discovery of mathematical proofs could be automated. ## Context The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem). In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other). In this theory, the class P consists of all decision problems (defined below) solvable on a deterministic sequential machine in an amount of time polynomial in the size of the input; the class NP consists of all decision problems whose positive solutions are verifiable in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes: Is P equal to NP? Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the share of 2019 respondents who believed P ≠ NP rose to 99%. These polls do not determine whether P = NP; Gasarch himself stated: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era." ## NP-completeness To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are problems that any other NP problem is reducible to in polynomial time and whose solution is still verifiable in polynomial time. That is, any NP problem can be transformed into any NP-complete problem. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP. NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time. For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into a Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems are NP-complete, and no fast algorithm for any of them is known. From the definition alone it is unintuitive that NP-complete problems exist; however, a trivial NP-complete problem can be formulated as follows: given a Turing machine M guaranteed to halt in polynomial time, does a polynomial-size input that M will accept exist? 
It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists. The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem". ## Harder problems Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in $$ O(2^{p(n)}) $$ time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games. The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least $$ 2^{2^{cn}} $$ for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all. It is also possible to consider questions other than decision problems. 
One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?". Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells whether at least one solution exists: there is one exactly when the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems. ## Problems in NP not known to be in P or NP-complete In 1975, Richard E. Ladner showed that if P ≠ NP, then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem, and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete. The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time. The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time $$ O\left (\exp \left ( \left (\tfrac{64n}{9} \log(2) \right )^{\frac{1}{3}} \left ( \log(n\log(2)) \right )^{\frac{2}{3}} \right) \right ) $$ to factor an n-bit integer. The best known quantum algorithm for this problem, Shor's algorithm, runs in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes. ## Does P mean "easy"? All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common assumption in complexity theory, but there are caveats. First, it can be false in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, rendering it impractical. 
For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of $$ O(n^2) $$ , where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant is greater than $$ 2 \uparrow \uparrow (2 \uparrow \uparrow (2 \uparrow \uparrow (h/2) ) ) $$ (using Knuth's up-arrow notation), where h is the number of vertices in H. On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem, and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms. Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms. ## Reasons to believe P ≠ NP or P = NP Cook provides a restatement of the problem in The P Versus NP Problem as "Does P = NP?" According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3,000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH. It is also intuitively argued that the existence of problems that are hard to solve but whose solutions are easy to verify matches real-world experience. On the other hand, some researchers believe that it is overconfident to believe P ≠ NP and that researchers should also explore proofs of P = NP. For example, several such statements were made by researchers in 2002. ### DLIN vs NLIN When one substitutes "linear time on a multitape Turing machine" for "polynomial time" in the definitions of P and NP, one obtains the classes DLIN and NLIN. It is known that DLIN ≠ NLIN. ## Consequences of solution One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well. P = NP A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields. It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. 
A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them. A solution showing P = NP could upend the field of cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including: - Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet. - Symmetric ciphers such as AES or 3DES, used for the encryption of communications data. - Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, finding a pre-image that hashes to a given value must be difficult, ideally taking exponential time. If P = NP, then this can take polynomial time, through reduction to SAT. These would need modification or replacement with information-theoretically secure solutions that do not assume P ≠ NP. There are also enormous benefits that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; making these problems efficiently solvable could considerably advance life sciences and biotechnology. These changes could be insignificant compared to the revolution that efficiently solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics: Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says: Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method guaranteed to find a proof if a "reasonable" size proof exists, would essentially end this struggle. Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof: P ≠ NP A proof of P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would represent a great advance in computational complexity theory and guide future research. It would demonstrate that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place. P ≠ NP still leaves open the average-case complexity of hard problems in NP. 
For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds. ## Results about difficulty of proof Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are insufficient for answering the question, suggesting novel technical approaches are required. As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, all insufficient to prove P ≠ NP: - Relativizing proofs: Imagine a world where every algorithm is allowed to make queries to some fixed subroutine called an oracle (which can answer a fixed set of questions in constant time, such as an oracle that solves any traveling salesman problem in 1 step), and the running time of the oracle is not counted against the running time of the algorithm. Most proofs (especially classical ones) apply uniformly in a world with oracles regardless of what the oracle does. These proofs are called relativizing. In 1975, Baker, Gill, and Solovay showed that P = NP with respect to some oracles, while P ≠ NP for other oracles. As relativizing proofs can only prove statements that are true for all possible oracles, these techniques cannot resolve P = NP. - Natural proofs: In 1993, Alexander Razborov and Steven Rudich defined a general class of proof techniques for circuit complexity lower bounds, called natural proofs. At the time, all previously known circuit lower bounds were natural, and circuit complexity was considered a very promising approach for resolving P = NP. However, Razborov and Rudich showed that if one-way functions exist, P and NP are indistinguishable to natural proof methods. Although the existence of one-way functions is unproven, most mathematicians believe that they do exist, and a proof of their existence would be a much stronger statement than P ≠ NP. Thus it is unlikely that natural proofs alone can resolve P = NP. - Algebrizing proofs: After the Baker–Gill–Solovay result, new non-relativizing proof techniques were successfully used to prove that IP = PSPACE. However, in 2008, Scott Aaronson and Avi Wigderson showed that the main technical tool used in the IP = PSPACE proof, known as arithmetization, was also insufficient to resolve P = NP. Arithmetization converts the operations of an algorithm to algebraic and basic arithmetic symbols and then uses those to analyze the workings. In the IP = PSPACE proof, they convert the black box and the Boolean circuits to an algebraic problem. 
As mentioned previously, this method has been proven insufficient to resolve P = NP and other time complexity problems. These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results. These barriers lead some computer scientists to suggest the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). An independence result could imply that either P ≠ NP and this is unprovable in (e.g.) ZFC, or that P = NP but it is unprovable in ZFC that any polynomial-time algorithms are correct. However, if the problem is undecidable even with much weaker assumptions extending the Peano axioms for integer arithmetic, then nearly polynomial-time algorithms exist for all NP problems. Therefore, assuming (as most complexity theorists do) that some NP problems don't have efficient algorithms, proofs of independence with those techniques are impossible. This also implies that proving independence from PA or ZFC with current techniques is no easier than proving that all NP problems have efficient algorithms. ## Logical characterizations The P = NP problem can be restated as certain classes of logical statements, as a result of work in descriptive complexity. Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P are expressible in first-order logic with the addition of a suitable least fixed-point combinator. Recursive functions can be defined with this and the order relation. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P. Similarly, NP is the set of languages expressible in existential second-order logic, that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH). ## Polynomial-time algorithms No known algorithm for an NP-complete problem runs in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, attributed to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP: // Algorithm that accepts the NP-complete language SUBSET-SUM. // // This is a polynomial-time algorithm if and only if P = NP. 
// // "Polynomial-time" means it returns "yes" in polynomial time when // the answer should be "yes", and runs forever when it is "no". // // Input: S = a finite set of integers // Output: "yes" if any subset of S adds up to 0. // Runs forever with no output otherwise. // Note: "Program number M" is the program obtained by // writing the integer M in binary, then // considering that string of bits to be a // program. Every possible program can be // generated this way, though most do nothing // because of syntax errors. FOR K = 1...∞ FOR M = 1...K Run program number M for K steps with input S IF the program outputs a list of distinct integers AND the integers are all in S AND the integers sum to 0 THEN OUTPUT "yes" and HALT This is a polynomial-time algorithm accepting an NP-complete language only if P = NP. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm). This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try at least other programs first. ## Formal definitions P and NP A decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that produces the correct answer for any input string of length n in at most cnk steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is the set of languages that can be decided by a deterministic polynomial-time Turing machine. Meaning, $$ \mathsf{P} = \{ L : L=L(M) \text{ for some deterministic polynomial-time Turing machine } M \} $$ where $$ L(M) = \{ w\in\Sigma^{*}: M \text{ accepts } w \} $$ and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies two conditions: 1. M halts on all inputs w and 1. there exists $$ k \in N $$ such that $$ T_M(n)\in O(n^k) $$ , where O refers to the big O notation and $$ T_M(n) = \max\{ t_M(w) : w\in\Sigma^{*}, |w| = n \} $$ $$ t_M(w) = \text{ number of steps }M\text{ takes to halt on input }w. $$ NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach uses the concept of certificate and verifier. Formally, NP is the set of languages with a finite alphabet and verifier that runs in polynomial time. The following defines a "verifier": Let L be a language over a finite alphabet, Σ. L ∈ NP if, and only if, there exists a binary relation $$ R\subset\Sigma^{*}\times\Sigma^{*} $$ and a positive integer k such that the following two conditions are satisfied: 1. ; and 1. the language is decidable by a deterministic Turing machine in polynomial time. A Turing machine that decides LR is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L. Not all verifiers must be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time. Example Let $$ \mathrm{COMPOSITE} = \left \{x\in\mathbb{N} \mid x=pq \text{ for integers } p, q > 1 \right \} $$ $$ R = \left \{(x,y)\in\mathbb{N} \times\mathbb{N} \mid 1<y \leq \sqrt x \text{ and } y \text{ divides } x \right \}. $$ Whether a value of x is composite is equivalent to of whether x is a member of COMPOSITE. 
It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations). COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test. NP-completeness There are many equivalent ways of describing NP-completeness. Let L be a language over a finite alphabet Σ. L is NP-complete if, and only if, the following two conditions are satisfied: 1. L ∈ NP; and 1. any L′ in NP is polynomial-time-reducible to L (written as $$ L' \leq_{p} L $$ ), where $$ L' \leq_{p} L $$ if, and only if, the following two conditions are satisfied: 1. There exists f : Σ* → Σ* such that for all w in Σ* we have: $$ (w\in L' \Leftrightarrow f(w)\in L) $$ ; and 1. there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w. Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete. ## Claimed solutions While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger compiled a list of 116 purported proofs from 1986 to 2016, of which 61 were proofs of P = NP, 49 were proofs of P ≠ NP, and 6 proved other results, e.g. that the problem is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have been refuted. ## Popular culture The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem. In the sixth episode of The Simpsons seventh season "Treehouse of Horror VI", the equation P = NP is seen shortly after Homer accidentally stumbles into the "third dimension". In the second episode of season 2 of Elementary, "Solve for X" Sherlock and Watson investigate the murders of mathematicians who were attempting to solve P versus NP. ## Similar problems - R vs. RE problem, where R is analog of class P, and RE is analog class NP. These classes are not equal, because undecidable but verifiable problems do exist, for example, Hilbert's tenth problem which is RE-complete. - A similar problem exists in the theory of algebraic complexity: VP vs. VNP problem. Like P vs. NP, the answer is currently unknown.
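As a concrete, deliberately simple illustration of the certificate-and-verifier definition of NP given above, the following Python sketch checks certificates for the COMPOSITE relation R and for SUBSET-SUM; both checks run in time polynomial in the input size. The function names are illustrative rather than standard, and the SUBSET-SUM check mirrors the conditions tested inside the enumeration algorithm shown earlier (here the empty subset is excluded).

```python
from math import isqrt

def verify_composite(x: int, y: int) -> bool:
    # Accept iff the certificate y witnesses membership of x in COMPOSITE,
    # i.e. (x, y) is in the relation R above: 1 < y <= sqrt(x) and y divides x.
    return 1 < y <= isqrt(x) and x % y == 0

def verify_subset_sum(s: list, subset: list) -> bool:
    # Accept iff the certificate is a non-empty list of distinct integers,
    # all drawn from s, that sums to 0 (the check performed by the
    # enumeration algorithm for SUBSET-SUM).
    return (len(subset) > 0
            and len(set(subset)) == len(subset)
            and all(v in s for v in subset)
            and sum(subset) == 0)

print(verify_composite(91, 7))                          # True: 91 = 7 * 13
print(verify_subset_sum([3, -1, -2, 5], [3, -1, -2]))   # True: 3 - 1 - 2 = 0
```

Verifying a given certificate is easy; the open question is whether a suitable certificate can always be found in polynomial time.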
https://en.wikipedia.org/wiki/P_versus_NP_problem
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers. The components of a distributed system communicate and coordinate their actions by passing messages to one another in order to achieve a common goal. Three significant challenges of distributed systems are maintaining concurrency of components, overcoming the lack of a global clock, and managing the independent failure of components. When a component of one system fails, the entire system does not fail. Examples of distributed systems vary from SOA-based systems to microservices to massively multiplayer online games to peer-to-peer applications. Distributed systems cost significantly more than monolithic architectures, primarily due to increased needs for additional hardware, servers, gateways, firewalls, new subnets, proxies, and so on. Also, distributed systems are prone to the fallacies of distributed computing. On the other hand, a well-designed distributed system is more scalable, more durable, more changeable and more fine-tuned than a monolithic application deployed on a single machine. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, but the total cost of ownership, and not just the infrastructure cost, must be considered. A computer program that runs within a distributed system is called a distributed program, and distributed programming is the process of writing such programs. There are many different types of implementations for the message passing mechanism, including pure HTTP, RPC-like connectors and message queues. Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other via message passing. ## Introduction The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used: - There are several autonomous computational entities (computers or nodes), each of which has its own local memory. - The entities communicate with each other by message passing. A distributed system may have a common goal, such as solving a large computational problem; the user then perceives the collection of autonomous processors as a unit. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users. Other typical properties of distributed systems include the following: - The system has to tolerate failures in individual computers. - The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program. 
- Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input. ## Patterns Here are common architectural patterns used for distributed computing: - Saga interaction pattern - Microservices - Event driven architecture ## Events vs. Messages In distributed systems, events represent a fact or state change (e.g., OrderPlaced) and are typically broadcast asynchronously to multiple consumers, promoting loose coupling and scalability. While events generally don’t expect an immediate response, acknowledgment mechanisms are often implemented at the infrastructure level (e.g., Kafka commit offsets, SNS delivery statuses) rather than being an inherent part of the event pattern itself. In contrast, messages serve a broader role, encompassing commands (e.g., ProcessPayment), events (e.g., PaymentProcessed), and documents (e.g., DataPayload). Both events and messages can support various delivery guarantees, including at-least-once, at-most-once, and exactly-once, depending on the technology stack and implementation. However, exactly-once delivery is often achieved through idempotency mechanisms rather than true, infrastructure-level exactly-once semantics. Delivery patterns for both events and messages include publish/subscribe (one-to-many) and point-to-point (one-to-one). While request/reply is technically possible, it is more commonly associated with messaging patterns rather than pure event-driven systems. Events excel at state propagation and decoupled notifications, while messages are better suited for command execution, workflow orchestration, and explicit coordination. Modern architectures commonly combine both approaches, leveraging events for distributed state change notifications and messages for targeted command execution and structured workflows based on specific timing, ordering, and delivery requirements. ## Parallel and distributed computing Distributed systems are groups of networked computers which share a common goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have much overlap, and no clear distinction exists between them. The same system may be characterized both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particularly tightly coupled form of distributed computing, and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria: - In parallel computing, all processors may have access to a shared memory to exchange information between processors. - In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors. The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory. 
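The shared-memory versus message-passing distinction drawn in figures (b) and (c) can be illustrated with a small, deliberately simplified Python sketch: the first half coordinates through a shared variable guarded by a lock, the second half only through messages placed in a queue. Threads on one machine stand in for separate processors here, so this is an analogy to the architectures described above, not a real distributed deployment.

```python
import threading
import queue

# Shared-memory style: both workers update the same variable, guarded by a lock.
counter = 0
lock = threading.Lock()

def shared_memory_worker(increments):
    global counter
    for _ in range(increments):
        with lock:              # coordination happens through the shared state
            counter += 1

threads = [threading.Thread(target=shared_memory_worker, args=(1000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("shared-memory total:", counter)        # 2000

# Message-passing style: each worker keeps only private state and communicates
# exclusively by sending a message to a mailbox; a collector combines the results.
mailbox = queue.Queue()

def message_passing_worker(name, increments):
    local = 0                   # private memory, never shared
    for _ in range(increments):
        local += 1
    mailbox.put((name, local))  # the only interaction is this message

workers = [threading.Thread(target=message_passing_worker, args=(f"node{i}", 1000))
           for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
total = sum(value for _, value in (mailbox.get() for _ in workers))
print("message-passing total:", total)        # 2000
```

In a genuine distributed system the "mailbox" would be a network channel between machines, but the structural difference, shared state versus explicit messages, is the same one described above.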
The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems (see below for more detailed discussion). Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms. ## History The use of concurrent processes which communicate through message-passing has its roots in operating system architectures studied in the 1960s. The first widespread distributed systems were local-area networks such as Ethernet, which was invented in the 1970s. ARPANET, one of the predecessors of the Internet, was introduced in the late 1960s, and ARPANET e-mail was invented in the early 1970s. E-mail became the most successful application of ARPANET, and it is probably the earliest example of a large-scale distributed application. In addition to ARPANET (and its successor, the global Internet), other early worldwide computer networks included Usenet and FidoNet from the 1980s, both of which were used to support distributed discussion systems. The study of distributed computing became its own branch of computer science in the late 1970s and early 1980s. The first conference in the field, Symposium on Principles of Distributed Computing (PODC), dates back to 1982, and its counterpart International Symposium on Distributed Computing (DISC) was first held in Ottawa in 1985 as the International Workshop on Distributed Algorithms on Graphs. ## Architectures Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system. Whether these CPUs share resources or not determines a first distinction between three types of architecture: - Shared memory - Shared disk - Shared nothing. Distributed programming typically falls into one of several basic architectures: client–server, three-tier, n-tier, or peer-to-peer; or categories: loose coupling, or tight coupling. - Client–server: architectures where smart clients contact the server for data then format and display it to the users. Input at the client is committed back to the server when it represents a permanent change. - Three-tier: architectures that move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are three-tier. - n-tier: architectures that refer typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers. - Peer-to-peer: architectures where there are no special machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and as servers. Examples of this architecture include BitTorrent and the bitcoin network. Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. 
Through various message passing protocols, processes may communicate directly with one another, typically in a main/sub relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database. Database-centric architecture in particular provides relational processing analytics in a schematic architecture allowing for live environment relay. This enables distributed computing functions both within and beyond the parameters of a networked database. ### Cell-Based Architecture Cell-based architecture is a distributed computing approach in which computational resources are organized into self-contained units called cells. Each cell operates independently, processing requests while maintaining scalability, fault isolation, and availability. A cell typically consists of multiple services or application components and functions as an autonomous unit. Some implementations replicate entire sets of services across multiple cells, while others partition workloads between cells. In replicated models, requests may be rerouted to an operational cell if another experiences a failure. This design is intended to enhance system resilience by reducing the impact of localized failures. Some implementations employ circuit breakers within and between cells. Within a cell, circuit breakers may be used to prevent cascading failures among services, while inter-cell circuit breakers can isolate failing cells and redirect traffic to those that remain operational. Cell-based architecture has been adopted in some large-scale distributed systems, particularly in cloud-native and high-availability environments, where fault isolation and redundancy are key design considerations. Its implementation varies depending on system requirements, infrastructure constraints, and operational objectives. ## Applications Reasons for using distributed systems and distributed computing may include: - The very nature of an application may require the use of a communication network that connects several computers: for example, data produced in one physical location and required in another location. - There are many cases in which the use of a single computer would be possible in principle, but the use of a distributed system is beneficial for practical reasons. For example: - It can allow for much larger storage and memory, faster compute, and higher bandwidth than a single machine. - It can provide more reliability than a non-distributed system, as there is no single point of failure. Moreover, a distributed system may be easier to expand and manage than a monolithic uniprocessor system. - It may be more cost-efficient to obtain the desired level of performance by using a cluster of several low-end computers, in comparison with a single high-end computer. 
Examples of distributed systems and applications of distributed computing include the following: - telecommunications networks: - telephone networks and cellular networks, - computer networks such as the Internet, - wireless sensor networks, - routing algorithms; - network applications: - World Wide Web and peer-to-peer networks, - massively multiplayer online games and virtual reality communities, - distributed databases and distributed database management systems, - network file systems, - distributed cache such as burst buffers, - distributed information processing systems such as banking systems and airline reservation systems; - real-time process control: - aircraft control systems, - industrial control systems; - parallel computation: - scientific computing, including cluster computing, grid computing, cloud computing, and various volunteer computing projects, - distributed rendering in computer graphics. - peer-to-peer ## Reactive distributed systems According to the Reactive Manifesto, reactive distributed systems are responsive, resilient, elastic and message-driven. As a result, reactive systems are more flexible, loosely coupled and scalable. To make a system reactive, designers are advised to implement the Reactive Principles, a set of principles and patterns that help make cloud-native as well as edge-native applications more reactive. ## Theoretical foundations ### Models Many tasks that we would like to automate by using a computer are of question–answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions. Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random-access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm. The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer? The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer. Three viewpoints are commonly used: Parallel algorithms in shared-memory model - All processors have access to a shared memory. The algorithm designer chooses the program executed by each processor. 
- One theoretical model is the parallel random-access machines (PRAM) that are used. However, the classical PRAM model assumes synchronous access to the shared memory. - Shared-memory programs can be extended to distributed systems if the underlying operating system encapsulates the communication between nodes and virtually unifies the memory across all individual systems. - A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature. Parallel algorithms in message-passing model - The algorithm designer chooses the structure of the network, as well as the program executed by each computer. - Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer. Distributed algorithms in message-passing model - The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network. - A commonly used model is a graph with one finite-state machine per node. In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example. ### An example Consider the computational problem of finding a coloring of a given graph G. Different fields might take the following approaches: Centralized algorithms - The graph G is encoded as a string, and the string is given as input to a computer. The computer program finds a coloring of the graph, encodes the coloring as a string, and outputs the result. Parallel algorithms - Again, the graph G is encoded as a string. However, multiple computers can access the same string in parallel. Each computer might focus on one part of the graph and produce a coloring for that part. - The main focus is on high-performance computation that exploits the processing power of multiple computers in parallel. Distributed algorithms - The graph G is the structure of the computer network. There is one computer for each node of G and one communication link for each edge of G. Initially, each computer only knows about its immediate neighbors in the graph G; the computers must exchange messages with each other to discover more about the structure of G. Each computer must produce its own color as output. - The main focus is on coordinating the operation of an arbitrary distributed system. While the field of parallel algorithms has a different focus than the field of distributed algorithms, there is much interaction between the two fields. For example, the Cole–Vishkin algorithm for graph coloring was originally presented as a parallel algorithm, but the same technique can also be used directly as a distributed algorithm. Moreover, a parallel algorithm can be implemented either in a parallel system (using shared memory) or in a distributed system (using message passing). The traditional boundary between parallel and distributed algorithms (choose a suitable network vs. 
run in any given network) does not lie in the same place as the boundary between parallel and distributed systems (shared memory vs. message passing). ### Complexity measures In parallel algorithms, yet another resource in addition to time and space is the number of computers. Indeed, often there is a trade-off between the running time and the number of computers: the problem can be solved faster if there are more computers running in parallel (see speedup). If a decision problem can be solved in polylogarithmic time by using a polynomial number of processors, then the problem is said to be in the class NC. The class NC can be defined equally well by using the PRAM formalism or Boolean circuits—PRAM machines can simulate Boolean circuits efficiently and vice versa. In the analysis of distributed algorithms, more attention is usually paid on communication operations than computational steps. Perhaps the simplest model of distributed computing is a synchronous system where all nodes operate in a lockstep fashion. This model is commonly known as the LOCAL model. During each communication round, all nodes in parallel (1) receive the latest messages from their neighbours, (2) perform arbitrary local computation, and (3) send new messages to their neighbors. In such systems, a central complexity measure is the number of synchronous communication rounds required to complete the task. This complexity measure is closely related to the diameter of the network. Let D be the diameter of the network. On the one hand, any computable problem can be solved trivially in a synchronous distributed system in approximately 2D communication rounds: simply gather all information in one location (D rounds), solve the problem, and inform each node about the solution (D rounds). On the other hand, if the running time of the algorithm is much smaller than D communication rounds, then the nodes in the network must produce their output without having the possibility to obtain information about distant parts of the network. In other words, the nodes must make globally consistent decisions based on information that is available in their local D-neighbourhood. Many distributed algorithms are known with the running time much smaller than D rounds, and understanding which problems can be solved by such algorithms is one of the central research questions of the field. Typically an algorithm which solves a problem in polylogarithmic time in the network size is considered efficient in this model. Another commonly used measure is the total number of bits transmitted in the network (cf. communication complexity). The features of this concept are typically captured with the CONGEST(B) model, which is similarly defined as the LOCAL model, but where single messages can only contain B bits. ### Other problems Traditional computational problems take the perspective that the user asks a question, a computer (or a distributed system) processes the question, then produces an answer and stops. However, there are also problems where the system is required not to stop, including the dining philosophers problem and other similar mutual exclusion problems. In these problems, the distributed system is supposed to continuously coordinate the use of shared resources so that no conflicts or deadlocks occur. There are also fundamental challenges that are unique to distributed computing, for example those related to fault-tolerance. Examples of related problems include consensus problems, Byzantine fault tolerance, and self-stabilisation. 
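To make the synchronous round structure described under Complexity measures more concrete, the following is a minimal, self-contained simulation sketch in C: a handful of nodes on a toy path graph repeatedly exchange everything they know with their neighbours, and the number of rounds until every node knows every identifier equals the diameter of the network. The graph, the node count and the bitmask representation are illustrative assumptions, not part of any standard model definition.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy synchronous-round simulator in the spirit of the LOCAL model.
   Hypothetical example: 6 nodes on a path graph 0-1-2-3-4-5 (diameter 5). */

#define NODES 6

int main(void) {
    /* adjacency matrix of the example network (an assumption for illustration) */
    int adj[NODES][NODES] = {0};
    for (int i = 0; i + 1 < NODES; i++) adj[i][i + 1] = adj[i + 1][i] = 1;

    uint64_t known[NODES], next[NODES];
    for (int i = 0; i < NODES; i++) known[i] = 1ULL << i;  /* each node knows only itself */

    int rounds = 0;
    for (;;) {
        /* in each round every node (1) receives its neighbours' knowledge,
           (2) merges it locally, and (3) implicitly sends its own in the next round */
        int changed = 0;
        for (int i = 0; i < NODES; i++) {
            next[i] = known[i];
            for (int j = 0; j < NODES; j++)
                if (adj[i][j]) next[i] |= known[j];
            if (next[i] != known[i]) changed = 1;
        }
        if (!changed) break;
        memcpy(known, next, sizeof known);
        rounds++;
    }
    printf("all nodes know all identifiers after %d rounds (diameter = %d)\n",
           rounds, NODES - 1);
    return 0;
}
```

On this path graph the output reports 5 rounds, matching the diameter, which illustrates why gathering all information at one node takes about D communication rounds.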
Much research is also focused on understanding the asynchronous nature of distributed systems: - Synchronizers can be used to run synchronous algorithms in asynchronous systems. - Logical clocks provide a causal happened-before ordering of events. - Clock synchronization algorithms provide globally consistent physical time stamps. Note that in distributed systems, latency should be measured through "99th percentile" because "median" and "average" can be misleading. ### Election Coordinator election (or leader election) is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are either unaware which node will serve as the "coordinator" (or leader) of the task, or unable to communicate with the current coordinator. After a coordinator election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task coordinator. The network nodes communicate among themselves in order to decide which of them will get into the "coordinator" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the coordinator. The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost. Coordinator election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing. Many other algorithms were suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the coordinator election algorithm was suggested by Korach, Kutten, and Moran. In order to perform coordination, distributed systems employ the concept of coordinators. The coordinator election problem is to choose a process from among a group of processes on different processors in a distributed system to act as the central coordinator. Several central coordinator election algorithms exist. ### Properties of distributed systems So far the focus has been on designing a distributed system that solves a given problem. A complementary research problem is studying the properties of a given distributed system. The halting problem is an analogous example from the field of centralised computation: we are given a computer program and the task is to decide whether it halts or runs forever. The halting problem is undecidable in the general case, and naturally understanding the behaviour of a computer network is at least as hard as understanding the behaviour of one computer. However, there are many interesting special cases that are decidable. In particular, it is possible to reason about the behaviour of a network of finite-state machines. One example is telling whether a given network of interacting (asynchronous and non-deterministic) finite-state machines can reach a deadlock. 
This problem is PSPACE-complete, i.e., it is decidable, but not likely that there is an efficient (centralised, parallel or distributed) algorithm that solves the problem in the case of large networks.
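As a concrete illustration of the coordinator election idea described above (nodes with unique, comparable identities agree on the node with the highest identity), the following is a minimal, centralized simulation of message passing on a unidirectional ring. The identifiers, the ring size and the round structure are illustrative assumptions; actual election algorithms such as those cited above differ in their message and time complexity.

```c
#include <stdio.h>

/* Minimal simulation of "highest identifier wins" coordinator election
   on a unidirectional ring. Node identifiers are arbitrary example values. */

#define N 5

int main(void) {
    int id[N]   = {17, 42, 8, 23, 4};   /* unique, comparable identities */
    int seen[N];                        /* largest identity seen so far  */
    for (int i = 0; i < N; i++) seen[i] = id[i];

    /* In each round every node forwards the largest identity it has seen
       to its clockwise neighbour; after N-1 rounds the maximum has reached
       every node, and all nodes agree on the coordinator. */
    for (int round = 0; round < N - 1; round++) {
        int incoming[N];
        for (int i = 0; i < N; i++) incoming[(i + 1) % N] = seen[i];
        for (int i = 0; i < N; i++)
            if (incoming[i] > seen[i]) seen[i] = incoming[i];
    }
    for (int i = 0; i < N; i++)
        printf("node %d (id %d) recognizes coordinator id %d\n", i, id[i], seen[i]);
    return 0;
}
```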
https://en.wikipedia.org/wiki/Distributed_computing
Feedforward refers to recognition-inference architecture of neural networks. Artificial neural network architectures are based on inputs multiplied by weights to obtain outputs (inputs-to-output): feedforward. Recurrent neural networks, or neural networks with loops, allow information from later processing stages to feed back to earlier stages for sequence processing. However, at every stage of inference a feedforward multiplication remains the core, essential for backpropagation or backpropagation through time. Thus neural networks cannot contain feedback like negative feedback or positive feedback where the outputs feed back to the very same inputs and modify them, because this forms an infinite loop which is not possible to rewind in time to generate an error signal through backpropagation. This issue and nomenclature appear to be a point of confusion between some computer scientists and scientists in other fields studying brain networks.

## Mathematical foundations

### Activation function

The two historically common activation functions are both sigmoids, and are described by

$$ y(v_i) = \tanh(v_i) ~~ \textrm{and} ~~ y(v_i) = (1+e^{-v_i})^{-1} $$.

The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $$ y_i $$ is the output of the $$ i $$th node (neuron) and $$ v_i $$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models). In recent developments of deep learning the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.

### Learning

Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation. We can represent the degree of error in an output node $$ j $$ in the $$ n $$th data point (training example) by $$ e_j(n)=d_j(n)-y_j(n) $$, where $$ d_j(n) $$ is the desired target value for the $$ n $$th data point at node $$ j $$, and $$ y_j(n) $$ is the value produced at node $$ j $$ when the $$ n $$th data point is given as an input.

The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $$ n $$th data point, given by

$$ \mathcal{E}(n)=\frac{1}{2}\sum_{\text{output node }j} e_j^2(n) $$.

Using gradient descent, the change in each weight $$ w_{ji} $$ is

$$ \Delta w_{ji} (n) = -\eta\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} y_i(n) $$

where $$ y_i(n) $$ is the output of the previous neuron $$ i $$, and $$ \eta $$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $$ \frac{\partial\mathcal{E}(n)}{\partial v_j(n)} $$ denotes the partial derivative of the error $$ \mathcal{E}(n) $$ with respect to the weighted sum $$ v_j(n) $$ of the input connections of neuron $$ j $$. The derivative to be calculated depends on the induced local field $$ v_j $$, which itself varies.
It is easy to prove that for an output node this derivative can be simplified to

$$ -\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = e_j(n)\phi^\prime (v_j(n)) $$

where $$ \phi^\prime $$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$$ -\frac{\partial\mathcal{E}(n)}{\partial v_j(n)} = \phi^\prime (v_j(n))\sum_k -\frac{\partial\mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n) $$.

This depends on the change in weights of the $$ k $$th nodes, which represent the output layer. So the hidden-layer weights are adjusted using the error terms of the output layer, propagated backwards through the connecting weights and scaled by the derivative of the activation function; this backward flow of error terms is what gives the algorithm the name backpropagation.

## History

### Timeline

- Circa 1800, Legendre (1805) and Gauss (1795) created the simplest feedforward network, which consists of a single weight layer with linear activation functions. It was trained by the least squares method for minimising mean squared error, also known as linear regression. Legendre and Gauss used it for the prediction of planetary movement from training data.
- In 1943, Warren McCulloch and Walter Pitts proposed the binary artificial neuron as a logical model of biological neural networks.
- In 1958, Frank Rosenblatt proposed the multilayered perceptron model, consisting of an input layer, a hidden layer with randomized weights that did not learn, and an output layer with learnable connections. R. D. Joseph (1960) mentions an even earlier perceptron-like device: "Farley and Clark of MIT Lincoln Laboratory actually preceded Rosenblatt in the development of a perceptron-like device." However, "they dropped the subject."
- In 1960, Joseph also discussed multilayer perceptrons with an adaptive hidden layer. Rosenblatt (1962) cited and adopted these ideas, also crediting work by H. D. Block and B. W. Knight. Unfortunately, these early efforts did not lead to a working learning algorithm for hidden units, i.e., deep learning.
- In 1965, Alexey Grigorevich Ivakhnenko and Valentin Lapa published Group Method of Data Handling, the first working deep learning algorithm, a method to train arbitrarily deep neural networks. It is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates." It was used to train an eight-layer neural net in 1971.
- In 1967, Shun'ichi Amari reported the first multilayered neural network trained by stochastic gradient descent, which was able to classify non-linearly separable pattern classes. Amari's student Saito conducted the computer experiments, using a five-layered feedforward network with two learning layers.
- In 1970, Seppo Linnainmaa published the modern form of backpropagation in his master's thesis. G. M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
- In 2003, interest in backpropagation networks returned due to the successes of deep learning being applied to language modelling by Yoshua Bengio with co-authors.
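As a concrete illustration of the update rules derived in the Learning section above, the following is a minimal C sketch that trains a one-hidden-layer network on the XOR function with stochastic gradient descent and the logistic activation. The network size, learning rate, epoch count and random initialization are illustrative choices (not taken from any referenced work), and the learned result depends on the random seed.

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_IN  2
#define N_HID 3
#define N_OUT 1

static double sigmoid(double v) { return 1.0 / (1.0 + exp(-v)); }

int main(void) {
    /* XOR training data (toy problem chosen for illustration) */
    const double X[4][N_IN] = {{0,0},{0,1},{1,0},{1,1}};
    const double T[4] = {0, 1, 1, 0};

    double w1[N_HID][N_IN + 1];   /* hidden-layer weights, last entry is the bias */
    double w2[N_OUT][N_HID + 1];  /* output-layer weights, last entry is the bias */
    srand(7);
    for (int j = 0; j < N_HID; j++)
        for (int i = 0; i <= N_IN; i++)
            w1[j][i] = rand() / (double)RAND_MAX - 0.5;
    for (int k = 0; k < N_OUT; k++)
        for (int j = 0; j <= N_HID; j++)
            w2[k][j] = rand() / (double)RAND_MAX - 0.5;

    const double eta = 0.5;       /* learning rate */
    for (int epoch = 0; epoch < 20000; epoch++) {
        for (int n = 0; n < 4; n++) {
            /* forward pass */
            double y_hid[N_HID], y_out[N_OUT];
            for (int j = 0; j < N_HID; j++) {
                double v = w1[j][N_IN];
                for (int i = 0; i < N_IN; i++) v += w1[j][i] * X[n][i];
                y_hid[j] = sigmoid(v);
            }
            for (int k = 0; k < N_OUT; k++) {
                double v = w2[k][N_HID];
                for (int j = 0; j < N_HID; j++) v += w2[k][j] * y_hid[j];
                y_out[k] = sigmoid(v);
            }
            /* backward pass: delta = -dE/dv for each node */
            double d_out[N_OUT], d_hid[N_HID];
            for (int k = 0; k < N_OUT; k++) {
                double e = T[n] - y_out[k];                 /* e_j(n) = d_j(n) - y_j(n) */
                d_out[k] = e * y_out[k] * (1.0 - y_out[k]); /* e * phi'(v), logistic phi */
            }
            for (int j = 0; j < N_HID; j++) {
                double s = 0.0;
                for (int k = 0; k < N_OUT; k++) s += d_out[k] * w2[k][j];
                d_hid[j] = y_hid[j] * (1.0 - y_hid[j]) * s; /* phi'(v) * sum_k delta_k * w_kj */
            }
            /* weight update: dw_ji = eta * delta_j * y_i (bias input is 1) */
            for (int k = 0; k < N_OUT; k++) {
                for (int j = 0; j < N_HID; j++) w2[k][j] += eta * d_out[k] * y_hid[j];
                w2[k][N_HID] += eta * d_out[k];
            }
            for (int j = 0; j < N_HID; j++) {
                for (int i = 0; i < N_IN; i++) w1[j][i] += eta * d_hid[j] * X[n][i];
                w1[j][N_IN] += eta * d_hid[j];
            }
        }
    }
    /* report the learned mapping */
    for (int n = 0; n < 4; n++) {
        double y_hid[N_HID], v;
        for (int j = 0; j < N_HID; j++) {
            v = w1[j][N_IN];
            for (int i = 0; i < N_IN; i++) v += w1[j][i] * X[n][i];
            y_hid[j] = sigmoid(v);
        }
        v = w2[0][N_HID];
        for (int j = 0; j < N_HID; j++) v += w2[0][j] * y_hid[j];
        printf("%.0f XOR %.0f -> %.3f\n", X[n][0], X[n][1], sigmoid(v));
    }
    return 0;
}
```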
### Linear regression

### Perceptron

If a threshold activation function is used, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Despite the limited computational power of a single unit with a linear threshold function, multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1].

Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the errors between calculated output and sample output data, and uses this to create an adjustment to the weights, thus implementing a form of gradient descent.

### Multilayer perceptron

A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network, consisting of fully connected neurons (hence the synonym sometimes used of fully connected network (FCN)), often with a nonlinear kind of activation function, organized in at least three layers, notable for being able to distinguish data that is not linearly separable.

## Other feedforward networks

Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.
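The delta rule mentioned in the Perceptron section above can be illustrated with a very small sketch: a single linear threshold unit is trained on the logical AND function. The target function, learning rate and epoch count are illustrative choices only.

```c
#include <stdio.h>

/* Delta-rule training of a single linear threshold unit (perceptron).
   The AND targets and all constants here are illustrative assumptions. */

int main(void) {
    const double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    const double t[4]    = {0, 0, 0, 1};     /* logical AND targets */
    double w[2] = {0.0, 0.0}, b = 0.0;       /* weights and bias    */
    const double eta = 0.1;                  /* learning rate       */

    for (int epoch = 0; epoch < 20; epoch++) {
        for (int n = 0; n < 4; n++) {
            double v = w[0]*x[n][0] + w[1]*x[n][1] + b;
            double y = v > 0 ? 1.0 : 0.0;    /* threshold activation          */
            double e = t[n] - y;             /* error for this training sample */
            /* delta rule: adjust each weight in proportion to its input */
            w[0] += eta * e * x[n][0];
            w[1] += eta * e * x[n][1];
            b    += eta * e;
        }
    }
    for (int n = 0; n < 4; n++) {
        double v = w[0]*x[n][0] + w[1]*x[n][1] + b;
        printf("%g AND %g -> %d\n", x[n][0], x[n][1], v > 0);
    }
    return 0;
}
```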
https://en.wikipedia.org/wiki/Feedforward_neural_network
In mathematics, computable numbers are the real numbers that can be computed to within any desired precision by a finite, terminating algorithm. They are also known as the recursive numbers, effective numbers, computable reals, or recursive reals. The concept of a computable real number was introduced by Émile Borel in 1912, using the intuitive notion of computability available at the time. Equivalent definitions can be given using μ-recursive functions, Turing machines, or λ-calculus as the formal representation of algorithms. The computable numbers form a real closed field and can be used in the place of real numbers for many, but not all, mathematical purposes.

## Informal definition

In the following, Marvin Minsky defines the numbers to be computed in a manner similar to those defined by Alan Turing in 1936, i.e., as "sequences of digits interpreted as decimal fractions" between 0 and 1. The key notions in the definition are (1) that some n is specified at the start, (2) for any n the computation only takes a finite number of steps, after which the machine produces the desired output and terminates. An alternate form of (2) – the machine successively prints all n of the digits on its tape, halting after printing the nth – emphasizes Minsky's observation: (3) that by use of a Turing machine, a finite definition – in the form of the machine's state table – is being used to define what is a potentially infinite string of decimal digits.

This is however not the modern definition, which only requires the result be accurate to within any given accuracy. The informal definition above is subject to a rounding problem called the table-maker's dilemma, whereas the modern definition is not.

## Formal definition

A real number a is computable if it can be approximated by some computable function $$ f:\mathbb{N}\to\mathbb{Z} $$ in the following manner: given any positive integer n, the function produces an integer f(n) such that:

$$ {f(n)-1\over n} \leq a \leq {f(n)+1\over n}. $$

A complex number is called computable if its real and imaginary parts are computable.

### Equivalent definitions

There are two similar definitions that are equivalent:

- There exists a computable function which, given any positive rational error bound $$ \varepsilon $$, produces a rational number r such that $$ |r - a| \leq \varepsilon. $$
- There is a computable sequence of rational numbers $$ q_i $$ converging to $$ a $$ such that $$ |q_i - q_{i+1}| < 2^{-i}\, $$ for each i.

There is another equivalent definition of computable numbers via computable Dedekind cuts. A computable Dedekind cut is a computable function $$ D\; $$ which, when provided with a rational number $$ r $$ as input, returns $$ D(r)=\mathrm{true}\; $$ or $$ D(r)=\mathrm{false}\; $$, satisfying the following conditions:

- $$ \exists r \; D(r)=\mathrm{true} $$
- $$ \exists r \; D(r)=\mathrm{false} $$
- $$ (D(r)=\mathrm{true}) \wedge (D(s)=\mathrm{false}) \Rightarrow r<s $$
- $$ D(r)=\mathrm{true} \Rightarrow \exists s>r,\; D(s)=\mathrm{true}. $$

An example is given by a program D that defines the cube root of 3. Assuming $$ q>0\; $$ this is defined by:

$$ p^3<3 q^3 \Rightarrow D(p/q)=\mathrm{true} $$
$$ p^3>3 q^3 \Rightarrow D(p/q)=\mathrm{false}. $$

A real number is computable if and only if there is a computable Dedekind cut D corresponding to it. The function D is unique for each computable number (although of course two different programs may provide the same function).
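The formal definition and the cube-root-of-3 example above can be illustrated with a short sketch. The function f below returns, for a positive integer n, the largest integer m with m^3 <= 3*n^3, so that (f(n)-1)/n <= cbrt(3) <= (f(n)+1)/n as the definition requires; the helper reuses the Dedekind-cut test p^3 < 3*q^3 from the example. The use of plain 64-bit integers and the linear search are simplifications for illustration, and they limit how large n can be.

```c
#include <stdio.h>

/* Dedekind-cut style test from the text: is p/q strictly below cbrt(3)?
   Assumes q > 0 and values small enough that p^3 and 3*q^3 fit in long long. */
static int cut_below(long long p, long long q) {
    return p * p * p < 3 * q * q * q;
}

/* f(n): the integer approximation of cbrt(3) required by the formal definition,
   i.e. the largest m with m^3 <= 3*n^3 (so m = floor(n * cbrt(3))). */
static long long f(long long n) {
    long long m = 0;
    while (cut_below(m + 1, n))   /* advance while (m+1)/n is still below cbrt(3) */
        m++;
    return m;
}

int main(void) {
    for (long long n = 1; n <= 100000; n *= 10) {
        long long m = f(n);
        printf("n = %6lld: %lld/%lld <= cbrt(3) <= %lld/%lld\n",
               n, m - 1, n, m + 1, n);
    }
    return 0;
}
```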
## Properties ### Not computably enumerable Assigning a Gödel number to each Turing machine definition produces a subset $$ S $$ of the natural numbers corresponding to the computable numbers and identifies a surjection from $$ S $$ to the computable numbers. There are only countably many Turing machines, showing that the computable numbers are subcountable. The set $$ S $$ of these Gödel numbers, however, is not computably enumerable (and consequently, neither are subsets of $$ S $$ that are defined in terms of it). This is because there is no algorithm to determine which Gödel numbers correspond to Turing machines that produce computable reals. In order to produce a computable real, a Turing machine must compute a total function, but the corresponding decision problem is in Turing degree 0′′. Consequently, there is no surjective computable function from the natural numbers to the set $$ S $$ of machines representing computable reals, and Cantor's diagonal argument cannot be used constructively to demonstrate uncountably many of them. While the set of real numbers is uncountable, the set of computable numbers is classically countable and thus almost all real numbers are not computable. Here, for any given computable number $$ x, $$ the well ordering principle provides that there is a minimal element in $$ S $$ which corresponds to $$ x $$ , and therefore there exists a subset consisting of the minimal elements, on which the map is a bijection. The inverse of this bijection is an injection into the natural numbers of the computable numbers, proving that they are countable. But, again, this subset is not computable, even though the computable reals are themselves ordered. ### Properties as a field The arithmetical operations on computable numbers are themselves computable in the sense that whenever real numbers a and b are computable then the following real numbers are also computable: a + b, a - b, ab, and a/b if b is nonzero. These operations are actually uniformly computable; for example, there is a Turing machine which on input (A,B, $$ \epsilon $$ ) produces output r, where A is the description of a Turing machine approximating a, B is the description of a Turing machine approximating b, and r is an $$ \epsilon $$ approximation of a + b. The fact that computable real numbers form a field was first proved by Henry Gordon Rice in 1954. Computable reals however do not form a computable field, because the definition of a computable field requires effective equality. ### Non-computability of the ordering The order relation on the computable numbers is not computable. Let A be the description of a Turing machine approximating the number $$ a $$ . Then there is no Turing machine which on input A outputs "YES" if $$ a > 0 $$ and "NO" if $$ a \le 0. $$ To see why, suppose the machine described by A keeps outputting 0 as $$ \epsilon $$ approximations. It is not clear how long to wait before deciding that the machine will never output an approximation which forces a to be positive. Thus the machine will eventually have to guess that the number will equal 0, in order to produce an output; the sequence may later become different from 0. This idea can be used to show that the machine is incorrect on some sequences if it computes a total function. A similar problem occurs when the computable reals are represented as Dedekind cuts. The same holds for the equality relation: the equality test is not computable. 
While the full order relation is not computable, the restriction of it to pairs of unequal numbers is computable. That is, there is a program that takes as input two Turing machines A and B approximating numbers $$ a $$ and $$ b $$ , where $$ a \ne b $$ , and outputs whether $$ a < b $$ or $$ a > b. $$ It is sufficient to use $$ \epsilon $$ -approximations where $$ \epsilon < |b-a|/2, $$ so by taking increasingly small $$ \epsilon $$ (approaching 0), one eventually can decide whether $$ a < b $$ or $$ a > b. $$ ### Other properties The computable real numbers do not share all the properties of the real numbers used in analysis. For example, the least upper bound of a bounded increasing computable sequence of computable real numbers need not be a computable real number. A sequence with this property is known as a Specker sequence, as the first construction is due to Ernst Specker in 1949. Despite the existence of counterexamples such as these, parts of calculus and real analysis can be developed in the field of computable numbers, leading to the study of computable analysis. Every computable number is arithmetically definable, but not vice versa. There are many arithmetically definable, noncomputable real numbers, including: - any number that encodes the solution of the halting problem (or any other undecidable problem) according to a chosen encoding scheme. - Chaitin's constant, $$ \Omega $$ , which is a type of real number that is Turing equivalent to the halting problem. Both of these examples in fact define an infinite set of definable, uncomputable numbers, one for each universal Turing machine. A real number is computable if and only if the set of natural numbers it represents (when written in binary and viewed as a characteristic function) is computable. The set of computable real numbers (as well as every countable, densely ordered subset of computable reals without ends) is order-isomorphic to the set of rational numbers. ## Digit strings and the Cantor and Baire spaces Turing's original paper defined computable numbers as follows: (The decimal expansion of a only refers to the digits following the decimal point.) Turing was aware that this definition is equivalent to the $$ \epsilon $$ -approximation definition given above. The argument proceeds as follows: if a number is computable in the Turing sense, then it is also computable in the $$ \epsilon $$ sense: if $$ n > \log_{10} (1/\epsilon) $$ , then the first n digits of the decimal expansion for a provide an $$ \epsilon $$ approximation of a. For the converse, we pick an $$ \epsilon $$ computable real number a and generate increasingly precise approximations until the nth digit after the decimal point is certain. This always generates a decimal expansion equal to a but it may improperly end in an infinite sequence of 9's in which case it must have a finite (and thus computable) proper decimal expansion. Unless certain topological properties of the real numbers are relevant, it is often more convenient to deal with elements of $$ 2^{\omega} $$ (total 0,1 valued functions) instead of reals numbers in $$ [0,1] $$ . The members of $$ 2^{\omega} $$ can be identified with binary decimal expansions, but since the decimal expansions $$ .d_1d_2\ldots d_n0111\ldots $$ and $$ .d_1d_2\ldots d_n10 $$ denote the same real number, the interval $$ [0,1] $$ can only be bijectively (and homeomorphically under the subset topology) identified with the subset of $$ 2^{\omega} $$ not ending in all 1's. 
Note that this property of decimal expansions means that it is impossible to effectively identify the computable real numbers defined in terms of a decimal expansion and those defined in the $$ \epsilon $$ approximation sense. Hirst has shown that there is no algorithm which takes as input the description of a Turing machine which produces $$ \epsilon $$ approximations for the computable number a, and produces as output a Turing machine which enumerates the digits of a in the sense of Turing's definition. Similarly, it means that the arithmetic operations on the computable reals are not effective on their decimal representations as when adding decimal numbers. In order to produce one digit, it may be necessary to look arbitrarily far to the right to determine if there is a carry to the current location. This lack of uniformity is one reason why the contemporary definition of computable numbers uses $$ \epsilon $$ approximations rather than decimal expansions. However, from a computability theoretic or measure theoretic perspective, the two structures $$ 2^{\omega} $$ and $$ [0,1] $$ are essentially identical. Thus, computability theorists often refer to members of $$ 2^{\omega} $$ as reals. While $$ 2^{\omega} $$ is totally disconnected, for questions about $$ \Pi^0_1 $$ classes or randomness it is easier to work in $$ 2^{\omega} $$ . Elements of $$ \omega^{\omega} $$ are sometimes called reals as well and though containing a homeomorphic image of $$ \mathbb{R} $$ , $$ \omega^{\omega} $$ isn't even locally compact (in addition to being totally disconnected). This leads to genuine differences in the computational properties. For instance the $$ x \in \mathbb{R} $$ satisfying $$ \forall(n \in \omega)\phi(x,n) $$ , with $$ \phi(x,n) $$ quantifier free, must be computable while the unique $$ x \in \omega^{\omega} $$ satisfying a universal formula may have an arbitrarily high position in the hyperarithmetic hierarchy. ## Use in place of the reals The computable numbers include the specific real numbers which appear in practice, including all real algebraic numbers, as well as e, π, and many other transcendental numbers. Though the computable reals exhaust those reals we can calculate or approximate, the assumption that all reals are computable leads to substantially different conclusions about the real numbers. The question naturally arises of whether it is possible to dispose of the full set of reals and use computable numbers for all of mathematics. This idea is appealing from a constructivist point of view, and has been pursued by the Russian school of constructive mathematics. To actually develop analysis over computable numbers, some care must be taken. For example, if one uses the classical definition of a sequence, the set of computable numbers is not closed under the basic operation of taking the supremum of a bounded sequence (for example, consider a Specker sequence, see the section above). This difficulty is addressed by considering only sequences which have a computable modulus of convergence. The resulting mathematical theory is called computable analysis. ## Implementations of exact arithmetic Computer packages representing real numbers as programs computing approximations have been proposed as early as 1985, under the name "exact arithmetic". Modern examples include the CoRN library (Coq), and the RealLib package (C++). A related line of work is based on taking a real RAM program and running it with rational or floating-point numbers of sufficient precision, such as the package.
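The comparison procedure for pairs of unequal computable reals described earlier translates directly into code. In the sketch below a computable real is modelled as a function that maps an error bound eps to an approximation within eps; the two example numbers (a truncated series for e and a library square root) and the use of floating point in place of exact rationals are illustrative simplifications.

```c
#include <math.h>
#include <stdio.h>

/* A computable real is modelled as a function returning an approximation
   within the requested error bound eps > 0. */
typedef double (*computable_real)(double eps);

/* Example number: e, via its series (truncation error stays below eps). */
static double approx_e(double eps) {
    double sum = 1.0, term = 1.0;
    for (int k = 1; term > eps / 2.0; k++) {
        term /= k;
        sum += term;
    }
    return sum;
}

/* Example number: sqrt(8); the library value is far more accurate than any
   eps used here, so it trivially satisfies the approximation contract. */
static double approx_sqrt8(double eps) {
    (void)eps;
    return sqrt(8.0);
}

/* Decides a < b or a > b, assuming a != b (otherwise the loop never ends). */
static int less_than(computable_real a, computable_real b) {
    for (double eps = 1.0; ; eps /= 2.0) {
        double ra = a(eps), rb = b(eps);
        if (fabs(ra - rb) > 2.0 * eps)   /* approximation intervals are disjoint */
            return ra < rb;
    }
}

int main(void) {
    printf("e %s sqrt(8)\n", less_than(approx_e, approx_sqrt8) ? "<" : ">");
    return 0;
}
```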
https://en.wikipedia.org/wiki/Computable_number
In computer science, a red–black tree is a self-balancing binary search tree data structure noted for fast storage and retrieval of ordered information. The nodes in a red-black tree hold an extra "color" bit, often drawn as red and black, which help ensure that the tree is always approximately balanced. When the tree is modified, the new tree is rearranged and "repainted" to restore the coloring properties that constrain how unbalanced the tree can become in the worst case. The properties are designed such that this rearranging and recoloring can be performed efficiently. The (re-)balancing is not perfect, but guarantees searching in $$ O(\log n) $$ time, where $$ n $$ is the number of entries in the tree. The insert and delete operations, along with tree rearrangement and recoloring, also execute in $$ O(\log n) $$ time. Tracking the color of each node requires only one bit of information per node because there are only two colors (due to memory alignment present in some programming languages, the real memory consumption may differ). The tree does not contain any other data specific to it being a red–black tree, so its memory footprint is almost identical to that of a classic (uncolored) binary search tree. In some cases, the added bit of information can be stored at no added memory cost. ## History In 1972, Rudolf Bayer invented a data structure that was a special order-4 case of a B-tree. These trees maintained all paths from root to leaf with the same number of nodes, creating perfectly balanced trees. However, they were not binary search trees. Bayer called them a "symmetric binary B-tree" in his paper and later they became popular as 2–3–4 trees or even 2–3 trees. In a 1978 paper, "A Dichromatic Framework for Balanced Trees", Leonidas J. Guibas and Robert Sedgewick derived the red–black tree from the symmetric binary B-tree. The color "red" was chosen because it was the best-looking color produced by the color laser printer available to the authors while working at Xerox PARC. Another response from Guibas states that it was because of the red and black pens available to them to draw the trees. In 1993, Arne Andersson introduced the idea of a right leaning tree to simplify insert and delete operations. In 1999, Chris Okasaki showed how to make the insert operation purely functional. Its balance function needed to take care of only 4 unbalanced cases and one default balanced case. The original algorithm used 8 unbalanced cases, but reduced that to 6 unbalanced cases. Sedgewick showed that the insert operation can be implemented in just 46 lines of Java. In 2008, Sedgewick proposed the left-leaning red–black tree, leveraging Andersson’s idea that simplified the insert and delete operations. Sedgewick originally allowed nodes whose two children are red, making his trees more like 2–3–4 trees, but later this restriction was added, making new trees more like 2–3 trees. Sedgewick implemented the insert algorithm in just 33 lines, significantly shortening his original 46 lines of code. ## Terminology The black depth of a node is defined as the number of black nodes from the root to that node (i.e. the number of black ancestors). The black height of a red–black tree is the number of black nodes in any path from the root to the leaves, which, by requirement 4, is constant (alternatively, it could be defined as the black depth of any leaf node). The black height of a node is the black height of the subtree rooted by it. 
In this article, the black height of a null node shall be set to 0, because its subtree is empty as suggested by the example figure, and its tree height is also 0.

## Properties

In addition to the requirements imposed on a binary search tree, the following requirements must be satisfied by a red–black tree:

1. Every node is either red or black.
2. All null nodes are considered black.
3. A red node does not have a red child.
4. Every path from a given node to any of its leaf nodes goes through the same number of black nodes.
5. (Conclusion) If a node N has exactly one child, the child must be red. If the child were black, its leaves would sit at a different black depth than N's null node (which is considered black by rule 2), violating requirement 4.

Some authors, e.g. Cormen et al., state "the root is black" as a fifth requirement; but not Mehlhorn & Sanders or Sedgewick & Wayne. Since the root can always be changed from red to black, this rule has little effect on analysis. This article also omits it, because it slightly disturbs the recursive algorithms and proofs. As an example, every perfect binary tree that consists only of black nodes is a red–black tree.

The read-only operations, such as search or tree traversal, do not affect any of the requirements. In contrast, the modifying operations insert and delete easily maintain requirements 1 and 2, but with respect to the other requirements some extra effort must be made, to avoid introducing a violation of requirement 3, called a red-violation, or of requirement 4, called a black-violation.

The requirements enforce a critical property of red–black trees: the path from the root to the farthest leaf is no more than twice as long as the path from the root to the nearest leaf. The result is that the tree is height-balanced. Since operations such as inserting, deleting, and finding values require worst-case time proportional to the height $$ h $$ of the tree, this upper bound on the height allows red–black trees to be efficient in the worst case, namely logarithmic in the number $$ n $$ of entries, i.e. $$ h \in O(\log n) $$ (a property which is shared by all self-balancing trees, e.g., the AVL tree or B-tree, but not by ordinary binary search trees). For a mathematical proof see the section Proof of bounds below.

Red–black trees, like all binary search trees, allow quite efficient sequential access (e.g. in-order traversal, that is: in the order Left–Root–Right) of their elements. But they also support asymptotically optimal direct access via a traversal from root to leaf, resulting in $$ O(\log n) $$ search time.

## Analogy to 2–3–4 trees

Red–black trees are similar in structure to 2–3–4 trees, which are B-trees of order 4. In 2–3–4 trees, each node can contain between 1 and 3 values and have between 2 and 4 children. These 2–3–4 nodes correspond to black node – red children groups in red-black trees, as shown in figure 1. It is not a 1-to-1 correspondence, because 3-nodes have two equivalent representations: the red child may lie either to the left or right. The left-leaning red-black tree variant makes this relationship exactly 1-to-1, by only allowing the left child representation. Since every 2–3–4 node has a corresponding black node, invariant 4 of red-black trees is equivalent to saying that the leaves of a 2–3–4 tree all lie at the same level.

Despite structural similarities, operations on red–black trees are more economical than on B-trees. B-trees require management of vectors of variable length, whereas red-black trees are simply binary trees.
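The requirements listed above can be checked mechanically. The following is a minimal sketch of a validator that computes the black height of a subtree while checking for red- and black-violations; the simplified node layout (without parent pointer, which the check does not need) mirrors, but is not identical to, the structures used in the Implementation section below.

```c
#include <stdio.h>

enum Color { BLACK, RED };

struct Node {
    struct Node *left, *right;
    enum Color color;
    int key;
};

/* Returns the black height of the subtree rooted at n (null nodes count as
   black with height 0, as in the Terminology section), or -1 if the subtree
   violates requirement 3 (red-violation) or requirement 4 (black-violation). */
static int check(const struct Node *n) {
    if (n == NULL)
        return 0;                                  /* requirement 2 */
    int hl = check(n->left);
    int hr = check(n->right);
    if (hl < 0 || hr < 0 || hl != hr)
        return -1;                                 /* black-violation */
    if (n->color == RED &&
        ((n->left  && n->left->color  == RED) ||
         (n->right && n->right->color == RED)))
        return -1;                                 /* red-violation */
    return hl + (n->color == BLACK ? 1 : 0);
}

int main(void) {
    /* tiny example tree: black root 2 with red children 1 and 3 */
    struct Node a = {NULL, NULL, RED, 1};
    struct Node c = {NULL, NULL, RED, 3};
    struct Node b = {&a, &c, BLACK, 2};
    printf("black height: %d\n", check(&b));       /* prints 1 */
    return 0;
}
```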
## Applications and related data structures Red–black trees offer worst-case guarantees for insertion time, deletion time, and search time. Not only does this make them valuable in time-sensitive applications such as real-time applications, but it makes them valuable building blocks in other data structures that provide worst-case guarantees. For example, many data structures used in computational geometry are based on red–black trees, and the Completely Fair Scheduler and epoll system call of the Linux kernel use red–black trees. The AVL tree is another structure supporting $$ O(\log n) $$ search, insertion, and removal. AVL trees can be colored red–black, and thus are a subset of red-black trees. The worst-case height of AVL is 0.720 times the worst-case height of red-black trees, so AVL trees are more rigidly balanced. The performance measurements of Ben Pfaff with realistic test cases in 79 runs find AVL to RB ratios between 0.677 and 1.077, median at 0.947, and geometric mean 0.910. The performance of WAVL trees lie in between AVL trees and red-black trees. Red–black trees are also particularly valuable in functional programming, where they are one of the most common persistent data structures, used to construct associative arrays and sets that can retain previous versions after mutations. The persistent version of red–black trees requires $$ O(\log n) $$ space for each insertion or deletion, in addition to time. For every 2–3–4 tree, there are corresponding red–black trees with data elements in the same order. The insertion and deletion operations on 2–3–4 trees are also equivalent to color-flipping and rotations in red–black trees. This makes 2–3–4 trees an important tool for understanding the logic behind red–black trees, and this is why many introductory algorithm texts introduce 2–3–4 trees just before red–black trees, even though 2–3–4 trees are not often used in practice. In 2008, Sedgewick introduced a simpler version of the red–black tree called the left-leaning red–black tree by eliminating a previously unspecified degree of freedom in the implementation. The LLRB maintains an additional invariant that all red links must lean left except during inserts and deletes. Red–black trees can be made isometric to either 2–3 trees, or 2–3–4 trees, for any sequence of operations. The 2–3–4 tree isometry was described in 1978 by Sedgewick. With 2–3–4 trees, the isometry is resolved by a "color flip," corresponding to a split, in which the red color of two children nodes leaves the children and moves to the parent node. The original description of the tango tree, a type of tree optimised for fast searches, specifically uses red–black trees as part of its data structure. As of Java 8, the HashMap has been modified such that instead of using a LinkedList to store different elements with colliding hashcodes, a red–black tree is used. This results in the improvement of time complexity of searching such an element from $$ O(m) $$ to $$ O(\log m) $$ where $$ m $$ is the number of elements with colliding hashes. ## Implementation The read-only operations, such as search or tree traversal, on a red–black tree require no modification from those used for binary search trees, because every red–black tree is a special case of a simple binary search tree. However, the immediate result of an insertion or removal may violate the properties of a red–black tree, the restoration of which is called rebalancing so that red–black trees become self-balancing. Rebalancing (i.e. 
color changes) has a worst-case time complexity of $$ O(\log n) $$ and average of $$ O(1) $$, though these are very quick in practice. Additionally, rebalancing takes no more than three tree rotations (two for insertion).

This is an example implementation of insert and remove in C. Below are the data structures and the `rotate_subtree` helper function used in the insert and remove examples.

```c
enum Color { BLACK, RED };
enum Dir { LEFT, RIGHT };

// red-black tree node
struct Node {
    struct Node *parent;    // null for the root node
    union {                 // union so we can use ->left/->right or ->child[0]/->child[1]
        struct {
            struct Node *left;
            struct Node *right;
        };
        struct Node *child[2];
    };
    enum Color color;
    int key;
};

struct Tree {
    struct Node *root;
};

#define DIRECTION(N) (N == N->parent->right ? RIGHT : LEFT)

struct Node *rotate_subtree(struct Tree *tree, struct Node *sub, enum Dir dir) {
    struct Node *sub_parent = sub->parent;
    struct Node *new_root = sub->child[1 - dir];    // 1 - dir is the opposite direction
    struct Node *new_child = new_root->child[dir];

    sub->child[1 - dir] = new_child;
    if (new_child)
        new_child->parent = sub;
    new_root->child[dir] = sub;
    new_root->parent = sub_parent;
    sub->parent = new_root;
    if (sub_parent)
        sub_parent->child[sub == sub_parent->right] = new_root;
    else
        tree->root = new_root;
    return new_root;
}
```

### Notes to the sample code and diagrams of insertion and removal

The proposal breaks down both insertion and removal (not mentioning some very simple cases) into six constellations of nodes, edges, and colors, which are called cases. The proposal contains, for both insertion and removal, exactly one case that advances one black level closer to the root and loops; the other five cases rebalance the tree on their own. The more complicated cases are pictured in a diagram.

- In the diagrams, distinct symbols stand for a red node and for a (non-NULL) black node (of black height ≥ 1); a further symbol stands for a non-NULL node whose color is either red or black, but is the same color throughout the same diagram. NULL nodes are not represented in the diagrams.
- The variable N denotes the current node, which is labeled N (in red or black) in the diagrams.
- A diagram contains three columns and two to four actions. The left column shows the first iteration, the right column the higher iterations, the middle column shows the segmentation of a case into its different actions.
  1. The action "entry" shows the constellation of nodes with their colors which defines a case and mostly violates some of the requirements. A blue border rings the current node N and the other nodes are labeled according to their relation to N.
  2. If a rotation is considered useful, this is pictured in the next action, which is labeled "rotation".
  3. If some recoloring is considered useful, this is pictured in the next action, which is labeled "color".
  4. If there is still some need to repair, the cases make use of code of other cases, and this after a reassignment of the current node N, which then again carries a blue ring and relative to which other nodes may have to be reassigned also. This action is labeled "reassign". For both insert and delete, there is (exactly) one case which iterates one black level closer to the root; then the reassigned constellation satisfies the respective loop invariant.
- A possibly numbered triangle with a black circle atop represents a red–black subtree (connected to its parent according to requirement 3) with a black height equal to the iteration level minus one, i.e. zero in the first iteration.
Its root may be red or black. A possibly numbered triangle represents a red–black subtree with a black height one less, i.e. its parent has black height zero in the second iteration.

Remark: For simplicity, the sample code uses the disjunction `U == NULL || U->color == BLACK // considered black` and the conjunction `U != NULL && U->color == RED // not considered black`. Thereby, it must be kept in mind that neither statement is evaluated in total if `U == NULL`; then in both cases `U->color` is not touched (see Short-circuit evaluation). (The comment `considered black` is in accordance with requirement 2.) The related `if`-statements have to occur far less frequently if the proposal is realised.

### Insertion

Insertion begins by placing the new (non-NULL) node, say N, at the position in the binary search tree of a NULL node whose in-order predecessor's key compares less than the new node's key, which in turn compares less than the key of its in-order successor. (Frequently, this positioning is the result of a search within the tree immediately preceding the insert operation, and consists of a node `P` together with a direction `dir` such that `P->child[dir] == NULL`.) The newly inserted node is temporarily colored red so that all paths contain the same number of black nodes as before. But if its parent, say P, is also red then this action introduces a red-violation.

```c
// parent is optional
void insert(struct Tree *tree, struct Node *node, struct Node *parent, enum Dir dir) {
    node->color = RED;
    node->parent = parent;
    if (!parent) {
        tree->root = node;
        return;
    }
    parent->child[dir] = node;

    // rebalance the tree
    do {
        // Case #1
        if (parent->color == BLACK)
            return;
        struct Node *grandparent = parent->parent;
        if (!grandparent) {
            // Case #4
            parent->color = BLACK;
            return;
        }
        dir = DIRECTION(parent);
        struct Node *uncle = grandparent->child[1 - dir];
        if (!uncle || uncle->color == BLACK) {
            if (node == parent->child[1 - dir]) {
                // Case #5
                rotate_subtree(tree, parent, dir);
                node = parent;
                parent = grandparent->child[dir];
            }
            // Case #6
            rotate_subtree(tree, grandparent, 1 - dir);
            parent->color = BLACK;
            grandparent->color = RED;
            return;
        }
        // Case #2
        parent->color = BLACK;
        uncle->color = BLACK;
        grandparent->color = RED;
        node = grandparent;
    } while ((parent = node->parent) != NULL);
    // Case #3
    return;
}
```

The rebalancing loop of the insert operation has the following invariants:

- Node is the current node, initially the insertion node.
- Node is red at the beginning of each iteration.
- Requirement 3 is satisfied for all pairs node←parent with the possible exception node←parent when parent is also red (a red-violation at node).
- All other properties (including requirement 4) are satisfied throughout the tree.

#### Notes to the insert diagrams

(Synopsis table of the insert cases I1–I6 not rendered here: for each before-constellation of the colors of P, G, U and the child direction x, it lists the applied rotation, the assignment of N, the after-constellation, the next case, and the change Δh in black levels.) This synopsis shows in its before columns that all possible cases of constellations are covered. (The same partitioning is found in Ben Pfaff.)

- In the diagrams, P is used for N's parent, G for its grandparent, and U for its uncle. In the table, "—" indicates the root.
- The diagrams show the parent node P as the left child of its parent G even though it is possible for P to be on either side. The sample code covers both possibilities by means of the side variable `dir`.
- The diagrams show the cases where P is red also, the red-violation.
- The column x indicates the change in child direction, i.e.
o (for "outer") means that P and N are both left or both right children, whereas i (for "inner") means that the child direction changes from P's to N's.

- The column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored. So in case I2 the sample code covers both possibilities of child directions of N, although the corresponding diagram shows only one.
- The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
- The column rotation indicates whether a rotation contributes to the rebalancing.
- The column assignment shows an assignment of N before entering a subsequent step. This possibly induces a reassignment of the other nodes P, G, U also.
- If something has been changed by the case, this is shown in the column group after.
- A mark in the column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
- In case I2 the problem of rebalancing is escalated $$ \Delta h=2 $$ tree levels or 1 black level higher in the tree, in that the grandparent G becomes the new current node N. So it takes maximally $$ \tfrac{h}2 $$ steps of iteration to repair the tree (where $$ h $$ is the height of the tree). Because the probability of escalation decreases exponentially with each step, the total rebalancing cost is constant on average, indeed amortized constant.
- Rotations occur in cases I6 and I5 + I6 – outside the loop. Therefore, at most two rotations occur in total.

#### Insert case 1

The current node's parent P is black, so requirement 3 holds. Requirement 4 holds also according to the loop invariant.

#### Insert case 2

If both the parent P and the uncle U are red, then both of them can be repainted black and the grandparent G becomes red for maintaining requirement 4. Since any path through the parent or uncle must pass through the grandparent, the number of black nodes on these paths has not changed. However, the grandparent G may now violate requirement 3, if it has a red parent. After relabeling G to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 2 tree levels) higher.

#### Insert case 3

Insert case 2 has been executed for $$ \tfrac{h-1}2 $$ times and the total height of the tree has increased by 1. The current node N is the (red) root of the tree, and all RB-properties are satisfied.

#### Insert case 4

The parent P is red and the root. Because N is also red, requirement 3 is violated. But after switching P's color the tree is in RB-shape. The black height of the tree increases by 1.

#### Insert case 5

The parent P is red but the uncle U is black. The ultimate goal is to rotate the parent node P to the grandparent position, but this will not work if N is an "inner" grandchild of G (i.e., if N is the left child of the right child of G or the right child of the left child of G). A rotation at P switches the roles of the current node N and its parent P. The rotation adds paths through N (those in the subtree labeled 2, see diagram) and removes paths through P (those in the subtree labeled 4). But both P and N are red, so requirement 4 is preserved. Requirement 3 is restored in case 6.

#### Insert case 6

The current node N is now certain to be an "outer" grandchild of G (left of left child or right of right child). Now a rotation at G puts P in place of G, making P the parent of N and G.
G is black and its former child P is red, since requirement 3 was violated. After switching the colors of P and G the resulting tree satisfies requirement 3. Requirement 4 also remains satisfied, since all paths that went through the black G now go through the black P.

Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place.

### Removal

#### Simple cases

- When the deleted node has 2 children (non-NULL), then we can swap its value with its in-order successor (the leftmost child of the right subtree), and then delete the successor instead. Since the successor is leftmost, it can only have a right child (non-NULL) or no child at all.
- When the deleted node has only 1 child (non-NULL), just replace the node with its child, and color it black. The single child (non-NULL) must be red according to conclusion 5, and the deleted node must be black according to requirement 3.
- When the deleted node has no children (both NULL) and is the root, replace it with NULL. The tree is empty.
- When the deleted node has no children (both NULL), and is red, simply remove the leaf node.
- When the deleted node has no children (both NULL), and is black, deleting it will create an imbalance, and requires a rebalance, as covered in the next section.

#### Removal of a black non-root leaf

The complex case is when N is not the root, colored black and has no proper child (⇔ only NULL children). In the first iteration, N is replaced by NULL.

```c
void remove(struct Tree *tree, struct Node *node) {
    struct Node *parent = node->parent;
    struct Node *sibling;
    struct Node *close_nephew;
    struct Node *distant_nephew;
    enum Dir dir = DIRECTION(node);
    parent->child[dir] = NULL;
    goto start_balance;

    do {
        dir = DIRECTION(node);
start_balance:
        sibling = parent->child[1 - dir];
        distant_nephew = sibling->child[1 - dir];
        close_nephew = sibling->child[dir];
        if (sibling->color == RED) {
            // Case #3
            rotate_subtree(tree, parent, dir);
            parent->color = RED;
            sibling->color = BLACK;
            sibling = close_nephew;
            distant_nephew = sibling->child[1 - dir];
            if (distant_nephew && distant_nephew->color == RED)
                goto case_6;
            close_nephew = sibling->child[dir];
            if (close_nephew && close_nephew->color == RED)
                goto case_5;
            // Case #4
            sibling->color = RED;
            parent->color = BLACK;
            return;
        }
        if (distant_nephew && distant_nephew->color == RED)
            goto case_6;
        if (close_nephew && close_nephew->color == RED)
            goto case_5;
        if (parent->color == RED) {
            // Case #4
            sibling->color = RED;
            parent->color = BLACK;
            return;
        }
        // Case #2
        sibling->color = RED;
        node = parent;
    } while ((parent = node->parent) != NULL);
    // Case #1: node is now the root, the deletion is complete
    return;

case_5:
    rotate_subtree(tree, sibling, 1 - dir);
    sibling->color = RED;
    close_nephew->color = BLACK;
    distant_nephew = sibling;
    sibling = close_nephew;
case_6:
    rotate_subtree(tree, parent, dir);
    sibling->color = parent->color;
    parent->color = BLACK;
    distant_nephew->color = BLACK;
    return;
}
```

The rebalancing loop of the delete operation has the following invariant:

- At the beginning of each iteration the black height of N equals the iteration number minus one, which means that in the first iteration it is zero and that N is a true black node in higher iterations.
- The number of black nodes on the paths through N is one less than before the deletion, whereas it is unchanged on all other paths, so that there is a black-violation at P if other paths exist.
- All other properties (including requirement 3) are satisfied throughout the tree.

#### Notes to the delete diagrams

Delete synopsis (the color constellations of P, C, S and D that make up the before and after column groups are shown in the corresponding diagrams):

| case | rotation | assignment | next | Δh |
| ---- | -------- | ---------- | ---- | -- |
| D1 | | | — | |
| D2 | | N := P | ? | 1 |
| D3 | P↶S | N := N | D6, D5 or D4 | 0 |
| D4 | | | — | 0 |
| D5 | C↷S | N := N | D6 | 0 |
| D6 | P↶S | | — | 0 |

This synopsis shows in its before columns that all possible cases of color constellations are covered.

- In the diagrams below, P is used for N’s parent, S for the sibling of N, C (meaning close nephew) for S’s child in the same direction as N, and D (meaning distant nephew) for S’s other child (S cannot be a NULL node in the first iteration, because it must have black height one, which was the black height of N before its deletion, but C and D may be NULL nodes).
- The diagrams show the current node N as the left child of its parent P even though it is possible for N to be on either side. The code samples cover both possibilities by means of the side variable `dir`.
- At the beginning (in the first iteration) of removal, N is the NULL node replacing the node to be deleted. Because its location in the parent node is the only thing of importance, it is symbolised in the left column of the delete diagrams by a special marker (meaning: the current node N is a NULL node and left child). As the operation proceeds also proper nodes (of black height ≥ 1) may become current (see e.g. case 2).
- By counting the black bullets in a delete diagram it can be observed that the paths through N have one bullet less than the other paths. This means a black-violation at P, if it exists.
- The color constellation in column group before defines the case, whose name is given in the column case. Thereby possible values in cells left empty are ignored.
- The rows in the synopsis are ordered such that the coverage of all possible RB cases is easily comprehensible.
- The column rotation indicates whether a rotation contributes to the rebalancing.
- The column assignment shows an assignment of N before entering a subsequent iteration step. This possibly induces a reassignment of the other nodes P, C, S, D also.
- If something has been changed by the case, this is shown in the column group after.
- A — entry in column next signifies that the rebalancing is complete with this step. If the column after determines exactly one case, this case is given as the subsequent one, otherwise there are question marks.
- The loop in case 2 is where the problem of rebalancing is escalated $$ \Delta h=1 $$ level higher in the tree, in that the parent P becomes the new current node N. So it takes maximally $$ h $$ iterations to repair the tree (where $$ h $$ is the height of the tree). Because the probability of escalation decreases exponentially with each iteration the total rebalancing cost is constant on average, indeed amortized constant. (Just as an aside, Mehlhorn & Sanders point out: "AVL trees do not support constant amortized update costs." This is true for the rebalancing after a deletion, but not for AVL insertion.)
- Out of the body of the loop there are exiting branches to the cases 3, 6, 5, 4, and 1; the section "Delete case 3" of its own has three different exiting branches to the cases 6, 5 and 4.
- Rotations occur in cases 6 and 5 + 6 and 3 + 5 + 6 – all outside the loop. Therefore, at most three rotations occur in total.

#### Delete case 1

The current node N is the new root. One black node has been removed from every path, so the RB-properties are preserved. The black height of the tree decreases by 1.

#### Delete case 2

P, S, and S’s children are black.
After painting S red all paths passing through S, which are precisely those paths not passing through N, have one less black node. Now all paths in the subtree rooted by P have the same number of black nodes, but one fewer than the paths that do not pass through P, so requirement 4 may still be violated. After relabeling P to N the loop invariant is fulfilled so that the rebalancing can be iterated on one black level (= 1 tree level) higher.

#### Delete case 3

The sibling S is red, so P and the nephews C and D have to be black. A rotation at P turns S into N’s grandparent. Then after reversing the colors of P and S, the path through N is still short one black node. But N now has a red parent P and after the reassignment a black sibling S, so the transformations in cases 4, 5, or 6 are able to restore the RB-shape.

#### Delete case 4

The sibling S and S’s children are black, but P is red. Exchanging the colors of S and P does not affect the number of black nodes on paths going through S, but it does add one to the number of black nodes on paths going through N, making up for the deleted black node on those paths.

#### Delete case 5

The sibling S is black, S’s close child C is red, and S’s distant child D is black. After a rotation at S the nephew C becomes S’s parent and N’s new sibling. The colors of S and C are exchanged. All paths still have the same number of black nodes, but now N has a black sibling whose distant child is red, so the constellation is fit for case D6. Neither N nor its parent P are affected by this transformation, and P may be red or black.

#### Delete case 6

The sibling S is black, S’s distant child D is red. After a rotation at P the sibling S becomes the parent of P and of S’s distant child D. The colors of P and S are exchanged, and D is made black. The whole subtree still has the same color at its root S, namely either red or black, the same color both before and after the transformation. This way requirement 3 is preserved. The paths in the subtree not passing through N (in other words, passing through D and node 3 in the diagram) pass through the same number of black nodes as before, but N now has one additional black ancestor: either P has become black, or it was black and S was added as a black grandparent. Thus, the paths passing through N pass through one additional black node, so that requirement 4 is restored and the total tree is in RB-shape.

Because the algorithm transforms the input without using an auxiliary data structure and using only a small amount of extra storage space for auxiliary variables it is in-place.

## Proof of bounds

For $$ h\in\N $$ there is a red–black tree of height $$ h $$ with

$$ m_h = 2^{\lfloor(h+1)/2\rfloor} + 2^{\lfloor h/2 \rfloor} - 2 = \begin{cases} 2 \cdot 2^{h/2}-2 \;=\; 2^{h/2+1}-2 & \text{if } h \text{ even} \\ 3 \cdot 2^{(h-1)/2}-2 & \text{if } h \text{ odd} \end{cases} $$

nodes ( $$ \lfloor \, \rfloor $$ is the floor function) and there is no red–black tree of this tree height with fewer nodes, therefore it is minimal. Its black height is $$ \lceil h/2\rceil $$ (with black root) or, for odd $$ h $$ (then with a red root), also $$ (h-1)/2 $$ .
Proof: For a red–black tree of a certain height to have a minimal number of nodes, it must have exactly one longest path with a maximal number of red nodes, to achieve a maximal tree height with a minimal black height. Besides this path all other nodes have to be black. If a node is taken off this tree it either loses height or some RB property.

The RB tree of height $$ h=1 $$ with red root is minimal. This is in agreement with

$$ m_1 = 2^{\lfloor (1+1)/2\rfloor} + 2^{\lfloor 1/2 \rfloor} - 2 = 2^1 + 2^0 - 2 = 1~. $$

A minimal RB tree (RBh in figure 2) of height $$ h>1 $$ has a root whose two child subtrees are of different height. The higher child subtree is also a minimal RB tree, containing a longest path that defines its height; it has $$ m_{h-1} $$ nodes and the black height $$ \lfloor(h-1)/2\rfloor =: s . $$ The other subtree is a perfect binary tree of (black) height $$ s $$ having $$ 2^s-1=2^{\lfloor(h-1)/2\rfloor}-1 $$ black nodes and no red node. Then the number of nodes is by induction

$$ m_h = \underbrace{m_{h-1}}_{\text{higher subtree}} + \underbrace{1}_{\text{root}} + \underbrace{2^s - 1}_{\text{second subtree}} $$

resulting in

$$ m_h = 2^{\lfloor(h+1)/2\rfloor} + 2^{\lfloor h/2 \rfloor} - 2 ~. $$ ■

The graph of the function $$ m_h $$ is convex and piecewise linear with breakpoints at $$ (h=2k\;|\;m_{2k}=2 \cdot 2^k-2) $$ where $$ k \in \N . $$ The function has been tabulated as $$ m_h= $$ A027383(h–1) for $$ h\geq 1 $$ .

Solving the function for $$ h $$: The inequality $$ 9>8=2^3 $$ leads to $$ 3 > 2^{3/2} $$ , which for odd $$ h $$ leads to

$$ m_h = 3 \cdot 2^{(h-1)/2}-2 = \bigl(3\cdot 2^{-3/2}\bigr) \cdot 2^{(h+2)/2}-2 > 2 \cdot 2^{h/2}-2 ~. $$

So in both the even and the odd case, $$ h $$ is in the interval

$$ \log_2(n+1) - 1 \;\leq\; h \;\leq\; 2 \log_2\!\left(\tfrac{n}{2}+1\right) $$

(the lower bound attained by the perfect binary tree, the upper bound by the minimal red–black tree), with $$ n $$ being the number of nodes.

Conclusion: A red–black tree with $$ n $$ nodes (keys) has tree height $$ h \in O(\log n) . $$

## Set operations and bulk operations

In addition to the single-element insert, delete and lookup operations, several set operations have been defined on red–black trees: union, intersection and set difference. Then fast bulk operations on insertions or deletions can be implemented based on these set functions. These set operations rely on two helper operations, Split and Join. With the new operations, the implementation of red–black trees can be more efficient and highly parallelizable. In order to achieve its time complexities this implementation requires that the root is allowed to be either red or black, and that every node stores its own black height.

- Join: The function Join is applied to two red–black trees $$ t_1 $$ and $$ t_2 $$ and a key $$ k $$ , where $$ t_1 < k < t_2 $$ , i.e. all keys in $$ t_1 $$ are less than $$ k $$ , and all keys in $$ t_2 $$ are greater than $$ k $$ . It returns a tree containing all elements in $$ t_1 $$ and $$ t_2 $$ , as well as $$ k $$ . If the two trees have the same black height, Join simply creates a new node with left subtree $$ t_1 $$ , root $$ k $$ and right subtree $$ t_2 $$ . If both $$ t_1 $$ and $$ t_2 $$ have a black root, $$ k $$ is set to be red; otherwise $$ k $$ is set black. If the black heights are unequal, suppose that $$ t_1 $$ has larger black height than $$ t_2 $$ (the other case is symmetric). Join follows the right spine of $$ t_1 $$ until a black node $$ c $$ which is balanced with $$ t_2 $$ . At this point a new node with left child $$ c $$ , root $$ k $$ (set to be red) and right child $$ t_2 $$ is created to replace $$ c $$ . The new node may invalidate the red–black invariant because at most three red nodes can appear in a row. This can be fixed with a double rotation. If the double-red issue propagates to the root, the root is then set to be black, restoring the properties. The cost of this function is the difference of the black heights between the two input trees.
- Split: To split a red–black tree into two smaller trees, those smaller than key , and those larger than key , first draw a path from the root by inserting into the red–black tree. After this insertion, all values less than will be found on the left of the path, and all values greater than will be found on the right. By applying Join, all the subtrees on the left side are merged bottom-up using keys on the path as intermediate nodes from bottom to top to form the left tree, and the right part is symmetric. For some applications, Split also returns a boolean value denoting if appears in the tree. The cost of Split is $$ O(\log n) , $$ order of the height of the tree. This algorithm actually has nothing to do with any special properties of a red–black tree, and may be used on any tree with a join operation, such as an AVL tree. The join algorithm is as follows: function joinRightRB(TL, k, TR): if (TL.color=black) and (TL.blackHeight=TR.blackHeight): return Node(TL,⟨k,red⟩,TR) T'=Node(TL.left,⟨TL.key,TL.color⟩,joinRightRB(TL.right,k,TR)) if (TL.color=black) and (T'.right.color=T'.right.right.color=red): T'.right.right.color=black; return rotateLeft(T') return T' /* T[recte T'] */ function joinLeftRB(TL, k, TR): /* symmetric to joinRightRB */ function join(TL, k, TR): if TL.blackHeight>TR.blackHeight: T'=joinRightRB(TL,k,TR) if (T'.color=red) and (T'.right.color=red): T'.color=black return T' if TR.blackHeight>TL.blackHeight: /* symmetric */ if (TL.color=black) and (TR.color=black): return Node(TL,⟨k,red⟩,TR) return Node(TL,⟨k,black⟩,TR) The split algorithm is as follows: function split(T, k): if (T = NULL) return (NULL, false, NULL) if (k = T.key) return (T.left, true, T.right) if (k < T.key): (L',b,R') = split(T.left, k) return (L',b,join(R',T.key,T.right)) (L',b,R') = split(T.right, k) return (join(T.left,T.key,L'),b,T.right) The union of two red–black trees and representing sets and , is a red–black tree that represents . The following recursive function computes this union: function union(t1, t2): if t1 = NULL return t2 if t2 = NULL return t1 (L1,b,R1)=split(t1,t2.key) proc1=start: TL=union(L1,t2.left) proc2=start: TR=union(R1,t2.right) wait all proc1,proc2 return join(TL, t2.key, TR) Here, split is presumed to return two trees: one holding the keys less its input key, one holding the greater keys. (The algorithm is non-destructive, but an in-place destructive version exists also.) The algorithm for intersection or difference is similar, but requires the Join2 helper routine that is the same as Join but without the middle key. Based on the new functions for union, intersection or difference, either one key or multiple keys can be inserted to or deleted from the red–black tree. Since Split calls Join but does not deal with the balancing criteria of red–black trees directly, such an implementation is usually called the "join-based" implementation. The complexity of each of union, intersection and difference is $$ O\left(m \log \left({n\over m}+1\right)\right) $$ for two red–black trees of sizes $$ m $$ and $$ n(\ge m) $$ . This complexity is optimal in terms of the number of comparisons. More importantly, since the recursive calls to union, intersection or difference are independent of each other, they can be executed in parallel with a parallel depth $$ O(\log m \log n) $$ . When $$ m=1 $$ , the join-based implementation has the same computational directed acyclic graph (DAG) as single-element insertion and deletion if the root of the larger tree is used to split the smaller tree. 
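The join algorithm above maps almost mechanically onto the C conventions used earlier in this article. The following is a minimal sketch rather than a drop-in implementation: the node type, `new_node` and `rotate_left` are illustrative assumptions, and the black height is recomputed on the fly by `bh`, whereas a production implementation would store it in each node as required above.

```c
#include <stdlib.h>

enum JColor { J_RED, J_BLACK };

struct JNode {
  struct JNode *left, *right;
  int key;
  enum JColor color;
};

// Black height of a subtree, recomputed by walking the left spine.
static int bh(const struct JNode *n) {
  return n ? bh(n->left) + (n->color == J_BLACK ? 1 : 0) : 0;
}

static struct JNode *new_node(struct JNode *l, int key, enum JColor c,
                              struct JNode *r) {
  struct JNode *n = malloc(sizeof *n);
  n->left = l; n->right = r; n->key = key; n->color = c;
  return n;
}

static struct JNode *rotate_left(struct JNode *n) {
  struct JNode *r = n->right;
  n->right = r->left;
  r->left = n;
  return r;
}

// joinRightRB: tl is the tree with the larger (or equal) black height.
static struct JNode *join_right(struct JNode *tl, int k, struct JNode *tr) {
  if (tl->color == J_BLACK && bh(tl) == bh(tr))
    return new_node(tl, k, J_RED, tr);
  struct JNode *t = new_node(tl->left, tl->key, tl->color,
                             join_right(tl->right, k, tr));
  // Repair two red nodes in a row on the right spine by a rotation.
  if (tl->color == J_BLACK &&
      t->right && t->right->color == J_RED &&
      t->right->right && t->right->right->color == J_RED) {
    t->right->right->color = J_BLACK;
    return rotate_left(t);
  }
  return t;
}

// join: all keys in tl are less than k, all keys in tr are greater than k.
struct JNode *rb_join(struct JNode *tl, int k, struct JNode *tr) {
  if (bh(tl) > bh(tr)) {
    struct JNode *t = join_right(tl, k, tr);
    if (t->color == J_RED && t->right && t->right->color == J_RED)
      t->color = J_BLACK;
    return t;
  }
  // The mirror case bh(tr) > bh(tl) uses the symmetric join_left (omitted).
  if (tl && tr && tl->color == J_BLACK && tr->color == J_BLACK)
    return new_node(tl, k, J_RED, tr);
  return new_node(tl, k, J_BLACK, tr);
}
```

A split routine can then be layered on top of `rb_join` exactly as in the split pseudocode above.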
## Parallel algorithms Parallel algorithms for constructing red–black trees from sorted lists of items can run in constant time or $$ O(\log \log n) $$ time, depending on the computer model, if the number of processors available is asymptotically proportional to the number $$ n $$ of items where $$ n\to\infty $$ . Fast search, insertion, and deletion parallel algorithms are also known. The join-based algorithms for red–black trees are parallel for bulk operations, including union, intersection, construction, filter, map-reduce, and so on. ### Parallel bulk operations Basic operations like insertion, removal or update can be parallelised by defining operations that process bulks of multiple elements. It is also possible to process bulks with several basic operations, for example bulks may contain elements to insert and also elements to remove from the tree. The algorithms for bulk operations aren't just applicable to the red–black tree, but can be adapted to other sorted sequence data structures also, like the 2–3 tree, 2–3–4 tree and (a,b)-tree. In the following different algorithms for bulk insert will be explained, but the same algorithms can also be applied to removal and update. Bulk insert is an operation that inserts each element of a sequence $$ I $$ into a tree $$ T $$ . #### Join-based This approach can be applied to every sorted sequence data structure that supports efficient join- and split-operations. The general idea is to split and in multiple parts and perform the insertions on these parts in parallel. 1. First the bulk of elements to insert must be sorted. 1. After that, the algorithm splits into $$ k \in \mathbb{N}^+ $$ parts $$ \langle I_1, \cdots, I_k \rangle $$ of about equal sizes. 1. Next the tree must be split into parts $$ \langle T_1, \cdots, T_k \rangle $$ in a way, so that for every $$ j \in \mathbb{N}^+ | \, 1 \leq j < k $$ following constraints hold: 1. $$ \text{last}(I_j) < \text{first}(T_{j + 1}) $$ 1. $$ \text{last}(T_j) < \text{first}(I_{j + 1}) $$ 1. Now the algorithm inserts each element of $$ I_j $$ into $$ T_j $$ sequentially. This step must be performed for every , which can be done by up to processors in parallel. 1. Finally, the resulting trees will be joined to form the final result of the entire operation. Note that in Step 3 the constraints for splitting assure that in Step 5 the trees can be joined again and the resulting sequence is sorted. The pseudo code shows a simple divide-and-conquer implementation of the join-based algorithm for bulk-insert. Both recursive calls can be executed in parallel. The join operation used here differs from the version explained in this article, instead join2 is used, which misses the second parameter k. bulkInsert(T, I, k): I.sort() bulklInsertRec(T, I, k) bulkInsertRec(T, I, k): if k = 1: forall e in I: T.insert(e) else m := ⌊size(I) / 2⌋ (T1, _, T2) := split(T, I[m]) bulkInsertRec(T1, I[0 .. m], ⌈k / 2⌉) || bulkInsertRec(T2, I[m + 1 .. size(I) - 1], ⌊k / 2⌋) T ← join2(T1, T2) ##### ##### Execution time Sorting is not considered in this analysis. {| |- | #recursion levels || $$ \in O(\log k) $$ |- | T(split) + T(join) || $$ \in O(\log |T|) $$ |- | insertions per thread || $$ \in O\left(\frac{|I|}{k}\right) $$ |- | T(insert) || $$ \in O(\log |T|) $$ |- | || $$ \in O\left(\log k \log |T| + \frac{|I|}{k} \log |T|\right) $$ |} This can be improved by using parallel algorithms for splitting and joining. In this case the execution time is $$ \in O\left(\log |T| + \frac{|I|}{k} \log |T|\right) $$ . 
##### ##### Work {| |- | #splits, #joins || $$ \in O(k) $$ |- | W(split) + W(join) || $$ \in O(\log |T|) $$ |- | #insertions || $$ \in O(|I|) $$ |- | W(insert) || $$ \in O(\log |T|) $$ |- | W(bulkInsert) || $$ \in O(k \log |T| + |I| \log |T|) $$ |} #### Pipelining Another method of parallelizing bulk operations is to use a pipelining approach. This can be done by breaking the task of processing a basic operation up into a sequence of subtasks. For multiple basic operations the subtasks can be processed in parallel by assigning each subtask to a separate processor. 1. First the bulk of elements to insert must be sorted. 1. For each element in the algorithm locates the according insertion position in . This can be done in parallel for each element $$ \in I $$ since won't be mutated in this process. Now must be divided into subsequences according to the insertion position of each element. For example $$ s_{n, \mathit{left}} $$ is the subsequence of that contains the elements whose insertion position would be to the left of node . 1. The middle element $$ m_{n, \mathit{dir}} $$ of every subsequence $$ s_{n, \mathit{dir}} $$ will be inserted into as a new node $$ n' $$ . This can be done in parallel for each $$ m_{n, \mathit{dir}} $$ since by definition the insertion position of each $$ m_{n, \mathit{dir}} $$ is unique. If $$ s_{n, \mathit{dir}} $$ contains elements to the left or to the right of $$ m_{n, \mathit{dir}} $$ , those will be contained in a new set of subsequences as $$ s_{n', \mathit{left}} $$ or $$ s_{n', \mathit{right}} $$ . 1. Now possibly contains up to two consecutive red nodes at the end of the paths form the root to the leaves, which needs to be repaired. Note that, while repairing, the insertion position of elements $$ \in S $$ have to be updated, if the corresponding nodes are affected by rotations. 1. If two nodes have different nearest black ancestors, they can be repaired in parallel. Since at most four nodes can have the same nearest black ancestor, the nodes at the lowest level can be repaired in a constant number of parallel steps. 1. This step will be applied successively to the black levels above until is fully repaired. 1. The steps 3 to 5 will be repeated on the new subsequences until is empty. At this point every element $$ \in I $$ has been inserted. Each application of these steps is called a stage. Since the length of the subsequences in is $$ \in O(|I|) $$ and in every stage the subsequences are being cut in half, the number of stages is $$ \in O(\log |I|) $$ . 1. Since all stages move up the black levels of the tree, they can be parallelised in a pipeline. Once a stage has finished processing one black level, the next stage is able to move up and continue at that level. Execution time Sorting is not considered in this analysis. Also, $$ |I| $$ is assumed to be smaller than $$ |T| $$ , otherwise it would be more efficient to construct the resulting tree from scratch. {| |- | T(find insert position) || $$ \in O(\log |T|) $$ |- | #stages || $$ \in O(\log |I|) $$ |- | T(insert) + T(repair) || $$ \in O(\log |T|) $$ |- style="vertical-align:top" | T(bulkInsert) with ~ #processors || $$ \in O(\log |I| + 2 \cdot \log |T|) $$ $$ = O(\log |T|) $$ |} Work {| |- | W(find insert positions) || $$ \in O(|I| \log |T|) $$ |- | #insertions, #repairs || $$ \in O(|I|) $$ |- | W(insert) + W(repair) || $$ \in O(\log |T|) $$ |- style="vertical-align:top" | W(bulkInsert) || $$ \in O(2 \cdot |I| \log |T|) $$ $$ = O(|I| \log |T|) $$ |}
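The divide-and-conquer bulk insert given earlier (the join-based variant, not the pipeline) can be expressed with OpenMP tasks so that the two recursive calls run in parallel. This is only a sketch under stated assumptions: `rb_insert`, `rb_split` and `rb_join2` stand for the single-element insert, Split and Join2 operations of this article, the tree handle is opaque, and the keys are assumed to be pre-sorted.

```c
#include <stddef.h>
#include <omp.h>

struct RBTree;  // opaque handle; the helpers below are assumed to exist

struct RBTree *rb_insert(struct RBTree *t, int key);
void rb_split(struct RBTree *t, int key,
              struct RBTree **less, struct RBTree **greater);
struct RBTree *rb_join2(struct RBTree *less, struct RBTree *greater);

// Insert the sorted slice keys[lo..hi) into t using up to k parallel parts.
static struct RBTree *bulk_insert_rec(struct RBTree *t, const int *keys,
                                      size_t lo, size_t hi, int k) {
  if (k <= 1 || hi - lo <= 1) {
    for (size_t i = lo; i < hi; i++)
      t = rb_insert(t, keys[i]);
    return t;
  }
  size_t m = lo + (hi - lo) / 2;
  struct RBTree *t1, *t2, *r1, *r2;
  rb_split(t, keys[m], &t1, &t2);   // keys[m] itself is reinserted on the left

  #pragma omp task shared(r1)
  r1 = bulk_insert_rec(t1, keys, lo, m + 1, (k + 1) / 2);
  #pragma omp task shared(r2)
  r2 = bulk_insert_rec(t2, keys, m + 1, hi, k / 2);
  #pragma omp taskwait

  return rb_join2(r1, r2);
}

struct RBTree *bulk_insert(struct RBTree *t, const int *sorted_keys,
                           size_t n, int k) {
  struct RBTree *result;
  #pragma omp parallel
  #pragma omp single
  result = bulk_insert_rec(t, sorted_keys, 0, n, k);
  return result;
}
```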
https://en.wikipedia.org/wiki/Red%E2%80%93black_tree
A clustered file system (CFS) is a file system which is shared by being simultaneously mounted on multiple servers. There are several approaches to clustering, most of which do not employ a clustered file system (only direct attached storage for each node). Clustered file systems can provide features like location-independent addressing and redundancy which improve reliability or reduce the complexity of the other parts of the cluster. Parallel file systems are a type of clustered file system that spread data across multiple storage nodes, usually for redundancy or performance. ## Shared-disk file system A shared-disk file system uses a storage area network (SAN) to allow multiple computers to gain direct disk access at the block level. Access control and translation from file-level operations that applications use to block-level operations used by the SAN must take place on the client node. The most common type of clustered file system, the shared-disk file systemby adding mechanisms for concurrency controlprovides a consistent and serializable view of the file system, avoiding corruption and unintended data loss even when multiple clients try to access the same files at the same time. Shared-disk file-systems commonly employ some sort of fencing mechanism to prevent data corruption in case of node failures, because an unfenced device can cause data corruption if it loses communication with its sister nodes and tries to access the same information other nodes are accessing. The underlying storage area network may use any of a number of block-level protocols, including SCSI, iSCSI, HyperSCSI, ATA over Ethernet (AoE), Fibre Channel, network block device, and InfiniBand. There are different architectural approaches to a shared-disk filesystem. Some distribute file information across all the servers in a cluster (fully distributed). ### ### Examples - Blue Whale Clustered file system (BWFS) - Silicon Graphics (SGI) clustered file system (CXFS) - Veritas Cluster File System - Microsoft Cluster Shared Volumes (CSV) - DataPlow Nasan File System - IBM General Parallel File System (GPFS) - Oracle Cluster File System (OCFS) - OpenVMS Files-11 File System - PolyServe storage solutions - Quantum StorNext File System (SNFS), ex ADIC, ex CentraVision File System (CVFS) - Red Hat Global File System (GFS2) - Sun QFS - TerraScale Technologies TerraFS - Veritas CFS (Cluster FS: Clustered VxFS) - Versity VSM (SAM-QFS ported to Linux), ScoutFS - VMware VMFS - WekaFS - Apple Xsan - DragonFly BSD HAMMER2 ## Distributed file systems Distributed file systems do not share block level access to the same storage but use a network protocol. These are commonly known as network file systems, even though they are not the only file systems that use the network to send data. Distributed file systems can restrict access to the file system depending on access lists or capabilities on both the servers and the clients, depending on how the protocol is designed. The difference between a distributed file system and a distributed data store is that a distributed file system allows files to be accessed using the same interfaces and semantics as local files for example, mounting/unmounting, listing directories, read/write at byte boundaries, system's native permission model. Distributed data stores, by contrast, require using a different API or library and have different semantics (most often those of a database). ### Design goals Distributed file systems may aim for "transparency" in a number of aspects. 
That is, they aim to be "invisible" to client programs, which "see" a system which is similar to a local file system. Behind the scenes, the distributed file system handles locating files, transporting data, and potentially providing other features listed below. - Access transparency: clients are unaware that files are distributed and can access them in the same way as local files are accessed. - Location transparency: a consistent namespace exists encompassing local as well as remote files. The name of a file does not give its location. - ### Concurrency transparency: all clients have the same view of the state of the file system. This means that if one process is modifying a file, any other processes on the same system or remote systems that are accessing the files will see the modifications in a coherent manner. - Failure transparency: the client and client programs should operate correctly after a server failure. - Heterogeneity: file service should be provided across different hardware and operating system platforms. - Scalability: the file system should work well in small environments (1 machine, a dozen machines) and also scale gracefully to bigger ones (hundreds through tens of thousands of systems). - Replication transparency: Clients should not have to be aware of the file replication performed across multiple servers to support scalability. - Migration transparency: files should be able to move between different servers without the client's knowledge. ### ## History The Incompatible Timesharing System used virtual devices for transparent inter-machine file system access in the 1960s. More file servers were developed in the 1970s. In 1976, Digital Equipment Corporation created the File Access Listener (FAL), an implementation of the Data Access Protocol as part of DECnet Phase II which became the first widely used network file system. In 1984, Sun Microsystems created the file system called "Network File System" (NFS) which became the first widely used Internet Protocol based network file system. Other notable network file systems are Andrew File System (AFS), Apple Filing Protocol (AFP), NetWare Core Protocol (NCP), and Server Message Block (SMB) which is also known as Common Internet File System (CIFS). In 1986, IBM announced client and server support for Distributed Data Management Architecture (DDM) for the System/36, System/38, and IBM mainframe computers running CICS. This was followed by the support for IBM Personal Computer, AS/400, IBM mainframe computers under the MVS and VSE operating systems, and FlexOS. DDM also became the foundation for Distributed Relational Database Architecture, also known as DRDA. There are many peer-to-peer network protocols for open-source distributed file systems for cloud or closed-source clustered file systems, e. g.: 9P, AFS, Coda, CIFS/SMB, DCE/DFS, WekaFS, Lustre, PanFS, Google File System, Mnet, Chord Project. Examples - Alluxio - BeeGFS (Fraunhofer) - CephFS (Inktank, Red Hat, SUSE) - Windows Distributed File System (DFS) (Microsoft) - Infinit (acquired by Docker) - GfarmFS - GlusterFS (Red Hat) - GFS (Google Inc.) 
- GPFS (IBM) - HDFS (Apache Software Foundation) - IPFS (Inter Planetary File System) - iRODS - LizardFS (Skytechnology) - Lustre - MapR FS - MooseFS (Core Technology / Gemius) - ObjectiveFS - OneFS (EMC Isilon) - OrangeFS (Clemson University, Omnibond Systems), formerly Parallel Virtual File System - PanFS (Panasas) - Parallel Virtual File System (Clemson University, Argonne National Laboratory, Ohio Supercomputer Center) - RozoFS (Rozo Systems) - SMB/CIFS - Torus (CoreOS) - WekaFS (WekaIO) - XtreemFS ## Network-attached storage Network-attached storage (NAS) provides both storage and a file system, like a shared disk file system on top of a storage area network (SAN). NAS typically uses file-based protocols (as opposed to block-based protocols a SAN would use) such as NFS (popular on UNIX systems), SMB/CIFS (Server Message Block/Common Internet File System) (used with MS Windows systems), AFP (used with Apple Macintosh computers), or NCP (used with OES and Novell NetWare). ## Design considerations ### Avoiding single point of failure The failure of disk hardware or a given storage node in a cluster can create a single point of failure that can result in data loss or unavailability. Fault tolerance and high availability can be provided through data replication of one sort or another, so that data remains intact and available despite the failure of any single piece of equipment. For examples, see the lists of distributed fault-tolerant file systems and distributed parallel fault-tolerant file systems. ### Performance A common performance measurement of a clustered file system is the amount of time needed to satisfy service requests. In conventional systems, this time consists of a disk-access time and a small amount of CPU-processing time. But in a clustered file system, a remote access has additional overhead due to the distributed structure. This includes the time to deliver the request to a server, the time to deliver the response to the client, and for each direction, a CPU overhead of running the communication protocol software. Concurrency Concurrency control becomes an issue when more than one person or client is accessing the same file or block and want to update it. Hence updates to the file from one client should not interfere with access and updates from other clients. This problem is more complex with file systems due to concurrent overlapping writes, where different writers write to overlapping regions of the file concurrently. This problem is usually handled by concurrency control or locking which may either be built into the file system or provided by an add-on protocol. History IBM mainframes in the 1970s could share physical disks and file systems if each machine had its own channel connection to the drives' control units. In the 1980s, Digital Equipment Corporation's TOPS-20 and OpenVMS clusters (VAX/ALPHA/IA64) included shared disk file systems.
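Concurrency control of the kind described above is commonly exposed to applications as file or byte-range locking. The sketch below uses the POSIX `fcntl` interface to take an advisory write lock on a byte range before updating it; on a network file system such as NFS such lock requests are typically forwarded to a lock service on the server, which is one way coherent updates among clients can be coordinated. The file path is a placeholder.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

// Take an advisory write lock on bytes [offset, offset+len) of fd,
// blocking until the region is free, then release it after the update.
static int update_region(int fd, off_t offset, off_t len, const char *data)
{
    struct flock lk;
    memset(&lk, 0, sizeof lk);
    lk.l_type   = F_WRLCK;      // exclusive (write) lock
    lk.l_whence = SEEK_SET;
    lk.l_start  = offset;
    lk.l_len    = len;

    if (fcntl(fd, F_SETLKW, &lk) == -1) {   // wait for the lock
        perror("fcntl(F_SETLKW)");
        return -1;
    }

    // Critical section: no other cooperating client writes this range.
    if (pwrite(fd, data, (size_t)len, offset) != (ssize_t)len)
        perror("pwrite");

    lk.l_type = F_UNLCK;                    // release the lock
    if (fcntl(fd, F_SETLK, &lk) == -1)
        perror("fcntl(F_UNLCK)");
    return 0;
}

int main(void)
{
    // Hypothetical file on a network or cluster mount.
    int fd = open("/shared/export/counter.dat", O_RDWR);
    if (fd == -1) { perror("open"); return 1; }
    update_region(fd, 0, 8, "00000042");
    close(fd);
    return 0;
}
```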
https://en.wikipedia.org/wiki/Clustered_file_system
In computer science (specifically computational complexity theory), the worst-case complexity measures the resources (e.g. running time, memory) that an algorithm requires given an input of arbitrary size (commonly denoted as in asymptotic notation). It gives an upper bound on the resources required by the algorithm. In the case of running time, the worst-case time complexity indicates the longest running time performed by an algorithm given any input of size , and thus guarantees that the algorithm will finish in the indicated period of time. The order of growth (e.g. linear, logarithmic) of the worst-case complexity is commonly used to compare the efficiency of two algorithms. The worst-case complexity of an algorithm should be contrasted with its average-case complexity, which is an average measure of the amount of resources the algorithm uses on a random input. ## Definition Given a model of computation and an algorithm $$ \mathsf{A} $$ that halts on each input $$ s $$ , the mapping $$ t_{\mathsf{A}} \colon \{0, 1\}^\star \to \N $$ is called the time complexity of $$ \mathsf{A} $$ if, for every input string $$ s $$ , $$ \mathsf{A} $$ halts after exactly $$ t_{\mathsf{A}}(s) $$ steps. Since we usually are interested in the dependence of the time complexity on different input lengths, abusing terminology, the time complexity is sometimes referred to the mapping $$ t_{\mathsf{A}} \colon \N \to \N $$ , defined by the maximal complexity $$ t_{\mathsf{A}}(n) := \max_{s\in \{0, 1\}^n} t_{\mathsf{A}}(s) $$ of inputs $$ s $$ with length or size $$ \le n $$ . Similar definitions can be given for space complexity, randomness complexity, etc. ## Ways of speaking Very frequently, the complexity $$ t_{\mathsf{A}} $$ of an algorithm $$ \mathsf{A} $$ is given in asymptotic Big-O Notation, which gives its growth rate in the form $$ t_{\mathsf{A}} = O(g(n)) $$ with a certain real valued comparison function $$ g(n) $$ and the meaning: - There exists a positive real number $$ M $$ and a natural number $$ n_0 $$ such that $$ |t_{\mathsf{A}}(n)| \le M g(n) \quad \text{ for all } n\ge n_0. $$ Quite frequently, the wording is: - „Algorithm $$ \mathsf{A} $$ has the worst-case complexity $$ O(g(n)) $$ .“ or even only: - „Algorithm $$ \mathsf{A} $$ has complexity $$ O(g(n)) $$ .“ ## Examples Consider performing insertion sort on $$ n $$ numbers on a random-access machine. The best-case for the algorithm is when the numbers are already sorted, which takes $$ O(n) $$ steps to perform the task. However, the input in the worst-case for the algorithm is when the numbers are reverse sorted and it takes $$ O(n^2) $$ steps to sort them; therefore the worst-case time-complexity of insertion sort is of $$ O(n^2) $$ .
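For concreteness, here is a straightforward C version of the insertion sort example: on an already sorted array the inner loop body never executes, giving the O(n) best case, while on a reverse-sorted array the i-th element is shifted past all i preceding elements, giving the O(n^2) worst case.

```c
#include <stddef.h>

// Sort a[0..n) in place in ascending order.
void insertion_sort(int *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int key = a[i];
        size_t j = i;
        // Shift larger elements one slot to the right.
        // Already sorted input: this loop never runs      -> O(n) total.
        // Reverse sorted input: it runs i times for each i -> O(n^2) total.
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}
```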
https://en.wikipedia.org/wiki/Worst-case_complexity
Self-supervised learning (SSL) is a paradigm in machine learning where a model is trained on a task using the data itself to generate supervisory signals, rather than relying on externally-provided labels. In the context of neural networks, self-supervised learning aims to leverage inherent structures or relationships within the input data to create meaningful training signals. SSL tasks are designed so that solving them requires capturing essential features or relationships in the data. The input data is typically augmented or transformed in a way that creates pairs of related samples, where one sample serves as the input, and the other is used to formulate the supervisory signal. This augmentation can involve introducing noise, cropping, rotation, or other transformations. Self-supervised learning more closely imitates the way humans learn to classify objects. During SSL, the model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels, which help to initialize the model parameters. Next, the actual task is performed with supervised or unsupervised learning. Self-supervised learning has produced promising results in recent years, and has found practical application in fields such as audio processing, and is being used by Facebook and others for speech recognition. ## Types ### Autoassociative self-supervised learning Autoassociative self-supervised learning is a specific category of self-supervised learning where a neural network is trained to reproduce or reconstruct its own input data. In other words, the model is tasked with learning a representation of the data that captures its essential features or structure, allowing it to regenerate the original input. The term "autoassociative" comes from the fact that the model is essentially associating the input data with itself. This is often achieved using autoencoders, which are a type of neural network architecture used for representation learning. Autoencoders consist of an encoder network that maps the input data to a lower-dimensional representation (latent space), and a decoder network that reconstructs the input from this representation. The training process involves presenting the model with input data and requiring it to reconstruct the same data as closely as possible. The loss function used during training typically penalizes the difference between the original input and the reconstructed output (e.g. mean squared error). By minimizing this reconstruction error, the autoencoder learns a meaningful representation of the data in its latent space. ### Contrastive self-supervised learning For a binary classification task, training data can be divided into positive examples and negative examples. Positive examples are those that match the target. For example, if training a classifier to identify birds, the positive training data would include images that contain birds. Negative examples would be images that do not. Contrastive self-supervised learning uses both positive and negative examples. The loss function in contrastive learning is used to minimize the distance between positive sample pairs, while maximizing the distance between negative sample pairs. An early example uses a pair of 1-dimensional convolutional neural networks to process a pair of images and maximize their agreement. 
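As a toy numerical illustration of such a contrastive objective (a minimal sketch, not any particular published method), the C function below scores one anchor embedding against a positive and several negatives using cosine similarity and returns a softmax cross-entropy loss of the InfoNCE form discussed below; minimizing it pulls the positive pair together while pushing the negative pairs apart.

```c
#include <math.h>
#include <stddef.h>

// Cosine similarity between two d-dimensional embeddings.
static double cosine(const double *a, const double *b, size_t d)
{
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < d; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (sqrt(na) * sqrt(nb) + 1e-12);
}

// Loss for one anchor: -log( exp(s_pos/t) / sum_j exp(s_j/t) ), where the
// sum runs over the positive and all negatives and t is a temperature.
// A lower value means the anchor is closer to its positive than to the
// negatives.
double contrastive_loss(const double *anchor, const double *positive,
                        const double **negatives, size_t n_neg,
                        size_t d, double temperature)
{
    double s_pos = cosine(anchor, positive, d) / temperature;
    double denom = exp(s_pos);
    for (size_t j = 0; j < n_neg; j++)
        denom += exp(cosine(anchor, negatives[j], d) / temperature);
    return -(s_pos - log(denom));
}
```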
Contrastive Language-Image Pre-training (CLIP) allows joint pretraining of a text encoder and an image encoder, such that a matching image-text pair have image encoding vector and text encoding vector that span a small angle (having a large cosine similarity). InfoNCE (Noise-Contrastive Estimation) is a method to optimize two models jointly, based on Noise Contrastive Estimation (NCE). Given a set $$ X=\left\{x_1, \ldots x_N\right\} $$ of $$ N $$ random samples containing one positive sample from $$ p\left(x_{t+k} \mid c_t\right) $$ and $$ N-1 $$ negative samples from the 'proposal' distribution $$ p\left(x_{t+k}\right) $$ , it minimizes the following loss function: $$ \mathcal{L}_{\mathrm{N}}=-\mathbb{E}_{X} \left[\log \frac{f_k\left(x_{t+k}, c_t\right)}{\sum_{x_j \in X} f_k\left(x_j, c_t\right)}\right] $$ ### Non-contrastive self-supervised learning Non-contrastive self-supervised learning (NCSSL) uses only positive examples. Counterintuitively, NCSSL converges on a useful local minimum rather than reaching a trivial solution, with zero loss. For the example of binary classification, it would trivially learn to classify each example as positive. Effective NCSSL requires an extra predictor on the online side that does not back-propagate on the target side. ## Comparison with other forms of machine learning SSL belongs to supervised learning methods insofar as the goal is to generate a classified output from the input. At the same time, however, it does not require the explicit use of labeled input-output pairs. Instead, correlations, metadata embedded in the data, or domain knowledge present in the input are implicitly and autonomously extracted from the data. These supervisory signals, extracted from the data, can then be used for training. SSL is similar to unsupervised learning in that it does not require labels in the sample data. Unlike unsupervised learning, however, learning is not done using inherent data structures. Semi-supervised learning combines supervised and unsupervised learning, requiring only a small portion of the learning data be labeled. In transfer learning, a model designed for one task is reused on a different task. Training an autoencoder intrinsically constitutes a self-supervised process, because the output pattern needs to become an optimal reconstruction of the input pattern itself. However, in current jargon, the term 'self-supervised' often refers to tasks based on a pretext-task training setup. This involves the (human) design of such pretext task(s), unlike the case of fully self-contained autoencoder training. In reinforcement learning, self-supervising learning from a combination of losses can create abstract representations where only the most important information about the state are kept in a compressed way. ## Examples Self-supervised learning is particularly suitable for speech recognition. For example, Facebook developed wav2vec, a self-supervised algorithm, to perform speech recognition using two deep convolutional neural networks that build on each other. Google's Bidirectional Encoder Representations from Transformers (BERT) model is used to better understand the context of search queries. OpenAI's GPT-3 is an autoregressive language model that can be used in language processing. It can be used to translate texts or answer questions, among other things. Bootstrap Your Own Latent (BYOL) is a NCSSL that produced excellent results on ImageNet and on transfer and semi-supervised benchmarks. 
The Yarowsky algorithm is an example of self-supervised learning in natural language processing. From a small number of labeled examples, it learns to predict which word sense of a polysemous word is being used at a given point in text. DirectPred is a NCSSL that directly sets the predictor weights instead of learning it via typical gradient descent. Self-GenomeNet is an example of self-supervised learning in genomics. Self-supervised learning continues to gain prominence as a new approach across diverse fields. Its ability to leverage unlabeled data effectively opens new possibilities for advancement in machine learning, especially in data-driven application domains.
https://en.wikipedia.org/wiki/Self-supervised_learning
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and as an end to obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. ## Definition The full scope of combinatorics is not universally agreed upon. According to H. J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with: - the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems, - the existence of such structures that satisfy certain given criteria, - the construction of these structures, perhaps in many ways, and - optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality criterion. Leon Mirsky has said: "combinatorics is a range of linked studies which have something in common and yet diverge widely in their objectives, their methods, and the degree of coherence they have attained." One way to define combinatorics is, perhaps, to describe its subdivisions with their problems and techniques. This is the approach that is used below. However, there are also purely historical reasons for including or not including some topics under the combinatorics umbrella. Although primarily concerned with finite systems, some combinatorial questions and techniques can be extended to an infinite (specifically, countable) but discrete setting. ## History Basic combinatorial concepts and enumerative results appeared throughout the ancient world. The earliest recorded use of combinatorial techniques comes from problem 79 of the Rhind papyrus, which dates to the 16th century BC. The problem concerns a certain geometric series, and has similarities to Fibonacci's problem of counting the number of compositions of 1s and 2s that sum to a given total. Indian physician Sushruta asserts in Sushruta Samhita that 63 combinations can be made out of 6 different tastes, taken one at a time, two at a time, etc., thus computing all 26 − 1 possibilities. 
Greek historian Plutarch discusses an argument between Chrysippus (3rd century BCE) and Hipparchus (2nd century BCE) of a rather delicate enumerative problem, which was later shown to be related to Schröder–Hipparchus numbers.Stanley, Richard P.; "Hipparchus, Plutarch, Schröder, and Hough", American Mathematical Monthly 104 (1997), no. 4, 344–350. Earlier, in the Ostomachion, Archimedes (3rd century BCE) may have considered the number of configurations of a tiling puzzle, while combinatorial interests possibly were present in lost works by Apollonius. In the Middle Ages, combinatorics continued to be studied, largely outside of the European civilization. The Indian mathematician Mahāvīra () provided formulae for the number of permutations and combinations, and these formulas may have been familiar to Indian mathematicians as early as the 6th century CE. The philosopher and astronomer Rabbi Abraham ibn Ezra () established the symmetry of binomial coefficients, while a closed formula was obtained later by the talmudist and mathematician Levi ben Gerson (better known as Gersonides), in 1321. The arithmetical triangle—a graphical diagram showing relationships among the binomial coefficients—was presented by mathematicians in treatises dating as far back as the 10th century, and would eventually become known as Pascal's triangle. Later, in Medieval England, campanology provided examples of what is now known as Hamiltonian cycles in certain Cayley graphs on permutations. During the Renaissance, together with the rest of mathematics and the sciences, combinatorics enjoyed a rebirth. Works of Pascal, Newton, Jacob Bernoulli and Euler became foundational in the emerging field. In modern times, the works of J.J. Sylvester (late 19th century) and Percy MacMahon (early 20th century) helped lay the foundation for enumerative and algebraic combinatorics. ### Graph theory also enjoyed an increase of interest at the same time, especially in connection with the four color problem. In the second half of the 20th century, combinatorics enjoyed a rapid growth, which led to establishment of dozens of new journals and conferences in the subject. In part, the growth was spurred by new connections and applications to other fields, ranging from algebra to probability, from functional analysis to number theory, etc. These connections shed the boundaries between combinatorics and parts of mathematics and theoretical computer science, but at the same time led to a partial fragmentation of the field. ## Approaches and subfields of combinatorics ### Enumerative combinatorics Enumerative combinatorics is the most classical area of combinatorics and concentrates on counting the number of certain combinatorial objects. Although counting the number of elements in a set is a rather broad mathematical problem, many of the problems that arise in applications have a relatively simple combinatorial description. Fibonacci numbers is the basic example of a problem in enumerative combinatorics. The twelvefold way provides a unified framework for counting permutations, combinations and partitions. ### Analytic combinatorics Analytic combinatorics concerns the enumeration of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics, which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae. 
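Many of the elementary counting questions mentioned above come down to binomial coefficients. As a small illustration (an illustrative sketch, not taken from the sources above), the following C code evaluates C(n, k) with Pascal's rule C(n, k) = C(n-1, k-1) + C(n-1, k), the recurrence underlying the arithmetical triangle, and reproduces the 2^6 - 1 = 63 taste combinations attributed to Sushruta earlier.

```c
#include <stdio.h>

// Number of k-element subsets of an n-element set, via Pascal's rule.
// A single row of the arithmetical (Pascal's) triangle is kept, so the
// computation uses O(k) space and O(n*k) additions.
unsigned long long binomial(unsigned n, unsigned k)
{
    if (k > n) return 0;
    if (k > n - k) k = n - k;          // C(n,k) = C(n,n-k)
    unsigned long long row[k + 1];
    for (unsigned j = 0; j <= k; j++) row[j] = 0;
    row[0] = 1;
    for (unsigned i = 1; i <= n; i++)
        for (unsigned j = (i < k ? i : k); j >= 1; j--)
            row[j] += row[j - 1];      // C(i,j) = C(i-1,j) + C(i-1,j-1)
    return row[k];
}

int main(void)
{
    // Subsets of 6 tastes, excluding the empty selection.
    unsigned long long total = 0;
    for (unsigned k = 1; k <= 6; k++) total += binomial(6, k);
    printf("%llu\n", total);           // prints 63 = 2^6 - 1
    return 0;
}
```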
### Partition theory Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, it is now considered a part of combinatorics or an independent field. It incorporates the bijective approach and various tools in analysis and analytic number theory and has connections with statistical mechanics. Partitions can be graphically visualized with Young diagrams or Ferrers diagrams. They occur in a number of branches of mathematics and physics, including the study of symmetric polynomials and of the symmetric group and in group representation theory in general. Graph theory Graphs are fundamental objects in combinatorics. Considerations of graph theory range from enumeration (e.g., the number of graphs on n vertices with k edges) to existing structures (e.g., Hamiltonian cycles) to algebraic representations (e.g., given a graph G and two numbers x and y, does the Tutte polynomial TG(x,y) have a combinatorial interpretation?). Although there are very strong connections between graph theory and combinatorics, they are sometimes thought of as separate subjects. While combinatorial methods apply to many graph theory problems, the two disciplines are generally used to seek solutions to different types of problems. ### Design theory Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties. Block designs are combinatorial designs of a special type. This area is one of the oldest parts of combinatorics, such as in Kirkman's schoolgirl problem proposed in 1850. The solution of the problem is a special case of a Steiner system, which play an important role in the classification of finite simple groups. The area has further connections to coding theory and geometric combinatorics. Combinatorial design theory can be applied to the area of design of experiments. Some of the basic theory of combinatorial designs originated in the statistician Ronald Fisher's work on the design of biological experiments. Modern applications are also found in a wide gamut of areas including finite geometry, tournament scheduling, lotteries, mathematical chemistry, mathematical biology, algorithm design and analysis, networking, group testing and cryptography. ### Finite geometry Finite geometry is the study of geometric systems having only a finite number of points. Structures analogous to those found in continuous geometries (Euclidean plane, real projective space, etc.) but defined combinatorially are the main items studied. This area provides a rich source of examples for design theory. It should not be confused with discrete geometry (combinatorial geometry). ### Order theory Order theory is the study of partially ordered sets, both finite and infinite. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". Various examples of partial orders appear in algebra, geometry, number theory and throughout combinatorics and graph theory. Notable classes and examples of partial orders include lattices and Boolean algebras. ### Matroid theory Matroid theory abstracts part of geometry. It studies the properties of sets (usually, finite sets) of vectors in a vector space that do not depend on the particular coefficients in a linear dependence relation. Not only the structure but also enumerative properties belong to matroid theory. 
Matroid theory was introduced by Hassler Whitney and studied as a part of order theory. It is now an independent field of study with a number of connections with other parts of combinatorics. ### Extremal combinatorics Extremal combinatorics studies how large or how small a collection of finite objects (numbers, graphs, vectors, sets, etc.) can be, if it has to satisfy certain restrictions. Much of extremal combinatorics concerns classes of set systems; this is called extremal set theory. For instance, in an n-element set, what is the largest number of k-element subsets that can pairwise intersect one another? What is the largest number of subsets of which none contains any other? The latter question is answered by Sperner's theorem, which gave rise to much of extremal set theory. The types of questions addressed in this case are about the largest possible graph which satisfies certain properties. For example, the largest triangle-free graph on 2n vertices is a complete bipartite graph Kn,n. Often it is too hard even to find the extremal answer f(n) exactly and one can only give an asymptotic estimate. Ramsey theory is another part of extremal combinatorics. It states that any sufficiently large configuration will contain some sort of order. It is an advanced generalization of the pigeonhole principle. ### Probabilistic combinatorics In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find) by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time. Often associated with Paul Erdős, who did the pioneering work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. The area recently grew to become an independent field of combinatorics. ### Algebraic combinatorics Algebraic combinatorics is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra. Algebraic combinatorics has come to be seen more expansively as an area of mathematics where the interaction of combinatorial and algebraic methods is particularly strong and significant. Thus the combinatorial topics may be enumerative in nature or involve matroids, polytopes, partially ordered sets, or finite geometries. On the algebraic side, besides group and representation theory, lattice theory and commutative algebra are common. ### Combinatorics on words Combinatorics on words deals with formal languages. It arose independently within several branches of mathematics, including number theory, group theory and probability. It has applications to enumerative combinatorics, fractal analysis, theoretical computer science, automata theory, and linguistics. 
While many applications are new, the classical Chomsky–Schützenberger hierarchy of classes of formal grammars is perhaps the best-known result in the field. ### Geometric combinatorics Geometric combinatorics is related to convex and discrete geometry. It asks, for example, how many faces of each dimension a convex polytope can have. Metric properties of polytopes play an important role as well, e.g. the Cauchy theorem on the rigidity of convex polytopes. Special polytopes are also considered, such as permutohedra, associahedra and Birkhoff polytopes. Combinatorial geometry is a historical name for discrete geometry. It includes a number of subareas such as polyhedral combinatorics (the study of faces of convex polyhedra), convex geometry (the study of convex sets, in particular combinatorics of their intersections), and discrete geometry, which in turn has many applications to computational geometry. The study of regular polytopes, Archimedean solids, and kissing numbers is also a part of geometric combinatorics. Special polytopes are also considered, such as the permutohedron, associahedron and Birkhoff polytope. ### Topological combinatorics Combinatorial analogs of concepts and methods in topology are used to study graph coloring, fair division, partitions, partially ordered sets, decision trees, necklace problems and discrete Morse theory. It should not be confused with combinatorial topology which is an older name for algebraic topology. ### Arithmetic combinatorics Arithmetic combinatorics arose out of the interplay between number theory, combinatorics, ergodic theory, and harmonic analysis. It is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive number theory (sometimes also called additive combinatorics) refers to the special case when only the operations of addition and subtraction are involved. One important technique in arithmetic combinatorics is the ergodic theory of dynamical systems. ### Infinitary combinatorics Infinitary combinatorics, or combinatorial set theory, is an extension of ideas in combinatorics to infinite sets. It is a part of set theory, an area of mathematical logic, but uses tools and ideas from both set theory and extremal combinatorics. Some of the things studied include continuous graphs and trees, extensions of Ramsey's theorem, and Martin's axiom. Recent developments concern combinatorics of the continuum and combinatorics on successors of singular cardinals. Gian-Carlo Rota used the name continuous combinatorics to describe geometric probability, since there are many analogies between counting and measure. ## Related fields ### Combinatorial optimization Combinatorial optimization is the study of optimization on discrete and combinatorial objects. It started as a part of combinatorics and graph theory, but is now viewed as a branch of applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory. ### Coding theory Coding theory started as a part of design theory with early combinatorial constructions of error-correcting codes. The main idea of the subject is to design efficient and reliable methods of data transmission. It is now a large field of study, part of information theory. ### Discrete and computational geometry Discrete geometry (also called combinatorial geometry) also began as a part of combinatorics, with early results on convex polytopes and kissing numbers. 
With the emergence of applications of discrete geometry to computational geometry, these two fields partially merged and became a separate field of study. There remain many connections with geometric and topological combinatorics, which themselves can be viewed as outgrowths of the early discrete geometry.

### Combinatorics and dynamical systems
Combinatorial aspects of dynamical systems form another emerging field. Here dynamical systems can be defined on combinatorial objects. See, for example, graph dynamical systems.

### Combinatorics and physics
There are increasing interactions between combinatorics and physics, particularly statistical physics. Examples include an exact solution of the Ising model, and a connection between the Potts model on one hand, and the chromatic and Tutte polynomials on the other hand.
https://en.wikipedia.org/wiki/Combinatorics
MongoDB is a source-available, cross-platform, document-oriented database program. Classified as a NoSQL database product, MongoDB uses JSON-like documents with optional schemas. Released in February 2009 by 10gen (now MongoDB Inc.), it supports features like sharding, replication, and ACID transactions (from version 4.0). MongoDB Atlas, its managed cloud service, operates on AWS, Google Cloud Platform, and Microsoft Azure. Current versions are licensed under the Server Side Public License (SSPL). MongoDB is a member of the MACH Alliance.

## History
The American software company 10gen began developing MongoDB in 2007 as a component of a planned platform-as-a-service product. In 2009, the company shifted to an open-source development model and began offering commercial support and other services. In 2013, 10gen changed its name to MongoDB Inc. On October 20, 2017, MongoDB became a publicly traded company, listed on NASDAQ as MDB with an IPO price of $24 per share. On November 8, 2018, with the stable release 4.0.4, the software's license changed from AGPL 3.0 to SSPL. On October 30, 2019, MongoDB teamed with Alibaba Cloud to offer Alibaba Cloud customers a MongoDB-as-a-service solution. Customers can use the managed offering from Alibaba's global data centers.

MongoDB release history:

| Version | Release date | Feature notes |
|---|---|---|
| 1.0 | August 2009 | |
| 1.2 | December 2009 | more indexes per collection; faster index creation; map/reduce; stored JavaScript functions; configurable fsync time; several small features and fixes |
| 1.4 | March 2010 | |
| 1.6 | August 2010 | production-ready sharding; replica sets; support for IPv6 |
| 1.8 | March 2011 | |
| 2.0 | September 2011 | |
| 2.2 | August 2012 | |
| 2.4 | March 2013 | enhanced geospatial support; switch to V8 JavaScript engine; security enhancements; text search (beta); hashed index |
| 2.6 | April 8, 2014 | aggregation enhancements; text-search integration; query-engine improvements; new write-operation protocol; security enhancements |
| 3.0 | March 3, 2015 | WiredTiger storage engine support; pluggable storage engine API; SCRAM-SHA-1 authentication; improved explain functionality; MongoDB Ops Manager |
| 3.2 | December 8, 2015 | WiredTiger storage engine by default; replication election enhancements; config servers as replica sets; readConcern; document validations; moved from V8 to SpiderMonkey |
| 3.4 | November 29, 2016 | linearizable read concerns; views; collation |
| 3.6 | November 29, 2017 | |
| 4.0 | June 26, 2018 | transactions; license change effective per 4.0.4 |
| 4.2 | August 13, 2019 | |
| 4.4 | July 25, 2020 | |
| 4.4.5 | April 2021 | |
| 4.4.6 | May 2021 | |
| 5.0 | July 13, 2021 | versioned API (future-proofing); client-side field level encryption; live resharding; time series support |
| 6.0 | July 19, 2022 | |
| 7.0 | August 15, 2023 | |
| 8.0 | October 2, 2024 | |

## Main features

### Ad-hoc queries
MongoDB supports field, range query and regular-expression searches. Queries can return specific fields of documents and also include user-defined JavaScript functions. Queries can also be configured to return a random sample of results of a given size.

### Indexing
Fields in a MongoDB document can be indexed with primary and secondary indices.

### Replication
MongoDB provides high availability with replica sets. A replica set consists of two or more copies of the data. Each replica-set member may act in the role of primary or secondary replica at any time. All writes and reads are done on the primary replica by default. Secondary replicas maintain a copy of the data of the primary using built-in replication.
When a primary replica fails, the replica set automatically conducts an election process to determine which secondary should become the primary. Secondaries can optionally serve read operations, but that data is only eventually consistent by default. If the replicated MongoDB deployment only has a single secondary member, a separate daemon called an arbiter must be added to the set. It has the single responsibility of resolving the election of the new primary. As a consequence, an ideal distributed MongoDB deployment requires at least three separate servers, even in the case of just one primary and one secondary.

### Load balancing
MongoDB scales horizontally using sharding. The user chooses a shard key, which determines how the data in a collection will be distributed. The data is split into ranges (based on the shard key) and distributed across multiple shards, each of which is a master with one or more replicas. Alternatively, the shard key can be hashed to map to a shard, enabling an even data distribution. MongoDB can run over multiple servers, balancing the load or duplicating data to keep the system functional in case of hardware failure.

### File storage
MongoDB can be used as a file system, called GridFS, with load-balancing and data-replication features over multiple machines for storing files. This function, called a grid file system, is included with MongoDB drivers. MongoDB exposes functions for file manipulation and content to developers. GridFS can be accessed using the mongofiles utility or plugins for Nginx and lighttpd. GridFS divides a file into parts, or chunks, and stores each of those chunks as a separate document.

### Aggregation
MongoDB provides three ways to perform aggregation: the aggregation pipeline, the map-reduce function and single-purpose aggregation methods. Map-reduce can be used for batch processing of data and aggregation operations. However, according to MongoDB's documentation, the aggregation pipeline provides better performance for most aggregation operations. The aggregation framework enables users to obtain results similar to those returned by queries that include the SQL GROUP BY clause. Aggregation operators can be strung together to form a pipeline, analogous to Unix pipes. The aggregation framework includes the $lookup operator, which can join documents from multiple collections, as well as statistical operators such as standard deviation (see the sketch at the end of this section).

### Server-side JavaScript execution
JavaScript can be used in queries, aggregation functions (such as MapReduce) and sent directly to the database to be executed.

### Capped collections
MongoDB supports fixed-size collections called capped collections. This type of collection maintains insertion order and, once the specified size has been reached, behaves like a circular queue.

### Transactions
MongoDB has supported multi-document ACID transactions since the 4.0 release in June 2018.

## Editions

### MongoDB Community Server
The MongoDB Community Edition is free and available for Windows, Linux and macOS.

### MongoDB Enterprise Server
MongoDB Enterprise Server is the commercial edition of MongoDB and is available as part of the MongoDB Enterprise Advanced subscription.

### MongoDB Atlas
MongoDB is also available as an on-demand, fully managed service. MongoDB Atlas runs on AWS, Microsoft Azure and Google Cloud Platform.
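The following is a minimal PyMongo sketch of the query and aggregation features described above. It assumes a local mongod; the database, collection and field names ("shop", "orders", "customers", "qty", and so on) are hypothetical and purely illustrative, while the operators used ($match, $group, $sum, $stdDevPop, $sort, $lookup, $sample) are standard MongoDB aggregation operators.

```python
# Minimal PyMongo sketch (hypothetical "shop" database with "orders" and
# "customers" collections; names are illustrative only).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["shop"]

# Ad-hoc query: field/range match with a projection of specific fields.
shipped = db.orders.find(
    {"status": "shipped", "qty": {"$gte": 5}},
    {"_id": 0, "item": 1, "qty": 1},
)

# Random sample of documents of a given size.
sample = db.orders.aggregate([{"$sample": {"size": 3}}])

# Aggregation pipeline: stages are chained like Unix pipes.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$item",                  # GROUP BY-like stage
                "totalQty": {"$sum": "$qty"},
                "priceStdDev": {"$stdDevPop": "$price"}}},
    {"$sort": {"totalQty": -1}},
    {"$lookup": {"from": "customers",            # join against another collection
                 "localField": "_id",
                 "foreignField": "favourite_item",  # hypothetical field
                 "as": "fans"}},
]
for doc in db.orders.aggregate(pipeline):
    print(doc)
```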
On March 10, 2022, MongoDB warned its users in Russia and Belarus that their data stored on the MongoDB Atlas platform would be destroyed as a result of American sanctions related to the Russo-Ukrainian War.

## Architecture

### Programming language accessibility
MongoDB has official drivers for major programming languages and development environments. There are also a large number of unofficial or community-supported drivers for other programming languages and frameworks.

### Serverless access

### Management and graphical front-ends
The primary interface to the database has been the mongo shell. Since MongoDB 3.2, MongoDB Compass has been offered as the native GUI. There are products and third-party projects that offer user interfaces for administration and data viewing.

## Licensing

### MongoDB Community Server
As of October 2018, MongoDB is released under the Server Side Public License (SSPL), a non-free license developed by the project. It replaces the GNU Affero General Public License, and is nearly identical to the GNU General Public License version 3, but requires that those making the software publicly available as part of a "service" must make the service's entire source code (insofar as a user would be able to recreate the service themselves) available under this license. By contrast, the AGPL only requires the source code of the licensed software to be provided to users when the software is conveyed over a network. The SSPL was submitted for certification to the Open Source Initiative but later withdrawn. In January 2021, the Open Source Initiative stated that SSPL is not an open source license. The language drivers are available under an Apache License. In addition, MongoDB Inc. offers proprietary licenses for MongoDB. The last versions licensed as AGPL version 3 are 4.0.3 (stable) and 4.1.4. MongoDB has been removed from the Debian, Fedora and Red Hat Enterprise Linux distributions because of the licensing change. Fedora determined that the SSPL version 1 is not a free software license because it is "intentionally crafted to be aggressively discriminatory" towards commercial users.

## Bug reports and criticisms

### Security
Because of MongoDB's default security configuration, which allowed any user full access to the database, data from tens of thousands of MongoDB installations has been stolen. Furthermore, many MongoDB servers have been held for ransom. In September 2017, Davi Ottenheimer, head of product security at MongoDB, stated that measures had been taken to defend against these risks. From the MongoDB 2.6 release onward, the binaries for the official MongoDB RPM and DEB packages bind to localhost by default. From MongoDB 3.6, this default behavior was extended to all MongoDB packages across all platforms. As a result, all networked connections to the database are denied unless explicitly configured by an administrator.

### Technical criticisms
In some failure scenarios in which an application can access two distinct MongoDB processes that cannot access each other, it is possible for MongoDB to return stale reads. It is also possible for MongoDB to roll back writes that have been acknowledged. The issue was addressed in version 3.4.0, released in November 2016, and applied to earlier releases from v3.2.12 onward. Before version 2.2, locks were implemented on a per-server-process basis. With version 2.2, locks were implemented at the database level. Beginning with version 3.0, pluggable storage engines are available, and each storage engine may implement locks differently.
With MongoDB 3.0, locks are implemented at the collection level for the MMAPv1 storage engine, while the WiredTiger storage engine uses an optimistic concurrency protocol that effectively provides document-level locking. Even with versions prior to 3.0, one approach to increase concurrency is to use sharding. In some situations, reads and writes will yield their locks. If MongoDB predicts that a page is unlikely to be in memory, operations will yield their lock while the pages load. The use of lock yielding expanded greatly in version 2.2.

Until version 3.3.11, MongoDB could not perform collation-based sorting and was limited to bytewise comparison via memcmp, which would not provide correct ordering for many non-English languages when used with a Unicode encoding. The issue was fixed on August 23, 2016.

Prior to MongoDB 4.0, queries against an index were not atomic. Documents that were updated while a query was running could be missed. The introduction of the snapshot read concern in MongoDB 4.0 eliminated this risk.

MongoDB claimed that version 3.6.4 had passed "the industry's toughest data safety, correctness, and consistency tests" by Jepsen, and that "MongoDB offers among the strongest data consistency, correctness, and safety guarantees of any database available today." Jepsen, which describes itself as a "distributed systems safety research company," disputed both claims on Twitter, saying, "In that report, MongoDB lost data and violated causal by default." In its May 2020 report on MongoDB version 4.2.6, Jepsen wrote that MongoDB had only mentioned tests that version 3.6.4 had passed, and that version 4.2.6 had introduced more problems. Jepsen's test summary reads in part: Jepsen evaluated MongoDB version 4.2.6, and found that even at the strongest levels of read and write concern, it failed to preserve snapshot isolation. Instead, Jepsen observed read skew, cyclic information flow, duplicate writes, and internal consistency violations. Weak defaults meant that transactions could lose writes and allow dirty reads, even downgrading requested safety levels at the database and collection level. Moreover, the snapshot read concern did not guarantee snapshot unless paired with write concern majority—even for read-only transactions. These design choices complicate the safe use of MongoDB transactions. On May 26, Jepsen updated the report to say: "MongoDB identified a bug in the transaction retry mechanism which they believe was responsible for the anomalies observed in this report; a patch is scheduled for 4.2.8." The issue has been patched as of that version, and "Jepsen criticisms of the default write concerns have also been addressed, with the default write concern now elevated to the majority concern (w:majority) from MongoDB 5.0."

## MongoDB conference
MongoDB Inc. hosts an annual developer conference that has been called MongoDB World or MongoDB.live.

| Year | Dates | City | Venue | Notes |
|---|---|---|---|---|
| 2014 | June 23–25 | New York City | Sheraton Times Square Hotel | |
| 2015 | June 1–2 | New York City | Sheraton Times Square Hotel | |
| 2016 | June 28–29 | New York City | New York Hilton Midtown | |
| 2017 | June 20–21 | Chicago | Hyatt Regency Chicago | First year not in New York City |
| 2018 | June 26–27 | New York City | New York Hilton Midtown | |
| 2019 | June 17–19 | New York City | New York Hilton Midtown | |
| 2020 | May 4–6 | Online | | In-person event canceled and conference held entirely online because of the COVID-19 pandemic |
| 2021 | July 13–14 | Online | | Conference held online because of the COVID-19 pandemic |
| 2022 | June 7–9 | New York City | Javits Center | |
https://en.wikipedia.org/wiki/MongoDB
In classical electromagnetism, Ampère's circuital law (not to be confused with Ampère's force law) relates the circulation of a magnetic field around a closed loop to the electric current passing through the loop. James Clerk Maxwell derived it using hydrodynamics in his 1861 paper "On Physical Lines of Force". In 1865, he generalized the equation to apply to time-varying currents by adding the displacement current term, resulting in the modern form of the law, sometimes called the Ampère–Maxwell law, which is one of Maxwell's equations that form the basis of classical electromagnetism.

## Ampère's original circuital law
In 1820 Danish physicist Hans Christian Ørsted discovered that an electric current creates a magnetic field around it, when he noticed that the needle of a compass next to a wire carrying current turned so that the needle was perpendicular to the wire. He investigated and discovered the rules which govern the field around a straight current-carrying wire:
- The magnetic field lines encircle the current-carrying wire.
- The magnetic field lines lie in a plane perpendicular to the wire.
- If the direction of the current is reversed, the direction of the magnetic field reverses.
- The strength of the field is directly proportional to the magnitude of the current.
- The strength of the field at any point is inversely proportional to the distance of the point from the wire.

This sparked a great deal of research into the relation between electricity and magnetism. André-Marie Ampère investigated the magnetic force between two current-carrying wires, discovering Ampère's force law. In the 1850s Scottish mathematical physicist James Clerk Maxwell generalized these results and others into a single mathematical law. The original form of Maxwell's circuital law, which he derived as early as 1855 in his paper "On Faraday's Lines of Force" based on an analogy to hydrodynamics, relates magnetic fields to electric currents that produce them. It determines the magnetic field associated with a given current, or the current associated with a given magnetic field. The original circuital law only applies to a magnetostatic situation, to continuous steady currents flowing in a closed circuit. For systems with electric fields that change over time, the original law (as given in this section) must be modified to include a term known as Maxwell's correction (see below).

### Equivalent forms
The original circuital law can be written in several different forms, which are all ultimately equivalent:
- An "integral form" and a "differential form". The forms are exactly equivalent, and related by the Kelvin–Stokes theorem (see the "proof" section below).
- Forms using SI units, and those using cgs units. Other units are possible, but rare. This section will use SI units, with cgs units discussed later.
- Forms using either the B or the H magnetic field. These two forms use the total current density and the free current density, respectively. The B and H fields are related by the constitutive equation $$ \mathbf{B} = \mu_0 \mathbf{H} $$ in non-magnetic materials, where $$ \mu_0 $$ is the magnetic constant.

### Explanation
The integral form of the original circuital law is a line integral of the magnetic field around some closed curve C (arbitrary but must be closed). The curve C in turn bounds both a surface S which the electric current passes through (again arbitrary but not closed—since no three-dimensional volume is enclosed by S), and encloses the current.
The mathematical statement of the law is a relation between the circulation of the magnetic field around some path (line integral) due to the current which passes through that enclosed path (surface integral). In terms of total current (which is the sum of both free current and bound current), the line integral of the magnetic B-field (in teslas, T) around a closed curve C is proportional to the total current passing through a surface S (enclosed by C). In terms of free current, the line integral of the magnetic H-field (in amperes per metre, A·m−1) around a closed curve C equals the free current through a surface S.

Forms of the original circuital law written in SI units:
- Using the B-field and total current: integral form $$ \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \mu_0 \iint_S \mathbf{J} \cdot \mathrm{d}\mathbf{S} $$ and differential form $$ \mathbf{\nabla} \times \mathbf{B} = \mu_0 \mathbf{J} $$.
- Using the H-field and free current: integral form $$ \oint_C \mathbf{H} \cdot \mathrm{d}\boldsymbol{l} = \iint_S \mathbf{J}_\mathrm{f} \cdot \mathrm{d}\mathbf{S} $$ and differential form $$ \mathbf{\nabla} \times \mathbf{H} = \mathbf{J}_\mathrm{f} $$.

Here:
- $$ \mathbf{J} $$ is the total current density (in amperes per square metre, A·m−2),
- $$ \mathbf{J}_\mathrm{f} $$ is the free current density only,
- $$ \oint_C $$ is the closed line integral around the closed curve C,
- $$ \iint_S $$ denotes a surface integral over the surface S bounded by the curve C,
- $$ \cdot $$ is the vector dot product,
- $$ \mathrm{d}\boldsymbol{l} $$ is an infinitesimal element (a differential) of the curve C (i.e. a vector with magnitude equal to the length of the infinitesimal line element, and direction given by the tangent to the curve C),
- $$ \mathrm{d}\mathbf{S} $$ is the vector area of an infinitesimal element of the surface S (that is, a vector with magnitude equal to the area of the infinitesimal surface element, and direction normal to the surface S; the direction of the normal must correspond with the orientation of C by the right-hand rule), see below for further explanation of the curve C and surface S,
- $$ \mathbf{\nabla} \times $$ is the curl operator.

### Ambiguities and sign conventions
There are a number of ambiguities in the above definitions that require clarification and a choice of convention.
1. First, three of these terms are associated with sign ambiguities: the line integral could go around the loop in either direction (clockwise or counterclockwise); the vector area $$ \mathrm{d}\mathbf{S} $$ could point in either of the two directions normal to the surface; and the enclosed current is the net current passing through the surface S, meaning the current passing through in one direction, minus the current in the other direction—but either direction could be chosen as positive. These ambiguities are resolved by the right-hand rule: With the palm of the right hand toward the area of integration, and the index finger pointing along the direction of line integration, the outstretched thumb points in the direction that must be chosen for the vector area $$ \mathrm{d}\mathbf{S} $$. Also the current passing in the same direction as $$ \mathrm{d}\mathbf{S} $$ must be counted as positive. The right-hand grip rule can also be used to determine the signs.
2. Second, there are infinitely many possible surfaces S that have the curve C as their border. (Imagine a soap film on a wire loop, which can be deformed by blowing on the film.) Which of those surfaces is to be chosen? If the loop does not lie in a single plane, for example, there is no one obvious choice. The answer is that it does not matter: in the magnetostatic case, the current density is solenoidal (see next section), so the divergence theorem and continuity equation imply that the flux through any surface with boundary C, with the same sign convention, is the same. In practice, one usually chooses the most convenient surface (with the given boundary) to integrate over.

## Free current versus bound current
The electric current that arises in the simplest textbook situations would be classified as "free current"—for example, the current that passes through a wire or battery.
In contrast, "bound current" arises in the context of bulk materials that can be magnetized and/or polarized. (All materials can to some extent.) When a material is magnetized (for example, by placing it in an external magnetic field), the electrons remain bound to their respective atoms, but behave as if they were orbiting the nucleus in a particular direction, creating a microscopic current. When the currents from all these atoms are put together, they create the same effect as a macroscopic current, circulating perpetually around the magnetized object. This magnetization current is one contribution to "bound current". The other source of bound current is bound charge. When an electric field is applied, the positive and negative bound charges can separate over atomic distances in polarizable materials, and when the bound charges move, the polarization changes, creating another contribution to the "bound current", the polarization current . The total current density due to free and bound charges is then: $$ \mathbf{J} =\mathbf{J}_\mathrm{f} + \mathbf{J}_\mathrm{M} + \mathbf{J}_\mathrm{P} \,, $$ with   the "free" or "conduction" current density. All current is fundamentally the same, microscopically. Nevertheless, there are often practical reasons for wanting to treat bound current differently from free current. For example, the bound current usually originates over atomic dimensions, and one may wish to take advantage of a simpler theory intended for larger dimensions. The result is that the more microscopic Ampère's circuital law, expressed in terms of and the microscopic current (which includes free, magnetization and polarization currents), is sometimes put into the equivalent form below in terms of and the free current only. For a detailed definition of free current and bound current, and the proof that the two formulations are equivalent, see the "proof" section below. ## Shortcomings of the original formulation of the circuital law There are two important issues regarding the circuital law that require closer scrutiny. First, there is an issue regarding the continuity equation for electrical charge. In vector calculus, the identity for the divergence of a curl states that the divergence of the curl of a vector field must always be zero. Hence $$ \nabla\cdot(\nabla\times\mathbf{B}) = 0 \,, $$ and so the original Ampère's circuital law implies that $$ \nabla\cdot \mathbf{J} = 0\,, $$ i.e. that the current density is solenoidal. But in general, reality follows the continuity equation for electric charge: $$ \nabla\cdot \mathbf{J} = -\frac{\partial \rho}{\partial t} \,, $$ which is nonzero for a time-varying charge density. An example occurs in a capacitor circuit where time-varying charge densities exist on the plates. Second, there is an issue regarding the propagation of electromagnetic waves. For example, in free space, where $$ \mathbf{J} = \mathbf{0}\,, $$ the circuital law implies that $$ \nabla\times\mathbf{B} = \mathbf{0}\,, $$ i.e. that the magnetic field is irrotational, but to maintain consistency with the continuity equation for electric charge, we must have $$ \nabla\times\mathbf{B} = \frac{1}{c^2}\frac{\partial\mathbf{E}}{\partial t}\,. $$ To 'resolve' these situations (w/ eqn. above), the contribution of displacement current must be added to the current term in the circuital law. James Clerk Maxwell conceived of displacement current as a polarization current in the dielectric vortex sea, which he used to model the magnetic field hydrodynamically and mechanically. 
He added this displacement current to Ampère's circuital law at equation 112 in his 1861 paper "On Physical Lines of Force". ### Displacement current In free space, the displacement current is related to the time rate of change of electric field. In a dielectric the above contribution to displacement current is present too, but a major contribution to the displacement current is related to the polarization of the individual molecules of the dielectric material. Even though charges cannot flow freely in a dielectric, the charges in molecules can move a little under the influence of an electric field. The positive and negative charges in molecules separate under the applied field, causing an increase in the state of polarization, expressed as the polarization density . A changing state of polarization is equivalent to a current. Both contributions to the displacement current are combined by defining the displacement current as: $$ \mathbf{J}_\mathrm{D} = \frac {\partial}{\partial t} \mathbf{D} (\mathbf{r}, \, t) \, , $$ where the electric displacement field is defined as: $$ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P} = \varepsilon_0 \varepsilon_\mathrm{r} \mathbf{E} \, , $$ where is the electric constant, the relative static permittivity, and is the polarization density. Substituting this form for in the expression for displacement current, it has two components: $$ \mathbf{J}_\mathrm{D} = \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} + \frac{\partial \mathbf{P}}{\partial t}\,. $$ The first term on the right hand side is present everywhere, even in a vacuum. It doesn't involve any actual movement of charge, but it nevertheless has an associated magnetic field, as if it were an actual current. Some authors apply the name displacement current to only this contribution. The second term on the right hand side is the displacement current as originally conceived by Maxwell, associated with the polarization of the individual molecules of the dielectric material. Maxwell's original explanation for displacement current focused upon the situation that occurs in dielectric media. In the modern post-aether era, the concept has been extended to apply to situations with no material media present, for example, to the vacuum between the plates of a charging vacuum capacitor. The displacement current is justified today because it serves several requirements of an electromagnetic theory: correct prediction of magnetic fields in regions where no free current flows; prediction of wave propagation of electromagnetic fields; and conservation of electric charge in cases where charge density is time-varying. For greater discussion see Displacement current. ## Extending the original law: the Ampère–Maxwell equation Next, the circuital equation is extended by including the polarization current, thereby remedying the limited applicability of the original circuital law. 
Treating free charges separately from bound charges, the equation including Maxwell's correction in terms of the H-field is (the H-field is used because it includes the magnetization currents, so $$ \mathbf{J}_\mathrm{M} $$ does not appear explicitly; see H-field): $$ \oint_C \mathbf{H} \cdot \mathrm{d} \boldsymbol{l} = \iint_S \left( \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} \right) \cdot \mathrm{d} \mathbf{S} $$ (integral form), where H is the magnetic field (also called "auxiliary magnetic field", "magnetic field intensity", or just "magnetic field"), D is the electric displacement field, and $$ \mathbf{J}_\mathrm{f} $$ is the enclosed conduction current or free current density. In differential form, $$ \mathbf{\nabla} \times \mathbf{H} = \mathbf{J}_\mathrm{f}+\frac{\partial \mathbf{D}}{\partial t} \, . $$ On the other hand, treating all charges on the same footing (disregarding whether they are bound or free charges), the generalized Ampère's equation, also called the Maxwell–Ampère equation, is, in integral form (see the "proof" section below), $$ \oint_C \mathbf{B} \cdot \mathrm{d} \boldsymbol{l} = \mu_0 \iint_S \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \cdot \mathrm{d} \mathbf{S} \, . $$ In differential form, $$ \mathbf{\nabla} \times \mathbf{B} = \mu_0 \left( \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) . $$ In both forms $$ \mathbf{J} $$ includes magnetization current density as well as conduction and polarization current densities. That is, the current density on the right side of the Ampère–Maxwell equation is: $$ \mathbf{J}_\mathrm{f}+\mathbf{J}_\mathrm{D} +\mathbf{J}_\mathrm{M} = \mathbf{J}_\mathrm{f}+\mathbf{J}_\mathrm{P} +\mathbf{J}_\mathrm{M} + \varepsilon_0 \frac {\partial \mathbf{E}}{\partial t} = \mathbf{J}+ \varepsilon_0 \frac {\partial \mathbf{E}}{\partial t} \, , $$ where the current density $$ \mathbf{J}_\mathrm{D} $$ is the displacement current, and $$ \mathbf{J} $$ is the current density contribution actually due to movement of charges, both free and bound. Because the divergence of the corrected right-hand side vanishes identically (its divergence reproduces the continuity equation), the charge continuity issue with Ampère's original formulation is no longer a problem. Because of the term $$ \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} $$ in $$ \mathbf{J}_\mathrm{D} $$, wave propagation in free space now is possible. With the addition of the displacement current, Maxwell was able to hypothesize (correctly) that light was a form of electromagnetic wave. See electromagnetic wave equation for a discussion of this important discovery.

### Proof of equivalence
Proof that the formulations of the circuital law in terms of free current are equivalent to the formulations involving total current. In this proof, we will show that the equation $$ \nabla\times \mathbf{H} = \mathbf{J}_\mathrm{f} + \frac{\partial \mathbf{D}}{\partial t} $$ is equivalent to the equation $$ \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) = \mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\,. $$ Note that we are only dealing with the differential forms, not the integral forms, but that is sufficient since the differential and integral forms are equivalent in each case, by the Kelvin–Stokes theorem. We introduce the polarization density P, which has the following relation to E and D: $$ \mathbf{D}=\varepsilon_0 \mathbf{E} + \mathbf{P}\,. $$ Next, we introduce the magnetization density M, which has the following relation to B and H: $$ \frac{1}{\mu_0}\mathbf{B} = \mathbf{H} + \mathbf{M} $$ and the following relation to the bound current: $$ \begin{align} \mathbf{J}_\mathrm{bound} &= \nabla\times\mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} \\ &=\mathbf{J}_\mathrm{M}+\mathbf{J}_\mathrm{P}, \end{align} $$ where $$ \mathbf{J}_\mathrm{M} = \nabla\times\mathbf{M} , $$ is called the magnetization current density, and $$ \mathbf{J}_\mathrm{P} = \frac{\partial \mathbf{P}}{\partial t}, $$ is the polarization current density.
Taking the equation for : $$ \begin{align} \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) &= \mathbf{\nabla} \times \left( \mathbf {H}+\mathbf{M} \right) \\ &=\mathbf{\nabla} \times \mathbf H + \mathbf{J}_{\mathrm{M}} \\ &= \mathbf{J}_\mathrm{f} + \mathbf{J}_\mathrm{P} +\varepsilon_0 \frac{\partial \mathbf E}{\partial t} + \mathbf{J}_\mathrm{M}. \end{align} $$ Consequently, referring to the definition of the bound current: $$ \begin{align} \frac{1}{\mu_0}(\mathbf{\nabla} \times \mathbf{B}) &=\mathbf{J}_\mathrm{f}+ \mathbf{J}_\mathrm{bound} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \\ &=\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf E}{\partial t} , \end{align} $$ as was to be shown. ## Ampère's circuital law in cgs units In cgs units, the integral form of the equation, including Maxwell's correction, reads $$ \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = \frac{1}{c} \iint_S \left(4\pi\mathbf{J}+\frac{\partial \mathbf{E}}{\partial t}\right) \cdot \mathrm{d}\mathbf{S}, $$ where is the speed of light. The differential form of the equation (again, including Maxwell's correction) is $$ \mathbf{\nabla} \times \mathbf{B} = \frac{1}{c}\left(4\pi\mathbf{J}+\frac{\partial \mathbf{E}}{\partial t}\right). $$
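As a short worked example of the SI integral form above (a standard textbook application, stated here under the assumption of an infinitely long straight wire carrying a steady current $$ I $$): by symmetry the magnetic field is azimuthal and has constant magnitude $$ B $$ on a circle $$ C $$ of radius $$ r $$ centred on the wire, so $$ \oint_C \mathbf{B} \cdot \mathrm{d}\boldsymbol{l} = B \, (2\pi r) = \mu_0 I_\mathrm{enc} = \mu_0 I \,, \qquad \text{hence} \qquad B = \frac{\mu_0 I}{2\pi r} \,, $$ recovering the inverse-distance dependence of the field strength noted in Ørsted's observations above.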
https://en.wikipedia.org/wiki/Amp%C3%A8re%27s_circuital_law
Association rule learning is a rule-based machine learning method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using some measures of interestingness. In any given transaction with a variety of items, association rules are meant to discover the rules that determine how or why certain items are connected. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule $$ \{\mathrm{onions, potatoes}\} \Rightarrow \{\mathrm{burger}\} $$ found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as, e.g., promotional pricing or product placements. In addition to the above example from market basket analysis, association rules are employed today in many application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. The association rule algorithm itself consists of various parameters that can make it difficult for those without some expertise in data mining to execute, with many rules that are arduous to understand.

## Definition
Following the original definition by Agrawal, Imieliński, Swami the problem of association rule mining is defined as: Let $$ I=\{i_1, i_2,\ldots,i_n\} $$ be a set of binary attributes called items. Let $$ D = \{t_1, t_2, \ldots, t_m\} $$ be a set of transactions called the database. Each transaction in $$ D $$ has a unique transaction ID and contains a subset of the items in $$ I $$. A rule is defined as an implication of the form $$ X \Rightarrow Y $$, where $$ X, Y \subseteq I $$. In Agrawal, Imieliński, Swami a rule is defined only between a set and a single item, $$ X \Rightarrow i_j $$ for $$ i_j \in I $$. Every rule is composed of two different sets of items, also known as itemsets, $$ X $$ and $$ Y $$, where $$ X $$ is called the antecedent or left-hand side (LHS) and $$ Y $$ the consequent or right-hand side (RHS). The antecedent is the itemset found in the data, while the consequent is the itemset found in combination with the antecedent. The statement $$ X \Rightarrow Y $$ is often read as if $$ X $$ then $$ Y $$: the antecedent ($$ X $$) is the if and the consequent ($$ Y $$) is the then. This simply implies that, in theory, whenever $$ X $$ occurs in a dataset, $$ Y $$ will occur as well.

## Process
Association rules are made by searching data for frequent if-then patterns and by using certain criteria, namely support and confidence, to define what the most important relationships are. Support is an indication of how frequently an itemset appears in the given data, while confidence indicates how often the if-then statement is found to be true. A third criterion, called lift, can be used to compare the expected confidence and the actual confidence; lift shows how many times more often the if-then statement is found to be true than would be expected by chance. Association rules are calculated from itemsets, which are created by two or more items.
If rules were built by analyzing all the possible itemsets in the data, there would be so many rules that they would have little meaning. That is why association rules are typically derived from itemsets that are well represented in the data.

There are many different data mining techniques that can be used to find certain analytics and results, for example classification analysis, clustering analysis, and regression analysis. Which technique should be used depends on what you are looking for with your data. Association rules are primarily used to find analytics and a prediction of customer behavior. Classification analysis would most likely be used to question, make decisions, and predict behavior. Clustering analysis is primarily used when there are no assumptions made about the likely relationships within the data. Regression analysis is used when you want to predict the value of a continuous dependent variable from a number of independent variables.

Benefits: There are many benefits of using association rules, such as finding patterns that help understand the correlations and co-occurrences between data sets. A very good real-world example that uses association rules is medicine. Medicine uses association rules to help diagnose patients. When diagnosing patients there are many variables to consider, as many diseases share similar symptoms. With the use of association rules, doctors can determine the conditional probability of an illness by comparing symptom relationships from past cases.

Downsides: However, association rules also have downsides, such as the difficulty of finding appropriate parameter and threshold settings for the mining algorithm. There is also the downside of having a large number of discovered rules: this does not guarantee that the rules will be relevant, and it can also cause the algorithm to have low performance. Sometimes the implemented algorithms contain too many variables and parameters, which can make them hard to understand for someone without a good grasp of data mining.

Thresholds: When using association rules, you are most likely to use only support and confidence. However, this means you have to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time. Usually, association rule generation is split into two different steps that need to be applied:
1. A minimum support threshold is applied to find all the frequent itemsets in the database.
2. A minimum confidence threshold is applied to the frequent itemsets found in order to create rules.

Table 1. Example of thresholds for support and confidence (the support threshold is 30%, the confidence threshold is 50%). The table on the left is the original unorganized data and the table on the right is organized by the thresholds.

Original data:

| Items | Support | Confidence |
|---|---|---|
| Item A | 30% | 50% |
| Item B | 15% | 25% |
| Item C | 45% | 55% |
| Item D | 35% | 40% |

Organized by the thresholds:

| Items | Support | Confidence |
|---|---|---|
| Item C | 45% | 55% |
| Item A | 30% | 50% |
| Item D | 35% | 40% |
| Item B | 15% | 25% |

In this case Item C exceeds the thresholds for both support and confidence, which is why it is first. Item A is second because its values exactly meet the thresholds. Item D has met the threshold for support but not confidence. Item B has not met the threshold for either support or confidence, which is why it is last.

Finding all the frequent itemsets in a database is not an easy task, since it involves going through all the data to find all possible item combinations from all possible itemsets.
The set of possible itemsets is the power set over $$ I $$ and has size $$ 2^n-1 $$ (this excludes the empty set, which is not considered a valid itemset). The size of the power set thus grows exponentially in the number of items $$ n $$ in $$ I $$. An efficient search is possible by using the downward-closure property of support (also called anti-monotonicity), which guarantees that every subset of a frequent itemset is also frequent, and thus that no frequent itemset can have an infrequent subset. Exploiting this property, efficient algorithms (e.g., Apriori and Eclat) can find all frequent itemsets.

## Useful Concepts

Table 2. Example database with 5 transactions and 7 items:

| transaction ID | milk | bread | butter | beer | diapers | eggs | fruit |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 |
| 2 | 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| 3 | 0 | 0 | 0 | 1 | 1 | 0 | 0 |
| 4 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| 5 | 0 | 1 | 0 | 0 | 0 | 0 | 0 |

To illustrate the concepts, we use a small example from the supermarket domain. Table 2 shows a small database containing the items where, in each entry, the value 1 means the presence of the item in the corresponding transaction, and the value 0 represents the absence of an item in that transaction. The set of items is $$ I= \{\mathrm{milk, bread, butter, beer, diapers, eggs, fruit}\} $$.

An example rule for the supermarket could be $$ \{\mathrm{butter, bread}\} \Rightarrow \{\mathrm{milk}\} $$ meaning that if butter and bread are bought, customers also buy milk. In order to select interesting rules from the set of all possible rules, constraints on various measures of significance and interest are used. The best-known constraints are minimum thresholds on support and confidence. Let $$ X, Y $$ be itemsets, $$ X \Rightarrow Y $$ an association rule and $$ T $$ a set of transactions of a given database. Note: this example is extremely small. In practical applications, a rule needs a support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions.

### Support
Support is an indication of how frequently the itemset appears in the dataset. In our example, it can be easier to explain support by writing $$ \text{support} = P(A\cap B) = \frac{\text{number of transactions containing }A\text{ and }B}{\text{total number of transactions}} $$ where A and B are separate itemsets that occur at the same time in a transaction.

Using Table 2 as an example, the itemset $$ X=\{\mathrm{beer, diapers}\} $$ has a support of $$ 1/5=0.2 $$ since it occurs in 20% of all transactions (1 out of 5 transactions). The argument of support of X is a set of preconditions, and thus becomes more restrictive as it grows (instead of more inclusive). Furthermore, the itemset $$ Y=\{\mathrm{milk, bread, butter}\} $$ has a support of $$ 1/5=0.2 $$ as it appears in 20% of all transactions as well. When using antecedents and consequents, a data miner can determine the support of multiple items being bought together in comparison to the whole data set. For example, Table 2 shows that the rule if milk is bought, then bread is bought has a support of 0.4 or 40%, because milk and bread are bought together in 2 out of 5 transactions. In smaller data sets like this example, it is harder to see a strong correlation when there are few samples, but when the data set grows larger, support can be used to find correlations between two or more products in the supermarket example.

Minimum support thresholds are useful for determining which itemsets are preferred or interesting.
If we set the support threshold to ≥0.4 in Table 3, then the rule $$ \{\mathrm{milk}\} \Rightarrow \{\mathrm{eggs}\} $$ would be removed since it did not meet the minimum threshold of 0.4. The minimum threshold is used to remove samples where there is not strong enough support or confidence to deem the sample important or interesting in the dataset.

Another way of finding interesting samples is to find the value of (support)×(confidence); this allows a data miner to see the samples where support and confidence are high enough to be highlighted in the dataset and prompt a closer look at the sample to find more information on the connection between the items. Support can be beneficial for finding the connection between products in comparison to the whole dataset, whereas confidence looks at the connection between one or more items and another item. Below is a table that shows the comparison and contrast between support and support × confidence, using the information from Table 4 to derive the confidence values.

Table 3. Example of support and support × confidence:

| If antecedent, then consequent | Support | Support × confidence |
|---|---|---|
| if buy milk, then buy bread | 2/5 = 0.4 | 0.4 × 1.0 = 0.4 |
| if buy milk, then buy eggs | 1/5 = 0.2 | 0.2 × 0.5 = 0.1 |
| if buy bread, then buy fruit | 2/5 = 0.4 | 0.4 × 0.66 = 0.264 |
| if buy fruit, then buy eggs | 2/5 = 0.4 | 0.4 × 0.66 = 0.264 |
| if buy milk and bread, then buy fruit | 2/5 = 0.4 | 0.4 × 1.0 = 0.4 |

The support of $$ X $$ with respect to $$ T $$ is defined as the proportion of transactions in the dataset which contain the itemset $$ X $$. Denoting a transaction by $$ (i,t) $$, where $$ i $$ is the unique identifier of the transaction and $$ t $$ is its itemset, the support may be written as: $$ \mathrm{support\,of\,X} = \frac{|\{(i,t) \in T : X \subseteq t \}|}{|T|} $$ This notation can be used when defining more complicated datasets where the items and itemsets may not be as simple as in our supermarket example above. Other examples of where support can be used include finding groups of genetic mutations that work collectively to cause a disease, investigating the number of subscribers that respond to upgrade offers, and discovering which products in a drug store are never bought together.

### Confidence
Confidence is the percentage of all transactions satisfying $$ X $$ that also satisfy $$ Y $$. With respect to $$ T $$, the confidence value of an association rule $$ X \Rightarrow Y $$ is the ratio of the transactions containing both $$ X $$ and $$ Y $$ to the transactions containing $$ X $$, where $$ X $$ is the antecedent and $$ Y $$ is the consequent. Confidence can also be interpreted as an estimate of the conditional probability $$ P(E_Y | E_X) $$, the probability of finding the RHS of the rule in transactions under the condition that these transactions also contain the LHS. It is commonly depicted as: $$ \mathrm{conf}(X \Rightarrow Y) = P(Y | X) = \frac{\mathrm{supp}(X \cap Y)}{ \mathrm{supp}(X) }=\frac{\text{number of transactions containing }X\text{ and }Y}{\text{number of transactions containing }X} $$ The equation illustrates that confidence can be computed by counting the co-occurrences of $$ X $$ and $$ Y $$ within the dataset relative to the transactions containing only $$ X $$: the number of transactions containing both $$ X $$ and $$ Y $$ is divided by the number containing just $$ X $$.

For example, Table 2 shows the rule $$ \{\mathrm{butter, bread}\} \Rightarrow \{\mathrm{milk}\} $$ which has a confidence of $$ \frac{1/5}{1/5}=\frac{0.2}{0.2}=1.0 $$ in the dataset, which denotes that every time a customer buys butter and bread, they also buy milk.
This particular example demonstrates the rule being correct 100% of the time for transactions containing both butter and bread. The rule $$ \{\mathrm{fruit}\} \Rightarrow \{\mathrm{eggs}\} $$, however, has a confidence of $$ \frac{2/5}{3/5}=\frac{0.4}{0.6}=0.67 $$. This suggests that eggs are bought 67% of the times that fruit is bought. Within this particular dataset, fruit is purchased a total of 3 times, with two of those times consisting of egg purchases.

For larger datasets, a minimum threshold, or a percentage cutoff, for the confidence can be useful for determining item relationships. When applying this method to some of the data in Table 2, information that does not meet the requirements is removed. Table 4 shows association rule examples where the minimum threshold for confidence is 0.5 (50%). Any data that does not have a confidence of at least 0.5 is omitted. Generating thresholds allows the association between items to become stronger as the data is further researched, by emphasizing those items that co-occur the most. The table uses the confidence information from Table 3 to implement the Support × Confidence column, where the relationship between items is assessed via both their confidence and support, instead of just one concept. Ranking the rules by Support × Confidence multiplies the confidence of a particular rule by its support and is often implemented for a more in-depth understanding of the relationship between the items.

Table 4. Example of confidence and support × confidence:

| If antecedent, then consequent | Confidence | Support × confidence |
|---|---|---|
| if buy milk, then buy bread | 2/2 = 1.0 | 0.4 × 1.0 = 0.4 |
| if buy milk, then buy eggs | 1/2 = 0.5 | 0.2 × 0.5 = 0.1 |
| if buy bread, then buy fruit | 2/3 ≈ 0.66 | 0.4 × 0.66 = 0.264 |
| if buy fruit, then buy eggs | 2/3 ≈ 0.66 | 0.4 × 0.66 = 0.264 |
| if buy milk and bread, then buy fruit | 2/2 = 1.0 | 0.4 × 1.0 = 0.4 |

Overall, using confidence in association rule mining is a great way to bring awareness to data relations. Its greatest benefit is highlighting the relationship between particular items within the set, as it compares co-occurrences of items to the total occurrence of the antecedent in the specific rule. However, confidence is not the optimal method for every concept in association rule mining. The disadvantage of using it is that it does not offer multiple different outlooks on the associations. Unlike support, for instance, confidence does not provide the perspective of relationships between certain items in comparison to the entire dataset, so while the rule buy milk, then buy bread may have a confidence of 100%, it only has a support of 0.4 (40%). This is why it is important to look at other viewpoints, such as Support × Confidence, instead of relying solely on one concept to define the relationships.

### Lift
The lift of a rule is defined as: $$ \mathrm{lift}(X\Rightarrow Y) = \frac{ \mathrm{supp}(X \cup Y)}{ \mathrm{supp}(X) \times \mathrm{supp}(Y) } $$ or the ratio of the observed support to that expected if X and Y were independent. For example, the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ has a lift of $$ \frac{0.2}{0.4 \times 0.4} = 1.25 $$. If the rule had a lift of 1, it would imply that the probability of occurrence of the antecedent and that of the consequent are independent of each other. When two events are independent of each other, no rule can be drawn involving those two events.
If the lift is > 1, that lets us know the degree to which those two occurrences are dependent on one another, and makes those rules potentially useful for predicting the consequent in future data sets. If the lift is < 1, that lets us know the items are substitutes for each other. This means that the presence of one item has a negative effect on the presence of the other item and vice versa. The value of lift is that it considers both the support of the rule and the overall data set.

### Conviction
The conviction of a rule is defined as $$ \mathrm{conv}(X\Rightarrow Y) =\frac{ 1 - \mathrm{supp}(Y) }{ 1 - \mathrm{conf}(X\Rightarrow Y)} $$. For example, the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ has a conviction of $$ \frac{1 - 0.4}{1 - 0.5} = 1.2 $$, and can be interpreted as the ratio of the expected frequency that X occurs without Y (that is to say, the frequency that the rule makes an incorrect prediction) if X and Y were independent, divided by the observed frequency of incorrect predictions. In this example, the conviction value of 1.2 shows that the rule $$ \{\mathrm{milk, bread}\} \Rightarrow \{\mathrm{butter}\} $$ would be incorrect 20% more often (1.2 times as often) if the association between X and Y were purely random chance.

### Alternative measures of interestingness
In addition to confidence, other measures of interestingness for rules have been proposed. Some popular measures are:
- All-confidence
- Collective strength
- Leverage

Several more measures are presented and compared by Tan et al. and by Hahsler. Looking for techniques that can model what the user has known (and using these models as interestingness measures) is currently an active research trend under the name of "Subjective Interestingness."

## History
The concept of association rules was popularized particularly due to the 1993 article of Agrawal et al., which has acquired more than 23,790 citations according to Google Scholar, as of April 2021, and is thus one of the most cited papers in the data mining field. However, what is now called "association rules" was already introduced in the 1966 paper on GUHA, a general data mining method developed by Petr Hájek et al. An early (circa 1989) use of minimum support and confidence to find all association rules is the Feature Based Modeling framework, which found all rules with $$ \mathrm{supp}(X) $$ and $$ \mathrm{conf}(X \Rightarrow Y) $$ greater than user-defined constraints.

## Statistically sound associations
One limitation of the standard approach to discovering associations is that, by searching massive numbers of possible associations to look for collections of items that appear to be associated, there is a large risk of finding many spurious associations. These are collections of items that co-occur with unexpected frequency in the data, but only do so by chance. For example, suppose we are considering a collection of 10,000 items and looking for rules containing two items in the left-hand side and one item in the right-hand side. There are approximately 1,000,000,000,000 such rules. If we apply a statistical test for independence with a significance level of 0.05, it means there is only a 5% chance of accepting a rule if there is no association. If we assume there are no associations, we should nonetheless expect to find 50,000,000,000 rules. Statistically sound association discovery controls this risk, in most cases reducing the risk of finding any spurious associations to a user-specified significance level.
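The measures defined above can be computed directly from the small database of Table 2. The following is a minimal Python sketch (the transaction encoding and helper names are illustrative, not taken from any particular library), reproducing the worked values quoted earlier.

```python
# Minimal sketch: support, confidence, lift and conviction for the
# supermarket database of Table 2 (helper names are illustrative).
transactions = [
    {"milk", "bread", "fruit"},
    {"butter", "eggs", "fruit"},
    {"beer", "diapers"},
    {"milk", "bread", "butter", "eggs", "fruit"},
    {"bread"},
]

def support(itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """conf(X => Y) = supp(X u Y) / supp(X)."""
    return support(lhs | rhs) / support(lhs)

def lift(lhs, rhs):
    """lift(X => Y) = supp(X u Y) / (supp(X) * supp(Y))."""
    return support(lhs | rhs) / (support(lhs) * support(rhs))

def conviction(lhs, rhs):
    """conv(X => Y) = (1 - supp(Y)) / (1 - conf(X => Y))."""
    return (1 - support(rhs)) / (1 - confidence(lhs, rhs))

X, Y = {"milk", "bread"}, {"butter"}
print(support(X | Y))     # 0.2
print(confidence(X, Y))   # 0.5
print(lift(X, Y))         # 0.2 / (0.4 * 0.4) = 1.25
print(conviction(X, Y))   # (1 - 0.4) / (1 - 0.5) = 1.2
```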
## Algorithms
Many algorithms for generating association rules have been proposed. Some well-known algorithms are Apriori, Eclat and FP-Growth, but they only do half the job, since they are algorithms for mining frequent itemsets. Another step needs to be done afterwards to generate rules from the frequent itemsets found in a database.

### Apriori algorithm
Apriori was proposed by R. Agrawal and R. Srikant in 1994 for frequent item set mining and association rule learning. It proceeds by identifying the frequent individual items in the database and extending them to larger and larger item sets as long as those item sets appear sufficiently often. The name of the algorithm is Apriori because it uses prior knowledge of frequent itemset properties.

Overview: Apriori uses a "bottom up" approach, where frequent subsets are extended one item at a time (a step known as candidate generation), and groups of candidates are tested against the data. The algorithm terminates when no further successful extensions are found. Apriori uses breadth-first search and a hash tree structure to count candidate item sets efficiently. It generates candidate item sets of length $$ k $$ from item sets of length $$ k-1 $$. Then it prunes the candidates which have an infrequent sub-pattern. According to the downward closure lemma, the candidate set contains all frequent $$ k $$-length item sets. After that, it scans the transaction database to determine frequent item sets among the candidates.

Example: Assume that each row is a cancer sample with a certain combination of mutations labeled by a character in the alphabet. For example, a row could have {a, c}, which means it is affected by mutation 'a' and mutation 'c'.

Input set: {a, b}, {c, d}, {a, d}, {a, e}, {b, d}, {a, b, d}, {a, c, d}, {a, b, c, d}

Now we will generate the frequent item set by counting the number of occurrences of each character. This is also known as finding the support values. Then we will prune the item set by picking a minimum support threshold. For this pass of the algorithm we will pick 3.

| Itemset | Support |
|---|---|
| {a} | 6 |
| {b} | 4 |
| {c} | 3 |
| {d} | 6 |

Since all support values are three or above there is no pruning. The frequent item set is {a}, {b}, {c}, and {d}. After this we will repeat the process by counting pairs of mutations in the input set.

| Itemset | Support |
|---|---|
| {a, b} | 3 |
| {a, c} | 2 |
| {a, d} | 4 |
| {b, c} | 1 |
| {b, d} | 3 |
| {c, d} | 3 |

Now we will make our minimum support value 4, so only {a, d} will remain after pruning. Now we will use the frequent item set to make combinations of triplets. We will then repeat the process by counting occurrences of triplets of mutations in the input set.

| Itemset | Support |
|---|---|
| {a, c, d} | 2 |

Since we only have one item, the next set of combinations of quadruplets is empty, so the algorithm will stop.

Advantages and limitations: Apriori has some limitations. Candidate generation can result in large candidate sets. For example, $$ 10^4 $$ frequent 1-itemsets will generate around $$ 10^7 $$ candidate 2-itemsets. The algorithm also needs to scan the database frequently, to be specific n+1 scans where n is the length of the longest pattern. Apriori is slower than the Eclat algorithm. However, Apriori performs well compared to Eclat when the dataset is large. This is because in the Eclat algorithm, if the dataset is too large, the tid-lists become too large for memory. FP-growth outperforms both Apriori and Eclat. This is due to the FP-growth algorithm not having candidate generation or test, using a compact data structure, and only having one database scan.
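The candidate-generate-and-prune loop described above can be sketched in a few lines of Python. This is an illustrative sketch only (the function and variable names are our own), and it keeps a single minimum support of 3 throughout, unlike the walk-through above, which raises the threshold to 4 for pairs.

```python
# Illustrative Apriori sketch: generate k-item candidates from frequent
# (k-1)-itemsets, prune by the downward-closure property, count support, repeat.
from itertools import combinations

transactions = [{"a", "b"}, {"c", "d"}, {"a", "d"}, {"a", "e"},
                {"b", "d"}, {"a", "b", "d"}, {"a", "c", "d"}, {"a", "b", "c", "d"}]

def apriori(transactions, min_support):
    items = {i for t in transactions for i in t}
    # Frequent 1-itemsets.
    current = [frozenset([i]) for i in items
               if sum(i in t for t in transactions) >= min_support]
    frequent = list(current)
    k = 2
    while current:
        # Candidate generation: unions of frequent (k-1)-itemsets that have size k.
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Prune candidates that have an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in set(current)
                             for s in combinations(c, k - 1))}
        # Support counting against the transaction database.
        current = [c for c in candidates
                   if sum(c <= t for t in transactions) >= min_support]
        frequent.extend(current)
        k += 1
    return frequent

print(apriori(transactions, min_support=3))
# 1-itemsets {a}, {b}, {c}, {d} plus the pairs {a, b}, {a, d}, {b, d}, {c, d}
```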
ECLAT, stands for Equivalence Class Transformation) is a backtracking algorithm, which traverses the frequent itemset lattice graph in a depth-first search (DFS) fashion. Whereas the breadth-first search (BFS) traversal used in the Apriori algorithm will end up checking every subset of an itemset before checking it, DFS traversal checks larger itemsets and can save on checking the support of some of its subsets by virtue of the downward-closer property. Furthermore it will almost certainly use less memory as DFS has a lower space complexity than BFS. To illustrate this, let there be a frequent itemset {a, b, c}. a DFS may check the nodes in the frequent itemset lattice in the following order: {a} → {a, b} → {a, b, c}, at which point it is known that {b}, {c}, {a, c}, {b, c} all satisfy the support constraint by the downward-closure property. BFS would explore each subset of {a, b, c} before finally checking it. As the size of an itemset increases, the number of its subsets undergoes combinatorial explosion. It is suitable for both sequential as well as parallel execution with locality-enhancing properties. FP-growth algorithm FP stands for frequent pattern. In the first pass, the algorithm counts the occurrences of items (attribute-value pairs) in the dataset of transactions, and stores these counts in a 'header table'. In the second pass, it builds the FP-tree structure by inserting transactions into a trie. Items in each transaction have to be sorted by descending order of their frequency in the dataset before being inserted so that the tree can be processed quickly. Items in each transaction that do not meet the minimum support requirement are discarded. If many transactions share most frequent items, the FP-tree provides high compression close to tree root. Recursive processing of this compressed version of the main dataset grows frequent item sets directly, instead of generating candidate items and testing them against the entire database (as in the apriori algorithm). Growth begins from the bottom of the header table i.e. the item with the smallest support by finding all sorted transactions that end in that item. Call this item $$ I $$ . A new conditional tree is created which is the original FP-tree projected onto $$ I $$ . The supports of all nodes in the projected tree are re-counted with each node getting the sum of its children counts. Nodes (and hence subtrees) that do not meet the minimum support are pruned. Recursive growth ends when no individual items conditional on $$ I $$ meet the minimum support threshold. The resulting paths from root to $$ I $$ will be frequent itemsets. After this step, processing continues with the next least-supported header item of the original FP-tree. Once the recursive process has completed, all frequent item sets will have been found, and association rule creation begins. ### Others #### ASSOC The ASSOC procedure is a GUHA method which mines for generalized association rules using fast bitstrings operations. The association rules mined by this method are more general than those output by apriori, for example "items" can be connected both with conjunction and disjunctions and the relation between antecedent and consequent of the rule is not restricted to setting minimum support and confidence as in apriori: an arbitrary combination of supported interest measures can be used. 
#### OPUS search OPUS is an efficient algorithm for rule discovery that, in contrast to most alternatives, does not require either monotone or anti-monotone constraints such as minimum support. Initially used to find rules for a fixed consequent it has subsequently been extended to find rules with any item as a consequent. OPUS search is the core technology in the popular Magnum Opus association discovery system. ## Lore A famous story about association rule mining is the "beer and diaper" story. A purported survey of behavior of supermarket shoppers discovered that customers (presumably young men) who buy diapers tend also to buy beer. This anecdote became popular as an example of how unexpected association rules might be found from everyday data. There are varying opinions as to how much of the story is true. Daniel Powers says: In 1992, Thomas Blischok, manager of a retail consulting group at Teradata, and his staff prepared an analysis of 1.2 million market baskets from about 25 Osco Drug stores. Database queries were developed to identify affinities. The analysis "did discover that between 5:00 and 7:00 p.m. that consumers bought beer and diapers". Osco managers did NOT exploit the beer and diapers relationship by moving the products closer together on the shelves. ## Other types of association rule mining Multi-Relation Association Rules (MRAR): These are association rules where each item may have several relations. These relations indicate indirect relationships between the entities. Consider the following MRAR where the first item consists of three relations live in, nearby and humid: “Those who live in a place which is nearby a city with humid climate type and also are younger than 20 $$ \implies $$ their health condition is good”. Such association rules can be extracted from RDBMS data or semantic web data. Contrast set learning is a form of associative learning. Contrast set learners use rules that differ meaningfully in their distribution across subsets. Weighted class learning is another form of associative learning where weights may be assigned to classes to give focus to a particular issue of concern for the consumer of the data mining results. High-order pattern discovery facilitates the capture of high-order (polythetic) patterns or event associations that are intrinsic to complex real-world data. K-optimal pattern discovery provides an alternative to the standard approach to association rule learning which requires that each pattern appear frequently in the data. Approximate Frequent Itemset mining is a relaxed version of Frequent Itemset mining that allows some of the items in some of the rows to be 0. Generalized Association Rules hierarchical taxonomy (concept hierarchy) Quantitative Association Rules categorical and quantitative data Interval Data Association Rules e.g. partition the age into 5-year-increment ranged Sequential pattern mining discovers subsequences that are common to more than minsup (minimum support threshold) sequences in a sequence database, where minsup is set by the user. A sequence is an ordered list of transactions. Subspace Clustering, a specific type of clustering high-dimensional data, is in many variants also based on the downward-closure property for specific clustering models. Warmr, shipped as part of the ACE data mining suite, allows association rule learning for first order relational rules.
https://en.wikipedia.org/wiki/Association_rule_learning
Electronics is a scientific and engineering discipline that studies and applies the principles of physics to design, create, and operate devices that manipulate electrons and other electrically charged particles. It is a subfield of physics and electrical engineering which uses active devices such as transistors, diodes, and integrated circuits to control and amplify the flow of electric current and to convert it from one form to another, such as from alternating current (AC) to direct current (DC) or from analog signals to digital signals. Electronic devices have significantly influenced the development of many aspects of modern society, such as telecommunications, entertainment, education, health care, industry, and security. The main driving force behind the advancement of electronics is the semiconductor industry, which continually produces ever-more sophisticated electronic devices and circuits in response to global demand. The semiconductor industry is one of the global economy's largest and most profitable sectors, with annual revenues exceeding $481 billion in 2018. The electronics industry also encompasses other sectors that rely on electronic devices and systems, such as e-commerce, which generated over $29 trillion in online sales in 2017. ## History and development Karl Ferdinand Braun´s development of the crystal detector, the first semiconductor device, in 1874 and the identification of the electron in 1897 by Sir Joseph John Thomson, along with the subsequent invention of the vacuum tube which could amplify and rectify small electrical signals, inaugurated the field of electronics and the electron age. Practical applications started with the invention of the diode by Ambrose Fleming and the triode by Lee De Forest in the early 1900s, which made the detection of small electrical voltages, such as radio signals from a radio antenna, practicable. Vacuum tubes (thermionic valves) were the first active electronic components which controlled current flow by influencing the flow of individual electrons, and enabled the construction of equipment that used current amplification and rectification to give us radio, television, radar, long-distance telephony and much more. The early growth of electronics was rapid, and by the 1920s, commercial radio broadcasting and telecommunications were becoming widespread and electronic amplifiers were being used in such diverse applications as long-distance telephony and the music recording industry. The next big technological step took several decades to appear, when the first working point-contact transistor was invented by John Bardeen and Walter Houser Brattain at Bell Labs in 1947. However, vacuum tubes continued to play a leading role in the field of microwave and high power transmission as well as television receivers until the middle of the 1980s. Since then, solid-state devices have all but completely taken over. Vacuum tubes are still used in some specialist applications such as high power RF amplifiers, cathode-ray tubes, specialist audio equipment, guitar amplifiers and some microwave devices. In April 1955, the IBM 608 was the first IBM product to use transistor circuits without any vacuum tubes and is believed to be the first all-transistorized calculator to be manufactured for the commercial market. The 608 contained more than 3,000 germanium transistors. Thomas J. Watson Jr. ordered all future IBM products to use transistors in their design. 
From that time on transistors were almost exclusively used for computer logic circuits and peripheral devices. However, early junction transistors were relatively bulky devices that were difficult to manufacture on a mass-production basis, which limited them to a number of specialised applications. The MOSFET was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. Its advantages include high scalability, affordability, low power consumption, and high density. It revolutionized the electronics industry, becoming the most widely used electronic device in the world. The MOSFET is the basic element in most modern electronic equipment. As the complexity of circuits grew, problems arose. One problem was the size of the circuit. A complex circuit like a computer was dependent on speed. If the components were large, the wires interconnecting them must be long. The electric signals took time to go through the circuit, thus slowing the computer. The invention of the integrated circuit by Jack Kilby and Robert Noyce solved this problem by making all the components and the chip out of the same block (monolith) of semiconductor material. The circuits could be made smaller, and the manufacturing process could be automated. This led to the idea of integrating all components on a single-crystal silicon wafer, which led to small-scale integration (SSI) in the early 1960s, and then medium-scale integration (MSI) in the late 1960s, followed by VLSI. In 2008, billion-transistor processors became commercially available. ## Subfields - Analog electronics - Audio electronics - Avionics - Bioelectronics - Circuit design - Digital electronics - Electronic components - Embedded systems - Integrated circuits - Microelectronics - Nanoelectronics - Optoelectronics - Power electronics - Printed circuit boards - Semiconductor devices - Sensors - Telecommunications ## Devices and components An electronic component is any component in an electronic system either active or passive. Components are connected together, usually by being soldered to a printed circuit board (PCB), to create an electronic circuit with a particular function. Components may be packaged singly, or in more complex groups as integrated circuits. Passive electronic components are capacitors, inductors, resistors, whilst active components are such as semiconductor devices; transistors and thyristors, which control current flow at electron level. ## Types of circuits Electronic circuit functions can be divided into two function groups: analog and digital. A particular device may consist of circuitry that has either or a mix of the two types. ### Analog circuits are becoming less common, as many of their functions are being digitized. Analog circuits Analog circuits use a continuous range of voltage or current for signal processing, as opposed to the discrete levels used in digital circuits. Analog circuits were common throughout an electronic device in the early years in devices such as radio receivers and transmitters. Analog electronic computers were valuable for solving problems with continuous variables until digital processing advanced. 
As semiconductor technology developed, many of the functions of analog circuits were taken over by digital circuits, and modern circuits that are entirely analog are less common; their functions being replaced by hybrid approach which, for instance, uses analog circuits at the front end of a device receiving an analog signal, and then use digital processing using microprocessor techniques thereafter. Sometimes it may be difficult to classify some circuits that have elements of both linear and non-linear operation. An example is the voltage comparator which receives a continuous range of voltage but only outputs one of two levels as in a digital circuit. Similarly, an overdriven transistor amplifier can take on the characteristics of a controlled switch, having essentially two levels of output. Analog circuits are still widely used for signal amplification, such as in the entertainment industry, and conditioning signals from analog sensors, such as in industrial measurement and control. ### Digital circuits Digital circuits are electric circuits based on discrete voltage levels. Digital circuits use Boolean algebra and are the basis of all digital computers and microprocessor devices. They range from simple logic gates to large integrated circuits, employing millions of such gates. Digital circuits use a binary system with two voltage levels labelled "0" and "1" to indicated logical status. Often logic "0" will be a lower voltage and referred to as "Low" while logic "1" is referred to as "High". However, some systems use the reverse definition ("0" is "High") or are current based. Quite often the logic designer may reverse these definitions from one circuit to the next as they see fit to facilitate their design. The definition of the levels as "0" or "1" is arbitrary. Ternary (with three states) logic has been studied, and some prototype computers made, but have not gained any significant practical acceptance. Universally, Computers and Digital signal processors are constructed with digital circuits using Transistors such as MOSFETs in the electronic logic gates to generate binary states. - Logic gates - Adders - Flip-flops - Counters - Registers - Multiplexers - Schmitt triggers Highly integrated devices: - Memory chip - Microprocessors - Microcontrollers - Application-specific integrated circuit (ASIC) - Digital signal processor (DSP) - Field-programmable gate array (FPGA) - Field-programmable analog array (FPAA) - System on chip (SOC) ## Design Electronic systems design deals with the multi-disciplinary design issues of complex electronic devices and systems, such as mobile phones and computers. The subject covers a broad spectrum, from the design and development of an electronic system (new product development) to assuring its proper function, service life and disposal. Electronic systems design is therefore the process of defining and developing complex electronic devices to satisfy specified requirements of the user. Due to the complex nature of electronics theory, laboratory experimentation is an important part of the development of electronic devices. These experiments are used to test or verify the engineer's design and detect errors. Historically, electronics labs have consisted of electronics devices and equipment located in a physical space, although in more recent years the trend has been towards electronics lab simulation software, such as CircuitLogix, Multisim, and PSpice. 
### Computer-aided design Today's electronics engineers have the ability to design circuits using premanufactured building blocks such as power supplies, semiconductors (i.e. semiconductor devices, such as transistors), and integrated circuits. Electronic design automation software programs include schematic capture programs and printed circuit board design programs. Popular names in the EDA software world are NI Multisim, Cadence (ORCAD), EAGLE PCB and Schematic, Mentor (PADS PCB and LOGIC Schematic), Altium (Protel), LabCentre Electronics (Proteus), gEDA, KiCad and many others. ## Negative qualities ### Thermal management Heat generated by electronic circuitry must be dissipated to prevent immediate failure and improve long term reliability. Heat dissipation is mostly achieved by passive conduction/convection. Means to achieve greater dissipation include heat sinks and fans for air cooling, and other forms of computer cooling such as water cooling. These techniques use convection, conduction, and radiation of heat energy. ### Noise Electronic noise is defined as unwanted disturbances superposed on a useful signal that tend to obscure its information content. Noise is not the same as signal distortion caused by a circuit. Noise is associated with all electronic circuits. Noise may be electromagnetically or thermally generated, which can be decreased by lowering the operating temperature of the circuit. Other types of noise, such as shot noise cannot be removed as they are due to limitations in physical properties. ## Packaging methods Many different methods of connecting components have been used over the years. For instance, early electronics often used point to point wiring with components attached to wooden breadboards to construct circuits. Cordwood construction and wire wrap were other methods used. Most modern day electronics now use printed circuit boards made of materials such as FR4, or the cheaper (and less hard-wearing) Synthetic Resin Bonded Paper (SRBP, also known as Paxoline/Paxolin (trade marks) and FR2) – characterised by its brown colour. Health and environmental concerns associated with electronics assembly have gained increased attention in recent years, especially for products destined to go to European markets. Electrical components are generally mounted in the following ways: - Through-hole (sometimes referred to as 'Pin-Through-Hole') - Surface mount - Chassis mount - Rack mount - LGA/BGA/PGA socket ## Industry The electronics industry consists of various sectors. The central driving force behind the entire electronics industry is the semiconductor industry sector, which has annual sales of over as of 2018. The largest industry sector is e-commerce, which generated over in 2017. The most widely manufactured electronic device is the metal-oxide-semiconductor field-effect transistor (MOSFET), with an estimated 13sextillion MOSFETs having been manufactured between 1960 and 2018. In the 1960s, U.S. manufacturers were unable to compete with Japanese companies such as Sony and Hitachi who could produce high-quality goods at lower prices. By the 1980s, however, U.S. manufacturers became the world leaders in semiconductor development and assembly. 
However, during the 1990s and subsequently, the industry shifted overwhelmingly to East Asia (a process begun with the initial movement of microchip mass-production there in the 1970s), as plentiful, cheap labor, and increasing technological sophistication, became widely available there.Lewis, James Andrew: "Strengthening a Transnational Semiconductor Industry", June 2, 2022, Center for Strategic and International Studies (CSIS), retrieved September 12, 2022 Over three decades, the United States' global share of semiconductor manufacturing capacity fell, from 37% in 1990, to 12% in 2022. America's pre-eminent semiconductor manufacturer, Intel Corporation, fell far behind its subcontractor Taiwan Semiconductor Manufacturing Company (TSMC) in manufacturing technology. By that time, Taiwan had become the world's leading source of advanced semiconductors—followed by South Korea, the United States, Japan, Singapore, and China. Important semiconductor industry facilities (which often are subsidiaries of a leading producer based elsewhere) also exist in Europe (notably the Netherlands), Southeast Asia, South America, and Israel.
https://en.wikipedia.org/wiki/Electronics
In classical mechanics, Euler's laws of motion are equations of motion which extend Newton's laws of motion for point particle to rigid body motion. They were formulated by Leonhard Euler about 50 years after Isaac Newton formulated his laws. ## Overview ### Euler's first law Euler's first law states that the rate of change of linear momentum of a rigid body is equal to the resultant of all the external forces acting on the body: $$ \mathbf F_\text{ext} = \frac{d\mathbf p}{dt}. $$ Internal forces between the particles that make up a body do not contribute to changing the momentum of the body as there is an equal and opposite force resulting in no net effect. The linear momentum of a rigid body is the product of the mass of the body and the velocity of its center of mass . ### Euler's second law Euler's second law states that the rate of change of angular momentum about a point that is fixed in an inertial reference frame (often the center of mass of the body), is equal to the sum of the external moments of force (torques) acting on that body about that point: $$ \mathbf M = {d\mathbf L \over dt}. $$ Note that the above formula holds only if both and are computed with respect to a fixed inertial frame or a frame parallel to the inertial frame but fixed on the center of mass. For rigid bodies translating and rotating in only two dimensions, this can be expressed as: $$ \mathbf M = \mathbf r_{\rm cm} \times \mathbf a_{\rm cm} m + I \boldsymbol{\alpha}, $$ where: - is the position vector of the center of mass of the body with respect to the point about which moments are summed, - is the linear acceleration of the center of mass of the body, - is the mass of the body, - is the angular acceleration of the body, and - is the moment of inertia of the body about its center of mass. See also Euler's equations (rigid body dynamics). ## Explanation and derivation The distribution of internal forces in a deformable body are not necessarily equal throughout, i.e. the stresses vary from one point to the next. This variation of internal forces throughout the body is governed by Newton's second law of motion of conservation of linear momentum and angular momentum, which for their simplest use are applied to a mass particle but are extended in continuum mechanics to a body of continuously distributed mass. For continuous bodies these laws are called Euler's laws of motion. The total body force applied to a continuous body with mass , mass density , and volume , is the volume integral integrated over the volume of the body: $$ \mathbf F_B=\int_V\mathbf b\,dm = \int_V\mathbf b\rho\,dV $$ where is the force acting on the body per unit mass (dimensions of acceleration, misleadingly called the "body force"), and is an infinitesimal mass element of the body. Body forces and contact forces acting on the body lead to corresponding moments (torques) of those forces relative to a given point. Thus, the total applied torque about the origin is given by $$ \mathbf M= \mathbf M_B + \mathbf M_C $$ where and respectively indicate the moments caused by the body and contact forces. Thus, the sum of all applied forces and torques (with respect to the origin of the coordinate system) acting on the body can be given as the sum of a volume and surface integral: $$ \mathbf F = \int_V \mathbf a\,dm = \int_V \mathbf a\rho\,dV = \int_S \mathbf{t} \,dS + \int_V \mathbf b\rho\,dV $$ $$ \mathbf M = \mathbf M_B + \mathbf M_C = \int_S \mathbf r \times \mathbf t \,dS + \int_V \mathbf r \times \mathbf b\rho\,dV. 
$$ where is called the surface traction, integrated over the surface of the body, in turn denotes a unit vector normal and directed outwards to the surface . Let the coordinate system be an inertial frame of reference, be the position vector of a point particle in the continuous body with respect to the origin of the coordinate system, and be the velocity vector of that point. Euler's first axiom or law (law of balance of linear momentum or balance of forces) states that in an inertial frame the time rate of change of linear momentum of an arbitrary portion of a continuous body is equal to the total applied force acting on that portion, and it is expressed as $$ \begin{align} \frac{d\mathbf p}{dt} &= \mathbf F \\ \frac{d}{dt}\int_V \rho\mathbf v\,dV&=\int_S \mathbf t \, dS + \int_V \mathbf b\rho \,dV. \end{align} $$ Euler's second axiom or law (law of balance of angular momentum or balance of torques) states that in an inertial frame the time rate of change of angular momentum of an arbitrary portion of a continuous body is equal to the total applied torque acting on that portion, and it is expressed as $$ \begin{align} \frac{d\mathbf L}{dt} &= \mathbf M \\ \frac{d}{dt}\int_V \mathbf r\times\rho\mathbf v\,dV&=\int_S \mathbf r \times \mathbf t \,dS + \int_V \mathbf r \times \mathbf b\rho\,dV. \end{align} $$ where $$ \mathbf v $$ is the velocity, $$ V $$ the volume, and the derivatives of and are material derivatives.
https://en.wikipedia.org/wiki/Euler%27s_laws_of_motion
Synthetic geometry (sometimes referred to as axiomatic geometry or even pure geometry) is geometry without the use of coordinates. It relies on the axiomatic method for proving all results from a few basic properties initially called postulates, and at present called axioms. After the 17th-century introduction by René Descartes of the coordinate method, which was called analytic geometry, the term "synthetic geometry" was coined to refer to the older methods that were, before Descartes, the only known ones. According to Felix Klein Synthetic geometry is that which studies figures as such, without recourse to formulae, whereas analytic geometry consistently makes use of such formulae as can be written down after the adoption of an appropriate system of coordinates. The first systematic approach for synthetic geometry is Euclid's Elements. However, it appeared at the end of the 19th century that Euclid's postulates were not sufficient for characterizing geometry. The first complete axiom system for geometry was given only at the end of the 19th century by David Hilbert. At the same time, it appeared that both synthetic methods and analytic methods can be used to build geometry. The fact that the two approaches are equivalent has been proved by Emil Artin in his book Geometric Algebra. Because of this equivalence, the distinction between synthetic and analytic geometry is no more in use, except at elementary level, or for geometries that are not related to any sort of numbers, such as some finite geometries and non-Desarguesian geometry. ## Logical synthesis The process of logical synthesis begins with some arbitrary but definite starting point. This starting point is the introduction of primitive notions or primitives and axioms about these primitives: - Primitives are the most basic ideas. Typically they include both objects and relationships. In geometry, the objects are things such as points, lines and planes, while a fundamental relationship is that of incidence – of one object meeting or joining with another. The terms themselves are undefined. Hilbert once remarked that instead of points, lines and planes one might just as well talk of tables, chairs and beer mugs, the point being that the primitive terms are just empty placeholders and have no intrinsic properties. - Axioms are statements about these primitives; for example, any two points are together incident with just one line (i.e. that for any two points, there is just one line which passes through both of them). Axioms are assumed true, and not proven. They are the building blocks of geometric concepts, since they specify the properties that the primitives have. From a given set of axioms, synthesis proceeds as a carefully constructed logical argument. When a significant result is proved rigorously, it becomes a theorem. ### Properties of axiom sets There is no fixed axiom set for geometry, as more than one consistent set can be chosen. Each such set may lead to a different geometry, while there are also examples of different sets giving the same geometry. With this plethora of possibilities, it is no longer appropriate to speak of "geometry" in the singular. Historically, Euclid's parallel postulate has turned out to be independent of the other axioms. Simply discarding it gives absolute geometry, while negating it yields hyperbolic geometry. Other consistent axiom sets can yield other geometries, such as projective, elliptic, spherical or affine geometry. 
Axioms of continuity and "betweenness" are also optional, for example, discrete geometries may be created by discarding or modifying them. Following the Erlangen program of Klein, the nature of any given geometry can be seen as the connection between symmetry and the content of the propositions, rather than the style of development. ## History Euclid's original treatment remained unchallenged for over two thousand years, until the simultaneous discoveries of the non-Euclidean geometries by Gauss, Bolyai, Lobachevsky and Riemann in the 19th century led mathematicians to question Euclid's underlying assumptions. One of the early French analysts summarized synthetic geometry this way: The Elements of Euclid are treated by the synthetic method. This author, after having posed the axioms, and formed the requisites, established the propositions which he proves successively being supported by that which preceded, proceeding always from the simple to compound, which is the essential character of synthesis. The heyday of synthetic geometry can be considered to have been the 19th century, when analytic methods based on coordinates and calculus were ignored by some geometers such as Jakob Steiner, in favor of a purely synthetic development of projective geometry. For example, the treatment of the projective plane starting from axioms of incidence is actually a broader theory (with more models) than is found by starting with a vector space of dimension three. Projective geometry has in fact the simplest and most elegant synthetic expression of any geometry. In his Erlangen program, Felix Klein played down the tension between synthetic and analytic methods: On the Antithesis between the Synthetic and the Analytic Method in Modern Geometry: The distinction between modern synthesis and modern analytic geometry must no longer be regarded as essential, inasmuch as both subject-matter and methods of reasoning have gradually taken a similar form in both. We choose therefore in the text as common designation of them both the term projective geometry. Although the synthetic method has more to do with space-perception and thereby imparts a rare charm to its first simple developments, the realm of space-perception is nevertheless not closed to the analytic method, and the formulae of analytic geometry can be looked upon as a precise and perspicuous statement of geometrical relations. On the other hand, the advantage to original research of a well formulated analysis should not be underestimated, - an advantage due to its moving, so to speak, in advance of the thought. But it should always be insisted that a mathematical subject is not to be considered exhausted until it has become intuitively evident, and the progress made by the aid of analysis is only a first, though a very important, step. The close axiomatic study of Euclidean geometry led to the construction of the Lambert quadrilateral and the Saccheri quadrilateral. These structures introduced the field of non-Euclidean geometry where Euclid's parallel axiom is denied. Gauss, Bolyai and Lobachevski independently constructed hyperbolic geometry, where parallel lines have an angle of parallelism that depends on their separation. This study became widely accessible through the Poincaré disc model where motions are given by Möbius transformations. Similarly, Riemann, a student of Gauss's, constructed Riemannian geometry, of which elliptic geometry is a particular case. 
Another example concerns inversive geometry as advanced by Ludwig Immanuel Magnus, which can be considered synthetic in spirit. The closely related operation of reciprocation expresses analysis of the plane. Karl von Staudt showed that algebraic axioms, such as commutativity and associativity of addition and multiplication, were in fact consequences of incidence of lines in geometric configurations. David Hilbert showed that the Desargues configuration played a special role. Further work was done by Ruth Moufang and her students. The concepts have been one of the motivators of incidence geometry. When parallel lines are taken as primary, synthesis produces affine geometry. Though Euclidean geometry is both an affine and metric geometry, in general affine spaces may be missing a metric. The extra flexibility thus afforded makes affine geometry appropriate for the study of spacetime, as discussed in the history of affine geometry. In 1955 Herbert Busemann and Paul J. Kelley sounded a nostalgic note for synthetic geometry: Although reluctantly, geometers must admit that the beauty of synthetic geometry has lost its appeal for the new generation. The reasons are clear: not so long ago synthetic geometry was the only field in which the reasoning proceeded strictly from axioms, whereas this appeal — so fundamental to many mathematically interested people — is now made by many other fields. For example, college studies now include linear algebra, topology, and graph theory where the subject is developed from first principles, and propositions are deduced by elementary proofs. Expecting to replace synthetic with analytic geometry leads to loss of geometric content. Today's student of geometry has axioms other than Euclid's available: see Hilbert's axioms and Tarski's axioms. Ernst Kötter published a (German) report in 1901 on "The development of synthetic geometry from Monge to Staudt (1847)"; ## Proofs using synthetic geometry Synthetic proofs of geometric theorems make use of auxiliary constructs (such as helping lines) and concepts such as equality of sides or angles and similarity and congruence of triangles. Examples of such proofs can be found in the articles Butterfly theorem, Angle bisector theorem, Apollonius' theorem, British flag theorem, Ceva's theorem, Equal incircles theorem, Geometric mean theorem, Heron's formula, Isosceles triangle theorem, Law of cosines, and others that are linked to here. ## Computational synthetic geometry In conjunction with computational geometry, a computational synthetic geometry has been founded, having close connection, for example, with matroid theory. Synthetic differential geometry is an application of topos theory to the foundations of differentiable manifold theory.
https://en.wikipedia.org/wiki/Synthetic_geometry
The zeroth law of thermodynamics is one of the four principal laws of thermodynamics. It provides an independent definition of temperature without reference to entropy, which is defined in the second law. The law was established by Ralph H. Fowler in the 1930s, long after the first, second, and third laws had been widely recognized. The zeroth law states that if two thermodynamic systems are both in thermal equilibrium with a third system, then the two systems are in thermal equilibrium with each other.Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, North-Holland Publishing Company., Amsterdam, (1st edition 1949) fifth edition 1965, p. 8: "If two systems are both in thermal equilibrium with a third system then they are in thermal equilibrium with each other." Two systems are said to be in thermal equilibrium if they are linked by a wall permeable only to heat, and they do not change over time. Another formulation by James Clerk Maxwell is "All heat is of the same kind". Another statement of the law is "All diathermal walls are equivalent". The zeroth law is important for the mathematical formulation of thermodynamics. It makes the relation of thermal equilibrium between systems an equivalence relation, which can represent equality of some quantity associated with each system. A quantity that is the same for two systems, if they can be placed in thermal equilibrium with each other, is a scale of temperature. The zeroth law is needed for the definition of such scales, and justifies the use of practical thermometers. ## Equivalence relation A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems. In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law. If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows: This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. 
Thus, again implicitly assuming reflexivity, the zeroth law is therefore often expressed as a right-Euclidean statement: One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: If A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus, the two systems are in thermal equilibrium with each other, or they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is described as a transitive relation: A reflexive, transitive relation does not guarantee an equivalence relationship. For the above statement to be true, both reflexivity and symmetry must be implicitly assumed. It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid tagging system for the equivalence classes of a set of equilibrated thermodynamic systems, then the systems are in thermal equilibrium, if a thermometer gives the same reading for each system. If the system are thermally connected, no subsequent change in the state of either one can occur. If the readings are different, then thermally connecting the two systems causes a change in the states of both systems. The zeroth law provides no information regarding this final reading. ## Foundation of temperature Nowadays, there are two nearly separate concepts of temperature, the thermodynamic concept, and that of the kinetic theory of gases and other materials. The zeroth law belongs to the thermodynamic concept, but this is no longer the primary international definition of temperature. The current primary international definition of temperature is in terms of the kinetic energy of freely moving microscopic particles such as molecules, related to temperature through the Boltzmann constant $$ k_{\mathrm B} $$ . The present article is about the thermodynamic concept, not about the kinetic theory concept. The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary, temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature. In the space of thermodynamic parameters, zones of constant temperature form a surface, that provides a natural order of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. 
The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters, thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional surface. For example, if two systems of ideal gases are in joint thermodynamic equilibrium across an immovable diathermal wall, then = where Pi is the pressure in the ith system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas. The surface = constant defines surfaces of equal thermodynamic temperature, and one may label defining T so that = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers". In a sense, focused on the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind". But in another sense, heat is transferred in different ranks, as expressed by Arnold Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat." This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence. ## Dependence on the existence of walls permeable only to heat In Constantin Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium". It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities. It is the opinion of Elliott H. Lieb and Jakob Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers. Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Max Planck. On the other hand, Planck (1926) clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes. 
## History Writing long before the term "zeroth law" was coined, in 1871 Maxwell discussed at some length ideas which he summarized by the words "All heat is of the same kind". Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping. This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent". This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems. According to Sommerfeld, Ralph H. Fowler coined the term zeroth law of thermodynamics while discussing the 1935 text by Meghnad Saha and B.N. Srivastava. They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body is in temperature equilibrium with two bodies and , then and themselves are in temperature equilibrium with each other". Then they italicize a self-standing paragraph, as if to state their basic postulate: They do not themselves here use the phrase "zeroth law of thermodynamics". There are very many statements of these same physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label zeroth law of thermodynamics. Fowler & Guggenheim (1936/1965) wrote of the zeroth law as follows: They then proposed that The first sentence of this present article is a version of this statement. It is not explicitly evident in the existence statement of Fowler and Edward A. Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems. ## References ## Further reading - 0
https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics
Superconductivity is a set of physical properties observed in superconductors: materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source. The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the ### Meissner effect , the complete cancellation of the magnetic field in the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics. In 1986, it was discovered that some cuprate-perovskite ceramic materials have a critical temperature above . It was shortly found (by Ching-Wu Chu) that replacing the lanthanum with yttrium, i.e. making YBCO, raised the critical temperature to , which was important because liquid nitrogen could then be used as a refrigerant. Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures. ## History Superconductivity was discovered on April 8, 1911, by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K. Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current. ### London constitutive equations The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations. It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. 
A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface. The two constitutive equations for a superconductor by London are: $$ \frac{\partial \mathbf{j}}{\partial t} = \frac{n e^2}{m}\mathbf{E}, \qquad \mathbf{\nabla}\times\mathbf{j} =-\frac{n e^2}{m}\mathbf{B}. $$ The first equation follows from Newton's second law for superconducting electrons. ### Conventional theories (1950s) During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957). In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology. Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron–phonon interaction as the microscopic mechanism responsible for superconductivity. The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972. The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature. Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial. ### Niobium The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements. Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. 
Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. The alloy was brittle and difficult to fabricate, but niobium–tin proved useful for generating magnetic fields as high as 20 tesla. In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Commercial production of niobium–titanium supermagnet wire immediately commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium became the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium found wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total. ### Josephson effect In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973. In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggested a "smooth transition between" BEC and Bardeen-Cooper-Shrieffer regimes. ### 2D materials Multiple types of superconductivity are reported in devices made of single-layer materials. Some of these materials can switch between conducting, insulating, and other behaviors. Twisting materials imbues them with a “moiré” pattern involving tiled hexagonal cells that act like atoms and host electrons. In this environment, the electrons move slowly enough for their collective interactions to guide their behavior. When each cell has a single electron, the electrons take on an antiferromagnetic arrangement; each electron can have a preferred location and magnetic orientation. Their intrinsic magnetic fields tend to alternate between pointing up and down. Adding electrons allows superconductivity by causing Cooper pairs to form. Fu and Schrade argued that electron-on-electron action was allowing both antiferromagnetic and superconducting states. The first success with 2D materials involved a twisted bilayer graphene sheet (2018, Tc ~1.7 K, 1.1° twist). 
A twisted three-layer graphene device was later shown to superconduct (2021, Tc ~2.8 K). Then an untwisted trilayer graphene device was reported to superconduct (2022, Tc 1-2 K). The latter was later shown to be tunable, easily reproducing behavior found in millions of other configurations. Because such devices can be tuned directly, by adding electrons to the material or slightly weakening its electric field, physicists can quickly try out an unprecedented number of recipes to see which lead to superconductivity. These devices have applications in quantum computing. 2D materials other than graphene have also been made to superconduct. Transition metal dichalcogenide (TMD) sheets twisted at 5 degrees intermittently achieved superconductivity by creating a Josephson junction. The device used thin layers of palladium to connect to the sides of a tungsten telluride layer surrounded and protected by boron nitride. Another group demonstrated superconductivity in molybdenum telluride (MoTe₂) in 2D van der Waals materials using ferroelectric domain walls. The Tc was implied to be higher than that of typical TMDs (~5–10 K). A Cornell group added a 3.5-degree twist to an insulator that allowed electrons to slow down and interact strongly, leaving one electron per cell, exhibiting superconductivity. Existing theories do not explain this behavior. Fu and collaborators proposed that the electrons arrange into a repeating crystal that can float independently of the background atomic nuclei, allowing the electron grid to relax. Its ripples pair electrons the way phonons do, although this is unconfirmed. ## Classification Superconductors are classified according to many criteria. The most common are: ### Response to a magnetic field A superconductor can be Type I, meaning it has a single critical field, above which superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points called vortices. Furthermore, in multicomponent superconductors it is possible to combine the two behaviours. In that case the superconductor is of Type-1.5. ### Theory of operation A superconductor is conventional if it is driven by electron–phonon interaction and explained by the BCS theory or its extension, the Eliashberg theory. Otherwise, it is unconventional. Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the system's point group or space group. ### Critical temperature A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C), as in the initial discovery by Georg Bednorz and K. Alex Müller. It may also reference materials that transition to superconductivity when cooled using liquid nitrogen – that is, at Tc > 77 K – although this is generally used only to emphasize that liquid nitrogen coolant is sufficient. Low-temperature superconductors refer to materials with a critical temperature below 30 K, and are cooled mainly by liquid helium (Tc > 4.2 K). One exception to this rule is the iron pnictide group of superconductors that display behaviour and properties typical of high-temperature superconductors, yet some of the group have critical temperatures below 30 K. ### Material Superconductor material classes include chemical elements (e.g.
mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs), single-layer materials such as graphene and transition metal dichalcogenides, or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon). ## Elementary properties Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap, the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material. The Meissner effect, the quantization of the magnetic flux or permanent currents, i.e. the state of zero resistance are the most important examples. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long range order. Superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details. Off diagonal long range order is closely connected to the formation of Cooper pairs. ### Zero electrical DC resistance The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero. Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils persisted for 28 years, 7 months, 27 days in a superconducting gravimeter in Belgium, from August 4, 1995 until March 31, 2024. In such instruments, the measurement is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of four grams. In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating. The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. This pairing is very weak, and small thermal vibrations can fracture the bond. 
Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation. In the class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but non-zero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is minuscule compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero. ### Phase transition In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing ( $$ s $$ wave vs. $$ d $$ wave) remains controversial. Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. 
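To make the square-root dependence explicit, here is a minimal sketch of the free-energy argument in SI units, assuming a geometry (such as a long cylinder in a field parallel to its axis) in which demagnetization effects can be ignored. Expelling a field H costs the superconducting phase an energy density of μ0H²/2, so, writing f_s and f_n for the zero-field free energy densities of the superconducting and normal phases, the critical field is set by $$ f_s(T) + \tfrac{1}{2}\mu_0 H_c^2(T) = f_n(T) \quad\Longrightarrow\quad H_c(T) = \sqrt{\frac{2\left[f_n(T) - f_s(T)\right]}{\mu_0}}. $$ This is only an illustrative thermodynamic estimate; it says nothing about how the transition proceeds microscopically.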
More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition. The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as $$ e^{-\alpha/T} $$ for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap. The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material. Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations. ### Meissner effect When a superconductor is placed in a weak external magnetic field H, and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be completely ejected; instead, the field penetrates the superconductor only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm. The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field. The Meissner effect is distinct from this: it is the spontaneous expulsion that occurs during the transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided $$ \nabla^2\mathbf{H} = \lambda^{-2} \mathbf{H}\, $$ where H is the magnetic field and λ is the London penetration depth. This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface. A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II. ### London moment Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere. ## High-temperature superconductivity ## Applications Superconductors are promising candidate materials for devising fundamental circuit elements of electronic, spintronic, and quantum technologies. One such example is a superconducting diode, in which supercurrent flows along one direction only, that promise dissipationless superconducting and semiconducting-superconducting hybrid technologies. Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark. In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. 
More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations. Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Superconducting photon detectors can be realised in a variety of device configurations. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications. Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE). Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, compact fusion power devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. Another attractive industrial aspect is the ability for high power transmission at lower voltages. Advancements in the efficiency of cooling systems and use of cheap coolants such as liquid nitrogen have also significantly decreased cooling costs needed for superconductivity. ## Nobel Prizes As of 2022, there have been five Nobel Prizes in Physics for superconductivity related subjects: - Heike Kamerlingh Onnes (1913), "for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium". - John Bardeen, Leon N. Cooper, and J. Robert Schrieffer (1972), "for their jointly developed theory of superconductivity, usually called the BCS-theory". - Leo Esaki, Ivar Giaever, and Brian D. 
Josephson (1973), "for their experimental discoveries regarding tunneling phenomena in semiconductors and superconductors, respectively" and "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects". - Georg Bednorz and K. Alex Müller (1987), "for their important break-through in the discovery of superconductivity in ceramic materials". - Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett (2003), "for pioneering contributions to the theory of superconductors and superfluids".
https://en.wikipedia.org/wiki/Superconductivity
Version control (also known as revision control, source control, and source code management) is the software engineering practice of controlling, organizing, and tracking different versions in history of computer files; primarily source code text files, but generally any type of file. Version control is a component of software configuration management. A version control system is a software tool that automates version control. Alternatively, version control is embedded as a feature of some systems such as word processors, spreadsheets, collaborative web docs, and content management systems, e.g., Wikipedia's page history. Version control includes viewing old versions and enables reverting a file to a previous version. ## Overview As teams develop software, it is common to deploy multiple versions of the same software, and for different developers to work on one or more different versions simultaneously. Bugs or features of the software are often only present in certain versions (because of the fixing of some problems and the introduction of others as the program develops). Therefore, for the purposes of locating and fixing bugs, it is vitally important to be able to retrieve and run different versions of the software to determine in which version(s) the problem occurs. It may also be necessary to develop two versions of the software concurrently: for instance, where one version has bugs fixed, but no new features (branch), while the other version is where new features are worked on (trunk). At the simplest level, developers could simply retain multiple copies of the different versions of the program, and label them appropriately. This simple approach has been used in many large software projects. While this method can work, it is inefficient as many near-identical copies of the program have to be maintained. This requires a lot of self-discipline on the part of developers and often leads to mistakes. Since the code base is the same, it also requires granting read-write-execute permission to a set of developers, and this adds the pressure of someone managing permissions so that the code base is not compromised, which adds more complexity. Consequently, systems to automate some or all of the revision control process have been developed. This abstracts most operational steps (hides them from ordinary users). Moreover, in software development, legal and business practice, and other environments, it has become increasingly common for a single document or snippet of code to be edited by a team, the members of which may be geographically dispersed and may pursue different and even contrary interests. Sophisticated revision control that tracks and accounts for ownership of changes to documents and code may be extremely helpful or even indispensable in such situations. Revision control may also track changes to configuration files, such as those typically stored in `/etc` or `/usr/local/etc` on Unix systems. This gives system administrators another way to easily track changes made and a way to roll back to earlier versions should the need arise. Many version control systems identify the version of a file as a number or letter, called the version number, version, revision number, revision, or revision level. For example, the first version of a file might be version 1. When the file is changed the next version is 2. Each version is associated with a timestamp and the person making the change. Revisions can be compared, restored, and, with some types of files, merged. 
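The compare operation mentioned above is, at bottom, a textual diff between two stored versions of a file. As a minimal illustration, independent of any particular version control tool, Python's standard difflib module can produce a unified diff between two revisions held in memory; the file name and revision labels here are made up for the example:

```python
import difflib

# Two hypothetical revisions of the same file, oldest first.
revision_1 = "alpha\nbeta\ngamma\n"
revision_2 = "alpha\nbeta revised\ngamma\ndelta\n"

# Produce a unified diff, labelling each side with its (made-up) revision number.
diff = difflib.unified_diff(
    revision_1.splitlines(keepends=True),
    revision_2.splitlines(keepends=True),
    fromfile="example.txt@r1",
    tofile="example.txt@r2",
)
print("".join(diff))
```

Restoring an old version then amounts to writing the stored older content back out; merging, discussed later, is the harder operation.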
## History IBM's OS/360 IEBUPDTE software update tool dates back to 1962, arguably a precursor to version control system tools. Two source management and version control packages that were heavily used by IBM 360/370 installations were The Librarian and Panvalet. A full system designed for source code control was started in 1972: the Source Code Control System (SCCS), again for the OS/360. SCCS's user manual, especially the introduction, having been published on December 4, 1975, implied that it was the first deliberate revision control system. The Revision Control System (RCS) followed in 1982 and, later, Concurrent Versions System (CVS) added network and concurrent development features to RCS. After CVS, a dominant successor was Subversion, followed by the rise of distributed version control tools such as Git. ## Structure Revision control manages changes to a set of data over time. These changes can be structured in various ways. Often the data is thought of as a collection of many individual items, such as files or documents, and changes to individual files are tracked. This accords with intuitions about separate files but causes problems when identity changes, such as during renaming, splitting or merging of files. Accordingly, some systems such as Git, instead consider changes to the data as a whole, which is less intuitive for simple changes but simplifies more complex changes. When data that is under revision control is modified, after being retrieved by checking out, this is not in general immediately reflected in the revision control system (in the repository), but must instead be checked in or committed. A copy outside revision control is known as a "working copy". As a simple example, when editing a computer file, the data stored in memory by the editing program is the working copy, which is committed by saving. Concretely, one may print out a document, edit it by hand, and only later manually input the changes into a computer and save it. For source code control, the working copy is instead a copy of all files in a particular revision, generally stored locally on the developer's computer; in this case saving the file only changes the working copy, and checking into the repository is a separate step. If multiple people are working on a single data set or document, they are implicitly creating branches of the data (in their working copies), and thus issues of merging arise, as discussed below. For simple collaborative document editing, this can be prevented by using file locking or simply avoiding working on the same document that someone else is working on. Revision control systems are often centralized, with a single authoritative data store, the repository, and check-outs and check-ins done with reference to this central repository. Alternatively, in distributed revision control, no single repository is authoritative, and data can be checked out and checked into any repository. When checking into a different repository, this is interpreted as a merge or patch. ### Graph structure In terms of graph theory, revisions are generally thought of as a line of development (the trunk) with branches off of this, forming a directed tree, visualized as one or more parallel lines of development (the "mainlines" of the branches) branching off a trunk. In reality the structure is more complicated, forming a directed acyclic graph, but for many purposes "tree with merges" is an adequate approximation. 
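As a rough sketch of the "tree with merges" structure just described (a toy model, not the internal data layout of any particular tool), each revision can be represented as a node that records its parent revisions, and a merge is simply a node with more than one parent:

```python
from dataclasses import dataclass


@dataclass
class Revision:
    """A node in the revision graph; a merge has two or more parents."""
    rev_id: str
    parents: tuple["Revision", ...] = ()
    message: str = ""


# A trunk with one branch that is later merged back in.
r1 = Revision("r1", message="initial import")
r2 = Revision("r2", (r1,), "fix typo on trunk")
b1 = Revision("b1", (r1,), "start feature branch")
r3 = Revision("r3", (r2, b1), "merge feature branch into trunk")  # two parents


def ancestors(rev: Revision) -> set[str]:
    """Walk parent links to collect every revision this one is based on."""
    seen: set[str] = set()
    stack = list(rev.parents)
    while stack:
        node = stack.pop()
        if node.rev_id not in seen:
            seen.add(node.rev_id)
            stack.extend(node.parents)
    return seen


print(ancestors(r3))  # {'r1', 'r2', 'b1'} (a DAG, not a simple line)
```

Git's commit objects have essentially this shape: each commit names zero or more parent commits, and merge commits name more than one.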
Revisions occur in sequence over time, and thus can be arranged in order, either by revision number or timestamp. Revisions are based on past revisions, though it is possible to largely or completely replace an earlier revision, such as "delete all existing text, insert new text". In the simplest case, with no branching or undoing, each revision is based on its immediate predecessor alone, and they form a simple line, with a single latest version, the "HEAD" revision or tip. In graph theory terms, drawing each revision as a point and each "derived revision" relationship as an arrow (conventionally pointing from older to newer, in the same direction as time), this is a linear graph. If there is branching, so multiple future revisions are based on a past revision, or undoing, so a revision can depend on a revision older than its immediate predecessor, then the resulting graph is instead a directed tree (each node can have more than one child), and has multiple tips, corresponding to the revisions without children ("latest revision on each branch"). In principle the resulting tree need not have a preferred tip ("main" latest revision) – just various different revisions – but in practice one tip is generally identified as HEAD. When a new revision is based on HEAD, it is either identified as the new HEAD, or considered a new branch. The list of revisions from the start to HEAD (in graph theory terms, the unique path in the tree, which forms a linear graph as before) is the trunk or mainline. Conversely, when a revision can be based on more than one previous revision (when a node can have more than one parent), the resulting process is called a merge, and is one of the most complex aspects of revision control. This most often occurs when changes occur in multiple branches (most often two, but more are possible), which are then merged into a single branch incorporating both changes. If these changes overlap, it may be difficult or impossible to merge, and require manual intervention or rewriting. In the presence of merges, the resulting graph is no longer a tree, as nodes can have multiple parents, but is instead a rooted directed acyclic graph (DAG). The graph is acyclic since parents are always backwards in time, and rooted because there is an oldest version. Assuming there is a trunk, merges from branches can be considered as "external" to the tree – the changes in the branch are packaged up as a patch, which is applied to HEAD (of the trunk), creating a new revision without any explicit reference to the branch, and preserving the tree structure. Thus, while the actual relations between versions form a DAG, this can be considered a tree plus merges, and the trunk itself is a line. In distributed revision control, in the presence of multiple repositories these may be based on a single original version (a root of the tree), but there need not be an original root - instead there can be a separate root (oldest revision) for each repository. This can happen, for example, if two people start working on a project separately. Similarly, in the presence of multiple data sets (multiple projects) that exchange data or merge, there is no single root, though for simplicity one may think of one project as primary and the other as secondary, merged into the first with or without its own revision history. ## Specialized strategies Engineering revision control developed from formalized processes based on tracking revisions of early blueprints or bluelines. 
This system of control implicitly allowed returning to an earlier state of the design, for cases in which an engineering dead-end was reached in the development of the design. A revision table was used to keep track of the changes made. Additionally, the modified areas of the drawing were highlighted using revision clouds. ### In Business and Law Version control is widespread in business and law. Indeed, "contract redline" and "legal blackline" are some of the earliest forms of revision control, and are still employed in business and law with varying degrees of sophistication. The most sophisticated techniques are beginning to be used for the electronic tracking of changes to CAD files (see product data management), supplanting the "manual" electronic implementation of traditional revision control. ## Source-management models Traditional revision control systems use a centralized model where all the revision control functions take place on a shared server. If two developers try to change the same file at the same time, without some method of managing access the developers may end up overwriting each other's work. Centralized revision control systems solve this problem in one of two different "source management models": file locking and version merging. ### Atomic operations An operation is atomic if the system is left in a consistent state even if the operation is interrupted. The commit operation is usually the most critical in this sense. Commits tell the revision control system to make a group of changes final, and available to all users. Not all revision control systems have atomic commits; Concurrent Versions System lacks this feature. ### File locking The simplest method of preventing "concurrent access" problems involves locking files so that only one developer at a time has write access to the central "repository" copies of those files. Once one developer "checks out" a file, others can read that file, but no one else may change that file until that developer "checks in" the updated version (or cancels the checkout). File locking has both merits and drawbacks. It can provide some protection against difficult merge conflicts when a user is making radical changes to many sections of a large file (or group of files). If the files are left exclusively locked for too long, other developers may be tempted to bypass the revision control software and change the files locally, forcing a difficult manual merge when the other changes are finally checked in. In a large organization, files can be left "checked out" and locked and forgotten about as developers move between projects - these tools may or may not make it easy to see who has a file checked out. ### Version merging Most version control systems allow multiple developers to edit the same file at the same time. The first developer to "check in" changes to the central repository always succeeds. The system may provide facilities to merge further changes into the central repository, and preserve the changes from the first developer when other developers check in. Merging two files can be a very delicate operation, and usually possible only if the data structure is simple, as in text files. The result of a merge of two image files might not result in an image file at all. The second developer checking in the code will need to take care with the merge, to make sure that the changes are compatible and that the merge operation does not introduce its own logic errors within the files. 
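To make the difficulty concrete, here is a deliberately naive sketch of the three-way merge idea, in which each edited copy is compared against the common ancestor of the two versions. It assumes, unrealistically, that no lines were added or removed, and it is not the algorithm of any particular tool:

```python
def naive_three_way_merge(base: list[str], ours: list[str], theirs: list[str]) -> list[str]:
    """Merge two edited copies of the same lines against their common ancestor.

    Assumes all three versions have the same number of lines (no insertions
    or deletions), which real merge tools must also handle.
    """
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:        # both sides agree (possibly both unchanged)
            merged.append(o)
        elif o == b:      # only "theirs" changed this line
            merged.append(t)
        elif t == b:      # only "ours" changed this line
            merged.append(o)
        else:             # both changed it differently: a conflict
            merged.append(f"<<< ours: {o!r} ||| theirs: {t!r} >>>")
    return merged


base = ["width = 10", "height = 20", "depth = 30"]
ours = ["width = 15", "height = 20", "depth = 30"]
theirs = ["width = 10", "height = 25", "depth = 40"]
print(naive_three_way_merge(base, ours, theirs))
# ['width = 15', 'height = 25', 'depth = 40'] -- no conflict in this example
```

Real systems must first align lines that have been inserted, deleted, or moved before making these per-line decisions, and even a textually clean merge can still combine changes that are logically incompatible.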
These problems limit the availability of automatic or semi-automatic merge operations mainly to simple text-based documents, unless a specific merge plugin is available for the file types. The concept of a reserved edit can provide an optional means to explicitly lock a file for exclusive write access, even when a merging capability exists. ### Baselines, labels and tags Most revision control tools will use only one of these similar terms (baseline, label, tag) to refer to the action of identifying a snapshot ("label the project") or the record of the snapshot ("try it with baseline X"). Typically only one of the terms baseline, label, or tag is used in documentation or discussion; they can be considered synonyms. In most projects, some snapshots are more significant than others, such as those used to indicate published releases, branches, or milestones. When both the term baseline and either of label or tag are used together in the same context, label and tag usually refer to the mechanism within the tool of identifying or making the record of the snapshot, and baseline indicates the increased significance of any given label or tag. Most formal discussion of configuration management uses the term baseline. ## Distributed revision control Distributed revision control systems (DRCS) take a peer-to-peer approach, as opposed to the client–server approach of centralized systems. Rather than a single, central repository on which clients synchronize, each peer's working copy of the codebase is a bona-fide repository. Distributed revision control conducts synchronization by exchanging patches (change-sets) from peer to peer. This results in some important differences from a centralized system: - No canonical, reference copy of the codebase exists by default; only working copies. - Common operations (such as commits, viewing history, and reverting changes) are fast, because there is no need to communicate with a central server. Rather, communication is only necessary when pushing or pulling changes to or from other peers. - Each working copy effectively functions as a remote backup of the codebase and of its change-history, providing inherent protection against data loss. ## Best practices Following best practices is necessary to obtain the full benefits of version control. Best practice may vary by version control tool and the field to which version control is applied. The generally accepted best practices in software development include: making incremental, small changes; making commits which involve only one task or fix -- a corollary to this is to commit only code which works and does not knowingly break existing functionality; utilizing branching to complete functionality before release; writing clear and descriptive commit messages that make the what, why, and how clear in either the commit description or the code; and using a consistent branching strategy. Other best software development practices such as code review and automated regression testing may assist in the following of version control best practices. ## Costs and benefits Costs and benefits will vary depending upon the version control tool chosen and the field in which it is applied. This section speaks to the field of software development, where version control is widely applied. ### Costs In addition to the costs of licensing the version control software, using version control requires time and effort.
The concepts underlying version control must be understood and the technical particulars required to operate the version control software chosen must be learned. Version control best practices must be learned and integrated into the organization's existing software development practices. Management effort may be required to maintain the discipline needed to follow best practices in order to obtain useful benefit. ### Benefits #### Allows for reverting changes A core benefit is the ability to keep history and revert changes, allowing the developer to easily undo changes. This gives the developer more opportunity to experiment, eliminating the fear of breaking existing code. #### Branching simplifies deployment, maintenance and development Branching assists with deployment. Branching and merging, along with the production, packaging, and labeling of source code patches and the easy application of patches to code bases, simplify the maintenance and concurrent development of the multiple code bases associated with the various stages of the deployment process: development, testing, staging, production, etc. #### Damage mitigation, accountability and process and design improvement There can be damage mitigation, accountability, process and design improvement, and other benefits associated with the record keeping provided by version control, the tracking of who did what, when, why, and how. When bugs arise, knowing what was done when helps with damage mitigation and recovery by assisting in the identification of what problems exist, how long they have existed, and determining problem scope and solutions. Previous versions can be installed and tested to verify conclusions reached by examination of code and commit messages. #### Simplifies debugging Version control can greatly simplify debugging. The application of a test case to multiple versions can quickly identify the change which introduced a bug. The developer need not be familiar with the entire code base and can focus instead on the code that introduced the problem. #### Improves collaboration and communication Version control enhances collaboration in multiple ways. Since version control can identify conflicting changes, i.e. incompatible changes made to the same lines of code, there is less need for coordination among developers. The packaging of commits, branches, and all the associated commit messages and version labels improves communication between developers, both in the moment and over time. Better communication, whether instant or deferred, can improve the code review process, the testing process, and other critical aspects of the software development process. ## Integration Some of the more advanced revision-control tools offer many other facilities, allowing deeper integration with other tools and software-engineering processes. ### Integrated development environment Plugins are often available for IDEs such as Oracle JDeveloper, IntelliJ IDEA, Eclipse, Visual Studio, Delphi, NetBeans IDE, Xcode, and GNU Emacs (via vc.el). Advanced research prototypes generate appropriate commit messages. ## Common terminology Terminology can vary from system to system, but some terms in common usage include: ### Baseline An approved revision of a document or source file to which subsequent changes can be made. See baselines, labels and tags. ### Blame A search for the author and revision that last modified a particular line.
### Branch A set of files under version control may be branched or forked at a point in time so that, from that time forward, two copies of those files may develop at different speeds or in different ways independently of each other. ### Change A change (or diff, or delta) represents a specific modification to a document under version control. The granularity of the modification considered a change varies between version control systems. ### Change list On many version control systems with atomic multi-change commits, a change list (or CL), change set, update, or patch identifies the set of changes made in a single commit. This can also represent a sequential view of the source code, allowing the examination of source as of any particular changelist ID. ### Checkout To check out (or co) is to create a local working copy from the repository. A user may specify a specific revision or obtain the latest. The term 'checkout' can also be used as a noun to describe the working copy. In systems that use file locking, a file that has been checked out from a shared repository cannot be edited by other users until it is checked back in. ### Clone Cloning means creating a repository containing the revisions from another repository. This is equivalent to pushing or pulling into an empty (newly initialized) repository. As a noun, two repositories can be said to be clones if they are kept synchronized, and contain the same revisions. ### Commit (noun) ### Commit (verb) To commit (check in, ci or, more rarely, install, submit or record) is to write or merge the changes made in the working copy back to the repository. A commit contains metadata, typically the author information and a commit message that describes the change. ### Commit message A short note, written by the developer, stored with the commit, which describes the commit. Ideally, it records why the modification was made, a description of the modification's effect or purpose, and non-obvious aspects of how the change works. ### Conflict A conflict occurs when different parties make changes to the same document, and the system is unable to reconcile the changes. A user must resolve the conflict by combining the changes, or by selecting one change in favour of the other. ### Delta compression Most revision control software uses delta compression, which retains only the differences between successive versions of files. This allows for more efficient storage of many different versions of files. ### Dynamic stream A stream in which some or all file versions are mirrors of the parent stream's versions. ### Export Exporting is the act of obtaining the files from the repository. It is similar to checking out except that it creates a clean directory tree without the version-control metadata used in a working copy. This is often used prior to publishing the contents, for example. ### Fetch See pull. ### Forward integration The process of merging changes made in the main trunk into a development (feature or team) branch. ### Head Also sometimes called tip, this refers to the most recent commit, either to the trunk or to a branch. The trunk and each branch have their own head, though HEAD is sometimes loosely used to refer to the trunk. ### Import Importing is the act of copying a local directory tree (that is not currently a working copy) into the repository for the first time. ### Initialize To create a new, empty repository.
### Interleaved deltas Some revision control software uses Interleaved deltas, a method that allows storing the history of text based files in a more efficient way than by using Delta compression. ### Label See tag. ### Locking When a developer locks a file, no one else can update that file until it is unlocked. Locking can be supported by the version control system, or via informal communications between developers (aka social locking). ### Mainline Similar to trunk, but there can be a mainline for each branch. ### Merge A merge or integration is an operation in which two sets of changes are applied to a file or set of files. Some sample scenarios are as follows: - A user, working on a set of files, updates or syncs their working copy with changes made, and checked into the repository, by other users. - A user tries to check in files that have been updated by others since the files were checked out, and the revision control software automatically merges the files (typically, after prompting the user if it should proceed with the automatic merge, and in some cases only doing so if the merge can be clearly and reasonably resolved). - A branch is created, the code in the files is independently edited, and the updated branch is later incorporated into a single, unified trunk. - A set of files is branched, a problem that existed before the branching is fixed in one branch, and the fix is then merged into the other branch. (This type of selective merge is sometimes known as a cherry pick to distinguish it from the complete merge in the previous case.) ### Promote The act of copying file content from a less controlled location into a more controlled location. For example, from a user's workspace into a repository, or from a stream to its parent. ### Pull, push Copy revisions from one repository into another. Pull is initiated by the receiving repository, while push is initiated by the source. Fetch is sometimes used as a synonym for pull, or to mean a pull followed by an update. ### Pull request ### Repository ### Resolve The act of user intervention to address a conflict between different changes to the same document. ### Reverse integration The process of merging different team branches into the main trunk of the versioning system. ### Revision and version A version is any change in form. In SVK, a Revision is the state at a point in time of the entire tree in the repository. ### Share The act of making one file or folder available in multiple branches at the same time. When a shared file is changed in one branch, it is changed in other branches. ### Stream A container for branched files that has a known relationship to other such containers. Streams form a hierarchy; each stream can inherit various properties (like versions, namespace, workflow rules, subscribers, etc.) from its parent stream. ### Tag A tag or label refers to an important snapshot in time, consistent across many files. These files at that point may all be tagged with a user-friendly, meaningful name or revision number. See baselines, labels and tags. ### Trunk The trunk is the unique line of development that is not a branch (sometimes also called Baseline, Mainline or Master) ### Update An update (or sync, but sync can also mean a combined push and pull) merges changes made in the repository (by other people, for example) into the local working copy. Update is also the term used by some CM tools (CM+, PLS, SMS) for the change package concept (see changelist). 
Synonymous with checkout in revision control systems that require each repository to have exactly one working copy (common in distributed systems) ### Unlocking Releasing a lock. ### Working copy The working copy is the local copy of files from a repository, at a specific time or revision. All work done to the files in a repository is initially done on a working copy, hence the name. Conceptually, it is a sandbox.
https://en.wikipedia.org/wiki/Version_control
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models based on the Big Bang concept explain a broad range of phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, known as the horizon and flatness problems, is explained through cosmic inflation: a phase of accelerated expansion during the earliest stages. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe. Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an extraordinarily hot and dense primordial universe. Physics lacks a widely accepted theory that can model the earliest conditions of the Big Bang. As the universe expanded, it cooled sufficiently to allow the formation of subatomic particles, and later atoms. These primordial elements—mostly hydrogen, with some helium and lithium—then coalesced under the force of gravity aided by dark matter, forming early stars and galaxies. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to a concept called dark energy. The concept of an expanding universe was scientifically originated by the physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical observation of an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which discerned that galaxies are moving away from Earth at speeds proportional to their distance. Independent of Friedmann's work, and independent of Hubble's observations, physicist Georges Lemaître proposed that the universe emerged from a "primeval atom" in 1931, introducing the modern notion of the Big Bang. In 1964, the CMB was discovered. Over the next few years, measurements showed both this radiation's uniformity across the sky and the shape of its energy-versus-intensity curve to be consistent with the Big Bang models of high temperatures and densities in the distant past. By the late 1960s most cosmologists were convinced that the competing steady-state model of cosmic evolution was incorrect. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. These include the unequal abundances of matter and antimatter known as baryon asymmetry, the detailed nature of dark matter surrounding galaxies, and the origin of dark energy. ## Features of the models ### Assumptions Big Bang cosmology models depend on three major assumptions: the universality of physical laws, the cosmological principle, and that the matter content can be modeled as a perfect fluid. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. A perfect fluid has no viscosity; the pressure of a perfect fluid is proportional to its density. These ideas were initially taken as postulates, but later efforts were made to test each of them.
For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10⁻⁵. The key physical law behind these models, general relativity, has passed stringent tests on the scale of the Solar System and binary stars. The cosmological principle has been confirmed to a level of 10⁻⁵ via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. ### Expansion prediction The cosmological principle dramatically simplifies the equations of general relativity, giving the Friedmann–Lemaître–Robertson–Walker metric to describe the geometry of the universe and, with the assumption of a perfect fluid, the Friedmann equations giving the time dependence of that geometry. The only parameter at this level of description is the mass-energy density: the geometry of the universe and its expansion is a direct consequence of its density. All of the major features of Big Bang cosmology are related to these results. ### Mass-energy density In Big Bang cosmology, the mass–energy density controls the shape and evolution of the universe. By combining astronomical observations with known laws of thermodynamics and particle physics, cosmologists have worked out the components of the density over the lifespan of the universe. In the current universe, luminous matter (the stars, planets, and so on) makes up less than 5% of the density. Dark matter accounts for 27% and dark energy the remaining 68%. ### Horizons An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach Earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. ### Thermalization Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. ## Timeline According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling.
### Singularity Existing theories of physics cannot tell us about the moment of the Big Bang. Extrapolation of the expansion of the universe backwards in time using only general relativity yields a gravitational singularity with infinite density and temperature at a finite time in the past, but the meaning of this extrapolation in the context of the Big Bang is unclear. Moreover, classical gravitational theories are expected to be inadequate to describe physics under these conditions. Quantum gravity effects are expected to be dominant during the Planck epoch, when the temperature of the universe was close to the Planck scale (around 10³² K or 10²⁸ eV). Even below the Planck scale, undiscovered physics could greatly influence the expansion history of the universe. The Standard Model of particle physics is only tested up to temperatures of order 10¹⁷ K (10 TeV) in particle colliders, such as the Large Hadron Collider. Moreover, new physical phenomena decoupled from the Standard Model could have been important before the time of neutrino decoupling, when the temperature of the universe was only about 10¹⁰ K (1 MeV). ### Inflation and baryogenesis The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10⁻⁴³ seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces (the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force) were unified as one. In this stage, the characteristic scale length of the universe was the Planck length, about 1.6×10⁻³⁵ m, and consequently the universe had a temperature of approximately 10³² degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10⁻⁴³ seconds, where gravitation separated from the other forces as the universe's temperature fell. At approximately 10⁻³⁷ seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. At a time around 10⁻³⁶ seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around 10⁻³³ to 10⁻³² seconds, with the observable universe's volume having increased by a factor of at least 10⁷⁸. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles.
Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons, of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe.

### Cooling

The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10^−12 seconds. After about 10^−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10^−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10^8 of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate over that of the photon and neutrino radiation at a time of about 50,000 years. At a time of about 380,000 years, the universe cooled enough that electrons and nuclei combined into neutral atoms (mostly hydrogen) in an event called recombination. This process made the previously opaque universe transparent, and the photons that last scattered during this epoch comprise the cosmic microwave background.

### Structure formation

After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data are well fit by a Lambda-CDM model in which dark matter is assumed to be cold. This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%.
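The ordering of these epochs follows from how each component's energy density dilutes as the scale factor $$ a(t) $$ grows; a standard summary (not spelled out in the text above) is $$ \rho_{\text{radiation}} \propto a^{-4}, \qquad \rho_{\text{matter}} \propto a^{-3}, \qquad \rho_{\Lambda} \approx \text{constant}, $$ so radiation dominates at the earliest times, matter takes over at matter-radiation equality (the roughly 50,000-year mark noted above), and a cosmological-constant-like dark energy eventually comes to dominate.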
### Cosmic acceleration

Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10^−15 seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics.

## Concept history

### Etymology

English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models: "To create a picture in the mind of the listener, Hoyle had likened the explosive theory of the universe's origin to a 'big bang'." Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. A primordial singularity is sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. However, an attempt to find a more suitable alternative was not successful. According to Timothy Ferris: "The term 'big bang' was coined with derisive intent by Fred Hoyle, and its endurance testifies to Sir Fred's creativity and wit. Indeed, the term survived an international competition in which three judges — the television science reporter Hugh Downs, the astronomer Carl Sagan, and myself — sifted through 13,099 entries from 41 countries and concluded that none was apt enough to replace it. No winner was declared, and like it or not, we are stuck with 'big bang'."
### Before the name

Early cosmological models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" ("spiral nebula" is the obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity, now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by an expanding universe imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the expanding universe concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed. During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's expanding universe theory, advocated and developed by George Gamow, who used it to develop a theory for the abundance of chemical elements in the universe.
Gamow's associates, Ralph Alpher and Robert Herman, predicted the cosmic background radiation.

### As a named model

Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949. For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble constant and the matter density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating.

## Observational evidence

The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the cosmic microwave background, large-scale structure, and Hubble's law. The earliest and most direct observational evidence of the validity of the theory are the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), discovery and measurement of the cosmic microwave background and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics.
### Hubble's law and the expansion of the universe Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: $$ v = H_0D $$ where - $$ v $$ is the recessional velocity of the galaxy or other distant object, - $$ D $$ is the proper distance to the object, and - $$ H_0 $$ is Hubble's constant, measured to be km/s/Mpc by the WMAP. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation $$ v = HD $$ to hold at all times, where $$ D $$ is the proper distance, $$ v $$ is the recessional velocity, and $$ v $$ , $$ H $$ , and $$ D $$ vary as the universe expands (hence we write $$ H_0 $$ to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity $$ v $$ . For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one. An unexplained discrepancy with the determination of the Hubble constant is known as Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder. ### Cosmic microwave background radiation In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the big-bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics. The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. Peaking at around , the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent. 
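A short worked relation, not given explicitly in the article: the temperature of the CMB scales with redshift as $$ T(z) = T_0\,(1+z), $$ so with the present value $$ T_0 \approx 2.725\ \text{K} $$ quoted above and an assumed decoupling temperature of roughly 3000 K (a commonly cited approximation), the redshift of the last-scattering surface comes out to $$ 1+z \approx \frac{3000}{2.725} \approx 1100. $$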
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10^4, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10^5. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.

### Abundance of primordial elements

Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (4He), helium-3 (3He), deuterium (2H), and lithium-7 (7Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for 4He:H, about 10^−3 for 2H:H, about 10^−4 for 3He:H, and about 10^−9 for 7Li:H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for 4He, and off by a factor of two for 7Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than 3He, and in constant ratios, too.

### Galactic evolution and distribution

Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions and larger structures, agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.

### Primordial gas clouds

In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the analysis was sensitive to carbon, oxygen, and silicon, these three elements were not detected in these two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.

### Other lines of evidence

The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.

### Future observations

Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe from less than a second after the Big Bang.

## Problems and related issues in physics

As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven.
What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.

### Baryon asymmetry

It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number is not conserved, that C-symmetry and CP-symmetry are violated and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.

### Dark energy

Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, cosmological models require that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to have only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure (baryon acoustic oscillations) as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remains one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units.
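To give a rough sense of the size of this discrepancy (the figures below are standard order-of-magnitude estimates, not taken from this article): the observed dark-energy density corresponds to roughly $$ \rho_\Lambda^{\text{obs}} \sim 10^{-47}\ \text{GeV}^4, $$ while a naive vacuum-energy estimate built from the Planck scale is of order $$ M_{\text{Pl}}^4 \sim 10^{73}\ \text{GeV}^4, $$ a mismatch of roughly 120 orders of magnitude.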
### Dark matter

During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, the galaxy rotation problem, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations.

### Horizon problem

The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit, the particle horizon, on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB.
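For reference, the particle horizon invoked here can be written as a light-travel distance; a standard expression, assuming an FLRW metric (it is not written out in the article), is $$ d_p(t) = a(t)\int_0^t \frac{c\,dt'}{a(t')}, $$ which is finite for ordinary radiation- or matter-dominated expansion (for example, $$ a \propto t^{1/2} $$ during radiation domination gives $$ d_p = 2ct $$), but grows enormously during an inflationary phase in which $$ a(t) $$ increases exponentially.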
A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.

### Magnetic monopoles

The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.

### Flatness problem

The flatness problem (also known as the oldness problem) is an observational problem associated with an FLRW universe. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10^−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does today.

## Misconceptions

One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands. The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel. Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice.
The most natural interpretation of the cosmological redshift is that it is a Doppler shift. ## Implications Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture. ### Pre–Big Bang cosmology The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, if specific laws of nature were to come to existence in a random way, inflation models show, some combinations of these are far more probable, partly explaining why our Universe is rather stable. Another possible explanation for the stability of the Universe could be a hypothetical multiverse, which assumes every possible universe to exist, and thinking species could only emerge in those stable enough. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory is built upon the equations of classical general relativity, which are not expected to be valid at the origin of cosmic time, as the temperature of the universe approaches the Planck scale. Correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: - The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. It took place instantly, in our perspective, due to the absence of perceived time before the Big Bang. - Emergent Universe models, which feature a low-activity past-eternal era before the Big Bang, resembling ancient ideas of a cosmic egg and birth of the world out of primordial chaos. - Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient. - Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other. 
- Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang. This is sometimes referred to as pre-big bang inflation. Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. ### Ultimate fate of the universe Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. ### Religious and philosophical interpretations As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
https://en.wikipedia.org/wiki/Big_Bang
A Fourier series is an expansion of a periodic function into a sum of trigonometric functions. The Fourier series is an example of a trigonometric series. By expressing a function as a sum of sines and cosines, many problems involving the function become easier to analyze because trigonometric functions are well understood. For example, Fourier series were first used by Joseph Fourier to find solutions to the heat equation. This application is possible because the derivatives of trigonometric functions fall into simple patterns. Fourier series cannot be used to approximate arbitrary functions, because most functions have infinitely many terms in their Fourier series, and the series do not always converge. Well-behaved functions, for example smooth functions, have Fourier series that converge to the original function. The coefficients of the Fourier series are determined by integrals of the function multiplied by trigonometric functions, described below. The study of the convergence of Fourier series focuses on the behavior of the partial sums, which means studying the behavior of the sum as more and more terms from the series are summed. The figures below illustrate some partial Fourier series results for the components of a square wave. Fourier series are closely related to the Fourier transform, a more general tool that can even find the frequency information for functions that are not periodic. Periodic functions can be identified with functions on a circle; for this reason Fourier series are the subject of Fourier analysis on the circle group, denoted by $$ \mathbb{T} $$ or $$ S_1 $$ . The Fourier transform is also part of Fourier analysis, but is defined for functions on $$ \mathbb{R}^n $$ . Since Fourier's time, many different approaches to defining and understanding the concept of Fourier series have been discovered, all of which are consistent with one another, but each of which emphasizes different aspects of the topic. Some of the more powerful and elegant approaches are based on mathematical ideas and tools that were not available in Fourier's time. Fourier originally defined the Fourier series for real-valued functions of real arguments, and used the sine and cosine functions in the decomposition. Many other Fourier-related transforms have since been defined, extending his initial idea to many applications and birthing an area of mathematics called Fourier analysis.

## History

The Fourier series is named in honor of Jean-Baptiste Joseph Fourier (1768–1830), who made important contributions to the study of trigonometric series, after preliminary investigations by Leonhard Euler, Jean le Rond d'Alembert, and Daniel Bernoulli. Fourier introduced the series for the purpose of solving the heat equation in a metal plate, publishing his initial results in his 1807 Mémoire sur la propagation de la chaleur dans les corps solides (Treatise on the propagation of heat in solid bodies), and publishing his Théorie analytique de la chaleur (Analytical theory of heat) in 1822. The Mémoire introduced Fourier analysis, specifically Fourier series. Through Fourier's research the fact was established that an arbitrary (at first, continuous and later generalized to any piecewise-smooth) function can be represented by a trigonometric series. The first announcement of this great discovery was made by Fourier in 1807, before the French Academy.
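As a concrete illustration of the partial sums mentioned above, the following short Python sketch (an illustration added here, not part of the original article) evaluates $$ N $$ -term partial sums of the standard square-wave series $$ \frac{4}{\pi}\sum_{k=0}^{N-1}\frac{\sin((2k+1)x)}{2k+1} $$ and reports how closely they track the square wave away from its jump points:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """N-term Fourier partial sum of the odd square wave sgn(sin x)."""
    k = np.arange(n_terms)
    # Sum sin((2k+1)x)/(2k+1) over k, then scale by 4/pi.
    return (4 / np.pi) * np.sum(np.sin(np.outer(2 * k + 1, x)) / (2 * k + 1)[:, None], axis=0)

x = np.linspace(0.1, np.pi - 0.1, 1000)   # stay away from the jumps at 0 and pi
target = np.sign(np.sin(x))               # the square wave itself (equal to +1 here)

for n in (1, 5, 25, 125):
    approx = square_wave_partial_sum(x, n)
    print(f"{n:4d} terms: max error away from jumps = {np.max(np.abs(approx - target)):.4f}")
```

The printed error shrinks as more terms are included, while near the jumps the familiar Gibbs overshoot would persist.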
Early ideas of decomposing a periodic function into the sum of simple oscillating functions date back to the 3rd century BC, when ancient astronomers proposed an empirical model of planetary motions, based on deferents and epicycles. Independently of Fourier, the astronomer Friedrich Wilhelm Bessel introduced Fourier series to solve Kepler's equation. He published this work in 1819, unaware of Fourier's work, which remained unpublished until 1822. The heat equation is a partial differential equation. Prior to Fourier's work, no solution to the heat equation was known in the general case, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series. From a modern point of view, Fourier's results are somewhat informal, due to the lack of a precise notion of function and integral in the early nineteenth century. Later, Peter Gustav Lejeune Dirichlet and Bernhard Riemann expressed Fourier's results with greater precision and formality. Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems, and especially those involving linear differential equations with constant coefficients, for which the eigensolutions are sinusoids. The Fourier series has many such applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, shell theory, etc.

### Beginnings

Joseph Fourier obtained the coefficients of the trigonometric expansion of a function $$ \varphi(y) $$ by multiplying the expansion by the corresponding cosine and integrating from −1 to 1. This immediately gives any coefficient $$ a_k $$ of the trigonometric series for $$ \varphi(y) $$ for any function which has such an expansion. It works because if $$ \varphi $$ has such an expansion, then (under suitable convergence assumptions) the integral $$ \begin{align} &\int_{-1}^1\varphi(y)\cos(2k+1)\frac{\pi y}{2}\,dy \\ &= \int_{-1}^1\left(a\cos\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2}+a'\cos 3\frac{\pi y}{2}\cos(2k+1)\frac{\pi y}{2}+\cdots\right)\,dy \end{align} $$ can be carried out term-by-term. But all terms involving $$ \cos(2j+1)\frac{\pi y}{2} \cos(2k+1)\frac{\pi y}{2} $$ for $$ j \neq k $$ vanish when integrated from −1 to 1, leaving only the $$ k^{\text{th}} $$ term, since $$ \int_{-1}^{1}\cos^2\left((2k+1)\tfrac{\pi y}{2}\right)dy = 1 $$ . In these few lines, which are close to the modern formalism used in Fourier series, Fourier revolutionized both mathematics and physics. Although similar trigonometric series were previously used by Euler, d'Alembert, Daniel Bernoulli and Gauss, Fourier believed that such trigonometric series could represent any arbitrary function. In what sense that is actually true is a somewhat subtle issue and the attempts over many years to clarify this idea have led to important discoveries in the theories of convergence, function spaces, and harmonic analysis. When Fourier submitted a later competition essay in 1811, the committee (which included Lagrange, Laplace, Malus and Legendre, among others) concluded: ...the manner in which the author arrives at these equations is not exempt of difficulties and...his analysis to integrate them still leaves something to be desired on the score of generality and even rigour.
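The orthogonality relations used in this argument are easy to verify numerically; the short Python sketch below (an added illustration, not from the article) integrates products of the cosines $$ \cos\left((2k+1)\frac{\pi y}{2}\right) $$ over [−1, 1] and confirms that the cross terms vanish while each squared term integrates to 1:

```python
import numpy as np
from scipy.integrate import quad

def basis(k, y):
    """The cosine used in Fourier's argument, cos((2k+1)*pi*y/2)."""
    return np.cos((2 * k + 1) * np.pi * y / 2)

# Integrate products of basis functions over [-1, 1].
for j in range(3):
    for k in range(3):
        value, _ = quad(lambda y: basis(j, y) * basis(k, y), -1, 1)
        print(f"j={j}, k={k}: integral = {value:+.6f}")
# Expected output: approximately 0 when j != k, and 1 when j == k.
```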
### Fourier's motivation

The Fourier series expansion of the sawtooth function (below) looks more complicated than the simple formula $$ s(x)=\tfrac{x}{\pi} $$ , so it is not immediately apparent why one would need the Fourier series. While there are many applications, Fourier's motivation was in solving the heat equation. For example, consider a metal plate in the shape of a square whose sides measure $$ \pi $$ meters, with coordinates $$ (x,y) \in [0,\pi] \times [0,\pi] $$ . If there is no heat source within the plate, and if three of the four sides are held at 0 degrees Celsius, while the fourth side, given by $$ y=\pi $$ , is maintained at the temperature gradient $$ T(x,\pi)=x $$ degrees Celsius, for $$ x $$ in $$ (0,\pi) $$ , then one can show that the stationary heat distribution (or the heat distribution after a long time has elapsed) is given by $$ T(x,y) = 2\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \sin(nx) {\sinh(ny) \over \sinh(n\pi)}. $$ Here, sinh is the hyperbolic sine function. This solution of the heat equation is obtained by multiplying each term of the sine series for $$ s(x) $$ , derived in the Example under Analysis below, by $$ \sinh(ny)/\sinh(n\pi) $$ . While our example function $$ s(x) $$ seems to have a needlessly complicated Fourier series, the heat distribution $$ T(x,y) $$ is nontrivial. The function $$ T $$ cannot be written as a closed-form expression. This method of solving the heat problem was made possible by Fourier's work.

### Other applications

Another application is to solve the Basel problem by using Parseval's theorem. The example generalizes and one may compute ζ(2n), for any positive integer n.

## Definition

The Fourier series of a complex-valued $$ P $$ -periodic function $$ s(x) $$ , integrable over the interval $$ [0,P] $$ on the real line, is defined as a trigonometric series of the form $$ \sum_{n=-\infty}^\infty c_n e^{i 2\pi \tfrac{n}{P} x }, $$ such that the Fourier coefficients $$ c_n $$ are complex numbers defined by the integral $$ c_n = \frac{1}{P}\int_0^P s(x)\ e^{-i 2\pi \tfrac{n}{P} x }\,dx. $$ The series does not necessarily converge (in the pointwise sense) and, even if it does, it is not necessarily equal to $$ s(x) $$ . Only when certain conditions are satisfied (e.g. if $$ s(x) $$ is continuously differentiable) does the Fourier series converge to $$ s(x) $$ , i.e., $$ s(x) = \sum_{n=-\infty}^\infty c_n e^{i 2\pi \tfrac{n}{P} x }. $$ For functions satisfying the Dirichlet sufficiency conditions, pointwise convergence holds. However, these are not necessary conditions and there are many theorems about different types of convergence of Fourier series (e.g. uniform convergence or mean convergence). The definition naturally extends to the Fourier series of a (periodic) distribution $$ s $$ (also called a Fourier-Schwartz series). Then the Fourier series converges to $$ s(x) $$ in the distribution sense. The process of determining the Fourier coefficients of a given function or signal is called analysis, while forming the associated trigonometric series (or its various approximations) is called synthesis.

### Synthesis

A Fourier series can be written in several equivalent forms, shown here as the $$ N^\text{th} $$ partial sums $$ s_N(x) $$ of the Fourier series of $$ s(x) $$ . The harmonics are indexed by an integer, $$ n, $$ which is also the number of cycles the corresponding sinusoids make in interval $$ P $$ . Therefore, the sinusoids have:
- a wavelength equal to $$ \tfrac{P}{n} $$ in the same units as $$ x $$ .
- a frequency equal to $$ \tfrac{n}{P} $$ in the reciprocal units of $$ x $$ .

These series can represent functions that are just a sum of one or more frequencies in the harmonic spectrum. In the limit $$ N\to\infty $$ , a trigonometric series can also represent the intermediate frequencies or non-sinusoidal functions because of the infinite number of terms.

### Analysis

The coefficients can be given or assumed, as with a music synthesizer or the time samples of a waveform. In the latter case, the exponential form of Fourier series synthesizes a discrete-time Fourier transform where variable $$ x $$ represents frequency instead of time. In general, the coefficients are determined by analysis of a given function $$ s(x) $$ whose domain of definition is an interval of length $$ P $$ . The $$ \tfrac{2}{P} $$ scale factor follows from substituting the series into the coefficient integrals and utilizing the orthogonality of the trigonometric system. The equivalence of the exponential and sine-cosine forms follows from Euler's formula $$ \cos x = \frac{e^{ix} + e^{-ix}}{2}, \quad \sin x = \frac{e^{ix} - e^{-ix}}{2i}, $$ resulting in formulas expressing $$ c_n $$ in terms of $$ a_n $$ and $$ b_n $$ , with $$ c_{0} $$ being the mean value of $$ s $$ on the interval $$ P $$ , and conversely formulas for $$ a_n $$ and $$ b_n $$ in terms of $$ c_n $$ .

#### Example

Consider a sawtooth function: $$ s(x) = s(x + 2\pi k) = \frac{x}{\pi}, \quad \text{for } -\pi < x < \pi,\ \text{and } k \in \mathbb{Z}. $$ In this case, the Fourier coefficients are given by $$ \begin{align} a_0 &= 0.\\ a_n & = \frac{1}{\pi}\int_{-\pi}^{\pi}s(x) \cos(nx)\,dx = 0, \quad n \ge 1. \\ b_n & = \frac{1}{\pi}\int_{-\pi}^{\pi}s(x) \sin(nx)\, dx\\ &= -\frac{2}{\pi n}\cos(n\pi) + \frac{2}{\pi^2 n^2}\sin(n\pi)\\ &= \frac{2\,(-1)^{n+1}}{\pi n}, \quad n \ge 1.\end{align} $$ It can be shown that the Fourier series converges to $$ s(x) $$ at every point $$ x $$ where $$ s $$ is differentiable, and therefore: $$ \begin{align} s(x) &= a_0 + \sum_{n=1}^\infty \left[a_n\cos\left(nx\right)+b_n\sin\left(nx\right)\right] \\[4pt] &=\frac{2}{\pi}\sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} \sin(nx), \quad \text{for } (x-\pi) \text{ not a multiple of } 2\pi. \end{align} $$ When $$ x=\pi $$ , the Fourier series converges to 0, which is the half-sum of the left- and right-limit of $$ s $$ at $$ x=\pi $$ . This is a particular instance of the Dirichlet theorem for Fourier series. This example leads to a solution of the Basel problem.

### Amplitude-phase form

If the function $$ s(x) $$ is real-valued then the Fourier series can also be represented as $$ s_N(x) = a_0 + \sum_{n=1}^{N} A_n \cos\left(2\pi \tfrac{n}{P}x - \varphi_n\right), $$ where $$ A_{n} $$ is the amplitude and $$ \varphi_{n} $$ is the phase shift of the $$ n^{th} $$ harmonic. The equivalence of this form and the sine-cosine form follows from the trigonometric identity: $$ \cos\left(2\pi \tfrac{n}{P}x-\varphi_n\right) = \cos(\varphi_n)\cos\left(2\pi \tfrac{n}{P} x\right) + \sin(\varphi_n)\sin\left(2\pi \tfrac{n}{P} x\right), $$ which implies $$ a_n = A_n \cos(\varphi_n)\quad \text{and}\quad b_n = A_n \sin(\varphi_n) $$ are the rectangular coordinates of a vector with polar coordinates $$ A_n $$ and $$ \varphi_n $$ given by $$ A_n = \sqrt{a_n^2 + b_n^2}\quad \text{and}\quad \varphi_n = \operatorname{Arg}(c_n) = \operatorname{atan2}(b_n, a_n) $$ where $$ \operatorname{Arg}(c_n) $$ is the argument of $$ c_{n} $$ . An example of determining the parameter $$ \varphi_n $$ for one value of $$ n $$ is shown in Figure 2. It is the value of $$ \varphi $$ at the maximum correlation between $$ s(x) $$ and a cosine template, $$ \cos(2\pi \tfrac{n}{P} x - \varphi) $$ .
The blue graph is the cross-correlation function, also known as a matched filter: $$ \begin{align} \Chi(\varphi) &= \int_{P} s(x) \cdot \cos\left( 2\pi \tfrac{n}{P} x -\varphi \right)\, dx\quad \varphi \in \left[ 0, 2\pi \right]\\ &=\cos(\varphi) \underbrace{\int_{P} s(x) \cdot \cos\left( 2\pi \tfrac{n}{P} x\right) dx}_{X(0)} + \sin(\varphi) \underbrace{\int_{P} s(x) \cdot \sin\left( 2\pi \tfrac{n}{P} x\right) dx}_{ X(\pi/2) } \end{align} $$ Fortunately, it is not necessary to evaluate this entire function, because its derivative is zero at the maximum: $$ X'(\varphi) = \sin(\varphi)\cdot X(0) - \cos(\varphi)\cdot X(\pi/2) = 0, \quad \textrm{at}\ \varphi = \varphi_n. $$ Hence $$ \varphi_n \equiv \arctan(b_n/a_n) = \arctan(X(\pi/2)/X(0)). $$ ### Common notations The notation $$ c_n $$ is inadequate for discussing the Fourier coefficients of several different functions. Therefore, it is customarily replaced by a modified form of the function ( $$ s, $$ in this case), such as $$ \widehat{s}(n) $$ or $$ S[n], $$ and functional notation often replaces subscripting: $$ \begin{align} s(x) &= \sum_{n=-\infty}^\infty \widehat{s}(n)\cdot e^{i 2\pi \tfrac{n}{P} x} && \scriptstyle \text{common mathematics notation} \\ &= \sum_{n=-\infty}^\infty S[n]\cdot e^{i 2\pi \tfrac{n}{P} x} && \scriptstyle \text{common engineering notation} \end{align} $$ In engineering, particularly when the variable $$ x $$ represents time, the coefficient sequence is called a frequency domain representation. Square brackets are often used to emphasize that the domain of this function is a discrete set of frequencies. Another commonly used frequency domain representation uses the Fourier series coefficients to modulate a Dirac comb: $$ S(f) \ \triangleq \ \sum_{n=-\infty}^\infty S[n]\cdot \delta \left(f-\frac{n}{P}\right), $$ where $$ f $$ represents a continuous frequency domain. When variable $$ x $$ has units of seconds, $$ f $$ has units of hertz. The "teeth" of the comb are spaced at multiples (i.e. harmonics) of $$ \tfrac{1}{P} $$ , which is called the fundamental frequency. $$ s(x) $$ can be recovered from this representation by an inverse Fourier transform: $$ \begin{align} \mathcal{F}^{-1}\{S(f)\} &= \int_{-\infty}^\infty \left( \sum_{n=-\infty}^\infty S[n]\cdot \delta \left(f-\frac{n}{P}\right)\right) e^{i 2 \pi f x}\,df, \\[6pt] &= \sum_{n=-\infty}^\infty S[n]\cdot \int_{-\infty}^\infty \delta\left(f-\frac{n}{P}\right) e^{i 2 \pi f x}\,df, \\[6pt] &= \sum_{n=-\infty}^\infty S[n]\cdot e^{i 2\pi \tfrac{n}{P} x} \ \ \triangleq \ s(x). \end{align} $$ The constructed function $$ S(f) $$ is therefore commonly referred to as a Fourier transform, even though the Fourier integral of a periodic function is not convergent at the harmonic frequencies. ## Table of common Fourier series Some common pairs of periodic functions and their Fourier series coefficients are shown in the table below. - $$ s(x) $$ designates a periodic function with period $$ P. $$ - $$ a_0, a_n, b_n $$ designate the Fourier series coefficients (sine-cosine form) of the periodic function $$ s(x). $$ Time domain PlotFrequency domain (sine-cosine form) RemarksReferenceFull-wave rectified sineHalf-wave rectified sine ## Table of basic transformation rules This table shows some mathematical operations in the time domain and the corresponding effect in the Fourier series coefficients. Notation: - Complex conjugation is denoted by an asterisk. - $$ s(x),r(x) $$ designate $$ P $$ -periodic functions or functions defined only for $$ x \in [0,P]. 
- $$ S[n], R[n] $$ designate the Fourier series coefficients (exponential form) of $$ s $$ and $$ r. $$
For each property the table gives the time-domain operation, the corresponding effect on the coefficients (exponential form), remarks, and a reference. The properties covered are linearity, time reversal / frequency reversal, time conjugation, time reversal & conjugation, real part in time, imaginary part in time, real part in frequency, imaginary part in frequency, shift in time / modulation in frequency, and shift in frequency / modulation in time.

## Properties

### Symmetry relations
When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. There is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: $$ \begin{array}{rlcccccccc} \mathsf{Time\ domain} & s & = & s_{\mathrm{RE}} & + & s_{\mathrm{RO}} & + & i\ s_{\mathrm{IE}} & + & i\ s_{\mathrm{IO}} \\ &\Bigg\Updownarrow\mathcal{F} & &\Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F} & &\ \ \Bigg\Updownarrow\mathcal{F}\\ \mathsf{Frequency\ domain} & S & = & S_\mathrm{RE} & + & i\ S_\mathrm{IO}\, & + & i\ S_\mathrm{IE} & + & S_\mathrm{RO} \end{array} $$ From this, various relationships are apparent, for example:
- The transform of a real-valued function $$ (s_\mathrm{RE}+s_\mathrm{RO}) $$ is the conjugate symmetric function $$ S_\mathrm{RE}+i\ S_\mathrm{IO}. $$ Conversely, a conjugate symmetric transform implies a real-valued time-domain.
- The transform of an imaginary-valued function $$ (i\ s_\mathrm{IE}+i\ s_\mathrm{IO}) $$ is the conjugate antisymmetric function $$ S_\mathrm{RO}+i\ S_\mathrm{IE}, $$ and the converse is true.
- The transform of a conjugate symmetric function $$ (s_\mathrm{RE}+i\ s_\mathrm{IO}) $$ is the real-valued function $$ S_\mathrm{RE}+S_\mathrm{RO}, $$ and the converse is true.
- The transform of a conjugate antisymmetric function $$ (s_\mathrm{RO}+i\ s_\mathrm{IE}) $$ is the imaginary-valued function $$ i\ S_\mathrm{IE}+i\ S_\mathrm{IO}, $$ and the converse is true.

### Riemann–Lebesgue lemma
If $$ s $$ is integrable, then $$ \lim_{|n| \to \infty} S[n]=0 $$ , $$ \lim_{n \to +\infty} a_n=0 $$ and $$ \lim_{n \to +\infty} b_n=0. $$

### Parseval's theorem
If $$ s $$ belongs to $$ L^2(P) $$ (periodic over an interval of length $$ P $$ ) then: $$ \frac{1}{P}\int_{P} |s(x)|^2 \, dx = \sum_{n=-\infty}^\infty \Bigl|S[n]\Bigr|^2. $$

### Plancherel's theorem
If $$ c_0,\, c_{\pm 1},\, c_{\pm 2}, \ldots $$ are coefficients and $$ \sum_{n=-\infty}^\infty |c_n|^2 < \infty $$ then there is a unique function $$ s\in L^2(P) $$ such that $$ S[n] = c_n $$ for every $$ n $$ .

### Convolution theorems
Given $$ P $$ -periodic functions, $$ s_P $$ and $$ r_P $$ with Fourier series coefficients $$ S[n] $$ and $$ R[n], $$ $$ n \in \mathbb{Z}, $$
- The pointwise product: $$ h_P(x) \triangleq s_P(x)\cdot r_P(x) $$ is also $$ P $$ -periodic, and its Fourier series coefficients are given by the discrete convolution of the $$ S $$ and $$ R $$ sequences: $$ H[n] = \{S*R\}[n]. $$
- The periodic convolution: $$ h_P(x) \triangleq \int_{P} s_P(\tau)\cdot r_P(x-\tau)\, d\tau $$ is also $$ P $$ -periodic, with Fourier series coefficients: $$ H[n] = P \cdot S[n]\cdot R[n]. $$
- A doubly infinite sequence $$ \left \{c_n \right \}_{n \in Z} $$ in $$ c_0(\mathbb{Z}) $$ is the sequence of Fourier coefficients of a function in $$ L^1([0,2\pi]) $$ if and only if it is a convolution of two sequences in $$ \ell^2(\mathbb{Z}) $$ .

### Derivative property
If $$ s $$ is a $$ 2\pi $$ -periodic function on $$ \mathbb{R} $$ which is $$ k $$ times differentiable, and its $$ k^{\text{th}} $$ derivative is continuous, then $$ s $$ belongs to the function space $$ C^k(\mathbb{R}) $$ .
- If $$ s \in C^k(\mathbb{R}) $$ , then the Fourier coefficients of the $$ k^{\text{th}} $$ derivative of $$ s $$ can be expressed in terms of the Fourier coefficients $$ \widehat{s}[n] $$ of $$ s $$ , via the formula $$ \widehat{s^{(k)}}[n] = (in)^k \widehat{s}[n]. $$ In particular, since for any fixed $$ k\geq 1 $$ we have $$ \widehat{s^{(k)}}[n]\to 0 $$ as $$ n\to\infty $$ , it follows that $$ |n|^k\widehat{s}[n] $$ tends to zero, i.e., the Fourier coefficients converge to zero faster than the $$ k^{\text{th}} $$ power of $$ |n| $$ .

### Compact groups
One of the interesting properties of the Fourier transform mentioned above is that it carries convolutions to pointwise products. If that is the property which we seek to preserve, one can produce Fourier series on any compact group. Typical examples include those classical groups that are compact. This generalizes the Fourier transform to all spaces of the form $$ L^2(G) $$ , where $$ G $$ is a compact group, in such a way that the Fourier transform carries convolutions to pointwise products. The Fourier series exists and converges in ways similar to the $$ [-\pi,\pi] $$ case. An alternative extension to compact groups is the Peter–Weyl theorem, which proves results about representations of compact groups analogous to those about finite groups.

### Riemannian manifolds
If the domain is not a group, then there is no intrinsically defined convolution. However, if $$ X $$ is a compact Riemannian manifold, it has a Laplace–Beltrami operator. The Laplace–Beltrami operator is the differential operator that corresponds to the Laplace operator for the Riemannian manifold $$ X $$ . Then, by analogy, one can consider heat equations on $$ X $$ . Since Fourier arrived at his basis by attempting to solve the heat equation, the natural generalization is to use the eigensolutions of the Laplace–Beltrami operator as a basis. This generalizes Fourier series to spaces of the type $$ L^2(X) $$ , where $$ X $$ is a Riemannian manifold. The Fourier series converges in ways similar to the $$ [-\pi,\pi] $$ case. A typical example is to take $$ X $$ to be the sphere with the usual metric, in which case the Fourier basis consists of spherical harmonics.

### Locally compact Abelian groups
The generalization to compact groups discussed above does not generalize to noncompact, nonabelian groups. However, there is a straightforward generalization to locally compact Abelian (LCA) groups. This generalizes the Fourier transform to $$ L^1(G) $$ or $$ L^2(G) $$ , where $$ G $$ is an LCA group. If $$ G $$ is compact, one also obtains a Fourier series, which converges similarly to the $$ [-\pi,\pi] $$ case, but if $$ G $$ is noncompact, one obtains instead a Fourier integral. This generalization yields the usual Fourier transform when the underlying locally compact Abelian group is $$ \mathbb{R} $$ .

## Extensions

### Fourier-Stieltjes series
Let $$ F(x) $$ be a function of bounded variation defined on the closed interval $$ [0,P]\subseteq\mathbb{R} $$ .
The Fourier series whose coefficients are given by $$ c_n = \frac{1}{P}\int_0^P \ e^{-i 2\pi \tfrac{n}{P} x }\,dF(x), \quad \forall n\in\mathbb{Z}, $$ is called the Fourier-Stieltjes series. The space of functions of bounded variation $$ BV $$ is a subspace of $$ L^1 $$ . As any $$ F \in BV $$ defines a Radon measure (i.e. a locally finite Borel measure on $$ \mathbb{R} $$ ), this definition can be extended as follows. Consider the space $$ M $$ of all finite Borel measures on the real line; as such $$ L^1 \subset M $$ . If there is a measure $$ \mu \in M $$ such that the Fourier-Stieltjes coefficients are given by $$ c_n = \hat\mu(n)=\frac{1}{P}\int_0^P \ e^{-i 2\pi \tfrac{n}{P} x }\,d\mu(x), \quad \forall n\in\mathbb{Z}, $$ then the series is called a Fourier-Stieltjes series. Likewise, the function $$ \hat\mu(n) $$ , where $$ \mu \in M $$ , is called a Fourier-Stieltjes transform. The question whether or not $$ \mu $$ exists for a given sequence of $$ c_n $$ forms the basis of the trigonometric moment problem. Furthermore, $$ M $$ is a strict subspace of the space of (tempered) distributions $$ \mathcal{D} $$ , i.e., $$ M \subset \mathcal{D} $$ . If the Fourier coefficients are determined by a distribution $$ F \in \mathcal{D} $$ then the series is described as a Fourier-Schwartz series. Contrary to the Fourier-Stieltjes series, deciding whether a given series is a Fourier series or a Fourier-Schwartz series is relatively trivial due to the characteristics of its dual space; the Schwartz space $$ \mathcal{S}(\mathbb{R}^n) $$ . ### Fourier series on a square We can also define the Fourier series for functions of two variables $$ x $$ and $$ y $$ in the square $$ [-\pi,\pi]\times[-\pi,\pi] $$ : $$ \begin{align} f(x,y) & = \sum_{j,k \in \Z} c_{j,k}e^{ijx}e^{iky},\\[5pt] c_{j,k} & = \frac{1}{4 \pi^2} \int_{-\pi}^\pi \int_{-\pi}^\pi f(x,y) e^{-ijx}e^{-iky}\, dx \, dy. \end{align} $$ Aside from being useful for solving partial differential equations such as the heat equation, one notable application of Fourier series on the square is in image compression. In particular, the JPEG image compression standard uses the two-dimensional discrete cosine transform, a discrete form of the Fourier cosine transform, which uses only cosine as the basis function. For two-dimensional arrays with a staggered appearance, half of the Fourier series coefficients disappear, due to additional symmetry. ### Fourier series of a Bravais-lattice-periodic function A three-dimensional Bravais lattice is defined as the set of vectors of the form $$ \mathbf{R} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3 $$ where $$ n_i $$ are integers and $$ \mathbf{a}_i $$ are three linearly independent but not necessarily orthogonal vectors. Let us consider some function $$ f(\mathbf{r}) $$ with the same periodicity as the Bravais lattice, i.e. $$ f(\mathbf{r}) = f(\mathbf{R}+\mathbf{r}) $$ for any lattice vector $$ \mathbf{R} $$ . This situation frequently occurs in solid-state physics where $$ f(\mathbf{r}) $$ might, for example, represent the effective potential that an electron "feels" inside a periodic crystal. In presence of such a periodic potential, the quantum-mechanical description of the electron results in a periodically modulated plane-wave commonly known as Bloch state. 
In order to develop $$ f(\mathbf{r}) $$ in a Fourier series, it is convenient to introduce an auxiliary function $$ g(x_1,x_2,x_3) \triangleq f(\mathbf{r}) = f \left (x_1\frac{\mathbf{a}_{1}}{a_1}+x_2\frac{\mathbf{a}_{2}}{a_2}+x_3\frac{\mathbf{a}_{3}}{a_3} \right ). $$ Both $$ f(\mathbf{r}) $$ and $$ g(x_1,x_2,x_3) $$ contain essentially the same information. However, instead of the position vector $$ \mathbf{r} $$ , the arguments of $$ g $$ are coordinates $$ x_{1,2,3} $$ along the unit vectors $$ \mathbf{a}_{i}/{a_i} $$ of the Bravais lattice, such that $$ g $$ is an ordinary periodic function in these variables, $$ g(x_1,x_2,x_3) = g(x_1+a_1,x_2,x_3) = g(x_1,x_2+a_2,x_3) = g(x_1,x_2,x_3+a_3)\quad\forall\;x_1,x_2,x_3. $$ This trick allows us to develop $$ g $$ as a multi-dimensional Fourier series, in complete analogy with the square-periodic function discussed in the previous section. Its Fourier coefficients are $$ \begin{align} c(m_1, m_2, m_3) = \frac{1}{a_3}\int_0^{a_3} dx_3 \frac{1}{a_2}\int_0^{a_2} dx_2 \frac{1}{a_1}\int_0^{a_1} dx_1\, g(x_1, x_2, x_3)\, e^{-i 2\pi \left(\tfrac{m_1}{a_1} x_1+\tfrac{m_2}{a_2} x_2 + \tfrac{m_3}{a_3} x_3\right)} \end{align}, $$ where $$ m_1,m_2,m_3 $$ are all integers. $$ c(m_1,m_2,m_3) $$ plays the same role as the coefficients $$ c_{j,k} $$ in the previous section, but in order to avoid triple subscripts we write them as a function. Once we have these coefficients, the function $$ g $$ can be recovered via the Fourier series $$ g(x_1, x_2, x_3)=\sum_{m_1, m_2, m_3 \in \Z } \,c(m_1, m_2, m_3) \, e^{i 2\pi \left( \tfrac{m_1}{a_1} x_1+ \tfrac{m_2}{a_2} x_2 + \tfrac{m_3}{a_3} x_3\right)}. $$ We would now like to abandon the auxiliary coordinates $$ x_{1,2,3} $$ and to return to the original position vector $$ \mathbf{r} $$ . This can be achieved by means of the reciprocal lattice whose vectors $$ \mathbf{b}_{1,2,3} $$ are defined such that they are orthonormal (up to a factor $$ 2\pi $$ ) to the original Bravais vectors $$ \mathbf{a}_{1,2,3} $$ , $$ \mathbf{a}_i\cdot\mathbf{b_j}=2\pi\delta_{ij}, $$ with $$ \delta_{ij} $$ the Kronecker delta. With this, the scalar product between a reciprocal lattice vector $$ \mathbf{Q} $$ and an arbitrary position vector $$ \mathbf{r} $$ written in the Bravais lattice basis becomes $$ \mathbf{Q} \cdot \mathbf{r} = \left ( m_1\mathbf{b}_1 + m_2\mathbf{b}_2 + m_3\mathbf{b}_3 \right ) \cdot \left (x_1\frac{\mathbf{a}_1}{a_1}+ x_2\frac{\mathbf{a}_2}{a_2} +x_3\frac{\mathbf{a}_3}{a_3} \right ) = 2\pi \left( x_1\frac{m_1}{a_1}+x_2\frac{m_2}{a_2}+x_3\frac{m_3}{a_3} \right ), $$ which is exactly the expression occurring in the Fourier exponents. The Fourier series for $$ f(\mathbf{r}) =g(x_1,x_2,x_3) $$ can therefore be rewritten as a sum over all reciprocal lattice vectors $$ \mathbf{Q}= m_1\mathbf{b}_1+m_2\mathbf{b}_2+m_3\mathbf{b}_3 $$ , $$ f(\mathbf{r})=\sum_{\mathbf{Q}} c(\mathbf{Q})\, e^{i \mathbf{Q} \cdot \mathbf{r}}, $$ and the coefficients are $$ c(\mathbf{Q}) = \frac{1}{a_3} \int_0^{a_3} dx_3 \, \frac{1}{a_2}\int_0^{a_2} dx_2 \, \frac{1}{a_1}\int_0^{a_1} dx_1 \, f\left(x_1\frac{\mathbf{a}_1}{a_1} + x_2\frac{\mathbf{a}_2}{a_2} + x_3\frac{\mathbf{a}_3}{a_3} \right) e^{-i \mathbf{Q} \cdot \mathbf{r}}. $$ The remaining task will be to convert this integral over lattice coordinates back into a volume integral.
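Before carrying out that conversion, the reciprocal-lattice construction just used can be checked numerically. The following is a minimal sketch (the lattice vectors are illustrative choices, and NumPy is assumed) that builds $$ \mathbf{b}_{1,2,3} $$ from $$ \mathbf{a}_{1,2,3} $$ and verifies the defining relation $$ \mathbf{a}_i\cdot\mathbf{b}_j=2\pi\delta_{ij} $$ :

```python
# Minimal sketch: reciprocal lattice vectors b_1, b_2, b_3 for given Bravais
# vectors a_1, a_2, a_3, built so that a_i . b_j = 2*pi*delta_ij.
import numpy as np

def reciprocal_vectors(a1, a2, a3):
    volume = np.dot(a1, np.cross(a2, a3))        # primitive cell volume a1 . (a2 x a3)
    b1 = 2 * np.pi * np.cross(a2, a3) / volume
    b2 = 2 * np.pi * np.cross(a3, a1) / volume
    b3 = 2 * np.pi * np.cross(a1, a2) / volume
    return b1, b2, b3

# Illustrative (non-orthogonal) Bravais vectors:
a1, a2, a3 = np.array([1.0, 0, 0]), np.array([0.5, 1.0, 0]), np.array([0, 0, 2.0])
b1, b2, b3 = reciprocal_vectors(a1, a2, a3)
# Check a_i . b_j = 2*pi*delta_ij (prints a diagonal matrix of 2*pi entries):
print(np.round([[np.dot(a, b) for b in (b1, b2, b3)] for a in (a1, a2, a3)], 6))
```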
The relation between the lattice coordinates $$ x_{1,2,3} $$ and the original cartesian coordinates $$ \mathbf{r} = (x,y,z) $$ is a linear system of equations, $$ \mathbf{r} = x_1\frac{\mathbf{a}_1}{a_1}+x_2\frac{\mathbf{a}_2}{a_2}+x_3\frac{\mathbf{a}_3}{a_3}, $$ which, when written in matrix form, $$ \begin{bmatrix}x\\y\\z\end{bmatrix} =\mathbf{J}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix} =\begin{bmatrix}\frac{\mathbf{a}_1}{a_1},\frac{\mathbf{a}_2}{a_2},\frac{\mathbf{a}_3}{a_3}\end{bmatrix}\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\,, $$ involves a constant matrix $$ \mathbf{J} $$ whose columns are the unit vectors $$ \mathbf{a}_j/a_j $$ of the Bravais lattice. When changing variables from $$ \mathbf{r} $$ to $$ (x_1,x_2,x_3) $$ in an integral, the same matrix $$ \mathbf{J} $$ appears as a Jacobian matrix $$ \mathbf{J}=\begin{bmatrix} \dfrac{\partial x}{\partial x_1} & \dfrac{\partial x}{\partial x_2} & \dfrac{\partial x}{\partial x_3 } \\[12pt] \dfrac{\partial y}{\partial x_1} & \dfrac{\partial y}{\partial x_2} & \dfrac{\partial y}{\partial x_3} \\[12pt] \dfrac{\partial z}{\partial x_1} & \dfrac{\partial z}{\partial x_2} & \dfrac{\partial z}{\partial x_3} \end{bmatrix}\,. $$ Its determinant $$ J $$ is therefore also constant and can be inferred from any integral over any domain; here we choose to calculate the volume of the primitive unit cell $$ \Gamma $$ in both coordinate systems: $$ V_{\Gamma} = \int_{\Gamma} d^3 r = J \int_{0}^{a_1} dx_1 \int_{0}^{a_2} dx_2 \int_{0}^{a_3} dx_3=J\, a_1 a_2 a_3 $$ The unit cell being a parallelepiped, we have $$ V_{\Gamma}=\mathbf{a}_1\cdot(\mathbf{a}_2 \times \mathbf{a}_3) $$ and thus $$ d^3r=J dx_1 dx_2 dx_3 =\frac{\mathbf{a}_1\cdot(\mathbf{a}_2 \times \mathbf{a}_3)}{a_1 a_2 a_3} dx_1 dx_2 dx_3. $$ This allows us to write $$ c (\mathbf{Q}) $$ as the desired volume integral over the primitive unit cell $$ \Gamma $$ in ordinary cartesian coordinates: $$ c(\mathbf{Q}) = \frac{1}{\mathbf{a}_1\cdot(\mathbf{a}_2 \times \mathbf{a}_3)}\int_{\Gamma} d^3 r\, f(\mathbf{r})\cdot e^{-i \mathbf{Q} \cdot \mathbf{r}}\,. $$ ### Hilbert space As the trigonometric series is a special class of orthogonal system, Fourier series can naturally be defined in the context of Hilbert spaces. For example, the space of square-integrable functions on $$ [-\pi,\pi] $$ forms the Hilbert space $$ L^2([-\pi,\pi]) $$ . Its inner product, defined for any two elements $$ f $$ and $$ g $$ , is given by: $$ \langle f, g \rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\overline{g(x)}\,dx. $$ This space is equipped with the orthonormal basis $$ \left\{e_n=e^{inx}: n \in \Z\right\} $$ . Then the (generalized) Fourier series expansion of $$ f \in L^{2}([-\pi,\pi]) $$ , given by $$ f(x) = \sum_{n=-\infty}^\infty c_n e^{i n x }, $$ can be written as $$ f=\sum_{n=-\infty}^\infty \langle f,e_n \rangle \, e_n. $$ The sine-cosine form follows in a similar fashion. 
Indeed, the sines and cosines form an orthogonal set: $$ \int_{-\pi}^{\pi} \cos(mx)\, \cos(nx)\, dx = \frac{1}{2}\int_{-\pi}^{\pi} \cos((n-m)x)+\cos((n+m)x)\, dx = \pi \delta_{mn}, \quad m, n \ge 1, $$ $$ \int_{-\pi}^{\pi} \sin(mx)\, \sin(nx)\, dx = \frac{1}{2}\int_{-\pi}^{\pi} \cos((n-m)x)-\cos((n+m)x)\, dx = \pi \delta_{mn}, \quad m, n \ge 1 $$ (where $$ \delta_{mn} $$ is the Kronecker delta), and $$ \int_{-\pi}^{\pi} \cos(mx)\, \sin(nx)\, dx = \frac{1}{2}\int_{-\pi}^{\pi} \sin((n+m)x)+\sin((n-m)x)\, dx = 0. $$ Hence, the set $$ \left\{1,\sqrt{2}\cos x,\sqrt{2}\sin x,\dots,\sqrt{2}\cos (nx),\sqrt{2}\sin (nx),\dots \right\} $$ also forms an orthonormal basis for $$ L^2([-\pi,\pi]) $$ with respect to the inner product above. The density of their span is a consequence of the Stone–Weierstrass theorem, but follows also from the properties of classical kernels like the Fejér kernel.

## Fourier theorem proving convergence of Fourier series
In engineering, the Fourier series is generally assumed to converge except at jump discontinuities since the functions encountered in engineering are usually better-behaved than those in other disciplines. In particular, if $$ s $$ is continuous and the derivative of $$ s(x) $$ (which may not exist everywhere) is square integrable, then the Fourier series of $$ s $$ converges absolutely and uniformly to $$ s(x) $$ . If a function is square-integrable on the interval $$ [x_0,x_0+P] $$ , then the Fourier series converges to the function almost everywhere. It is possible to define Fourier coefficients for more general functions or distributions, in which case pointwise convergence often fails, and convergence in norm or weak convergence is usually studied. The theorems proving that a Fourier series is a valid representation of any periodic function (that satisfies the Dirichlet conditions), and informal variations of them that do not specify the convergence conditions, are sometimes referred to generically as Fourier's theorem or the Fourier theorem.

### Least squares property
The partial sum introduced earlier, $$ s_N(x) = \sum_{n=-N}^N S[n]\ e^{i 2\pi\tfrac{n}{P} x}, $$ is a trigonometric polynomial of degree $$ N $$ that can be generally expressed as: $$ p_N(x)=\sum_{n=-N}^N p[n]\ e^{i 2\pi\tfrac{n}{P}x}. $$ Parseval's theorem implies that, among all trigonometric polynomials $$ p_N $$ of degree $$ N $$ , the partial sum $$ s_N $$ is the unique one that minimizes the $$ L^2 $$ distance to $$ s $$ .

### Convergence theorems
Because of the least squares property, and because of the completeness of the Fourier basis, we obtain an elementary convergence result. If $$ s $$ is continuously differentiable, then $$ (i n) S[n] $$ is the $$ n^{\text{th}} $$ Fourier coefficient of the first derivative $$ s' $$ . Since $$ s' $$ is continuous, and therefore bounded, it is square-integrable and its Fourier coefficients are square-summable. Then, by the Cauchy–Schwarz inequality, $$ \left(\sum_{n\ne 0}|S[n]|\right)^2\le \sum_{n\ne 0}\frac1{n^2}\cdot\sum_{n\ne 0} |nS[n]|^2. $$ This means that the Fourier series of $$ s $$ is absolutely summable. The sum of this series is a continuous function, equal to $$ s $$ , since the Fourier series converges in $$ L^1 $$ to $$ s $$ . This result can be proven easily if $$ s $$ is further assumed to be $$ C^2 $$ , since in that case $$ n^2S[n] $$ tends to zero as $$ n \rightarrow \infty $$ . More generally, the Fourier series is absolutely summable, thus converges uniformly to $$ s $$ , provided that $$ s $$ satisfies a Hölder condition of order $$ \alpha > 1/2 $$ . In the absolutely summable case, the inequality: $$ \sup_x |s(x) - s_N(x)| \le \sum_{|n| > N} |S[n]| $$ proves uniform convergence.
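These statements can be illustrated numerically on the sawtooth example given earlier. The sketch below (assuming NumPy; the truncation orders are arbitrary choices) evaluates the partial sums $$ s_N $$ at a point of differentiability and at the jump $$ x=\pi $$ , and also checks Parseval's theorem for this series:

```python
# Numerical illustration using the sawtooth example s(x) = x/pi on (-pi, pi),
# whose Fourier coefficients are a_n = 0 and b_n = 2*(-1)**(n+1)/(pi*n).
import numpy as np

def sawtooth_partial_sum(x, N):
    n = np.arange(1, N + 1)
    b = 2.0 * (-1) ** (n + 1) / (np.pi * n)
    return np.sum(b * np.sin(n * x))

print(sawtooth_partial_sum(1.0, 1000), 1.0 / np.pi)  # converges to s(1) = 1/pi at a smooth point
print(sawtooth_partial_sum(np.pi, 1000))             # ~0, the half-sum of the one-sided limits at the jump

# Parseval: (1/(2*pi)) * integral of |s|^2 over one period = sum_n |S[n]|^2 = sum_n b_n**2 / 2 = 1/3.
n = np.arange(1, 200000)
print(np.sum((2.0 * (-1) ** (n + 1) / (np.pi * n)) ** 2) / 2.0)  # approaches 1/3
```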
Many other results concerning the convergence of Fourier series are known, ranging from the moderately simple result that the series converges at $$ x $$ if $$ s $$ is differentiable at $$ x $$ , to more sophisticated results such as Carleson's theorem which states that the Fourier series of an $$ L^2 $$ function converges almost everywhere.

### Divergence
Since Fourier series have such good convergence properties, many are surprised by some of the negative results. For example, the Fourier series of a continuous T-periodic function need not converge pointwise. The uniform boundedness principle yields a simple non-constructive proof of this fact. In 1922, Andrey Kolmogorov published an article titled Une série de Fourier-Lebesgue divergente presque partout in which he gave an example of a Lebesgue-integrable function whose Fourier series diverges almost everywhere. He later constructed an example of an integrable function whose Fourier series diverges everywhere. It is possible to give explicit examples of a continuous function whose Fourier series diverges at 0: for instance, the even and 2π-periodic function f defined for all x in [0,π] by $$ f(x) = \sum_{n=1}^{\infty} \frac{1}{n^2} \sin\left[ \left( 2^{n^3} +1 \right) \frac{x}{2}\right]. $$ Because the function is even, the Fourier series contains only cosines: $$ \sum_{m=0}^\infty C_m \cos(mx). $$ The coefficients are: $$ C_m=\frac 1\pi\sum_{n=1}^{\infty} \frac{1}{n^2} \left\{\frac 2{2^{n^3} +1-2m}+\frac 2{2^{n^3} +1+2m}\right\} $$ As $$ m $$ increases, the coefficients will be positive and increasing until they reach a value of about $$ C_m\approx 2/(n^2\pi) $$ at $$ m=2^{n^3}/2 $$ for some $$ n $$ , and then become negative (starting with a value around $$ -2/(n^2\pi) $$ ) and get smaller, before starting a new such wave. At $$ x=0 $$ the Fourier series is simply the running sum of $$ C_m, $$ and this builds up to around $$ \frac 1{n^2\pi}\sum_{k=0}^{2^{n^3}/2}\frac 2{2k+1}\sim\frac 1{n^2\pi}\ln 2^{n^3}=\frac n\pi\ln 2 $$ in the $$ n $$ th wave before returning to around zero, showing that the series does not converge at zero but reaches higher and higher peaks. Note that though the function is continuous, it is not differentiable.
https://en.wikipedia.org/wiki/Fourier_series
Linear hashing (LH) is a dynamic data structure which implements a hash table and grows or shrinks one bucket at a time. It was invented by Witold Litwin in 1980. It has been analyzed by Baeza-Yates and Soza-Pollman. It is the first in a number of schemes known as dynamic hashing, such as Larson's Linear Hashing with Partial Expansions, Linear Hashing with Priority Splitting, Linear Hashing with Partial Expansions and Priority Splitting, or Recursive Linear Hashing. The file structure of a dynamic hashing data structure adapts itself to changes in the size of the file, so expensive periodic file reorganization is avoided. A Linear Hashing file expands by splitting a predetermined bucket into two and shrinks by merging two predetermined buckets into one. The trigger for a reconstruction depends on the flavor of the scheme; it could be an overflow at a bucket or the load factor (i.e., the number of records divided by the number of buckets) moving outside of a predetermined range. In Linear Hashing there are two types of buckets, those that are to be split and those already split. While extendible hashing splits only overflowing buckets, spiral hashing (a.k.a. spiral storage) distributes records unevenly over the buckets such that buckets with high costs of insertion, deletion, or retrieval are earliest in line for a split. Linear Hashing has also been made into a scalable distributed data structure, LH*. In LH*, each bucket resides at a different server. LH* itself has been expanded to provide data availability in the presence of failed buckets. Key-based operations (inserts, deletes, updates, reads) in LH and LH* take at most constant time, independent of the number of buckets and hence of records.

## Algorithm details
Records in LH or LH* consist of a key and a content, the latter being basically all the other attributes of the record. They are stored in buckets. For example, in Ellis' implementation, a bucket is a linked list of records. The file allows the key-based CRUD operations create or insert, read, update, and delete, as well as a scan operation that scans all records, for example to do a database select operation on a non-key attribute. Records are stored in buckets whose numbering starts with 0. The key distinction from schemes such as Fagin's extendible hashing is that as the file expands due to insertions, only one bucket is split at a time, and the order in which buckets are split is already predetermined.

### Hash functions
The hash function $$ h_i(c) $$ returns the 0-based index of the bucket that contains the record with key $$ c $$ . When a bucket which uses the hash function $$ h_i $$ is split into two new buckets, the hash function $$ h_i $$ is replaced with $$ h_{i+1} $$ for both of those new buckets. At any time, at most two hash functions $$ h_l $$ and $$ h_{l+1} $$ are used, where $$ l $$ corresponds to the current level. The family of hash functions $$ h_i(c) $$ is also referred to as the dynamic hash function. Typically, the value of $$ i $$ in $$ h_i $$ corresponds to the number of rightmost binary digits of the key $$ c $$ that are used to segregate the buckets. This dynamic hash function can be expressed arithmetically as $$ h_i(c) \mapsto (c \bmod 2^i) $$ . Note that when the total number of buckets is equal to one, $$ i=0 $$ . The calculation below determines the correct hash function for a given hashing key $$ c $$ :

```python
def address(c, l, s):
    """Bucket index for key c, given the current level l and split pointer index s."""
    a = c % (2 ** l)            # tentative address using h_l
    if a < s:                   # bucket a has already been split at this level ...
        a = c % (2 ** (l + 1))  # ... so h_{l+1} decides between the two halves
    return a
```
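Putting the dynamic hash function together with the split mechanism described in the following subsections, a minimal in-memory sketch of a linear hash table could look like the following. The bucket capacity, the load-factor threshold, and all names are illustrative assumptions rather than part of Litwin's original formulation; the controlled split and the split-pointer update mirror the rules given below.

```python
# A minimal in-memory sketch of linear hashing with controlled splitting.
class LinearHashTable:
    def __init__(self, bucket_capacity=4, max_load=0.8):
        self.buckets = [[]]            # start with a single bucket (bucket 0)
        self.level = 0                 # l: current level
        self.split = 0                 # s: split pointer
        self.count = 0                 # number of stored records
        self.capacity = bucket_capacity
        self.max_load = max_load

    def _h(self, i, key):
        return hash(key) % (2 ** i)    # dynamic hash function h_i

    def _address(self, key):
        a = self._h(self.level, key)
        if a < self.split:             # this bucket was already split at the current level
            a = self._h(self.level + 1, key)
        return a

    def _split_one(self):
        old = self.split
        self.buckets.append([])        # append the new bucket (index 2**level + split)
        records, self.buckets[old] = self.buckets[old], []
        # Advance the split pointer (and possibly the level) first, so that
        # _address() already uses h_{l+1} for the bucket being split.
        self.split += 1
        if self.split >= 2 ** self.level:
            self.level += 1
            self.split = 0
        for k, v in records:           # redistribute the records of the split bucket
            self.buckets[self._address(k)].append((k, v))

    def insert(self, key, value):
        self.buckets[self._address(key)].append((key, value))
        self.count += 1
        load = self.count / (self.capacity * len(self.buckets))
        if load > self.max_load:       # controlled split, triggered by the load factor
            self._split_one()

    def lookup(self, key):
        for k, v in self.buckets[self._address(key)]:
            if k == key:
                return v
        return None

table = LinearHashTable()
for k in range(100):
    table.insert(k, str(k))
print(len(table.buckets), table.lookup(42))   # the file grows one bucket at a time
```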
### Split control
Linear hashing algorithms may use only controlled splits or both controlled and uncontrolled splits. Controlled splitting occurs if a split is performed whenever the load factor, which is monitored by the file, exceeds a predetermined threshold. If the hash index uses controlled splitting, the buckets are allowed to overflow by using linked overflow blocks. When the load factor surpasses the threshold, the bucket designated by the split pointer is split. Instead of using the load factor, this threshold can also be expressed as an occupancy percentage, in which case the maximum number of records in the hash index equals (occupancy percentage) * (maximum records per non-overflowed bucket) * (number of buckets). An uncontrolled split occurs when a split is performed whenever a bucket overflows, in which case that bucket is split into two separate buckets. File contraction occurs in some LH algorithm implementations when the load factor sinks below a threshold (typically as a result of deletions). In this case, a merge operation is triggered which undoes the last split and resets the file state.

### Split pointer
The index of the next bucket to be split is part of the file state and is called the split pointer $$ s $$ . The split pointer corresponds to the first bucket that uses the hash function $$ h_l $$ instead of $$ h_{l+1} $$ . For example, if numerical records are inserted into the hash index according to their rightmost binary digits, the bucket that is split is the one whose label matches the label of the newly appended bucket in all but its leading binary digit. Thus, if we have the buckets labelled as 000, 001, 10, 11, 100, 101, we would split the bucket 10, because we are appending and creating the next sequential bucket 110. This would give us the buckets 000, 001, 010, 11, 100, 101, 110. When a bucket is split, the split pointer and possibly the level are updated as follows, such that the level is 0 when the linear hashing index has only one bucket:

```python
def advance_split_pointer(l, s):
    """Update level l and split pointer s after a bucket has been split."""
    s = s + 1
    if s >= 2 ** l:   # every bucket of the old level has been split:
        l = l + 1     # the file has doubled, so a new round begins
        s = 0
    return l, s
```

### LH*
The main contribution of LH* is to allow a client of an LH* file to find the bucket where the record resides even if the client does not know the file state. Clients in fact store their version of the file state, which is initially just the knowledge of the first bucket, namely Bucket 0. Based on their file state, a client calculates the address of a key and sends a request to that bucket. At the bucket, the request is checked and, if the record is not at the bucket, it is forwarded. In a reasonably stable system, that is, if there is only one split or merge going on while the request is processed, it can be shown that there are at most two forwards. After a forward, the final bucket sends an Image Adjustment Message to the client, whose state is now closer to the state of the distributed file. While forwards are reasonably rare for active clients, their number can be reduced even further by additional information exchange between servers and clients.

## Other properties

### File state calculation
The file state consists of the split pointer $$ s $$ and the level $$ l $$ . If the original file started with $$ N=1 $$ buckets, then the number of buckets $$ n $$ and the file state are related via $$ n = 2^l+s $$ .

## Adoption in language systems
Griswold and Townsend discussed the adoption of linear hashing in the Icon language.
They discussed the implementation alternatives of the dynamic array algorithm used in linear hashing, and presented performance comparisons using a list of Icon benchmark applications.

## Adoption in database systems
Linear hashing is used in the Berkeley database system (BDB), which in turn is used by many software systems, using a C implementation derived from the CACM article and first published on Usenet in 1988 by Esmond Pitt.

## References

## External links
- TommyDS, C implementation of a Linear Hashtable
- An in Memory Go Implementation with Explanation
- A C++ Implementation of Linear Hashtable which Supports Both Filesystem and In-Memory storage
https://en.wikipedia.org/wiki/Linear_hashing
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear. ## Definition and discussion Let n, m, and p be positive integers. Let X be a subset of Rn (usually a box-constrained one), let f, gi, and hj be real-valued functions on X for each i in {1, ..., m} and each j in {1, ..., p}, with at least one of f, gi, and hj being nonlinear. A nonlinear programming problem is an optimization problem of the form $$ \begin{align} \text{minimize } & f(x) \\ \text{subject to } & g_i(x) \leq 0 \text{ for each } i \in \{1, \dotsc, m\} \\ & h_j(x) = 0 \text{ for each } j \in \{1, \dotsc, p\} \\ & x \in X. \end{align} $$ Depending on the constraint set, there are several possibilities: - feasible problem is one for which there exists at least one set of values for the choice variables satisfying all the constraints. - an infeasible problem is one for which no set of values for the choice variables satisfies all the constraints. That is, the constraints are mutually contradictory, and no solution exists; the feasible set is the empty set. - unbounded problem is a feasible problem for which the objective function can be made to be better than any given finite value. Thus there is no optimal solution, because there is always a feasible solution that gives a better objective function value than does any given proposed solution. Most realistic applications feature feasible problems, with infeasible or unbounded problems seen as a failure of an underlying model. In some cases, infeasible problems are handled by minimizing a sum of feasibility violations. Some special cases of nonlinear programming have specialized solution methods: - If the objective function is concave (maximization problem), or convex (minimization problem) and the constraint set is convex, then the program is called convex and general methods from convex optimization can be used in most cases. - If the objective function is quadratic and the constraints are linear, quadratic programming techniques are used. - If the objective function is a ratio of a concave and a convex function (in the maximization case) and the constraints are convex, then the problem can be transformed to a convex optimization problem using fractional programming techniques. ## Applicability A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size the cost functions may have discontinuities in addition to smooth changes. In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems are also nonlinear. 
Typically, one has a theoretical model of the system under study with variable parameters in it and a model of the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.

## Methods for solving a general nonlinear program

### Analytic methods
Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. If some of the functions are non-differentiable, subdifferential versions of the Karush–Kuhn–Tucker (KKT) conditions are available. Under convexity, the KKT conditions are sufficient for a global optimum. Without convexity, they are in general only necessary: a point satisfying them need not even be a local optimum, and additional (for example second-order) conditions are required to guarantee local optimality. In some cases, the number of local optima is small, and one can find all of them analytically and find the one for which the objective value is smallest.

### Numeric methods
In most realistic cases, it is very hard to solve the KKT conditions analytically, and so the problems are solved using numerical methods. These methods are iterative: they start with an initial point, and then proceed to points that are supposed to be closer to the optimal point, using some update rule. There are three kinds of update rules:
- Zero-order routines - use only the values of the objective function and constraint functions at the current point;
- First-order routines - use also the values of the gradients of these functions;
- Second-order routines - use also the values of the Hessians of these functions.
Third-order routines (and higher) are theoretically possible, but not used in practice, due to the higher computational load and little theoretical benefit.

### Branch and bound
Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex (minimization problem) or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance from the best point found; such points are called ε-optimal. Terminating to ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimation.

## Implementations
There exist numerous nonlinear programming solvers, including open source:
- ALGLIB (C++, C#, Java, Python API) implements several first-order and derivative-free nonlinear programming solvers
- NLopt (C/C++ implementation, with numerous interfaces including Julia, Python, R, MATLAB/Octave), includes various nonlinear programming solvers
- SciPy (de facto standard for scientific Python) has scipy.optimize solver, which includes several nonlinear programming algorithms (zero-order, first order and second order ones).
- IPOPT (C++ implementation, with numerous interfaces including C, Fortran, Java, AMPL, R, Python, etc.) is an interior point method solver (zero-order, and optionally first order and second order derivatives).
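As an illustration of a first-order routine from the classification above, the following is a minimal projected-gradient sketch for a problem whose only constraints are simple bounds; the objective, its gradient, the bounds, the starting point, and the step size are all illustrative assumptions, not part of any particular solver listed here.

```python
# Minimal sketch of a first-order routine: projected gradient descent for
#   minimize f(x)  subject to  lo <= x_i <= hi  (simple box constraints).
import numpy as np

def projected_gradient(grad_f, x0, lo, hi, step=0.1, iters=500):
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * grad_f(x), lo, hi)   # gradient step, then project back onto the box
    return x

# Illustrative problem: minimize (x1 - 2)^2 + (x2 + 1)^2 over the box [0, 1] x [0, 1].
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)])
x_opt = projected_gradient(grad_f, x0=[0.5, 0.5], lo=0.0, hi=1.0)
print(x_opt, f(x_opt))   # approaches x = [1, 0], with objective value 2
```

General-purpose solvers such as those listed above add line searches, handling of general nonlinear constraints, and convergence tests on top of this basic pattern.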
## Numerical Examples

### 2-dimensional example
A simple problem (shown in the diagram) can be defined by the constraints $$ \begin{align} x_1 &\geq 0 \\ x_2 &\geq 0 \\ x_1^2 + x_2^2 &\geq 1 \\ x_1^2 + x_2^2 &\leq 2 \end{align} $$ with an objective function to be maximized $$ f(\mathbf x) = x_1 + x_2 $$ where $$ \mathbf x = (x_1, x_2) $$ .

### 3-dimensional example
Another simple problem (see diagram) can be defined by the constraints $$ \begin{align} x_1^2 - x_2^2 + x_3^2 &\leq 2 \\ x_1^2 + x_2^2 + x_3^2 &\leq 10 \end{align} $$ with an objective function to be maximized $$ f(\mathbf x) = x_1 x_2 + x_2 x_3 $$ where $$ \mathbf x = (x_1, x_2, x_3) $$ .
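The 2-dimensional example can also be solved numerically; the following is a sketch using SciPy's SLSQP method (one of the scipy.optimize algorithms mentioned under Implementations). The starting point is an arbitrary feasible choice, and since SciPy minimizes, the objective is negated.

```python
# Solve the 2-dimensional example: maximize x1 + x2 subject to
# x1, x2 >= 0 and 1 <= x1^2 + x2^2 <= 2.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: -(x[0] + x[1])                 # negate to maximize

constraints = [
    {"type": "ineq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 1.0},   # x1^2 + x2^2 >= 1
    {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2 - x[1] ** 2},   # x1^2 + x2^2 <= 2
]
bounds = [(0.0, None), (0.0, None)]                  # x1 >= 0, x2 >= 0

result = minimize(objective, x0=np.array([0.5, 1.0]), method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, -result.fun)                         # optimum near x = (1, 1) with f = 2
```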
https://en.wikipedia.org/wiki/Nonlinear_programming
The Open Systems Interconnection (OSI) model is a reference model developed by the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection." In the OSI reference model, the components of a communication system are distinguished in seven abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. The model describes communications from the physical implementation of transmitting bits across a transmission medium to the highest-level representation of data of a distributed application. Each layer has well-defined functions and semantics and serves a class of functionality to the layer above it and is served by the layer below it. Established, well-known communication protocols are decomposed in software development into the model's hierarchy of function calls. The Internet protocol suite is a model of networking developed contemporaneously with the OSI model, and was funded primarily by the U.S. Department of Defense. It was the foundation for the development of the Internet. It assumed the presence of generic physical links and focused primarily on the software layers of communication, with a similar but much less rigorous structure than the OSI model. In comparison, several networking models have sought to create an intellectual framework for clarifying networking concepts and activities, but none have been as successful as the OSI reference model in becoming the standard model for discussing and teaching networking in the field of information technology. The model allows transparent communication through equivalent exchange of protocol data units (PDUs) between two parties, through what is known as peer-to-peer networking (also known as peer-to-peer communication). As a result, the OSI reference model has become an important reference among professionals and non-professionals alike, and for networking between parties of all kinds, due in large part to its commonly accepted, user-friendly framework.

## History
The development of the OSI model started in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world (see OSI protocols and Protocol Wars). In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to garner reliance during the design of the Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF). In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s. The Experimental Packet Switched System in the UK identified the need for defining higher-level protocols.
The UK National Computing Centre publication, Why Distributed Computing, which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. Beginning in 1977, the ISO initiated a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The British Department of Trade and Industry acted as the secretariat, and universities in the United Kingdom developed prototypes of the standards. The OSI model was first defined in raw form in Washington, D.C., in February 1978 by French software engineer Hubert Zimmermann, and the refined but still draft standard was published by the ISO in 1980. The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined. In May 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software. The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Network Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it. The OSI standards documents are available from the ITU-T as the X.200 series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge. OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. 
For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking. The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach. ## Definitions Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI model, abstractly describe the functionality provided to a layer N by a layer N−1, where N is one of the seven layers of protocols operating in the local host. At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers. Data processing by two communicating OSI-compatible devices proceeds as follows: 1. The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU). 1. The PDU is passed to layer N−1, where it is known as the service data unit (SDU). 1. At layer N−1 the SDU is concatenated with a header, a footer, or both, producing a layer N−1 PDU. It is then passed to layer N−2. 1. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device. 1. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed. ### Standards documents The OSI model was defined in ISO/IEC 7498 which consists of the following parts: - ISO/IEC 7498-1 The Basic Model - ISO/IEC 7498-2 Security Architecture - ISO/IEC 7498-3 Naming and addressing - ISO/IEC 7498-4 Management framework ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200. ## Layer architecture The recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model. ### Layer 1: Physical layer The physical layer is responsible for the transmission and reception of unstructured raw data between a device, such as a network interface controller, Ethernet hub, or network switch, and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals (analogue signals). Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of the network topology. 
Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard. The physical layer also specifies how encoding occurs over a physical signal, such as electrical voltage or a light pulse. For example, a 1 bit might be represented on a copper wire by the transition from a 0-volt to a 5-volt signal, whereas a 0 bit might be represented by the transition from a 5-volt to a 0-volt signal. As a result, common problems occurring at the physical layer are often related to the incorrect media termination, EMI or noise scrambling, and NICs and hubs that are misconfigured or do not work correctly. ### Layer 2: Data link layer The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them. IEEE 802 divides the data link layer into two sublayers: - Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data. - Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization. The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 Zigbee operate at the data link layer. The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol. Security, specifically (authenticated) encryption, at this layer can be applied with MACsec. ### Layer 3: Network layer The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors. Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it does not need to do so. A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. 
It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.

### Layer 4: Transport layer
The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source host to a destination host from one application to another across a network while maintaining the quality-of-service functions. Transport protocols may be connection-oriented or connectionless. This may require breaking large protocol data units or long data streams into smaller chunks called "segments", since the network layer imposes a maximum packet size called the maximum transmission unit (MTU), which depends on the maximum packet size imposed by all data link layers on the network path between the two hosts. The amount of data in a data segment must be small enough to allow for a network-layer header and a transport-layer header. For example, for data being transferred across Ethernet, the MTU is 1500 bytes, the minimum size of a TCP header is 20 bytes, and the minimum size of an IPv4 header is 20 bytes, so the maximum segment size is 1500−(20+20) bytes, or 1460 bytes. The process of dividing data into segments is called segmentation; it is an optional function of the transport layer. Some connection-oriented transport protocols, such as TCP and the OSI connection-oriented transport protocol (COTP), perform segmentation and reassembly of segments on the receiving side; connectionless transport protocols, such as UDP and the OSI connectionless transport protocol (CLTP), usually do not. The transport layer also controls the reliability of a given link between a source and destination host through flow control, error control, and acknowledgments of sequence and existence. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery through the acknowledgment hand-shake system. The transport layer will also provide the acknowledgement of the successful data transmission and send the next data if no errors occurred. Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem. The OSI connection-oriented transport protocol defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of the TP0–TP4 classes are compared feature by feature; the features considered for each class are:
- connection-oriented network
- connectionless network
- concatenation and separation
- segmentation and reassembly
- error recovery
- reinitiating the connection (if an excessive number of PDUs are unacknowledged)
- multiplexing and demultiplexing over a single virtual circuit
- explicit flow control
- retransmission on timeout
- reliable transport service
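The segmentation arithmetic quoted above (an Ethernet MTU of 1500 bytes minus the minimum IPv4 and TCP headers) can be written out as a small sketch; the 9000-byte value is an assumed jumbo-frame MTU used only for illustration.

```python
# Maximum segment size = MTU minus the network- and transport-layer headers.
def max_segment_size(mtu, network_header=20, transport_header=20):
    return mtu - (network_header + transport_header)

print(max_segment_size(1500))   # Ethernet example from the text: 1460 bytes
print(max_segment_size(9000))   # an assumed jumbo-frame MTU: 8960 bytes
```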
An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, for example carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or providing end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer 4 protocols within OSI. Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers.

### Layer 5: Session layer
The session layer creates the setup, controls the connections, and ends the teardown between two or more computers, which is called a "session". Common functions of the session layer include user logon (establishment) and user logoff (termination). In addition, authentication methods are built into most client software, such as FTP Client and NFS Client for Microsoft Networks. Therefore, the session layer establishes, manages and terminates the connections between the local and remote applications. The session layer also provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session between two related streams of data, such as an audio and a video stream in a web-conferencing application. Therefore, the session layer is commonly implemented explicitly in application environments that use remote procedure calls.

### Layer 6: Presentation layer
The presentation layer establishes data formatting and data translation into a format specified by the application layer during the encapsulation of outgoing messages while being passed down the protocol stack, and possibly reverses this during the deencapsulation of incoming messages when being passed up the protocol stack. For this very reason, outgoing messages during encapsulation are converted into a format specified by the application layer, while the conversion for incoming messages during deencapsulation is reversed. The presentation layer handles protocol conversion, data encryption, data decryption, data compression, data decompression, incompatibility of data representation between operating systems, and graphic commands. The presentation layer transforms data into the form that the application layer accepts, to be sent across a network. Since the presentation layer converts data and graphics into a display format for the application layer, the presentation layer is sometimes called the syntax layer.
For this reason, the presentation layer negotiates the transfer syntax through the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML. ### Layer 7: Application layer The application layer is the layer of the OSI model that is closest to the end user, which means both the OSI application layer and the user interact directly with a software application that implements a component of communication between the client and server, such as File Explorer and Microsoft Word. Such application programs fall outside the scope of the OSI model unless they are directly integrated into the application layer through the functions of communication, as is the case with applications such as web browsers and email programs. Other examples of software are Microsoft Network Software for File and Printer Sharing and Unix/Linux Network File System Client for access to shared file resources. Application-layer functions typically include file sharing, message handling, and database access, through the most common application-layer protocols, such as HTTP, FTP, SMB/CIFS, TFTP, and SMTP. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application entity and the application. For example, a reservation website might have two application entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network. ## Cross-layer functions Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (see the ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of the transmitted data. Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols. Specific examples of cross-layer functions include the following: - Security service (telecommunication) as defined by the ITU-T X.800 Recommendation. - Management functions, i.e. functions that make it possible to configure, instantiate, monitor, and terminate the communications of two or more entities. There is a specific application-layer protocol, the Common Management Information Protocol (CMIP), with its corresponding service, the Common Management Information Service (CMIS); these need to interact with every layer in order to deal with their instances. - Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols; OSI subdivides the Network Layer into three sublayers: 3a) Subnetwork Access, 3b) Subnetwork Dependent Convergence and 3c) Subnetwork Independent Convergence. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model.
It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a Layer 2.5. - Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided. ## Programming interfaces Neither the OSI Reference Model nor any OSI protocol specification outlines any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific. For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3). ## Comparison to other networking suites A comparison of the OSI layers with the original OSI protocols and some approximate modern matches is necessarily rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack in the modern Internet. Such a comparison typically sets the seven layers against protocol examples drawn from the TCP/IP suite, Signaling System 7, AppleTalk, IPX, SNA, UMTS, and HTTP-based protocols, together with miscellaneous examples such as presentation services at layer 6, sockets at layer 5, ATP at layer 3, IEEE 802.3 framing and Ethernet II framing at layer 2, and the UMTS air interfaces at layer 1, along with notes that a port number can be specified at the transport layer, that IP addresses can be used instead of domain names in URLs, that several of these stacks treat the lower layers as out of scope, and that the TCP/IP stack does not care about the physical medium, as long as it provides a way to communicate octets. ### Comparison with TCP/IP model The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network. Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner: - The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer. - The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer. - The internet layer performs functions corresponding to a subset of those of the OSI network layer. - The link layer corresponds to the OSI data link layer and may include similar functions to the physical layer, as well as some protocols of the OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer. The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack.
This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable. Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142.
https://en.wikipedia.org/wiki/OSI_model
Alternative models to the Standard Higgs Model are models which many particle physicists consider as ways to solve some of the Higgs boson's existing problems; two of the most actively researched of these problems are quantum triviality and the Higgs hierarchy problem. ## Overview In particle physics, elementary particles and forces give rise to the world around us. Physicists explain the behaviors of these particles and how they interact using the Standard Model, a widely accepted framework believed to explain most of the world we see around us. Initially, when these models were being developed and tested, it seemed that the mathematics behind those models, which were satisfactory in areas already tested, would also forbid elementary particles from having any mass, which showed clearly that these initial models were incomplete. In 1964 three groups of physicists almost simultaneously released papers describing how masses could be given to these particles, using approaches known as symmetry breaking. This approach allowed the particles to obtain a mass, without breaking other parts of particle physics theory that were already believed reasonably correct. This idea became known as the Higgs mechanism, and later experiments confirmed that such a mechanism does exist, but they could not show exactly how it happens. The simplest theory for how this effect takes place in nature, and the theory that became incorporated into the Standard Model, was that if one or more of a particular kind of "field" (known as a Higgs field) happened to permeate space, and if it could interact with elementary particles in a particular way, then this would give rise to a Higgs mechanism in nature. In the basic Standard Model there is one field and one related Higgs boson; in some extensions to the Standard Model there are multiple fields and multiple Higgs bosons. In the years since the Higgs field and boson were proposed as a way to explain the origins of symmetry breaking, several alternatives have been proposed that suggest how a symmetry breaking mechanism could occur without requiring a Higgs field to exist. Models which do not include a Higgs field or a Higgs boson are known as Higgsless models. In these models, strongly interacting dynamics rather than an additional (Higgs) field produce the non-zero vacuum expectation value that breaks electroweak symmetry. ## List of alternative models A partial list of proposed alternatives to a Higgs field as a source for symmetry breaking includes: - Technicolor models break electroweak symmetry through new gauge interactions, which were originally modeled on quantum chromodynamics. - Extra-dimensional Higgsless models use the fifth component of the gauge fields to play the role of the Higgs fields. It is possible to produce electroweak symmetry breaking by imposing certain boundary conditions on the extra dimensional fields, increasing the unitarity breakdown scale up to the energy scale of the extra dimension. Through the AdS/QCD correspondence this model can be related to technicolor models and to "UnHiggs" models in which the Higgs field is of unparticle nature. - Models of composite W and Z vector bosons. - Top quark condensate. - "Unitary Weyl gauge". By adding a suitable gravitational term to the standard model action in curved spacetime, the theory develops a local conformal (Weyl) invariance. The conformal gauge is fixed by choosing a reference mass scale based on the gravitational coupling constant.
This approach generates the masses for the vector bosons and matter fields similarly to the Higgs mechanism, without traditional spontaneous symmetry breaking. - Asymptotically safe weak interactions based on some nonlinear sigma models. - Preon models, and models inspired by preons such as the Ribbon model of Standard Model particles by Sundance Bilson-Thompson, based on braid theory and compatible with loop quantum gravity and similar theories. This model not only explains mass but leads to an interpretation of electric charge as a topological quantity (twists carried on the individual ribbons) and colour charge as modes of twisting. - Symmetry breaking driven by non-equilibrium dynamics of quantum fields above the electroweak scale. - Unparticle physics and the unhiggs: models that posit that the Higgs sector and the Higgs boson are scale invariant, also known as unparticle physics. - In the theory of superfluid vacuum, masses of elementary particles can arise as a result of interaction with the physical vacuum, similarly to the gap-generation mechanism in superconductors. - UV-completion by classicalization, in which the unitarization of the WW scattering happens by creation of classical configurations.
https://en.wikipedia.org/wiki/Alternatives_to_the_Standard_Higgs_Model
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value if such a point exists. Linear programs are problems that can be expressed in standard form as: $$ \begin{align} & \text{Find a vector} && \mathbf{x} \\ & \text{that maximizes} && \mathbf{c}^\mathsf{T} \mathbf{x}\\ & \text{subject to} && A \mathbf{x} \le \mathbf{b} \\ & \text{and} && \mathbf{x} \ge \mathbf{0}. \end{align} $$ Here the components of $$ \mathbf{x} $$ are the variables to be determined, $$ \mathbf{c} $$ and $$ \mathbf{b} $$ are given vectors, and $$ A $$ is a given matrix. The function whose value is to be maximized ( $$ \mathbf x\mapsto\mathbf{c}^\mathsf{T}\mathbf{x} $$ in this case) is called the objective function. The constraints $$ A \mathbf{x} \le \mathbf{b} $$ and $$ \mathbf{x} \geq \mathbf{0} $$ specify a convex polytope over which the objective function is to be optimized. Linear programming can be applied to various fields of study. It is widely used in mathematics and, to a lesser extent, in business, economics, and some engineering problems. There is a close connection between linear programs, eigenequations, John von Neumann's general equilibrium model, and structural equilibrium models (see dual linear program for details). Industries that use linear programming models include transportation, energy, telecommunications, and manufacturing. It has proven useful in modeling diverse types of problems in planning, routing, scheduling, assignment, and design. ## History The problem of solving a system of linear inequalities dates back at least as far as Fourier, who in 1827 published a method for solving them, and after whom the method of Fourier–Motzkin elimination is named. In the late 1930s, Soviet mathematician Leonid Kantorovich and American economist Wassily Leontief independently delved into the practical applications of linear programming. Kantorovich focused on manufacturing schedules, while Leontief explored economic applications. Their groundbreaking work was largely overlooked for decades. The turning point came during World War II when linear programming emerged as a vital tool. It found extensive use in addressing complex wartime challenges, including transportation logistics, scheduling, and resource allocation. Linear programming proved invaluable in optimizing these processes while considering critical constraints such as costs and resource availability. Despite its initial obscurity, the wartime successes propelled linear programming into the spotlight. Post-WWII, the method gained widespread recognition and became a cornerstone in various fields, from operations research to economics. 
The overlooked contributions of Kantorovich and Leontief in the late 1930s eventually became foundational to the broader acceptance and utilization of linear programming in optimizing decision-making processes. Kantorovich's work was initially neglected in the USSR. About the same time as Kantorovich, the Dutch-American economist T. C. Koopmans formulated classical economic problems as linear programs. Kantorovich and Koopmans later shared the 1975 Nobel Memorial Prize in Economic Sciences. In 1941, Frank Lauren Hitchcock also formulated transportation problems as linear programs and gave a solution very similar to the later simplex method. Hitchcock had died in 1957, and the Nobel Memorial Prize is not awarded posthumously. From 1946 to 1947 George B. Dantzig independently developed a general linear programming formulation to use for planning problems in the US Air Force. In 1947, Dantzig also invented the simplex method, which for the first time efficiently tackled the linear programming problem in most cases. When Dantzig arranged a meeting with John von Neumann to discuss his simplex method, von Neumann immediately conjectured the theory of duality by realizing that the problem he had been working on in game theory was equivalent. Dantzig provided formal proof in an unpublished report "A Theorem on Linear Inequalities" on January 5, 1948. Dantzig's work was made available to the public in 1951. In the post-war years, many industries applied it in their daily planning. Dantzig's original example was to find the best assignment of 70 people to 70 jobs. The computing power required to test all the permutations to select the best assignment is vast; the number of possible configurations exceeds the number of particles in the observable universe. However, it takes only a moment to find the optimum solution by posing the problem as a linear program and applying the simplex algorithm. The theory behind linear programming drastically reduces the number of possible solutions that must be checked. The linear programming problem was first shown to be solvable in polynomial time by Leonid Khachiyan in 1979, but a larger theoretical and practical breakthrough in the field came in 1984 when Narendra Karmarkar introduced a new interior-point method for solving linear-programming problems. ## Uses Linear programming is a widely used field of optimization for several reasons. Many practical problems in operations research can be expressed as linear programming problems. Certain special cases of linear programming, such as network flow problems and multicommodity flow problems, are considered important enough to have generated much research on specialized algorithms. A number of algorithms for other types of optimization problems work by solving linear programming problems as sub-problems. Historically, ideas from linear programming have inspired many of the central concepts of optimization theory, such as duality, decomposition, and the importance of convexity and its generalizations. Likewise, linear programming was heavily used in the early formation of microeconomics, and it is currently utilized in company management, such as planning, production, transportation, and technology. Although modern management issues are ever-changing, most companies would like to maximize profits and minimize costs with limited resources. Google also uses linear programming to stabilize YouTube videos. ## Standard form Standard form is the usual and most intuitive form of describing a linear programming problem.
It consists of the following three parts: - A linear (or affine) function to be maximized, e.g. $$ f(x_{1},x_{2}) = c_1 x_1 + c_2 x_2 $$ - Problem constraints of the following form, e.g. $$ \begin{matrix} a_{11} x_1 + a_{12} x_2 &\leq b_1 \\ a_{21} x_1 + a_{22} x_2 &\leq b_2 \\ a_{31} x_1 + a_{32} x_2 &\leq b_3 \\ \end{matrix} $$ - Non-negative variables, e.g. $$ \begin{matrix} x_1 \geq 0 \\ x_2 \geq 0 \end{matrix} $$ The problem is usually expressed in matrix form, and then becomes: $$ \max \{\, \mathbf{c}^\mathsf{T} \mathbf{x} \mid \mathbf{x}\in\mathbb{R}^n\land A \mathbf{x} \leq \mathbf{b} \land \mathbf{x} \geq 0 \,\} $$ Other forms, such as minimization problems, problems with constraints in alternative forms, and problems involving negative variables, can always be rewritten into an equivalent problem in standard form. ### Example Suppose that a farmer has a piece of farm land, say L hectares, to be planted with either wheat or barley or some combination of the two. The farmer has F kilograms of fertilizer and P kilograms of pesticide. Every hectare of wheat requires F1 kilograms of fertilizer and P1 kilograms of pesticide, while every hectare of barley requires F2 kilograms of fertilizer and P2 kilograms of pesticide. Let S1 be the selling price of wheat and S2 be the selling price of barley, per hectare. If we denote the area of land planted with wheat and barley by x1 and x2 respectively, then profit can be maximized by choosing optimal values for x1 and x2. This problem can be expressed as the following linear programming problem in standard form:

Maximize $$ S_1\cdot x_1 + S_2\cdot x_2 $$ (the revenue, i.e. the total wheat sales plus the total barley sales; revenue is the "objective function")
subject to $$ x_1 + x_2 \le L $$ (limit on total area)
$$ F_1\cdot x_1 + F_2\cdot x_2 \le F $$ (limit on fertilizer)
$$ P_1\cdot x_1 + P_2\cdot x_2 \le P $$ (limit on pesticide)
$$ x_1 \ge 0, x_2 \ge 0 $$ (cannot plant a negative area).

In matrix form this becomes: maximize $$ \begin{bmatrix} S_1 & S_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} $$ subject to $$ \begin{bmatrix} 1 & 1 \\ F_1 & F_2 \\ P_1 & P_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le \begin{bmatrix} L \\ F \\ P \end{bmatrix}, \, \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \ge \begin{bmatrix} 0 \\ 0 \end{bmatrix}. $$ ## Augmented form (slack form) Linear programming problems can be converted into an augmented form in order to apply the common form of the simplex algorithm. This form introduces non-negative slack variables to replace inequalities with equalities in the constraints. The problems can then be written in the following block matrix form: Maximize $$ z $$ : $$ \begin{bmatrix} 1 & -\mathbf{c}^\mathsf{T} & 0 \\ 0 & \mathbf{A} & \mathbf{I} \end{bmatrix} \begin{bmatrix} z \\ \mathbf{x} \\ \mathbf{s} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{b} \end{bmatrix} $$ $$ \mathbf{x} \ge 0, \mathbf{s} \ge 0 $$ where $$ \mathbf{s} $$ are the newly introduced slack variables, $$ \mathbf{x} $$ are the decision variables, and $$ z $$ is the variable to be maximized. Example The example above is converted into the following augmented form:

Maximize $$ S_1\cdot x_1+S_2\cdot x_2 $$ (objective function)
subject to $$ x_1 + x_2 + x_3 = L $$ (augmented constraint)
$$ F_1\cdot x_1+F_2\cdot x_2 + x_4 = F $$ (augmented constraint)
$$ P_1\cdot x_1 + P_2\cdot x_2 + x_5 = P $$ (augmented constraint)
$$ x_1,x_2,x_3,x_4,x_5 \ge 0, $$

where $$ x_3, x_4, x_5 $$ are (non-negative) slack variables, representing in this example the unused area, the amount of unused fertilizer, and the amount of unused pesticide.
In matrix form this becomes: Maximize $$ z $$ : $$ \begin{bmatrix} 1 & -S_1 & -S_2 & 0 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 0 \\ 0 & F_1 & F_2 & 0 & 1 & 0 \\ 0 & P_1 & P_2 & 0 & 0 & 1 \\ \end{bmatrix} \begin{bmatrix} z \\ x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} = \begin{bmatrix} 0 \\ L \\ F \\ P \end{bmatrix}, \, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \end{bmatrix} \ge 0. $$ ## Duality Every linear programming problem, referred to as a primal problem, can be converted into a dual problem, which provides an upper bound to the optimal value of the primal problem. In matrix form, we can express the primal problem as: maximize $$ \mathbf{c}^\mathsf{T}\mathbf{x} $$ subject to $$ A\mathbf{x} \le \mathbf{b} $$, $$ \mathbf{x} \ge \mathbf{0} $$; with the corresponding symmetric dual problem: minimize $$ \mathbf{b}^\mathsf{T}\mathbf{y} $$ subject to $$ A^\mathsf{T}\mathbf{y} \ge \mathbf{c} $$, $$ \mathbf{y} \ge \mathbf{0} $$. An alternative primal formulation is: maximize $$ \mathbf{c}^\mathsf{T}\mathbf{x} $$ subject to $$ A\mathbf{x} \le \mathbf{b} $$; with the corresponding asymmetric dual problem: minimize $$ \mathbf{b}^\mathsf{T}\mathbf{y} $$ subject to $$ A^\mathsf{T}\mathbf{y} = \mathbf{c} $$, $$ \mathbf{y} \ge \mathbf{0} $$. There are two ideas fundamental to duality theory. One is the fact that (for the symmetric dual) the dual of a dual linear program is the original primal linear program. Additionally, every feasible solution for a linear program gives a bound on the optimal value of the objective function of its dual. The weak duality theorem states that the objective function value of the dual at any feasible solution is always greater than or equal to the objective function value of the primal at any feasible solution. The strong duality theorem states that if the primal has an optimal solution, $$ \mathbf{x}^* $$, then the dual also has an optimal solution, $$ \mathbf{y}^* $$, and $$ \mathbf{c}^\mathsf{T}\mathbf{x}^* = \mathbf{b}^\mathsf{T}\mathbf{y}^* $$. A linear program can also be unbounded or infeasible. Duality theory tells us that if the primal is unbounded then the dual is infeasible by the weak duality theorem. Likewise, if the dual is unbounded, then the primal must be infeasible. However, it is possible for both the dual and the primal to be infeasible. See dual linear program for details and several more examples. ## Variations ### Covering/packing dualities A covering LP is a linear program of the form: minimize $$ \mathbf{b}^\mathsf{T}\mathbf{y} $$ subject to $$ A^\mathsf{T}\mathbf{y} \ge \mathbf{c} $$, $$ \mathbf{y} \ge \mathbf{0} $$, such that the matrix A and the vectors b and c are non-negative. The dual of a covering LP is a packing LP, a linear program of the form: maximize $$ \mathbf{c}^\mathsf{T}\mathbf{x} $$ subject to $$ A\mathbf{x} \le \mathbf{b} $$, $$ \mathbf{x} \ge \mathbf{0} $$, such that the matrix A and the vectors b and c are non-negative. #### Examples Covering and packing LPs commonly arise as a linear programming relaxation of a combinatorial problem and are important in the study of approximation algorithms. For example, the LP relaxations of the set packing problem, the independent set problem, and the matching problem are packing LPs. The LP relaxations of the set cover problem, the vertex cover problem, and the dominating set problem are also covering LPs. Finding a fractional coloring of a graph is another example of a covering LP. In this case, there is one constraint for each vertex of the graph and one variable for each independent set of the graph. ## Complementary slackness It is possible to obtain an optimal solution to the dual when only an optimal solution to the primal is known using the complementary slackness theorem. The theorem states: Suppose that $$ \mathbf{x} = (x_1, x_2, \dots, x_n) $$ is primal feasible and that $$ \mathbf{y} = (y_1, y_2, \dots, y_m) $$ is dual feasible. Let $$ (w_1, w_2, \dots, w_m) $$ denote the corresponding primal slack variables, and let $$ (z_1, z_2, \dots, z_n) $$ denote the corresponding dual slack variables. Then x and y are optimal for their respective problems if and only if - $$ x_j z_j = 0 $$, for j = 1, 2, ... , n, and - $$ w_i y_i = 0 $$, for i = 1, 2, ...
, m. So if the i-th slack variable of the primal is not zero, then the i-th variable of the dual is equal to zero. Likewise, if the j-th slack variable of the dual is not zero, then the j-th variable of the primal is equal to zero. This necessary condition for optimality conveys a fairly simple economic principle. In standard form (when maximizing), if there is slack in a constrained primal resource (i.e., there are "leftovers"), then additional quantities of that resource must have no value. Likewise, if there is slack in the dual (shadow) price non-negativity constraint requirement, i.e., the price is not zero, then there must be scarce supplies (no "leftovers"). ## Theory ### Existence of optimal solutions Geometrically, the linear constraints define the feasible region, which is a convex polytope. A linear function is a convex function, which implies that every local minimum is a global minimum; similarly, a linear function is a concave function, which implies that every local maximum is a global maximum. An optimal solution need not exist, for two reasons. First, if the constraints are inconsistent, then no feasible solution exists: For instance, the constraints x ≥ 2 and x ≤ 1 cannot be satisfied jointly; in this case, we say that the LP is infeasible. Second, when the polytope is unbounded in the direction of the gradient of the objective function (where the gradient of the objective function is the vector of the coefficients of the objective function), then no optimal value is attained because it is always possible to do better than any finite value of the objective function. ### Optimal vertices (and rays) of polyhedra Otherwise, if a feasible solution exists and if the constraint set is bounded, then the optimum value is always attained on the boundary of the constraint set, by the maximum principle for convex functions (alternatively, by the minimum principle for concave functions) since linear functions are both convex and concave. However, some problems have distinct optimal solutions; for example, the problem of finding a feasible solution to a system of linear inequalities is a linear programming problem in which the objective function is the zero function (i.e., the constant function taking the value zero everywhere). For this feasibility problem with the zero-function for its objective-function, if there are two distinct solutions, then every convex combination of the solutions is a solution. The vertices of the polytope are also called basic feasible solutions. The reason for this choice of name is as follows. Let d denote the number of variables. Then the fundamental theorem of linear inequalities implies (for feasible problems) that for every vertex x* of the LP feasible region, there exists a set of d (or fewer) inequality constraints from the LP such that, when we treat those d constraints as equalities, the unique solution is x*. Thereby we can study these vertices by means of looking at certain subsets of the set of all constraints (a discrete set), rather than the continuum of LP solutions. This principle underlies the simplex algorithm for solving linear programs. ## Algorithms ### Basis exchange algorithms #### Simplex algorithm of Dantzig The simplex algorithm, developed by George Dantzig in 1947, solves LP problems by constructing a feasible solution at a vertex of the polytope and then walking along a path on the edges of the polytope to vertices with non-decreasing values of the objective function until an optimum is reached for sure. 
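Before turning to how the simplex method behaves in practice, the duality and complementary slackness statements above can be checked numerically. The following sketch is not taken from the original text: the numbers standing in for S1, S2, L, F, P, F1, F2, P1 and P2 in the farmer example are invented for illustration, and scipy.optimize.linprog is used as one LP solver among the many listed later in this article.

```python
# A minimal sketch, assuming invented data for the farmer example:
# solve the primal, solve the symmetric dual, and check strong duality
# and complementary slackness numerically.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.2, 1.7])              # assumed revenues per hectare (S1, S2)
A = np.array([[1.0, 1.0],             # land used per hectare
              [2.0, 3.0],             # fertilizer used per hectare (F1, F2)
              [4.0, 2.0]])            # pesticide used per hectare (P1, P2)
b = np.array([100.0, 240.0, 300.0])   # resource limits (L, F, P)

# Primal: maximize c^T x subject to A x <= b, x >= 0.
# linprog minimizes, so the objective is negated.
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Symmetric dual: minimize b^T y subject to A^T y >= c, y >= 0,
# rewritten for linprog as -A^T y <= -c.
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3)

x_star, y_star = primal.x, dual.x
print("primal optimum:", c @ x_star)      # strong duality: both optima coincide
print("dual optimum:  ", b @ y_star)

# Complementary slackness: slack in a primal constraint (w_i > 0) forces the
# matching dual variable to zero, and vice versa.
w = b - A @ x_star                        # primal slacks
z = A.T @ y_star - c                      # dual slacks
print("w_i * y_i:", w * y_star)           # approximately zero componentwise
print("x_j * z_j:", x_star * z)           # approximately zero componentwise
```

Any of the solvers listed later in the article could be substituted; the point is only that the optimal primal and dual objective values coincide and that the componentwise products of slacks with their matching variables vanish, up to solver tolerance.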
In many practical problems, "stalling" occurs: many pivots are made with no increase in the objective function. In rare practical problems, the usual versions of the simplex algorithm may actually "cycle". To avoid cycles, researchers developed new pivoting rules. In practice, the simplex algorithm is quite efficient and can be guaranteed to find the global optimum if certain precautions against cycling are taken. The simplex algorithm has been proved to solve "random" problems efficiently, i.e. in a cubic number of steps, which is similar to its behavior on practical problems. However, the simplex algorithm has poor worst-case behavior: Klee and Minty constructed a family of linear programming problems for which the simplex method takes a number of steps exponential in the problem size. In fact, for some time it was not known whether the linear programming problem was solvable in polynomial time, i.e. of complexity class P. #### Criss-cross algorithm Like the simplex algorithm of Dantzig, the criss-cross algorithm is a basis-exchange algorithm that pivots between bases. However, the criss-cross algorithm need not maintain feasibility, but can pivot rather from a feasible basis to an infeasible basis. The criss-cross algorithm does not have polynomial time-complexity for linear programming. Both algorithms visit all $$ 2^D $$ corners of a (perturbed) cube in dimension $$ D $$, the Klee–Minty cube, in the worst case. ### Interior point In contrast to the simplex algorithm, which finds an optimal solution by traversing the edges between vertices on a polyhedral set, interior-point methods move through the interior of the feasible region. #### Ellipsoid algorithm, following Khachiyan This is the first worst-case polynomial-time algorithm ever found for linear programming. To solve a problem which has n variables and can be encoded in L input bits, this algorithm runs in $$ O(n^6 L) $$ time. Leonid Khachiyan solved this long-standing complexity issue in 1979 with the introduction of the ellipsoid method. The convergence analysis has (real-number) predecessors, notably the iterative methods developed by Naum Z. Shor and the approximation algorithms by Arkadi Nemirovski and D. Yudin. #### Projective algorithm of Karmarkar Khachiyan's algorithm was of landmark importance for establishing the polynomial-time solvability of linear programs. The algorithm was not a computational break-through, as the simplex method is more efficient for all but specially constructed families of linear programs. However, Khachiyan's algorithm inspired new lines of research in linear programming. In 1984, N. Karmarkar proposed a projective method for linear programming. Karmarkar's algorithm improved on Khachiyan's worst-case polynomial bound (giving $$ O(n^{3.5}L) $$ ). Karmarkar claimed that his algorithm was much faster in practical LP than the simplex method, a claim that created great interest in interior-point methods. Since Karmarkar's discovery, many interior-point methods have been proposed and analyzed. #### Vaidya's 87 algorithm In 1987, Vaidya proposed an algorithm that runs in $$ O(n^3) $$ time. #### Vaidya's 89 algorithm In 1989, Vaidya developed an algorithm that runs in $$ O(n^{2.5}) $$ time. Formally speaking, the algorithm takes $$ O( (n+d)^{1.5} n L) $$ arithmetic operations in the worst case, where $$ d $$ is the number of constraints, $$ n $$ is the number of variables, and $$ L $$ is the number of bits.
#### Input sparsity time algorithms In 2015, Lee and Sidford showed that linear programming can be solved in $$ \tilde O((nnz(A) + d^2)\sqrt{d}L) $$ time, where $$ \tilde O $$ denotes the soft O notation and $$ nnz(A) $$ represents the number of non-zero elements of $$ A $$; the method still takes $$ O(n^{2.5}L) $$ time in the worst case. #### Current matrix multiplication time algorithm In 2019, Cohen, Lee and Song improved the running time to $$ \tilde O( ( n^{\omega} + n^{2.5-\alpha/2} + n^{2+1/6} ) L) $$, where $$ \omega $$ is the exponent of matrix multiplication and $$ \alpha $$ is the dual exponent of matrix multiplication. $$ \alpha $$ is (roughly) defined to be the largest number such that one can multiply an $$ n \times n $$ matrix by an $$ n \times n^\alpha $$ matrix in $$ O(n^2) $$ time. In follow-up work, Lee, Song and Zhang reproduced the same result via a different method. These two algorithms run in $$ \tilde O( n^{2+1/6} L ) $$ time when $$ \omega = 2 $$ and $$ \alpha = 1 $$. A result due to Jiang, Song, Weinstein and Zhang further improved $$ \tilde O ( n^{2+1/6} L) $$ to $$ \tilde O ( n^{2+1/18} L) $$. ### Comparison of interior-point methods and simplex algorithms The current opinion is that the efficiencies of good implementations of simplex-based methods and interior point methods are similar for routine applications of linear programming. However, for specific types of LP problems, it may be that one type of solver is better than another (sometimes much better), and the structure of the solutions generated by interior point methods versus simplex-based methods is significantly different, with the support set of active variables typically being smaller for the latter. ## Open problems and recent work There are several open problems in the theory of linear programming, the solution of which would represent fundamental breakthroughs in mathematics and potentially major advances in our ability to solve large-scale linear programs. - Does LP admit a strongly polynomial-time algorithm? - Does LP admit a strongly polynomial-time algorithm to find a strictly complementary solution? - Does LP admit a polynomial-time algorithm in the real number (unit cost) model of computation? This closely related set of problems has been cited by Stephen Smale as among the 18 greatest unsolved problems of the 21st century. In Smale's words, the third version of the problem "is the main unsolved problem of linear programming theory." While algorithms exist to solve linear programming in weakly polynomial time, such as the ellipsoid methods and interior-point techniques, no algorithms have yet been found that allow strongly polynomial-time performance in the number of constraints and the number of variables. The development of such algorithms would be of great theoretical interest, and perhaps allow practical gains in solving large LPs as well. Although the Hirsch conjecture was recently disproved for higher dimensions, it still leaves the following questions open. - Are there pivot rules which lead to polynomial-time simplex variants? - Do all polytopal graphs have polynomially bounded diameter? These questions relate to the performance analysis and development of simplex-like methods. The immense efficiency of the simplex algorithm in practice despite its exponential-time theoretical performance hints that there may be variations of simplex that run in polynomial or even strongly polynomial time.
It would be of great practical and theoretical significance to know whether any such variants exist, particularly as an approach to deciding if LP can be solved in strongly polynomial time. The simplex algorithm and its variants fall into the family of edge-following algorithms, so named because they solve linear programming problems by moving from vertex to vertex along edges of a polytope. This means that their theoretical performance is limited by the maximum number of edges between any two vertices on the LP polytope. As a result, we are interested in knowing the maximum graph-theoretical diameter of polytopal graphs. It has been proved that all polytopes have subexponential diameter. The recent disproof of the Hirsch conjecture is the first step toward determining whether any polytope has superpolynomial diameter. If any such polytopes exist, then no edge-following variant can run in polynomial time. Questions about polytope diameter are of independent mathematical interest. Simplex pivot methods preserve primal (or dual) feasibility. On the other hand, criss-cross pivot methods do not preserve (primal or dual) feasibility; they may visit primal feasible, dual feasible or primal-and-dual infeasible bases in any order. Pivot methods of this type have been studied since the 1970s. Essentially, these methods attempt to find the shortest pivot path on the arrangement polytope under the linear programming problem. In contrast to polytopal graphs, graphs of arrangement polytopes are known to have small diameter, allowing the possibility of a strongly polynomial-time criss-cross pivot algorithm without resolving questions about the diameter of general polytopes. ## Integer unknowns If all of the unknown variables are required to be integers, then the problem is called an integer programming (IP) or integer linear programming (ILP) problem. In contrast to linear programming, which can be solved efficiently in the worst case, integer programming problems are in many practical situations (those with bounded variables) NP-hard. 0–1 integer programming or binary integer programming (BIP) is the special case of integer programming where variables are required to be 0 or 1 (rather than arbitrary integers). This problem is also classified as NP-hard, and in fact the decision version was one of Karp's 21 NP-complete problems. If only some of the unknown variables are required to be integers, then the problem is called a mixed integer (linear) programming (MIP or MILP) problem. These are generally also NP-hard because they are even more general than ILP programs. There are however some important subclasses of IP and MIP problems that are efficiently solvable, most notably problems where the constraint matrix is totally unimodular and the right-hand sides of the constraints are integers or, more generally, where the system has the total dual integrality (TDI) property. Advanced algorithms for solving integer linear programs include: - cutting-plane method - Branch and bound - Branch and cut - Branch and price - if the problem has some extra structure, it may be possible to apply delayed column generation. Such integer-programming algorithms are discussed by Padberg and in Beasley. ## Integral linear programs A linear program in real variables is said to be integral if it has at least one optimal solution which is integral, i.e., made of only integer values.
Likewise, a polyhedron $$ P = \{x \mid Ax \ge 0\} $$ is said to be integral if for all bounded feasible objective functions c, the linear program $$ \{\max cx \mid x \in P\} $$ has an optimum $$ x^* $$ with integer coordinates. As observed by Edmonds and Giles in 1977, one can equivalently say that the polyhedron $$ P $$ is integral if for every bounded feasible integral objective function c, the optimal value of the linear program $$ \{\max cx \mid x \in P\} $$ is an integer. Integral linear programs are of central importance in the polyhedral aspect of combinatorial optimization since they provide an alternate characterization of a problem. Specifically, for any problem, the convex hull of the solutions is an integral polyhedron; if this polyhedron has a nice/compact description, then we can efficiently find the optimal feasible solution under any linear objective. Conversely, if we can prove that a linear programming relaxation is integral, then it is the desired description of the convex hull of feasible (integral) solutions. Terminology is not consistent throughout the literature, so one should be careful to distinguish the following two concepts: - in an integer linear program, described in the previous section, variables are forcibly constrained to be integers, and this problem is NP-hard in general; - in an integral linear program, described in this section, variables are not constrained to be integers but rather one has proven somehow that the continuous problem always has an integral optimal value (assuming c is integral), and this optimal value may be found efficiently since all polynomial-size linear programs can be solved in polynomial time. One common way of proving that a polyhedron is integral is to show that it is totally unimodular. There are other general methods including the integer decomposition property and total dual integrality. Other specific well-known integral LPs include the matching polytope, lattice polyhedra, submodular flow polyhedra, and the intersection of two generalized polymatroids/g-polymatroids; see, e.g., Schrijver 2003. ## Solvers and scripting (programming) languages Permissive licenses:
- Gekko (MIT License): Open-source library for solving large-scale LP, QP, QCQP, NLP, and MIP optimization.
- GLOP (Apache v2): Google's open-source linear programming solver.
- JuMP (MPL License): Open-source modeling language with solvers for large-scale LP, QP, QCQP, SDP, SOCP, NLP, and MIP optimization.
- Pyomo (BSD): An open-source modeling language for large-scale linear, mixed integer and nonlinear optimization.
- SCIP (Apache v2): A general-purpose constraint integer programming solver with an emphasis on MIP. Compatible with the Zimpl modelling language.
- SuanShu (Apache v2): An open-source suite of optimization algorithms to solve LP, QP, SOCP, SDP, and SQP in Java.

Copyleft (reciprocal) licenses:
- ALGLIB (GPL 2+): An LP solver from the ALGLIB project (C++, C#, Python).
- Cassowary constraint solver (LGPL): An incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities.
- CLP (CPL): An LP solver from COIN-OR.
- glpk (GPL): GNU Linear Programming Kit, an LP/MILP solver with a native C API and numerous (15) third-party wrappers for other languages. Specialist support for flow networks. Bundles the AMPL-like GNU MathProg modelling language and translator.
- lp solve (LGPL v2.1): An LP and MIP solver featuring support for the MPS format and its own "lp" format, as well as custom formats through its "eXternal Language Interface" (XLI). Translating between model formats is also possible.
- Qoca (GPL): A library for incrementally solving systems of linear equations with various goal functions.
- R-Project (GPL): A programming language and software environment for statistical computing and graphics.

MINTO (Mixed Integer Optimizer, an integer programming solver which uses a branch and bound algorithm) has publicly available source code but is not open source.

Proprietary licenses:
- AIMMS: A modeling language that allows the user to model linear, mixed integer, and nonlinear optimization models. It also offers a tool for constraint programming. Algorithms, in the form of heuristics or exact methods such as Branch-and-Cut or Column Generation, can also be implemented. The tool calls an appropriate solver, such as CPLEX or similar, to solve the optimization problem at hand. Academic licenses are free of charge.
- ALGLIB: A commercial edition of the copyleft-licensed library. C++, C#, Python.
- AMPL: A popular modeling language for large-scale linear, mixed integer and nonlinear optimisation, with a free student limited version available (500 variables and 500 constraints).
- Analytica: A general modeling language and interactive development environment. Its influence diagrams enable users to formulate problems as graphs with nodes for decision variables, objectives, and constraints. Analytica Optimizer Edition includes linear, mixed integer, and nonlinear solvers and selects the solver to match the problem. It also accepts other engines as plug-ins, including XPRESS, Gurobi, Artelys Knitro, and MOSEK.
- APMonitor: API to MATLAB and Python. Solve example linear programming (LP) problems through MATLAB, Python, or a web interface.
- CPLEX: Popular solver with an API for several programming languages; it also has a modelling language and works with AIMMS, AMPL, GAMS, MPL, OpenOpt, OPL Development Studio, and TOMLAB. Free for academic use.
- Excel Solver Function: A nonlinear solver adjusted to spreadsheets in which function evaluations are based on the recalculating cells. Basic version available as a standard add-on for Excel.
- FortMP
- GAMS
- Gurobi Optimizer
- IMSL Numerical Libraries: Collections of math and statistical algorithms available in C/C++, Fortran, Java and C#/.NET. Optimization routines in the IMSL Libraries include unconstrained, linearly and nonlinearly constrained minimizations, and linear programming algorithms.
- LINDO: Solver with an API for large-scale optimization of linear, integer, quadratic, conic and general nonlinear programs with stochastic programming extensions. It offers a global optimization procedure for finding a guaranteed globally optimal solution to general nonlinear programs with continuous and discrete variables. It also has a statistical sampling API to integrate Monte Carlo simulations into an optimization framework. It has an algebraic modeling language (LINGO) and allows modeling within a spreadsheet (What'sBest).
- Maple: A general-purpose programming language for symbolic and numerical computing.
- MATLAB: A general-purpose and matrix-oriented programming language for numerical computing. Linear programming in MATLAB requires the Optimization Toolbox in addition to the base MATLAB product; available routines include INTLINPROG and LINPROG.
- Mathcad: A WYSIWYG math editor. It has functions for solving both linear and nonlinear optimization problems.
- Mathematica: A general-purpose programming language for mathematics, including symbolic and numerical capabilities.
- MOSEK: A solver for large-scale optimization with APIs for several languages (C++, Java, .NET, MATLAB and Python).
- NAG Numerical Library: A collection of mathematical and statistical routines developed by the Numerical Algorithms Group for multiple programming languages (C, C++, Fortran, Visual Basic, Java and C#) and packages (MATLAB, Excel, R, LabVIEW). The Optimization chapter of the NAG Library includes routines for linear programming problems with both sparse and non-sparse linear constraint matrices, together with routines for the optimization of quadratic, nonlinear, sums of squares of linear or nonlinear functions with nonlinear, bounded or no constraints. The NAG Library has routines for both local and global optimization, and for continuous or integer problems.
- OptimJ: A Java-based modeling language for optimization with a free version available. Applications include an optimization model for mixed-model assembly lines at the University of Münster (http://www.in-ter-trans.eu/resources/Zesch_Hellingrath_2010_Integrated+Production-Distribution+Planning.pdf) and an approximate subgame-perfect equilibrium computation technique for repeated games (http://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/viewFile/1769/2076).
- SAS/OR: A suite of solvers for linear, integer, nonlinear, derivative-free, network, combinatorial and constraint optimization; the algebraic modeling language OPTMODEL; and a variety of vertical solutions aimed at specific problems/markets, all of which are fully integrated with the SAS System.
- XPRESS: Solver for large-scale linear programs, quadratic programs, general nonlinear and mixed-integer programs. Has APIs for several programming languages, also has a modelling language Mosel and works with AMPL and GAMS. Free for academic use.
- VisSim: A visual block diagram language for simulation of dynamical systems.
https://en.wikipedia.org/wiki/Linear_programming
In behavioral psychology, reinforcement refers to consequences that increase the likelihood of an organism's future behavior, typically in the presence of a particular antecedent stimulus. For example, a rat can be trained to push a lever to receive food whenever a light is turned on; in this example, the light is the antecedent stimulus, the lever pushing is the operant behavior, and the food is the reinforcer. Likewise, a student who receives attention and praise when answering a teacher's question will be more likely to answer future questions in class; the teacher's question is the antecedent, the student's response is the behavior, and the praise and attention are the reinforcers. Punishment is the inverse of reinforcement, referring to any consequence that decreases the likelihood that a response will occur. In operant conditioning terms, punishment does not need to involve any type of pain, fear, or physical actions; even a brief spoken expression of disapproval is a type of punishment. Consequences that lead to appetitive behavior such as subjective "wanting" and "liking" (desire and pleasure) function as rewards or positive reinforcement. There is also negative reinforcement, which involves taking away an undesirable stimulus. An example of negative reinforcement would be taking an aspirin to relieve a headache. Reinforcement is an important component of operant conditioning and behavior modification. The concept has been applied in a variety of practical areas, including parenting, coaching, therapy, self-help, education, and management. ## Terminology In the behavioral sciences, the terms "positive" and "negative" refer, when used in their strict technical sense, to the nature of the action performed by the conditioner rather than to the responding operant's evaluation of that action and its consequence(s). "Positive" actions are those that add a factor, be it pleasant or unpleasant, to the environment, whereas "negative" actions are those that remove or withhold from the environment a factor of either type. In turn, the strict sense of "reinforcement" refers only to reward-based conditioning; the introduction of unpleasant factors and the removal or withholding of pleasant factors are instead referred to as "punishment", which when used in its strict sense thus stands in contradistinction to "reinforcement". Thus, "positive reinforcement" refers to the addition of a pleasant factor, "positive punishment" refers to the addition of an unpleasant factor, "negative reinforcement" refers to the removal or withholding of an unpleasant factor, and "negative punishment" refers to the removal or withholding of a pleasant factor. This usage is at odds with some non-technical usages of the four-term combinations, especially in the case of the term "negative reinforcement", which is often used to denote what technical parlance would describe as "positive punishment", in that the non-technical usage interprets "reinforcement" as subsuming both reward and punishment and "negative" as referring to the responding operant's evaluation of the factor being introduced. By contrast, technical parlance would use the term "negative reinforcement" to describe encouragement of a given behavior by creating a scenario in which an unpleasant factor is or will be present but engaging in the behavior results in either escaping from that factor or preventing its occurrence, as in Martin Seligman's experiment involving dogs learning to avoid electric shocks. ## Introduction B.F.
Skinner was a well-known and influential researcher who articulated many of the theoretical constructs of reinforcement and behaviorism. Skinner defined reinforcers according to the change in response strength (response rate) rather than to more subjective criteria, such as what is pleasurable or valuable to someone. Accordingly, activities, foods or items considered pleasant or enjoyable may not necessarily be reinforcing (because they produce no increase in the response preceding them). Stimuli, settings, and activities only fit the definition of reinforcers if the behavior that immediately precedes the potential reinforcer increases in similar situations in the future; for example, a child who receives a cookie when he or she asks for one. If the frequency of "cookie-requesting behavior" increases, the cookie can be seen as reinforcing "cookie-requesting behavior". If however, "cookie-requesting behavior" does not increase the cookie cannot be considered reinforcing. The sole criterion that determines if a stimulus is reinforcing is the change in probability of a behavior after administration of that potential reinforcer. Other theories may focus on additional factors such as whether the person expected a behavior to produce a given outcome, but in the behavioral theory, reinforcement is defined by an increased probability of a response. The study of reinforcement has produced an enormous body of reproducible experimental results. Reinforcement is the central concept and procedure in special education, applied behavior analysis, and the experimental analysis of behavior and is a core concept in some medical and psychopharmacology models, particularly addiction, dependence, and compulsion. ## History Laboratory research on reinforcement is usually dated from the work of Edward Thorndike, known for his experiments with cats escaping from puzzle boxes. A number of others continued this research, notably B.F. Skinner, who published his seminal work on the topic in The Behavior of Organisms, in 1938, and elaborated this research in many subsequent publications. Notably Skinner argued that positive reinforcement is superior to punishment in shaping behavior. Though punishment may seem just the opposite of reinforcement, Skinner claimed that they differ immensely, saying that positive reinforcement results in lasting behavioral modification (long-term) whereas punishment changes behavior only temporarily (short-term) and has many detrimental side-effects. A great many researchers subsequently expanded our understanding of reinforcement and challenged some of Skinner's conclusions. For example, Azrin and Holz defined punishment as a “consequence of behavior that reduces the future probability of that behavior,” and some studies have shown that positive reinforcement and punishment are equally effective in modifying behavior. Research on the effects of positive reinforcement, negative reinforcement and punishment continue today as those concepts are fundamental to learning theory and apply to many practical applications of that theory. ## Operant conditioning The term operant conditioning was introduced by Skinner to indicate that in his experimental paradigm, the organism is free to operate on the environment. In this paradigm, the experimenter cannot trigger the desirable response; the experimenter waits for the response to occur (to be emitted by the organism) and then a potential reinforcer is delivered. 
In the classical conditioning paradigm, the experimenter triggers (elicits) the desirable response by presenting a reflex eliciting stimulus, the unconditional stimulus (UCS), which they pair (precede) with a neutral stimulus, the conditional stimulus (CS). Reinforcement is a basic term in operant conditioning. For the punishment aspect of operant conditioning, see punishment (psychology). ### Positive reinforcement Positive reinforcement occurs when a desirable event or stimulus is presented as a consequence of a behavior and the chance that this behavior will manifest in similar environments increases. For example, if reading a book is fun, then experiencing the fun positively reinforces the behavior of reading fun books. The person who receives the positive reinforcement (i.e., who has fun reading the book) will read more books to have more fun. The high probability instruction (HPI) treatment is a behaviorist treatment based on the idea of positive reinforcement. ### Negative reinforcement Negative reinforcement increases the rate of a behavior that avoids or escapes an aversive situation or stimulus. That is, something unpleasant is already happening, and the behavior helps the person avoid or escape the unpleasantness. In contrast to positive reinforcement, which involves adding a pleasant stimulus, in negative reinforcement, the focus is on the removal of an unpleasant situation or stimulus. For example, if someone feels unhappy, then they might engage in a behavior (e.g., reading books) to escape from the aversive situation (e.g., their unhappy feelings). The success of that avoidant or escapist behavior in removing the unpleasant situation or stimulus reinforces the behavior. Doing something unpleasant to people to prevent or remove a behavior from happening again is punishment, not negative reinforcement. The main difference is that reinforcement always increases the likelihood of a behavior (e.g., channel surfing while bored temporarily alleviated boredom; therefore, there will be more channel surfing while bored), whereas punishment decreases it (e.g., hangovers are an unpleasant stimulus, so people learn to avoid the behavior that led to that unpleasant stimulus). ### Extinction Extinction occurs when a given behavior is ignored (i.e. followed up with no consequence). Behaviors disappear over time when they continuously receive no reinforcement. During a deliberate extinction, the targeted behavior spikes first (in an attempt to produce the expected, previously reinforced effects), and then declines over time. Neither reinforcement nor extinction need to be deliberate in order to have an effect on a subject's behavior. For example, if a child reads books because they are fun, then the parents' decision to ignore the book reading will not remove the positive reinforcement (i.e., fun) the child receives from reading books. However, if a child engages in a behavior to get attention from the parents, then the parents' decision to ignore the behavior will cause the behavior to go extinct, and the child will find a different behavior to get their parents' attention. ### Reinforcement versus punishment Reinforcers serve to increase behaviors whereas punishers serve to decrease behaviors; thus, positive reinforcers are stimuli that the subject will work to attain, and negative reinforcers are stimuli that the subject will work to be rid of or to end. The table below illustrates the adding and subtracting of stimuli (pleasant or aversive) in relation to reinforcement vs. punishment. 
Comparison chart:

| | Rewarding (pleasant) stimulus | Aversive (unpleasant) stimulus |
| --- | --- | --- |
| Positive (adding a stimulus) | Positive reinforcement. Example: reading a book because it is fun and interesting | Positive punishment. Example: telling someone that their actions are inconsiderate |
| Negative (taking a stimulus away) | Negative punishment. Example: loss of privileges (e.g., screen time or permission to attend a desired event) if a rule is broken | Negative reinforcement. Example: reading a book because it allows the reader to escape feelings of boredom or unhappiness |

### Further ideas and concepts

- Distinguishing between positive and negative reinforcement can be difficult and may not always be necessary. Focusing on what is being removed or added and how it affects behavior can be more helpful.
- An event that punishes behavior for some may reinforce behavior for others.
- Some reinforcement can include both positive and negative features, such as a drug addict taking drugs for the added euphoria (positive reinforcement) and also to eliminate withdrawal symptoms (negative reinforcement).
- Reinforcement in the business world is essential in driving productivity. Employees are constantly motivated by the ability to receive a positive stimulus, such as a promotion or a bonus. Employees are also driven by negative reinforcement, such as the elimination of unpleasant tasks.
- Though negative reinforcement can have a positive short-term effect in a workplace (i.e., it encourages a financially beneficial action), over-reliance on negative reinforcement hinders workers' ability to act in the creative, engaged way that creates growth in the long term.

### Primary and secondary reinforcers

A primary reinforcer, sometimes called an unconditioned reinforcer, is a stimulus that does not require pairing with a different stimulus in order to function as a reinforcer and most likely has obtained this function through evolution and its role in the species' survival. Examples of primary reinforcers include food, water, and sex. Some primary reinforcers, such as certain drugs, may mimic the effects of other primary reinforcers. While these primary reinforcers are fairly stable through life and across individuals, the reinforcing value of different primary reinforcers varies due to multiple factors (e.g., genetics, experience). Thus, one person may prefer one type of food while another avoids it, or one person may eat a great deal of food while another eats very little. So even though food is a primary reinforcer for both individuals, the value of food as a reinforcer differs between them.

A secondary reinforcer, sometimes called a conditioned reinforcer, is a stimulus or situation that has acquired its function as a reinforcer after pairing with a stimulus that functions as a reinforcer. This stimulus may be a primary reinforcer or another conditioned reinforcer (such as money). When trying to distinguish primary and secondary reinforcers in human examples, use the "caveman test": if the stimulus is something that a caveman would naturally find desirable (e.g., candy), then it is a primary reinforcer; if, on the other hand, the caveman would not react to it (e.g., a dollar bill), it is a secondary reinforcer. As with primary reinforcers, an organism can experience satisfaction and deprivation with secondary reinforcers.
### Other reinforcement terms

- A generalized reinforcer is a conditioned reinforcer that has obtained the reinforcing function by pairing with many other reinforcers and functions as a reinforcer under a wide variety of motivating operations. (Money is one example, because it is paired with many other reinforcers.)
- In reinforcer sampling, a potentially reinforcing but unfamiliar stimulus is presented to an organism without regard to any prior behavior.
- Socially mediated reinforcement involves the delivery of reinforcement that requires the behavior of another organism; for example, another person provides the reinforcement.
- The Premack principle is a special case of reinforcement elaborated by David Premack, which states that a highly preferred activity can be used effectively as a reinforcer for a less-preferred activity.
- A reinforcement hierarchy is a list of actions, rank-ordering consequences from most desirable to least desirable in terms of how well they may serve as reinforcers. A reinforcement hierarchy can be used to determine the relative frequency and desirability of different activities, and is often employed when applying the Premack principle.
- Contingent outcomes are more likely to reinforce behavior than non-contingent responses. Contingent outcomes are those directly linked to a causal behavior, such as a light turning on being contingent on flipping a switch. Note that contingent outcomes are not necessary to demonstrate reinforcement, but perceived contingency may increase learning.
- Contiguous stimuli are stimuli closely associated by time and space with specific behaviors. They reduce the amount of time needed to learn a behavior while increasing its resistance to extinction. Giving a dog a piece of food immediately after sitting is more contiguous with (and therefore more likely to reinforce) the behavior than a several-minute delay in food delivery following the behavior.
- Noncontingent reinforcement refers to response-independent delivery of stimuli identified as reinforcers for some behaviors of that organism. However, this typically entails time-based delivery of stimuli identified as maintaining aberrant behavior, which decreases the rate of the target behavior. As no measured behavior is identified as being strengthened, there is controversy surrounding the use of the term noncontingent "reinforcement".

## Natural and artificial reinforcement

In his 1967 paper, Arbitrary and Natural Reinforcement, Charles Ferster proposed classifying reinforcement into events that increase the frequency of an operant behavior as a natural consequence of the behavior itself, and events that affect frequency by their requirement of human mediation, such as in a token economy where subjects are rewarded for certain behavior by the therapist.

In 1970, Baer and Wolf developed the concept of "behavioral traps." A behavioral trap requires only a simple response to enter the trap, yet once entered, it cannot be resisted and produces general behavior change. It is the use of a behavioral trap that increases a person's repertoire, by exposing them to the naturally occurring reinforcement of that behavior. Behavioral traps have four characteristics:

- They are "baited" with desirable reinforcers that "lure" the student into the trap.
- Only a low-effort response already in the repertoire is necessary to enter the trap.
- Interrelated contingencies of reinforcement inside the trap motivate the person to acquire, extend, and maintain targeted skills.
- They can remain effective for long periods of time because the person shows few, if any, satiation effects.

Thus, artificial reinforcement can be used to build or develop generalizable skills, eventually transitioning to naturally occurring reinforcement to maintain or increase the behavior. Another example is a social situation that will generally result from a specific behavior once it has met a certain criterion.

## Intermittent reinforcement schedules

Behavior is not always reinforced every time it is emitted, and the pattern of reinforcement strongly affects how fast an operant response is learned, what its rate is at any given time, and how long it continues when reinforcement ceases. The simplest rules controlling reinforcement are continuous reinforcement, where every response is reinforced, and extinction, where no response is reinforced. Between these extremes, more complex schedules of reinforcement specify the rules that determine how and when a response will be followed by a reinforcer. Specific schedules of reinforcement reliably induce specific patterns of response, and these rules apply across many different species. The varying consistency and predictability of reinforcement is an important influence on how the different schedules operate. Many simple and complex schedules were investigated at great length by B.F. Skinner using pigeons.

### Simple schedules

Simple schedules have a single rule to determine when a single type of reinforcer is delivered for a specific response; a minimal simulation of the four basic schedules is sketched after the lists below.

- Ratio schedule – the reinforcement depends only on the number of responses the organism has performed.
- Continuous reinforcement (CRF) – a schedule of reinforcement in which every occurrence of the instrumental response (desired response) is followed by the reinforcer.
- Fixed ratio (FR) – schedules deliver reinforcement after every nth response. An FR 1 schedule is synonymous with a CRF schedule. (Example: every third time a rat presses a button, the rat receives a slice of cheese.)
- Variable ratio schedule (VR) – reinforced on average every nth response, but not always on the nth response. (Example: gamblers win on average 1 out of every 10 turns on a slot machine; however, this is only an average, and they could hypothetically win on any given turn.)
- Fixed interval (FI) – the first response after a fixed amount of time n is reinforced. (Example: a rat receives a slice of cheese for pressing a button, but only once every 10 minutes; eventually, the rat learns to ignore the button until each 10-minute interval has elapsed.)
- Variable interval (VI) – the first response after an interval of time is reinforced, where the interval averages n but is not always exactly n. (Example: a radio host gives away concert tickets approximately every hour, but the exact minute varies.)
- Fixed time (FT) – Provides a reinforcing stimulus at a fixed time since the last reinforcement delivery, regardless of whether the subject has responded or not. In other words, it is a non-contingent schedule.
- Variable time (VT) – Provides reinforcement at an average variable time since last reinforcement, regardless of whether the subject has responded or not.

Simple schedules are utilized in many differential reinforcement procedures:

- Differential reinforcement of alternative behavior (DRA) – A conditioning procedure in which an undesired response is decreased by placing it on extinction or, less commonly, providing contingent punishment, while simultaneously providing reinforcement contingent on a desirable response.
An example would be a teacher attending to a student only when they raise their hand, while ignoring the student when he or she calls out.

- Differential reinforcement of other behavior (DRO) – Also known as omission training procedures, an instrumental conditioning procedure in which a positive reinforcer is periodically delivered only if the participant does something other than the target response. An example would be reinforcing any hand action other than nose picking.
- Differential reinforcement of incompatible behavior (DRI) – Used to reduce a frequent behavior without punishing it by reinforcing an incompatible response. An example would be reinforcing clapping to reduce nose picking.
- Differential reinforcement of low response rate (DRL) – Used to encourage low rates of responding. It is like an interval schedule, except that premature responses reset the time required between behaviors.
- Differential reinforcement of high rate (DRH) – Used to increase high rates of responding. It is like an interval schedule, except that a minimum number of responses are required in the interval in order to receive reinforcement.

#### Effects of different types of simple schedules

- Fixed ratio: activity slows after the reinforcer is delivered, then response rates increase until the next reinforcer delivery (post-reinforcement pause).
- Variable ratio: rapid, steady rate of responding; most resistant to extinction.
- Fixed interval: responding increases towards the end of the interval; poor resistance to extinction.
- Variable interval: steady rate of responding; good resistance to extinction.
- Ratio schedules produce higher rates of responding than interval schedules, when the rates of reinforcement are otherwise similar.
- Variable schedules produce higher rates and greater resistance to extinction than most fixed schedules. This is also known as the Partial Reinforcement Extinction Effect (PREE).
- The variable ratio schedule produces both the highest rate of responding and the greatest resistance to extinction (for example, the behavior of gamblers at slot machines).
- Fixed schedules produce "post-reinforcement pauses" (PRP), where responses will briefly cease immediately following reinforcement, though the pause is a function of the upcoming response requirement rather than the prior reinforcement.
- The PRP of a fixed interval schedule is frequently followed by a "scallop-shaped" accelerating rate of response, while fixed ratio schedules produce a more "angular" response.
- Fixed interval scallop: the pattern of responding that develops with a fixed interval reinforcement schedule; performance on a fixed interval reflects the subject's accuracy in telling time.
- Organisms whose schedules of reinforcement are "thinned" (that is, requiring more responses or a greater wait before reinforcement) may experience "ratio strain" if thinned too quickly. This produces behavior similar to that seen during extinction.
- Ratio strain: the disruption of responding that occurs when a fixed ratio response requirement is increased too rapidly.
- Ratio run: high and steady rate of responding that completes each ratio requirement. Usually a higher ratio requirement causes longer post-reinforcement pauses to occur.
- Partial reinforcement schedules are more resistant to extinction than continuous reinforcement schedules.
- Ratio schedules are more resistant than interval schedules, and variable schedules are more resistant than fixed ones.
- Momentary changes in reinforcement value lead to dynamic changes in behavior.
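To make the schedule rules above concrete, here is a minimal Python sketch of the four basic contingencies. The function names are illustrative choices of mine, not a standard library, and the variable ratio schedule is approximated with a per-response probability of 1/n; real experiments usually draw the varying requirements from predetermined lists, so treat this as a sketch rather than a definitive implementation.

```python
import random

def fixed_ratio(n):
    """FR n: reinforce every nth response."""
    count = 0
    def respond(now):  # `now` is accepted but unused, so all schedules share one interface
        nonlocal count
        count += 1
        if count == n:
            count = 0
            return True
        return False
    return respond

def variable_ratio(n):
    """VR n: reinforce after a varying number of responses averaging n
    (approximated here as a 1-in-n chance on each response)."""
    def respond(now):
        return random.random() < 1.0 / n
    return respond

def fixed_interval(t):
    """FI t: reinforce the first response made at least t seconds
    after the previous reinforcement."""
    last = 0.0
    def respond(now):
        nonlocal last
        if now - last >= t:
            last = now
            return True
        return False
    return respond

def variable_interval(t):
    """VI t: like FI, but the required wait is drawn at random with mean t."""
    last, wait = 0.0, random.expovariate(1.0 / t)
    def respond(now):
        nonlocal last, wait
        if now - last >= wait:
            last, wait = now, random.expovariate(1.0 / t)
            return True
        return False
    return respond

# Example: a subject responding once per second for one minute on a VR 10 schedule.
schedule = variable_ratio(10)
reinforcers = sum(schedule(second) for second in range(60))
print(f"reinforcers earned over 60 responses: {reinforcers}")
```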
### Compound schedules

Compound schedules combine two or more different simple schedules in some way using the same reinforcer for the same behavior. There are many possibilities; among those most often used are:

- Alternative schedules – A type of compound schedule where two or more simple schedules are in effect and whichever schedule is completed first results in reinforcement.
- Conjunctive schedules – A complex schedule of reinforcement where two or more simple schedules are in effect independently of each other, and requirements on all of the simple schedules must be met for reinforcement.
- Multiple schedules – Two or more schedules alternate over time, with a stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
- Mixed schedules – Either of two, or more, schedules may occur with no stimulus indicating which is in force. Reinforcement is delivered if the response requirement is met while a schedule is in effect.
- Concurrent schedules – A complex reinforcement procedure in which the participant can choose any one of two or more simple reinforcement schedules that are available simultaneously. Organisms are free to change back and forth between the response alternatives at any time.
- Concurrent-chain schedule of reinforcement – A complex reinforcement procedure in which the participant is permitted to choose during the first link which of several simple reinforcement schedules will be in effect in the second link. Once a choice has been made, the rejected alternatives become unavailable until the start of the next trial.
- Interlocking schedules – A single schedule with two components where progress in one component affects progress in the other component. In an interlocking FR 60 FI 120-s schedule, for example, each response subtracts time from the interval component such that each response is "equal" to removing two seconds from the FI schedule.
- Chained schedules – Reinforcement occurs after two or more successive schedules have been completed, with a stimulus indicating when one schedule has been completed and the next has started.
- Tandem schedules – Reinforcement occurs when two or more successive schedule requirements have been completed, with no stimulus indicating when a schedule has been completed and the next has started.
- Higher-order schedules – completion of one schedule is reinforced according to a second schedule; e.g. in FR2 (FI10 secs), two successive fixed interval schedules require completion before a response is reinforced.

### Superimposed schedules

The psychology term superimposed schedules of reinforcement refers to a structure of rewards where two or more simple schedules of reinforcement operate simultaneously. Reinforcers can be positive, negative, or both. An example is a person who comes home after a long day at work. The behavior of opening the front door is rewarded by a big kiss on the lips by the person's spouse and a rip in the pants from the family dog jumping enthusiastically. Another example of superimposed schedules of reinforcement is a pigeon in an experimental cage pecking at a button. The pecks deliver a hopper of grain every 20th peck, and access to water after every 200 pecks. Superimposed schedules of reinforcement are a type of compound schedule that evolved from the initial work on simple schedules of reinforcement by B.F. Skinner and his colleagues (Skinner and Ferster, 1957).
They demonstrated that reinforcers could be delivered on schedules, and further that organisms behaved differently under different schedules. Rather than a reinforcer, such as food or water, being delivered every time as a consequence of some behavior, a reinforcer could be delivered after more than one instance of the behavior. For example, a pigeon may be required to peck a button switch ten times before food appears. This is a "ratio schedule". Also, a reinforcer could be delivered after an interval of time passed following a target behavior. An example is a rat that is given a food pellet immediately following the first response that occurs after two minutes has elapsed since the last lever press. This is called an "interval schedule". In addition, ratio schedules can deliver reinforcement following fixed or variable number of behaviors by the individual organism. Likewise, interval schedules can deliver reinforcement following fixed or variable intervals of time following a single response by the organism. Individual behaviors tend to generate response rates that differ based upon how the reinforcement schedule is created. Much subsequent research in many labs examined the effects on behaviors of scheduling reinforcers. If an organism is offered the opportunity to choose between or among two or more simple schedules of reinforcement at the same time, the reinforcement structure is called a "concurrent schedule of reinforcement". Brechner (1974, 1977) introduced the concept of superimposed schedules of reinforcement in an attempt to create a laboratory analogy of social traps, such as when humans overharvest their fisheries or tear down their rainforests. Brechner created a situation where simple reinforcement schedules were superimposed upon each other. In other words, a single response or group of responses by an organism led to multiple consequences. Concurrent schedules of reinforcement can be thought of as "or" schedules, and superimposed schedules of reinforcement can be thought of as "and" schedules. Brechner and Linder (1981) and Brechner (1987) expanded the concept to describe how superimposed schedules and the social trap analogy could be used to analyze the way energy flows through systems. Superimposed schedules of reinforcement have many real-world applications in addition to generating social traps. Many different human individual and social situations can be created by superimposing simple reinforcement schedules. For example, a human being could have simultaneous tobacco and alcohol addictions. Even more complex situations can be created or simulated by superimposing two or more concurrent schedules. For example, a high school senior could have a choice between going to Stanford University or UCLA, and at the same time have the choice of going into the Army or the Air Force, and simultaneously the choice of taking a job with an internet company or a job with a software company. That is a reinforcement structure of three superimposed concurrent schedules of reinforcement. Superimposed schedules of reinforcement can create the three classic conflict situations (approach–approach conflict, approach–avoidance conflict, and avoidance–avoidance conflict) described by Kurt Lewin (1935) and can operationalize other Lewinian situations analyzed by his force field analysis. 
Other examples of the use of superimposed schedules of reinforcement as an analytical tool are its application to the contingencies of rent control (Brechner, 2003) and the problem of toxic waste dumping in the Los Angeles County storm drain system (Brechner, 2010).

### Concurrent schedules

In operant conditioning, concurrent schedules of reinforcement are schedules of reinforcement that are simultaneously available to an animal subject or human participant, so that the subject or participant can respond on either schedule. For example, in a two-alternative forced choice task, a pigeon in a Skinner box is faced with two pecking keys; pecking responses can be made on either, and food reinforcement might follow a peck on either. The schedules of reinforcement arranged for pecks on the two keys can be different. They may be independent, or they may be linked so that behavior on one key affects the likelihood of reinforcement on the other.

It is not necessary for responses on the two schedules to be physically distinct. In an alternate way of arranging concurrent schedules, introduced by Findley in 1958, both schedules are arranged on a single key or other response device, and the subject can respond on a second key to change between the schedules. In such a "Findley concurrent" procedure, a stimulus (e.g., the color of the main key) signals which schedule is in effect.

Concurrent schedules often induce rapid alternation between the keys. To prevent this, a "changeover delay" is commonly introduced: each schedule is inactivated for a brief period after the subject switches to it.

When both of the concurrent schedules are variable intervals, a quantitative relationship known as the matching law is found between relative response rates on the two schedules and the relative reinforcement rates they deliver; this was first observed by R.J. Herrnstein in 1961. The matching law is a rule for instrumental behavior which states that the relative rate of responding on a particular response alternative equals the relative rate of reinforcement for that response; for two alternatives it can be written $$ \frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}, $$ where $$ B_1 $$ and $$ B_2 $$ are the response rates on the two alternatives and $$ R_1 $$ and $$ R_2 $$ are the corresponding rates of reinforcement. Animals and humans have a tendency to prefer choice in schedules.

## Shaping

Shaping is the reinforcement of successive approximations to a desired instrumental response. In training a rat to press a lever, for example, simply turning toward the lever is reinforced at first. Then, only turning and stepping toward it is reinforced. Eventually the rat will be reinforced for pressing the lever. The successful attainment of one behavior starts the shaping process for the next. As training progresses, the response becomes progressively more like the desired behavior, with each subsequent behavior becoming a closer approximation of the final behavior. The intervention of shaping is used in many training situations, and also for individuals with autism as well as other developmental disabilities. When shaping is combined with other evidence-based practices, such as Functional Communication Training (FCT), it can yield positive outcomes for human behavior. Shaping typically uses continuous reinforcement, but the response can later be shifted to an intermittent reinforcement schedule.

Shaping is also used to address food refusal, in which an individual has a partial or total aversion to food items. This can range from mild pickiness to an aversion so severe that it affects the individual's health. Shaping has been used with a high rate of success to establish food acceptance.
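As a rough illustration of successive approximation, the toy loop below (my own sketch, not a published training protocol) reinforces any response that meets the current criterion and then raises the criterion toward a target level; the specific numbers are arbitrary.

```python
import random

def shape(target=10.0, start=1.0, step=1.5, trials=200, seed=0):
    """Toy shaping loop: reinforce any response that meets the current
    criterion, then tighten the criterion toward the target."""
    rng = random.Random(seed)
    criterion, best = start, 0.0
    for _ in range(trials):
        # The subject's responses vary around its best reinforced level.
        response = max(0.0, rng.gauss(best, 1.0))
        if response >= criterion:               # successive approximation met
            best = max(best, response)          # reinforcement strengthens it
            criterion = min(target, criterion + step)  # raise the bar
    return best

print(f"response level reached after shaping: {shape():.1f}")
```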
## Chaining

Chaining involves linking discrete behaviors together in a series, such that the consequence of each behavior is both the reinforcement for the previous behavior and the antecedent stimulus for the next behavior. There are many ways to teach chaining, such as forward chaining (starting from the first behavior in the chain), backwards chaining (starting from the last behavior) and total task chaining (teaching each behavior in the chain simultaneously). People's morning routines are a typical chain, with a series of behaviors (e.g. showering, drying off, getting dressed) occurring in sequence as a well-learned habit.

Challenging behaviors seen in individuals with autism and other related disabilities have been successfully managed and maintained in studies using a schedule of chained reinforcement. Functional communication training is an intervention that often uses chained schedules of reinforcement to effectively promote the appropriate and desired functional communication response.

## Mathematical models

There has been research on building a mathematical model of reinforcement. This model is known as MPR, short for mathematical principles of reinforcement. Peter Killeen has made key discoveries in the field with his research on pigeons.

## Applications

Reinforcement and punishment are ubiquitous in human social interactions, and a great many applications of operant principles have been suggested and implemented. Following are a few examples.

### Addiction and dependence

Positive and negative reinforcement play central roles in the development and maintenance of addiction and drug dependence. An addictive drug is intrinsically rewarding; that is, it functions as a primary positive reinforcer of drug use. The brain's reward system assigns it incentive salience (i.e., it is "wanted" or "desired"), so as an addiction develops, deprivation of the drug leads to craving. In addition, stimuli associated with drug use – e.g., the sight of a syringe, and the location of use – become associated with the intense reinforcement induced by the drug. These previously neutral stimuli acquire several properties: their appearance can induce craving, and they can become conditioned positive reinforcers of continued use. Thus, if an addicted individual encounters one of these drug cues, a craving for the associated drug may reappear. For example, anti-drug agencies previously used posters with images of drug paraphernalia as an attempt to show the dangers of drug use. However, such posters are no longer used because of the effects of incentive salience in causing relapse upon sight of the stimuli illustrated in the posters.

In drug-dependent individuals, negative reinforcement occurs when a drug is self-administered in order to alleviate or "escape" the symptoms of physical dependence (e.g., tremors and sweating) and/or psychological dependence (e.g., anhedonia, restlessness, irritability, and anxiety) that arise during the state of drug withdrawal.

### Animal training

Animal trainers and pet owners were applying the principles and practices of operant conditioning long before these ideas were named and studied, and animal training still provides one of the clearest and most convincing examples of operant control. Of the concepts and procedures described in this article, a few of the most salient are: availability of immediate reinforcement (e.g.
the ever-present bag of dog yummies); contingency, assuring that reinforcement follows the desired behavior and not something else; the use of secondary reinforcement, as in sounding a clicker immediately after a desired response; shaping, as in gradually getting a dog to jump higher and higher; intermittent reinforcement, reducing the frequency of those yummies to induce persistent behavior without satiation; and chaining, where a complex behavior is gradually put together.

### Child behavior – parent management training

Providing positive reinforcement for appropriate child behaviors is a major focus of parent management training. Typically, parents learn to reward appropriate behavior through social rewards (such as praise, smiles, and hugs) as well as concrete rewards (such as stickers or points towards a larger reward as part of an incentive system created collaboratively with the child) (Kazdin AE (2010). Problem-solving skills training and parent management training for oppositional defiant disorder and conduct disorder. In Evidence-based psychotherapies for children and adolescents, 2nd ed., 211–226. New York: Guilford Press). In addition, parents learn to select simple behaviors as an initial focus and reward each of the small steps that their child achieves towards reaching a larger goal (this concept is called "successive approximations") (Forgatch MS, Patterson GR (2010). Parent management training – Oregon model: An intervention for antisocial behavior in children and adolescents. In Evidence-based psychotherapies for children and adolescents, 2nd ed., 159–78. New York: Guilford Press). They may also use indirect rewards, such as progress charts.

Providing positive reinforcement in the classroom can be beneficial to student success. When applying positive reinforcement to students, it's crucial to make it individualized to that student's needs. This way, the student understands why they are receiving the praise, they can accept it, and eventually learn to continue the action that earned the positive reinforcement. For example, rewards such as extra recess time might work better for some students, whereas others might respond better to receiving stickers or check marks indicating praise.

### Economics

Both psychologists and economists have become interested in applying operant concepts and findings to the behavior of humans in the marketplace. An example is the analysis of consumer demand, as indexed by the amount of a commodity that is purchased. In economics, the degree to which price influences consumption is called "the price elasticity of demand." Certain commodities are more elastic than others; for example, a change in price of certain foods may have a large effect on the amount bought, while gasoline and other essentials may be less affected by price changes. In terms of operant analysis, such effects may be interpreted in terms of motivations of consumers and the relative value of the commodities as reinforcers (Domjan, M. (2009). The Principles of Learning and Behavior, 6th ed. Wadsworth Publishing Company, pp. 244–249).

### Gambling – variable ratio scheduling

As stated earlier in this article, a variable ratio schedule yields reinforcement after the emission of an unpredictable number of responses. This schedule typically generates rapid, persistent responding. Slot machines pay off on a variable ratio schedule, and they produce just this sort of persistent lever-pulling behavior in gamblers.
Because the machines are programmed to pay out less money than they take in, the persistent slot-machine user invariably loses in the long run. Slot machines, and thus variable ratio reinforcement, have often been blamed as a factor underlying gambling addiction.

### Praise

The concept of praise as a means of behavioral reinforcement in humans is rooted in B.F. Skinner's model of operant conditioning. Through this lens, praise has been viewed as a means of positive reinforcement, wherein an observed behavior is made more likely to occur by contingently praising said behavior. Hundreds of studies have demonstrated the effectiveness of praise in promoting positive behaviors, notably in the study of teacher and parent use of praise with children to promote improved behavior and academic performance, but also in the study of work performance. Praise has also been demonstrated to reinforce positive behaviors in non-praised adjacent individuals (such as a classmate of the praise recipient) through vicarious reinforcement. Praise may be more or less effective in changing behavior depending on its form, content and delivery. In order for praise to effect positive behavior change, it must be contingent on the positive behavior (i.e., only administered after the targeted behavior is enacted), must specify the particulars of the behavior that is to be reinforced, and must be delivered sincerely and credibly.

Acknowledging the effect of praise as a positive reinforcement strategy, numerous behavioral and cognitive behavioral interventions have incorporated the use of praise in their protocols. The strategic use of praise is recognized as an evidence-based practice in both classroom management and parent training interventions, though praise is often subsumed in intervention research into a larger category of positive reinforcement, which includes strategies such as strategic attention and behavioral rewards.

### Traumatic bonding

Traumatic bonding occurs as the result of ongoing cycles of abuse in which the intermittent reinforcement of reward and punishment creates powerful emotional bonds that are resistant to change (Chrissie Sanderson. Counselling Survivors of Domestic Abuse. Jessica Kingsley Publishers, 15 June 2008, p. 84). Another source states that 'The necessary conditions for traumatic bonding are that one person must dominate the other and that the level of abuse chronically spikes and then subsides. The relationship is characterized by periods of permissive, compassionate, and even affectionate behavior from the dominant person, punctuated by intermittent episodes of intense abuse. To maintain the upper hand, the victimizer manipulates the behavior of the victim and limits the victim's options so as to perpetuate the power imbalance. Any threat to the balance of dominance and submission may be met with an escalating cycle of punishment ranging from seething intimidation to intensely violent outbursts. The victimizer also isolates the victim from other sources of support, which reduces the likelihood of detection and intervention, impairs the victim's ability to receive countervailing self-referent feedback, and strengthens the sense of unilateral dependency ... The traumatic effects of these abusive relationships may include the impairment of the victim's capacity for accurate self-appraisal, leading to a sense of personal inadequacy and a subordinate sense of dependence upon the dominating person.
Victims also may encounter a variety of unpleasant social and legal consequences of their emotional and behavioral affiliation with someone who perpetrated aggressive acts, even if they themselves were the recipients of the aggression.'

### Video games

Most video games are designed around some type of compulsion loop, adding a type of positive reinforcement through a variable ratio schedule to keep the player playing the game, though this can also lead to video game addiction. As part of a trend in the monetization of video games during the 2010s, some games offered "loot boxes" as rewards or as items purchasable with real-world funds; a loot box provides a random selection of in-game items, distributed by rarity. The practice has been tied to the same methods by which slot machines and other gambling devices dole out rewards, as it follows a variable ratio schedule. While the general perception is that loot boxes are a form of gambling, the practice is classified as gambling in only a few countries and is otherwise legal. However, methods of using those items as virtual currency for online gambling, or of trading them for real-world money, have created a skin gambling market that is under legal evaluation.

## Criticisms

The standard definition of behavioral reinforcement has been criticized as circular, since it appears to argue that response strength is increased by reinforcement, and defines reinforcement as something that increases response strength (i.e., response strength is increased by things that increase response strength). However, the correct usage of reinforcement is that something is a reinforcer because of its effect on behavior, and not the other way around. It becomes circular if one says that a particular stimulus strengthens behavior because it is a reinforcer, and does not explain why a stimulus is producing that effect on the behavior. Other definitions have been proposed, such as F.D. Sheffield's "consummatory behavior contingent on a response", but these are not broadly used in psychology.

Increasingly, understanding of the role reinforcers play is moving away from a "strengthening" effect toward a "signalling" effect; that is, the view that reinforcers increase responding because they signal the behaviors that are likely to result in reinforcement. While in most practical applications the effect of any given reinforcer will be the same regardless of whether the reinforcer is signalling or strengthening, this approach helps to explain a number of behavioral phenomena, including patterns of responding on intermittent reinforcement schedules (fixed interval scallops) and the differential outcomes effect.
https://en.wikipedia.org/wiki/Reinforcement
In linear algebra, a generalized eigenvector of an $$ n\times n $$ matrix $$ A $$ is a vector that satisfies criteria more relaxed than those for an (ordinary) eigenvector. Let $$ V $$ be an $$ n $$ -dimensional vector space and let $$ A $$ be the matrix representation of a linear map from $$ V $$ to $$ V $$ with respect to some ordered basis. There may not always exist a full set of $$ n $$ linearly independent eigenvectors of $$ A $$ that form a complete basis for $$ V $$ . That is, the matrix $$ A $$ may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue $$ \lambda_i $$ is greater than its geometric multiplicity (the nullity of the matrix $$ (A-\lambda_i I) $$ , or the dimension of its nullspace). In this case, $$ \lambda_i $$ is called a defective eigenvalue and $$ A $$ is called a defective matrix. A generalized eigenvector $$ x_i $$ corresponding to $$ \lambda_i $$ , together with the matrix $$ (A-\lambda_i I) $$ , generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of $$ V $$ . Using generalized eigenvectors, a set of linearly independent eigenvectors of $$ A $$ can be extended, if necessary, to a complete basis for $$ V $$ . This basis can be used to determine an "almost diagonal matrix" $$ J $$ in Jordan normal form, similar to $$ A $$ , which is useful in computing certain matrix functions of $$ A $$ . The matrix $$ J $$ is also useful in solving the system of linear differential equations $$ \mathbf x' = A \mathbf x, $$ where $$ A $$ need not be diagonalizable. The dimension of the generalized eigenspace corresponding to a given eigenvalue $$ \lambda $$ is the algebraic multiplicity of $$ \lambda $$ .

## Overview and definition

There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector $$ \mathbf u $$ associated with an eigenvalue $$ \lambda $$ of an $$ n $$ × $$ n $$ matrix $$ A $$ is a nonzero vector for which $$ (A - \lambda I) \mathbf u = \mathbf 0 $$ , where $$ I $$ is the $$ n $$ × $$ n $$ identity matrix and $$ \mathbf 0 $$ is the zero vector of length $$ n $$ . That is, $$ \mathbf u $$ is in the kernel of the transformation $$ (A - \lambda I) $$ . If $$ A $$ has $$ n $$ linearly independent eigenvectors, then $$ A $$ is similar to a diagonal matrix $$ D $$ . That is, there exists an invertible matrix $$ M $$ such that $$ A $$ is diagonalizable through the similarity transformation $$ D = M^{-1}AM $$ . The matrix $$ D $$ is called a spectral matrix for $$ A $$ . The matrix $$ M $$ is called a modal matrix for $$ A $$ . Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily. On the other hand, if $$ A $$ does not have $$ n $$ linearly independent eigenvectors associated with it, then $$ A $$ is not diagonalizable.

Definition: A vector $$ \mathbf x_m $$ is a generalized eigenvector of rank m of the matrix $$ A $$ and corresponding to the eigenvalue $$ \lambda $$ if $$ (A - \lambda I)^m \mathbf x_m = \mathbf 0 $$ but $$ (A - \lambda I)^{m-1} \mathbf x_m \ne \mathbf 0. $$ Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every $$ n $$ × $$ n $$ matrix $$ A $$ has $$ n $$ linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix $$ J $$ in Jordan normal form. That is, there exists an invertible matrix $$ M $$ such that $$ J = M^{-1}AM $$ .
The matrix $$ M $$ in this case is called a generalized modal matrix for $$ A $$ . If $$ \lambda $$ is an eigenvalue of algebraic multiplicity $$ \mu $$ , then $$ A $$ will have $$ \mu $$ linearly independent generalized eigenvectors corresponding to $$ \lambda $$ . These results, in turn, provide a straightforward method for computing certain matrix functions of $$ A $$ . Note: For an $$ n \times n $$ matrix $$ A $$ over a field $$ F $$ to be expressed in Jordan normal form, all eigenvalues of $$ A $$ must be in $$ F $$ . That is, the characteristic polynomial $$ f(x) $$ must factor completely into linear factors over $$ F $$ ; this is guaranteed when $$ F $$ is an algebraically closed field, such as the complex numbers. For example, if $$ A $$ has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values. The subspace spanned by all generalized eigenvectors for a given $$ \lambda $$ is the generalized eigenspace for $$ \lambda $$ .

## Examples

Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later.

### Example 1

This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. Suppose $$ A = \begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix}. $$ Then there is only one eigenvalue, $$ \lambda = 1 $$ , and its algebraic multiplicity is $$ m=2 $$ . Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space $$ V $$ is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of $$ A - \lambda I $$ to be $$ p=1 $$ , and thus there are $$ m-p=1 $$ generalized eigenvectors of rank greater than 1. The ordinary eigenvector $$ \mathbf v_1=\begin{pmatrix}1 \\0 \end{pmatrix} $$ is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector $$ \mathbf v_2 $$ by solving $$ (A-\lambda I) \mathbf v_2 = \mathbf v_1. $$ Writing out the values: $$ \left(\begin{pmatrix} 1 & 1\\ 0 & 1 \end{pmatrix} - 1 \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix}\right)\begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}v_{21} \\v_{22} \end{pmatrix} = \begin{pmatrix}1 \\0 \end{pmatrix}. $$ This simplifies to $$ v_{22}= 1. $$ The element $$ v_{21} $$ has no restrictions. The generalized eigenvector of rank 2 is then $$ \mathbf v_2=\begin{pmatrix}a \\1 \end{pmatrix} $$ , where a can have any scalar value. The choice of a = 0 is usually the simplest. Note that $$ (A-\lambda I) \mathbf v_2 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}a \\1 \end{pmatrix} = \begin{pmatrix}1 \\0 \end{pmatrix} = \mathbf v_1, $$ so that $$ \mathbf v_2 $$ is a generalized eigenvector, because $$ (A-\lambda I)^2 \mathbf v_2 = (A-\lambda I) [(A-\lambda I)\mathbf v_2] =(A-\lambda I) \mathbf v_1 = \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix} \begin{pmatrix}1 \\0 \end{pmatrix} = \begin{pmatrix}0 \\0 \end{pmatrix} = \mathbf 0, $$ so that $$ \mathbf v_1 $$ is an ordinary eigenvector, and that $$ \mathbf v_1 $$ and $$ \mathbf v_2 $$ are linearly independent and hence constitute a basis for the vector space $$ V $$ .

### Example 2

This example is more complex than Example 1.
Unfortunately, it is a little difficult to construct an interesting example of low order. The matrix $$ A = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 3 & 1 & 0 & 0 & 0 \\ 6 & 3 & 2 & 0 & 0 \\ 10 & 6 & 3 & 2 & 0 \\ 15 & 10 & 6 & 3 & 2 \end{pmatrix} $$ has eigenvalues $$ \lambda_1 = 1 $$ and $$ \lambda_2 = 2 $$ with algebraic multiplicities $$ \mu_1 = 2 $$ and $$ \mu_2 = 3 $$ , but geometric multiplicities $$ \gamma_1 = 1 $$ and $$ \gamma_2 = 1 $$ . The generalized eigenspaces of $$ A $$ are calculated below. $$ \mathbf x_1 $$ is the ordinary eigenvector associated with $$ \lambda_1 $$ . $$ \mathbf x_2 $$ is a generalized eigenvector associated with $$ \lambda_1 $$ . $$ \mathbf y_1 $$ is the ordinary eigenvector associated with $$ \lambda_2 $$ . $$ \mathbf y_2 $$ and $$ \mathbf y_3 $$ are generalized eigenvectors associated with $$ \lambda_2 $$ . $$ (A-1 I) \mathbf x_1 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 \\ 6 & 3 & 1 & 0 & 0 \\ 10 & 6 & 3 & 1 & 0 \\ 15 & 10 & 6 & 3 & 1 \end{pmatrix}\begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \mathbf 0 , $$ $$ (A - 1 I) \mathbf x_2 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 3 & 0 & 0 & 0 & 0 \\ 6 & 3 & 1 & 0 & 0 \\ 10 & 6 & 3 & 1 & 0 \\ 15 & 10 & 6 & 3 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{pmatrix} = \begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix} = \mathbf x_1 , $$ $$ (A - 2 I) \mathbf y_1 = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 \\ 3 & -1 & 0 & 0 & 0 \\ 6 & 3 & 0 & 0 & 0 \\ 10 & 6 & 3 & 0 & 0 \\ 15 & 10 & 6 & 3 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} = \mathbf 0 , $$ $$ (A - 2 I) \mathbf y_2 = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 \\ 3 & -1 & 0 & 0 & 0 \\ 6 & 3 & 0 & 0 & 0 \\ 10 & 6 & 3 & 0 & 0 \\ 15 & 10 & 6 & 3 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix} = \mathbf y_1 , $$ $$ (A - 2 I) \mathbf y_3 = \begin{pmatrix} -1 & 0 & 0 & 0 & 0 \\ 3 & -1 & 0 & 0 & 0 \\ 6 & 3 & 0 & 0 & 0 \\ 10 & 6 & 3 & 0 & 0 \\ 15 & 10 & 6 & 3 & 0 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix} = \mathbf y_2 . $$ This results in a basis for each of the generalized eigenspaces of $$ A $$ . Together the two chains of generalized eigenvectors span the space of all 5-dimensional column vectors. $$ \left\{ \mathbf x_1, \mathbf x_2 \right\} = \left\{ \begin{pmatrix} 0 \\ 3 \\ -9 \\ 9 \\ -3 \end{pmatrix}, \begin{pmatrix} 1 \\ -15 \\ 30 \\ -1 \\ -45 \end{pmatrix} \right\}, \left\{ \mathbf y_1, \mathbf y_2, \mathbf y_3 \right\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 9 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 0 \\ 3 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ -2 \\ 0 \end{pmatrix} \right\}. $$ An "almost diagonal" matrix $$ J $$ in Jordan normal form, similar to $$ A $$ is obtained as follows: $$ M = \begin{pmatrix} \mathbf x_1 & \mathbf x_2 & \mathbf y_1 & \mathbf y_2 & \mathbf y_3 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 &0& 0 \\ 3 & -15 & 0 &0& 0 \\ -9 & 30 & 0 &0& 1 \\ 9 & -1 & 0 &3& -2 \\ -3 & -45 & 9 &0& 0 \end{pmatrix}, $$ $$ J = \begin{pmatrix} 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 2 & 1 & 0 \\ 0 & 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 0 & 2 \end{pmatrix}, $$ where $$ M $$ is a generalized modal matrix for $$ A $$ , the columns of $$ M $$ are a canonical basis for $$ A $$ , and $$ AM = MJ $$ . 
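Computations like those in Examples 1 and 2 are easy to check with a computer algebra system. The short sketch below uses SymPy (an assumed dependency; any system with a Jordan form routine would serve) to recover a Jordan form of the Example 2 matrix and to verify the chain relations for $$ \mathbf x_1 $$ and $$ \mathbf x_2 $$ given above.

```python
import sympy as sp

A = sp.Matrix([
    [1,  0,  0, 0, 0],
    [3,  1,  0, 0, 0],
    [6,  3,  2, 0, 0],
    [10, 6,  3, 2, 0],
    [15, 10, 6, 3, 2],
])

# SymPy returns P and J with A == P * J * P**(-1); the columns of P form a
# canonical basis of generalized eigenvectors, though not necessarily the
# same vectors chosen in the text, since Jordan chains are not unique.
P, J = A.jordan_form()
print(J)  # one 2x2 block for eigenvalue 1 and one 3x3 block for eigenvalue 2
assert sp.simplify(A - P * J * P.inv()) == sp.zeros(5)

# Verify the specific chain from Example 2: (A - I) x2 = x1 and (A - I) x1 = 0.
x1 = sp.Matrix([0, 3, -9, 9, -3])
x2 = sp.Matrix([1, -15, 30, -1, -45])
I5 = sp.eye(5)
assert (A - I5) * x2 == x1
assert (A - I5) * x1 == sp.zeros(5, 1)
```

Because generalized eigenvectors are not unique, the matrix P computed this way may differ from the generalized modal matrix built in the text, even though both satisfy $$ AM = MJ $$ .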
## Jordan chains

Definition: Let $$ \mathbf x_m $$ be a generalized eigenvector of rank m corresponding to the matrix $$ A $$ and the eigenvalue $$ \lambda $$ . The chain generated by $$ \mathbf x_m $$ is the set of vectors $$ \left\{ \mathbf x_m, \mathbf x_{m-1}, \dots , \mathbf x_1 \right\} $$ given by $$ \mathbf x_{m-1} = (A - \lambda I) \mathbf x_m, \quad \mathbf x_{m-2} = (A - \lambda I)^2 \mathbf x_m = (A - \lambda I) \mathbf x_{m-1}, \quad \ldots, \quad \mathbf x_1 = (A - \lambda I)^{m-1} \mathbf x_m = (A - \lambda I) \mathbf x_2, $$ where $$ \mathbf x_1 $$ is always an ordinary eigenvector with a given eigenvalue $$ \lambda $$ . Thus, in general, $$ \mathbf x_j = (A - \lambda I)^{m-j} \mathbf x_m \qquad (j = 1, 2, \ldots , m-1). $$ The vector $$ \mathbf x_j $$ , given by this formula, is a generalized eigenvector of rank j corresponding to the eigenvalue $$ \lambda $$ . A chain is a linearly independent set of vectors.

## Canonical basis

Definition: A set of n linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains. Thus, once we have determined that a generalized eigenvector of rank m is in a canonical basis, it follows that the m − 1 vectors $$ \mathbf x_{m-1}, \mathbf x_{m-2}, \ldots , \mathbf x_1 $$ that are in the Jordan chain generated by $$ \mathbf x_m $$ are also in the canonical basis. Let $$ \lambda_i $$ be an eigenvalue of $$ A $$ of algebraic multiplicity $$ \mu_i $$ . First, find the ranks (matrix ranks) of the matrices $$ (A - \lambda_i I), (A - \lambda_i I)^2, \ldots , (A - \lambda_i I)^{m_i} $$ . The integer $$ m_i $$ is determined to be the first integer for which $$ (A - \lambda_i I)^{m_i} $$ has rank $$ n - \mu_i $$ (n being the number of rows or columns of $$ A $$ , that is, $$ A $$ is n × n). Now define $$ \rho_k = \operatorname{rank}(A - \lambda_i I)^{k-1} - \operatorname{rank}(A - \lambda_i I)^k \qquad (k = 1, 2, \ldots , m_i). $$ The variable $$ \rho_k $$ designates the number of linearly independent generalized eigenvectors of rank k corresponding to the eigenvalue $$ \lambda_i $$ that will appear in a canonical basis for $$ A $$ . Note that $$ \operatorname{rank}(A - \lambda_i I)^0 = \operatorname{rank}(I) = n $$ .

## Computation of generalized eigenvectors

In the preceding sections we have seen techniques for obtaining the $$ n $$ linearly independent generalized eigenvectors of a canonical basis for the vector space $$ V $$ associated with an $$ n \times n $$ matrix $$ A $$ . These techniques can be combined into a procedure:

- Solve the characteristic equation of $$ A $$ for the eigenvalues $$ \lambda_i $$ and their algebraic multiplicities $$ \mu_i $$ .
- For each $$ \lambda_i $$ :
  - Determine $$ n - \mu_i $$ ;
  - Determine $$ m_i $$ ;
  - Determine $$ \rho_k $$ for $$ k = 1, \ldots , m_i $$ ;
  - Determine each Jordan chain for $$ \lambda_i $$ .

### Example 3

The matrix $$ A = \begin{pmatrix} 5 & 1 & -2 & 4 \\ 0 & 5 & 2 & 2 \\ 0 & 0 & 5 & 3 \\ 0 & 0 & 0 & 4 \end{pmatrix} $$ has an eigenvalue $$ \lambda_1 = 5 $$ of algebraic multiplicity $$ \mu_1 = 3 $$ and an eigenvalue $$ \lambda_2 = 4 $$ of algebraic multiplicity $$ \mu_2 = 1 $$ . We also have $$ n=4 $$ . For $$ \lambda_1 $$ we have $$ n - \mu_1 = 4 - 3 = 1 $$ . $$ (A - 5I) = \begin{pmatrix} 0 & 1 & -2 & 4 \\ 0 & 0 & 2 & 2 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \qquad \operatorname{rank}(A - 5I) = 3. $$ $$ (A - 5I)^2 = \begin{pmatrix} 0 & 0 & 2 & -8 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad \operatorname{rank}(A - 5I)^2 = 2. $$ $$ (A - 5I)^3 = \begin{pmatrix} 0 & 0 & 0 & 14 \\ 0 & 0 & 0 & -4 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & -1 \end{pmatrix}, \qquad \operatorname{rank}(A - 5I)^3 = 1. $$ The first integer $$ m_1 $$ for which $$ (A - 5I)^{m_1} $$ has rank $$ n - \mu_1 = 1 $$ is $$ m_1 = 3 $$ .
We now define $$ \rho_3 = \operatorname{rank}(A - 5I)^2 - \operatorname{rank}(A - 5I)^3 = 2 - 1 = 1 , $$ $$ \rho_2 = \operatorname{rank}(A - 5I)^1 - \operatorname{rank}(A - 5I)^2 = 3 - 2 = 1 , $$ $$ \rho_1 = \operatorname{rank}(A - 5I)^0 - \operatorname{rank}(A - 5I)^1 = 4 - 3 = 1 . $$ Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. Since $$ \lambda_1 $$ corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector $$ \mathbf x_3 $$ of rank 3 corresponding to $$ \lambda_1 $$ such that $$ (A - 5I)^3 \mathbf x_3 = \mathbf 0 $$ but $$ (A - 5I)^2 \mathbf x_3 \ne \mathbf 0. $$ These two conditions are linear systems that can be solved for $$ \mathbf x_3 $$ . Let $$ \mathbf x_3 = \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix}. $$ Then $$ (A - 5I)^3 \mathbf x_3 = \begin{pmatrix} 0 & 0 & 0 & 14 \\ 0 & 0 & 0 & -4 \\ 0 & 0 & 0 & 3 \\ 0 & 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix} = \begin{pmatrix} 14 x_{34} \\ -4 x_{34} \\ 3 x_{34} \\ - x_{34} \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} $$ and $$ (A - 5I)^2 \mathbf x_3 = \begin{pmatrix} 0 & 0 & 2 & -8 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & -3 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_{31} \\ x_{32} \\ x_{33} \\ x_{34} \end{pmatrix} = \begin{pmatrix} 2 x_{33} - 8 x_{34} \\ 4 x_{34} \\ -3 x_{34} \\ x_{34} \end{pmatrix} \ne \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \end{pmatrix}. $$ Thus, in order to satisfy both conditions, we must have $$ x_{34} = 0 $$ and $$ x_{33} \ne 0 $$ . No restrictions are placed on $$ x_{31} $$ and $$ x_{32} $$ . By choosing $$ x_{31} = x_{32} = x_{34} = 0, x_{33} = 1 $$ , we obtain $$ \mathbf x_3 = \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix} $$ as a generalized eigenvector of rank 3 corresponding to $$ \lambda_1 = 5 $$ . Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of $$ x_{31} $$ , $$ x_{32} $$ and $$ x_{33} $$ , with $$ x_{33} \ne 0 $$ . Our first choice, however, is the simplest. Now, using the Jordan chain relations, we obtain $$ \mathbf x_2 $$ and $$ \mathbf x_1 $$ as generalized eigenvectors of rank 2 and 1, respectively, where $$ \mathbf x_2 = (A - 5I) \mathbf x_3 = \begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}, $$ and $$ \mathbf x_1 = (A - 5I) \mathbf x_2 = \begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}. $$ The simple eigenvalue $$ \lambda_2 = 4 $$ can be dealt with using standard techniques and has an ordinary eigenvector $$ \mathbf y_1 = \begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix}. $$ A canonical basis for $$ A $$ is $$ \left\{ \mathbf x_3, \mathbf x_2, \mathbf x_1, \mathbf y_1 \right\} = \left\{ \begin{pmatrix} 0 \\ 0 \\ 1 \\ 0 \end{pmatrix}, \begin{pmatrix} -2 \\ 2 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 2 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} -14 \\ 4 \\ -3 \\ 1 \end{pmatrix} \right\}. $$ $$ \mathbf x_1, \mathbf x_2 $$ and $$ \mathbf x_3 $$ are generalized eigenvectors associated with $$ \lambda_1 $$ , while $$ \mathbf y_1 $$ is the ordinary eigenvector associated with $$ \lambda_2 $$ . This is a fairly simple example. In general, the numbers $$ \rho_k $$ of linearly independent generalized eigenvectors of rank $$ k $$ will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue.

## Generalized modal matrix

Let $$ A $$ be an n × n matrix.
A generalized modal matrix $$ M $$ for $$ A $$ is an n × n matrix whose columns, considered as vectors, form a canonical basis for $$ A $$ and appear in $$ M $$ according to the following rules:

- All Jordan chains consisting of one vector (that is, one vector in length) appear in the first columns of $$ M $$ .
- All vectors of one chain appear together in adjacent columns of $$ M $$ .
- Each chain appears in $$ M $$ in order of increasing rank (that is, the generalized eigenvector of rank 1 appears before the generalized eigenvector of rank 2 of the same chain, which appears before the generalized eigenvector of rank 3 of the same chain, etc.).

## Jordan normal form

(Figure: an example of a matrix in Jordan normal form; the blocks along the diagonal are called Jordan blocks.)

Let $$ V $$ be an n-dimensional vector space; let $$ \phi $$ be a linear map in $$ L(V, V) $$ , the set of all linear maps from $$ V $$ into itself; and let $$ A $$ be the matrix representation of $$ \phi $$ with respect to some ordered basis. It can be shown that if the characteristic polynomial $$ f(\lambda) $$ of $$ A $$ factors into linear factors, so that $$ f(\lambda) $$ has the form $$ f(\lambda) = \pm (\lambda - \lambda_1)^{\mu_1}(\lambda - \lambda_2)^{\mu_2} \cdots (\lambda - \lambda_r)^{\mu_r} , $$ where $$ \lambda_1, \lambda_2, \ldots , \lambda_r $$ are the distinct eigenvalues of $$ A $$ , then each $$ \mu_i $$ is the algebraic multiplicity of its corresponding eigenvalue $$ \lambda_i $$ and $$ A $$ is similar to a matrix $$ J $$ in Jordan normal form, where each $$ \lambda_i $$ appears $$ \mu_i $$ consecutive times on the diagonal, and each entry directly above a diagonal entry (that is, on the superdiagonal) is either 0 or 1: the superdiagonal entries within each Jordan block are 1, and the superdiagonal entry just above the first diagonal entry of each block is 0 (the first block has no such entry). All other entries (that is, off the diagonal and superdiagonal) are 0. (No ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix $$ J $$ is as close as one can come to a diagonalization of $$ A $$ . If $$ A $$ is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks have the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal. Every n × n matrix $$ A $$ is similar to a matrix $$ J $$ in Jordan normal form, obtained through the similarity transformation $$ J = M^{-1}AM $$ , where $$ M $$ is a generalized modal matrix for $$ A $$ . (See the Note above.)

### Example 4

Find a matrix in Jordan normal form that is similar to $$ A = \begin{pmatrix} 0 & 4 & 2 \\ -3 & 8 & 3 \\ 4 & -8 & -2 \end{pmatrix}. $$ Solution: The characteristic equation of $$ A $$ is $$ (\lambda - 2)^3 = 0 $$ ; hence, $$ \lambda = 2 $$ is an eigenvalue of algebraic multiplicity three. Following the procedures of the previous sections, we find that $$ \operatorname{rank}(A - 2I) = 1 $$ and $$ \operatorname{rank}(A - 2I)^2 = 0 = n - \mu . $$ Thus, $$ \rho_2 = 1 $$ and $$ \rho_1 = 2 $$ , which implies that a canonical basis for $$ A $$ will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors $$ \left\{ \mathbf x_2, \mathbf x_1 \right\} $$ and one chain of one vector $$ \left\{ \mathbf y_1 \right\} $$ .
Designating $$ M = \begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 \end{pmatrix} $$ , we find that $$ M = \begin{pmatrix} 2 & 2 & 0 \\ 1 & 3 & 0 \\ 0 & -4 & 1 \end{pmatrix}, $$ and $$ J = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 2 \end{pmatrix}, $$ where $$ M $$ is a generalized modal matrix for $$ A $$ , the columns of $$ M $$ are a canonical basis for $$ A $$ , and $$ AM = MJ $$ . Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both $$ M $$ and $$ J $$ may be interchanged, it follows that both $$ M $$ and $$ J $$ are not unique. ### Example 5 In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix $$ A $$ . A generalized modal matrix for $$ A $$ is $$ M = \begin{pmatrix} \mathbf y_1 & \mathbf x_1 & \mathbf x_2 & \mathbf x_3 \end{pmatrix} = \begin{pmatrix} -14 & 2 & -2 & 0 \\ 4 & 0 & 2 & 0 \\ -3 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \end{pmatrix}. $$ A matrix in Jordan normal form, similar to $$ A $$ is $$ J = \begin{pmatrix} 4 & 0 & 0 & 0 \\ 0 & 5 & 1 & 0 \\ 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 5 \end{pmatrix}, $$ so that $$ AM = MJ $$ . ## Applications ### Matrix functions Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an n × n matrix $$ A $$ . If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If $$ A $$ is diagonalizable, that is $$ D = M^{-1}AM , $$ with $$ D = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix}, $$ then $$ D^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n^k \end{pmatrix} $$ and the evaluation of the Maclaurin series for functions of $$ A $$ is greatly simplified. For example, to obtain any power k of $$ A $$ , we need only compute $$ D^k $$ , premultiply $$ D^k $$ by $$ M $$ , and postmultiply the result by $$ M^{-1} $$ . Using generalized eigenvectors, we can obtain the Jordan normal form for $$ A $$ and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.) ### Differential equations Consider the problem of solving the system of linear ordinary differential equations where $$ \mathbf x = \begin{pmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad \mathbf x' = \begin{pmatrix} x_1'(t) \\ x_2'(t) \\ \vdots \\ x_n'(t) \end{pmatrix}, $$ and $$ A = (a_{ij}) . $$ If the matrix $$ A $$ is a diagonal matrix so that $$ a_{ij} = 0 $$ for $$ i \ne j $$ , then the system () reduces to a system of n equations which take the form In this case, the general solution is given by $$ x_1 = k_1 e^{a_{11}t} $$ $$ x_2 = k_2 e^{a_{22}t} $$ $$ \vdots $$ $$ x_n = k_n e^{a_{nn}t} . $$ In the general case, we try to diagonalize $$ A $$ and reduce the system () to a system like () as follows. If $$ A $$ is diagonalizable, we have $$ D = M^{-1}AM $$ , where $$ M $$ is a modal matrix for $$ A $$ . 
Substituting $$ A = MDM^{-1} $$ , equation () takes the form $$ M^{-1} \mathbf x' = D(M^{-1} \mathbf x) $$ , or where The solution of () is $$ y_1 = k_1 e^{\lambda_1 t} $$ $$ y_2 = k_2 e^{\lambda_2 t} $$ $$ \vdots $$ $$ y_n = k_n e^{\lambda_n t} . $$ The solution $$ \mathbf x $$ of () is then obtained using the relation (). On the other hand, if $$ A $$ is not diagonalizable, we choose $$ M $$ to be a generalized modal matrix for $$ A $$ , such that $$ J = M^{-1}AM $$ is the Jordan normal form of $$ A $$ . The system $$ \mathbf y' = J \mathbf y $$ has the form where the $$ \lambda_i $$ are the eigenvalues from the main diagonal of $$ J $$ and the $$ \epsilon_i $$ are the ones and zeros from the superdiagonal of $$ J $$ . The system () is often more easily solved than (). We may solve the last equation in () for $$ y_n $$ , obtaining $$ y_n = k_n e^{\lambda_n t} $$ . We then substitute this solution for $$ y_n $$ into the next to last equation in () and solve for $$ y_{n-1} $$ . Continuing this procedure, we work through () from the last equation to the first, solving the entire system for $$ \mathbf y $$ . The solution $$ \mathbf x $$ is then obtained using the relation (). Lemma: Given the following chain of generalized eigenvectors of length $$ r, $$ $$ X_1 = v_1e^{\lambda t} $$ $$ X_2 = (tv_1+v_2)e^{\lambda t} $$ $$ X_3 = \left(\frac{t^2}{2}v_1+tv_2+v_3\right)e^{\lambda t} $$ $$ \vdots $$ $$ X_r = \left(\frac{t^{r-1}}{(r-1)!}v_1+...+\frac{t^2}{2}v_{r-2}+tv_{r-1}+v_r\right)e^{\lambda t} $$ , these functions solve the system of equations, $$ X' = AX. $$ Proof: Define $$ v_0=0 $$ $$ X_j(t)= # e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!} v_i. $$ Then, as $$ {t^{0}}=1 $$ and $$ 1'=0 $$ , $$ X'_j(t)=e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_i+e^{\lambda t}\lambda\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}v_i $$ . On the other hand we have, $$ v_0=0 $$ and so $$ AX_j(t)=e^{\lambda t}\sum_{i = 1}^j\frac{t^{j-i}}{(j-i)!}Av_i $$ $$ e^{\lambda t}\sum_{i 1}^j\frac{t^{j-i}}{(j-i)!}(v_{i-1}+\lambda v_i) $$ $$ # e^{\lambda t}\sum_{i = 2}^j\frac{t^{j-i}}{(j-i)!}v_{i-1}+e^{\lambda t}\lambda\sum_{i 1}^j\frac{t^{j-i}}{(j-i)!}v_i $$ $$ # e^{\lambda t}\sum_{i = 1}^{j-1}\frac{t^{j-i-1}}{(j-i-1)!}v_{i}+e^{\lambda t}\lambda\sum_{i 1}^j\frac{t^{j-i}}{(j-i)!}v_i $$ $$ =X'_j(t) $$ as required. ## Notes ## References - - - - - - - - - - - - Category:Linear algebra Category:Matrix theory
https://en.wikipedia.org/wiki/Generalized_eigenvector
In computer programming, an inline assembler is a feature of some compilers that allows low-level code written in assembly language to be embedded within a program, among code that otherwise has been compiled from a higher-level language such as C or Ada. ## Motivation and alternatives The embedding of assembly language code is usually done for one of these reasons: - Optimization: Programmers can use assembly language code to implement the most performance-sensitive parts of their program's algorithms, code that is apt to be more efficient than what might otherwise be generated by the compiler. - Access to processor-specific instructions: Most processors offer special instructions, such as Compare and Swap and Test and Set instructions which may be used to construct semaphores or other synchronization and locking primitives. Nearly every modern processor has these or similar instructions, as they are necessary to implement multitasking. ## Examples of specialized instructions are found in the SPARC VIS, Intel MMX and SSE, and Motorola Altivec instruction sets. - Access to special calling conventions not yet supported by the compiler. - System calls and interrupts: High-level languages rarely have a direct facility to make arbitrary system calls, so assembly code is used. Direct interrupts are even more rarely supplied. - To emit special directives for the linker or assembler, for example to change sectioning, macros, or to make symbol aliases. On the other hand, inline assembler poses a direct problem for the compiler itself as it complicates the analysis of what is done to each variable, a key part of register allocation. This means the performance might actually decrease. Inline assembler also complicates future porting and maintenance of a program. Alternative facilities are often provided as a way to simplify the work for both the compiler and the programmer. Intrinsic functions for special instructions are provided by most compilers and C-function wrappers for arbitrary system calls are available on every Unix platform. ## Syntax ### In language standards The ISO C++ standard and ISO C standards (annex J) specify a conditionally supported syntax for inline assembler: An asm declaration has the form asm-declaration: ( string-literal ) ; The asm declaration is conditionally-supported; its meaning is implementation-defined.C++, [dcl.asm] This definition, however, is rarely used in actual C, as it is simultaneously too liberal (in the interpretation) and too restricted (in the use of one string literal only). ### In actual compilers In practical use, inline assembly operating on values is rarely standalone as free-floating code. Since the programmer cannot predict what register a variable is assigned to, compilers typically provide a way to substitute them in as an extension. There are, in general, two types of inline assembly supported by C/C++ compilers: - (or ) in GCC. GCC uses a direct extension of the ISO rules: assembly code template is written in strings, with inputs, outputs, and clobbered registers specified after the strings in colons. C variables are used directly while register names are quoted as string literals. - in Microsoft Visual C++ (MSVC), Borland/Embarcadero C compiler, and descendants. This syntax is not based on ISO rules at all; programmers simply write ASM inside a block without needing to conform to C syntax. Variables are available as if they are registers and some C expressions are allowed. ARM Compiler used to have a similar facility. 
The two families of extensions represent different understandings of division of labor in processing inline assembly. The GCC form preserves the overall syntax of the language and compartmentizes what the compiler needs to know: what is needed and what is changed. It does not explicitly require the compiler to understand instruction names, as the compiler is only needed to substitute its register assignments, plus a few operations, to handle the input requirements. However, the user is prone to specifying clobbered registers incorrectly. The MSVC form of an embedded domain-specific language provides ease of writing, but it requires the compiler itself to know about opcode names and their clobbering properties, demanding extra attention in maintenance and porting. It is still possible to check GCC-style assembly for clobber mistakes with knowledge of the instruction set. GNAT (Ada language frontend of the GCC suite), and LLVM uses the GCC syntax. The D programming language uses a DSL similar to the MSVC extension officially for x86_64, but the LLVM-based LDC also provides the GCC-style syntax on every architecture. MSVC only supports inline assembler on 32-bit x86. The Rust language has since migrated to a syntax abstracting away inline assembly options further than the LLVM (GCC-style) version. It provides enough information to allow transforming the block into an externally-assembled function if the backend could not handle embedded assembly. Examples ### A system call in GCC Calling an operating system directly is generally not possible under a system using protected memory. The OS runs at a more privileged level (kernel mode) than the user (user mode); a (software) interrupt is used to make requests to the operating system. This is rarely a feature in a higher-level language, and so wrapper functions for system calls are written using inline assembler. The following C code example shows an x86 system call wrapper in AT&T assembler syntax, using the GNU Assembler. Such calls are normally written with the aid of macros; the full code is included for clarity. In this particular case, the wrapper performs a system call of a number given by the caller with three operands, returning the result. To recap, GCC supports both basic and extended assembly. The former simply passes text verbatim to the assembler, while the latter performs some substitutions for register locations. ```c extern int errno; int syscall3(int num, int arg1, int arg2, int arg3) { int res; __asm__ ( "int $0x80" /* make the request to the OS */ : "=a" (res), /* return result in eax ("a") */ "+b" (arg1), /* pass arg1 in ebx ("b") [as a "+" output because the syscall may change it] */ "+c" (arg2), /* pass arg2 in ecx ("c") [ditto] */ "+d" (arg3) /* pass arg3 in edx ("d") [ditto] */ : "a" (num) /* pass system call number in eax ("a") */ : "memory", "cc", /* announce to the compiler that the memory and condition codes have been modified */ "esi", "edi", "ebp"); /* these registers are clobbered [changed by the syscall] too */ /* The operating system will return a negative value on error; - wrappers return -1 on error and set the errno global variable */ if (-125 <= res && res < 0) { errno = -res; res = -1; } return res; } ``` ### Processor-specific instruction in D This example of inline assembly from the D programming language shows code that computes the tangent of x using the x86's FPU (x87) instructions. 
```d // Compute the tangent of x real tan(real x) { asm { fld x[EBP] ; // load x fxam ; // test for oddball values fstsw AX ; sahf ; jc trigerr ; // C0 = 1: x is NAN, infinity, or empty // 387's can handle denormals SC18: fptan ; fstp ST(0) ; // dump X, which is always 1 fstsw AX ; sahf ; // if (!(fp_status & 0x20)) goto Lret jnp Lret ; // C2 = 1: x is out of range, do argument reduction fldpi ; // load pi fxch ; SC17: fprem1 ; // reminder (partial) fstsw AX ; sahf ; jp SC17 ; // C2 = 1: partial reminder, need to loop fstp ST(1) ; // remove pi from stack jmp SC18 ; } trigerr: return real.nan; Lret: // No need to manually return anything as the value is already on FP stack ; } ``` The followed by conditional jump idiom is used to access the x87 FPU status word bits C0 and C2. stores the status in a general-purpose register; sahf sets the FLAGS register to the higher 8 bits of the register; and the jump is used to judge on whatever flag bit that happens to correspond to the FPU status bit. ## References ## External links - GCC-Inline-Assembly-HOWTO - Clang Inline assembly - GNAT Inline Assembler - GCC Inline Assembler Reference - Compiler Explorer Category:Assembly languages Category:Articles with example C code Category:Articles with example D code
https://en.wikipedia.org/wiki/Inline_assembler
The atomic nucleus is the small, dense region consisting of protons and neutrons at the center of an atom, discovered in 1911 by Ernest Rutherford at the University of Manchester based on the 1909 Geiger–Marsden gold foil experiment. After the discovery of the neutron in 1932, models for a nucleus composed of protons and neutrons were quickly developed by Dmitri Ivanenko and Werner Heisenberg. An atom is composed of a positively charged nucleus, with a cloud of negatively charged electrons surrounding it, bound together by electrostatic force. Almost all of the mass of an atom is located in the nucleus, with a very small contribution from the electron cloud. Protons and neutrons are bound together to form a nucleus by the nuclear force. The diameter of the nucleus is in the range of () for hydrogen (the diameter of a single proton) to about for uranium. These dimensions are much smaller than the diameter of the atom itself (nucleus + electron cloud), by a factor of about 26,634 (uranium atomic radius is about ()) to about 60,250 (hydrogen atomic radius is about ). The branch of physics involved with the study and understanding of the atomic nucleus, including its composition and the forces that bind it together, is called nuclear physics. ## History The nucleus was discovered in 1911, as a result of Ernest Rutherford's efforts to test Thomson's "plum pudding model" of the atom. The electron had already been discovered by J. J. Thomson. Knowing that atoms are electrically neutral, J. J. Thomson postulated that there must be a positive charge as well. In his plum pudding model, Thomson suggested that an atom consisted of negative electrons randomly scattered within a sphere of positive charge. Ernest Rutherford later devised an experiment with his research partner Hans Geiger and with help of Ernest Marsden, that involved the deflection of alpha particles (helium nuclei) directed at a thin sheet of metal foil. He reasoned that if J. J. Thomson's model were correct, the positively charged alpha particles would easily pass through the foil with very little deviation in their paths, as the foil should act as electrically neutral if the negative and positive charges are so intimately mixed as to make it appear neutral. To his surprise, many of the particles were deflected at very large angles. Because the mass of an alpha particle is about 8000 times that of an electron, it became apparent that a very strong force must be present if it could deflect the massive and fast moving alpha particles. He realized that the plum pudding model could not be accurate and that the deflections of the alpha particles could only be explained if the positive and negative charges were separated from each other and that the mass of the atom was a concentrated point of positive charge. This justified the idea of a nuclear atom with a dense center of positive charge and mass. ### Etymology The term nucleus is from the Latin word , a diminutive of ('nut'), meaning 'the kernel' (i.e., the 'small nut') inside a watery type of fruit (like a peach). In 1844, Michael Faraday used the term to refer to the "central point of an atom". The modern atomic meaning was proposed by Ernest Rutherford in 1912. The adoption of the term "nucleus" to atomic theory, however, was not immediate. In 1916, for example, Gilbert N. Lewis stated, in his famous article The Atom and the Molecule, that "the atom is composed of the kernel and an outer atom or shell." Similarly, the term kern meaning kernel is used for nucleus in German and Dutch. 
## Principles The nucleus of an atom consists of neutrons and protons, which in turn are the manifestation of more elementary particles, called quarks, that are held in association by the nuclear strong force in certain stable combinations of hadrons, called baryons. The nuclear strong force extends far enough from each baryon so as to bind the neutrons and protons together against the repulsive electrical force between the positively charged protons. The nuclear strong force has a very short range, and essentially drops to zero just beyond the edge of the nucleus. The collective action of the positively charged nucleus is to hold the electrically negative charged electrons in their orbits about the nucleus. The collection of negatively charged electrons orbiting the nucleus display an affinity for certain configurations and numbers of electrons that make their orbits stable. Which chemical element an atom represents is determined by the number of protons in the nucleus; the neutral atom will have an equal number of electrons orbiting that nucleus. Individual chemical elements can create more stable electron configurations by combining to share their electrons. It is that sharing of electrons to create stable electronic orbits about the nuclei that appears to us as the chemistry of our macro world. Protons define the entire charge of a nucleus, and hence its chemical identity. Neutrons are electrically neutral, but contribute to the mass of a nucleus to nearly the same extent as the protons. Neutrons can explain the phenomenon of isotopes (same atomic number with different atomic mass). The main role of neutrons is to reduce electrostatic repulsion inside the nucleus. ## Composition and shape Protons and neutrons are fermions, with different values of the strong isospin quantum number, so two protons and two neutrons can share the same space wave function since they are not identical quantum entities. They are sometimes viewed as two different quantum states of the same particle, the nucleon. Two fermions, such as two protons, or two neutrons, or a proton + neutron (the deuteron) can exhibit bosonic behavior when they become loosely bound in pairs, which have integer spin. In the rare case of a hypernucleus, a third baryon called a hyperon, containing one or more strange quarks and/or other unusual quark(s), can also share the wave function. However, this type of nucleus is extremely unstable and not found on Earth except in high-energy physics experiments. The neutron has a positively charged core of radius ≈ 0.3 fm surrounded by a compensating negative charge of radius between 0.3 fm and 2 fm. The proton has an approximately exponentially decaying positive charge distribution with a mean square radius of about 0.8 fm. The shape of the atomic nucleus can be spherical, rugby ball-shaped (prolate deformation), discus-shaped (oblate deformation), triaxial (a combination of oblate and prolate deformation) or pear-shaped. ## Forces Nuclei are bound together by the residual strong force (nuclear force). The residual strong force is a minor residuum of the strong interaction which binds quarks together to form protons and neutrons. 
This force is much weaker between neutrons and protons because it is mostly neutralized within them, in the same way that electromagnetic forces between neutral atoms (such as van der Waals forces that act between two inert gas atoms) are much weaker than the electromagnetic forces that hold the parts of the atoms together internally (for example, the forces that hold the electrons in an inert gas atom bound to its nucleus). The nuclear force is highly attractive at the distance of typical nucleon separation, and this overwhelms the repulsion between protons due to the electromagnetic force, thus allowing nuclei to exist. However, the residual strong force has a limited range because it decays quickly with distance (see Yukawa potential); thus only nuclei smaller than a certain size can be completely stable. The largest known completely stable nucleus (i.e. stable to alpha, beta, and gamma decay) is lead-208 which contains a total of 208 nucleons (126 neutrons and 82 protons). Nuclei larger than this maximum are unstable and tend to be increasingly short-lived with larger numbers of nucleons. However, bismuth-209 is also stable to beta decay and has the longest half-life to alpha decay of any known isotope, estimated at a billion times longer than the age of the universe. The residual strong force is effective over a very short range (usually only a few femtometres (fm); roughly one or two nucleon diameters) and causes an attraction between any pair of nucleons. For example, between a proton and a neutron to form a deuteron [NP], and also between protons and protons, and neutrons and neutrons. ## Halo nuclei and nuclear force range limits The effective absolute limit of the range of the nuclear force (also known as residual strong force) is represented by halo nuclei such as lithium-11 or boron-14, in which dineutrons, or other collections of neutrons, orbit at distances of about (roughly similar to the radius of the nucleus of uranium-238). These nuclei are not maximally dense. Halo nuclei form at the extreme edges of the chart of the nuclides—the neutron drip line and proton drip line—and are all unstable with short half-lives, measured in milliseconds; for example, lithium-11 has a half-life of . Halos in effect represent an excited state with nucleons in an outer quantum shell which has unfilled energy levels "below" it (both in terms of radius and energy). The halo may be made of either neutrons [NN, NNN] or protons [PP, PPP]. Nuclei which have a single neutron halo include 11Be and 19C. A two-neutron halo is exhibited by 6He, 11Li, 17B, 19B and 22C. Two-neutron halo nuclei break into three fragments, never two, and are called Borromean nuclei because of this behavior (referring to a system of three interlocked rings in which breaking any ring frees both of the others). 8He and 14Be both exhibit a four-neutron halo. Nuclei which have a proton halo include 8B and 26P. A two-proton halo is exhibited by 17Ne and 27S. Proton halos are expected to be more rare and unstable than the neutron examples, because of the repulsive electromagnetic forces of the halo proton(s). ## Nuclear models Although the standard model of physics is widely believed to completely describe the composition and behavior of the nucleus, generating predictions from theory is much more difficult than for most other areas of particle physics. This is due to two reasons: - In principle, the physics within a nucleus can be derived entirely from quantum chromodynamics (QCD). 
In practice however, current computational and mathematical approaches for solving QCD in low-energy systems such as the nuclei are extremely limited. This is due to the phase transition that occurs between high-energy quark matter and low-energy hadronic matter, which renders perturbative techniques unusable, making it difficult to construct an accurate QCD-derived model of the forces between nucleons. Current approaches are limited to either phenomenological models such as the Argonne v18 potential or chiral effective field theory. - Even if the nuclear force is well constrained, a significant amount of computational power is required to accurately compute the properties of nuclei ab initio. Developments in many-body theory have made this possible for many low mass and relatively stable nuclei, but further improvements in both computational power and mathematical approaches are required before heavy nuclei or highly unstable nuclei can be tackled. Historically, experiments have been compared to relatively crude models that are necessarily imperfect. None of these models can completely explain experimental data on nuclear structure. The nuclear radius (R) is considered to be one of the basic quantities that any model must predict. For stable nuclei (not halo nuclei or other unstable distorted nuclei) the nuclear radius is roughly proportional to the cube root of the mass number (A) of the nucleus, and particularly in nuclei containing many nucleons, as they arrange in more spherical configurations: The stable nucleus has approximately a constant density and therefore the nuclear radius R can be approximated by the following formula, $$ R = r_0 A^{1/3} \, $$ where A = Atomic mass number (the number of protons Z, plus the number of neutrons N) and r0 = 1.25 fm = 1.25 × 10−15 m. In this equation, the "constant" r0 varies by 0.2 fm, depending on the nucleus in question, but this is less than 20% change from a constant. In other words, packing protons and neutrons in the nucleus gives approximately the same total size result as packing hard spheres of a constant size (like marbles) into a tight spherical or almost spherical bag (some stable nuclei are not quite spherical, but are known to be prolate). Models of nuclear structure include: ### Cluster model The cluster model describes the nucleus as a molecule-like collection of proton-neutron groups (e.g., alpha particles) with one or more valence neutrons occupying molecular orbitals. ### Liquid drop model Early models of the nucleus viewed the nucleus as a rotating liquid drop. In this model, the trade-off of long-range electromagnetic forces and relatively short-range nuclear forces, together cause behavior which resembled surface tension forces in liquid drops of different sizes. This formula is successful at explaining many important phenomena of nuclei, such as their changing amounts of binding energy as their size and composition changes (see semi-empirical mass formula), but it does not explain the special stability which occurs when nuclei have special "magic numbers" of protons or neutrons. The terms in the semi-empirical mass formula, which can be used to approximate the binding energy of many nuclei, are considered as the sum of five types of energies (see below). Then the picture of a nucleus as a drop of incompressible liquid roughly accounts for the observed variation of binding energy of the nucleus: Volume energy. 
When an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume. Surface energy. A nucleon at the surface of a nucleus interacts with fewer other nucleons than one in the interior of the nucleus and hence its binding energy is less. This surface energy term takes that into account and is therefore negative and is proportional to the surface area. Coulomb energy. The electric repulsion between each pair of protons in a nucleus contributes toward decreasing its binding energy. Asymmetry energy (also called Pauli Energy). An energy associated with the Pauli exclusion principle. Were it not for the Coulomb energy, the most stable form of nuclear matter would have the same number of neutrons as protons, since unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type. Pairing energy. An energy which is a correction term that arises from the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number. ### Shell models and other quantum models A number of models for the nucleus have also been proposed in which nucleons occupy orbitals, much like the atomic orbitals in atomic physics theory. These wave models imagine nucleons to be either sizeless point particles in potential wells, or else probability waves as in the "optical model", frictionlessly orbiting at high speed in potential wells. In the above models, the nucleons may occupy orbitals in pairs, due to being fermions, which allows explanation of even/odd Z and N effects well known from experiments. The exact nature and capacity of nuclear shells differs from those of electrons in atomic orbitals, primarily because the potential well in which the nucleons move (especially in larger nuclei) is quite different from the central electromagnetic potential well which binds electrons in atoms. Some resemblance to atomic orbital models may be seen in a small atomic nucleus like that of helium-4, in which the two protons and two neutrons separately occupy 1s orbitals analogous to the 1s orbital for the two electrons in the helium atom, and achieve unusual stability for the same reason. Nuclei with 5 nucleons are all extremely unstable and short-lived, yet, helium-3, with 3 nucleons, is very stable even with lack of a closed 1s orbital shell. Another nucleus with 3 nucleons, the triton hydrogen-3 is unstable and will decay into helium-3 when isolated. Weak nuclear stability with 2 nucleons {NP} in the 1s orbital is found in the deuteron hydrogen-2, with only one nucleon in each of the proton and neutron potential wells. While each nucleon is a fermion, the {NP} deuteron is a boson and thus does not follow Pauli Exclusion for close packing within shells. Lithium-6 with 6 nucleons is highly stable without a closed second 1p shell orbital. For light nuclei with total nucleon numbers 1 to 6 only those with 5 do not show some evidence of stability. Observations of beta-stability of light nuclei outside closed shells indicate that nuclear stability is much more complex than simple closure of shell orbitals with magic numbers of protons and neutrons. 
For larger nuclei, the shells occupied by nucleons begin to differ significantly from electron shells, but nevertheless, present nuclear theory does predict the magic numbers of filled nuclear shells for both protons and neutrons. The closure of the stable shells predicts unusually stable configurations, analogous to the noble group of nearly-inert gases in chemistry. An example is the stability of the closed shell of 50 protons, which allows tin to have 10 stable isotopes, more than any other element. Similarly, the distance from shell-closure explains the unusual instability of isotopes which have far from stable numbers of these particles, such as the radioactive elements 43 (technetium) and 61 (promethium), each of which is preceded and followed by 17 or more stable elements. There are however problems with the shell model when an attempt is made to account for nuclear properties well away from closed shells. This has led to complex post hoc distortions of the shape of the potential well to fit experimental data, but the question remains whether these mathematical manipulations actually correspond to the spatial deformations in real nuclei. Problems with the shell model have led some to propose realistic two-body and three-body nuclear force effects involving nucleon clusters and then build the nucleus on this basis. Three such cluster models are the 1936 Resonating Group Structure model of John Wheeler, Close-Packed Spheron Model of Linus Pauling and the 2D Ising Model of MacGregor.
https://en.wikipedia.org/wiki/Atomic_nucleus
The word "mass" has two meanings in special relativity: invariant mass (also called rest mass) is an invariant quantity which is the same for all observers in all reference frames, while the relativistic mass is dependent on the velocity of the observer. According to the concept of mass–energy equivalence, invariant mass is equivalent to rest energy, while relativistic mass is equivalent to relativistic energy (also called total energy). The term "relativistic mass" tends not to be used in particle and nuclear physics and is often avoided by writers on special relativity, in favor of referring to the body's relativistic energy. In contrast, "invariant mass" is usually preferred over rest energy. The measurable inertia of a body in a given frame of reference is determined by its relativistic mass, not merely its invariant mass. For example, photons have zero rest mass but contribute to the inertia (and weight in a gravitational field) of any system containing them. The concept is generalized in mass in general relativity. ## Rest mass The term mass in special relativity usually refers to the rest mass of the object, which is the Newtonian mass as measured by an observer moving along with the object. The invariant mass is another name for the rest mass of single particles. The more general invariant mass (calculated with a more complicated formula) loosely corresponds to the "rest mass" of a "system". Thus, invariant mass is a natural unit of mass used for systems which are being viewed from their center of momentum frame (COM frame), as when any closed system (for example a bottle of hot gas) is weighed, which requires that the measurement be taken in the center of momentum frame where the system has no net momentum. Under such circumstances the invariant mass is equal to the relativistic mass (discussed below), which is the total energy of the system divided by c2 (the speed of light squared). The concept of invariant mass does not require bound systems of particles, however. As such, it may also be applied to systems of unbound particles in high-speed relative motion. Because of this, it is often employed in particle physics for systems which consist of widely separated high-energy particles. If such systems were derived from a single particle, then the calculation of the invariant mass of such systems, which is a never-changing quantity, will provide the rest mass of the parent particle (because it is conserved over time). It is often convenient in calculation that the invariant mass of a system is the total energy of the system (divided by ) in the COM frame (where, by definition, the momentum of the system is zero). However, since the invariant mass of any system is also the same quantity in all inertial frames, it is a quantity often calculated from the total energy in the COM frame, then used to calculate system energies and momenta in other frames where the momenta are not zero, and the system total energy will necessarily be a different quantity than in the COM frame. As with energy and momentum, the invariant mass of a system cannot be destroyed or changed, and it is thus conserved, so long as the system is closed to all influences. (The technical term is isolated system meaning that an idealized boundary is drawn around the system, and no mass/energy is allowed across it.) ## ### Relativistic mass The relativistic mass is the sum total quantity of energy in a body or system (divided by ). Thus, the mass in the formula $$ E = m_\text{rel} c^2 $$ is the relativistic mass. 
For a particle of non-zero rest mass moving at a speed $$ v $$ relative to the observer, one finds $$ m_\text{rel} = \frac{m}{\sqrt{1 - \dfrac{v^2}{c^2}}}. $$ In the center of momentum frame, $$ v = 0 $$ and the relativistic mass equals the rest mass. In other frames, the relativistic mass (of a body or system of bodies) includes a contribution from the "net" kinetic energy of the body (the kinetic energy of the center of mass of the body), and is larger the faster the body moves. Thus, unlike the invariant mass, the relativistic mass depends on the observer's frame of reference. However, for given single frames of reference and for isolated systems, the relativistic mass is also a conserved quantity. The relativistic mass is also the proportionality factor between velocity and momentum, $$ \mathbf{p} = m_\text{rel}\mathbf{v}. $$ Newton's second law remains valid in the form $$ \mathbf{f} = \frac{d(m_\text{rel}\mathbf{v})}{dt}. $$ When a body emits light of frequency $$ \nu $$ and wavelength $$ \lambda $$ as a photon of energy $$ E = h \nu = h c / \lambda $$ , the mass of the body decreases by $$ E/c^2 = h/ \lambda c $$ , which someKetterle, W. and Jamison, A. O. (2020). "An atomic physics perspective on the kilogram’s new definition", "Physics Today" 73, 32-38 interpret as the relativistic mass of the emitted photon since it also fulfills $$ p = m_\text{rel}c = h/\lambda $$ . Although some authors present relativistic mass as a fundamental concept of the theory, it has been argued that this is wrong as the fundamentals of the theory relate to space–time. There is disagreement over whether the concept is pedagogically useful. It explains simply and quantitatively why a body subject to a constant acceleration cannot reach the speed of light, and why the mass of a system emitting a photon decreases. In relativistic quantum chemistry, relativistic mass is used to explain electron orbital contraction in heavy elements. The notion of mass as a property of an object from Newtonian mechanics does not bear a precise relationship to the concept in relativity. Relativistic mass is not referenced in nuclear and particle physics, and a survey of introductory textbooks in 2005 showed that only 5 of 24 texts used the concept, although it is still prevalent in popularizations. If a stationary box contains many particles, its weight increases in its rest frame the faster the particles are moving. Any energy in the box (including the kinetic energy of the particles) adds to the mass, so that the relative motion of the particles contributes to the mass of the box. But if the box itself is moving (its center of mass is moving), there remains the question of whether the kinetic energy of the overall motion should be included in the mass of the system. The invariant mass is calculated excluding the kinetic energy of the system as a whole (calculated using the single velocity of the box, which is to say the velocity of the box's center of mass), while the relativistic mass is calculated including invariant mass plus the kinetic energy of the system which is calculated from the velocity of the center of mass. ## Relativistic vs. rest mass Relativistic mass and rest mass are both traditional concepts in physics, but the relativistic mass corresponds to the total energy. 
The relativistic mass is the mass of the system as it would be measured on a scale, but in some cases (such as the box above) this fact remains true only because the system on average must be at rest to be weighed (it must have zero net momentum, which is to say, the measurement is in its center of momentum frame). For example, if an electron in a cyclotron is moving in circles with a relativistic velocity, the mass of the cyclotron+electron system is increased by the relativistic mass of the electron, not by the electron's rest mass. But the same is also true of any closed system, such as an electron-and-box, if the electron bounces at high speed inside the box. It is only the lack of total momentum in the system (the system momenta sum to zero) which allows the kinetic energy of the electron to be "weighed". If the electron is stopped and weighed, or the scale were somehow sent after it, it would not be moving with respect to the scale, and again the relativistic and rest masses would be the same for the single electron (and would be smaller). In general, relativistic and rest masses are equal only in systems which have no net momentum and the system center of mass is at rest; otherwise they may be different. The invariant mass is proportional to the value of the total energy in one reference frame, the frame where the object as a whole is at rest (as defined below in terms of center of mass). This is why the invariant mass is the same as the rest mass for single particles. However, the invariant mass also represents the measured mass when the center of mass is at rest for systems of many particles. This special frame where this occurs is also called the center of momentum frame, and is defined as the inertial frame in which the center of mass of the object is at rest (another way of stating this is that it is the frame in which the momenta of the system's parts add to zero). For compound objects (made of many smaller objects, some of which may be moving) and sets of unbound objects (some of which may also be moving), only the center of mass of the system is required to be at rest, for the object's relativistic mass to be equal to its rest mass. A so-called massless particle (such as a photon, or a theoretical graviton) moves at the speed of light in every frame of reference. In this case there is no transformation that will bring the particle to rest. The total energy of such particles becomes smaller and smaller in frames which move faster and faster in the same direction. As such, they have no rest mass, because they can never be measured in a frame where they are at rest. This property of having no rest mass is what causes these particles to be termed "massless". However, even massless particles have a relativistic mass, which varies with their observed energy in various frames of reference. ## Invariant mass The invariant mass is the ratio of four-momentum (the four-dimensional generalization of classical momentum) to four-velocity: $$ p^\mu = m v^\mu $$ and is also the ratio of four-acceleration to four-force when the rest mass is constant. The four-dimensional form of Newton's second law is: $$ F^\mu = m A^\mu. $$ ## Relativistic energy–momentum equation The relativistic expressions for and obey the relativistic energy–momentum relation: $$ E^2 - (pc)^2 = \left(mc^2\right)^2 $$ where the m is the rest mass, or the invariant mass for systems, and is the total energy. 
The equation is also valid for photons, which have : $$ E^2 - (pc)^2 = 0 $$ and therefore $$ E = pc $$ A photon's momentum is a function of its energy, but it is not proportional to the velocity, which is always . For an object at rest, the momentum is zero, therefore $$ E = mc^2. $$ Note that the formula is true only for particles or systems with zero momentum. The rest mass is only proportional to the total energy in the rest frame of the object. When the object is moving, the total energy is given by $$ E = \sqrt{\left(mc^2\right)^2 + (pc)^2} $$ To find the form of the momentum and energy as a function of velocity, it can be noted that the four-velocity, which is proportional to $$ \left(c, \vec{v}\right) $$ , is the only four-vector associated with the particle's motion, so that if there is a conserved four-momentum $$ \left(E, \vec{p}c\right) $$ , it must be proportional to this vector. This allows expressing the ratio of energy to momentum as $$ pc = E \frac{v}{c} , $$ resulting in a relation between and : $$ E^2 = \left(mc^2\right)^2 + E^2 \frac{v^2}{c^2}, $$ This results in $$ E = \frac{mc^2}{\sqrt{1 - \dfrac{v^2}{c^2}}} $$ and $$ p = \frac{mv}{\sqrt{1 - \dfrac{v^2}{c^2}}}. $$ these expressions can be written as $$ \begin{align} E_0 &= mc^2 , \\ E &= \gamma mc^2 , \\ p &= mv \gamma , \end{align} $$ where the factor $$ \gamma = {1}/{\sqrt{1-\frac{v^2}{c^2}}}. $$ When working in units where , known as the natural unit system, all the relativistic equations are simplified and the quantities energy, momentum, and mass have the same natural dimension: $$ m^2 = E^2 - p^2. $$ The equation is often written this way because the difference $$ E^2 - p^2 $$ is the relativistic length of the energy momentum four-vector, a length which is associated with rest mass or invariant mass in systems. Where and , this equation again expresses the mass–energy equivalence . ## The mass of composite systems The rest mass of a composite system is not the sum of the rest masses of the parts, unless all the parts are at rest. The total mass of a composite system includes the kinetic energy and field energy in the system. The total energy of a composite system can be determined by adding together the sum of the energies of its components. The total momentum $$ \vec{p} $$ of the system, a vector quantity, can also be computed by adding together the momenta of all its components. Given the total energy and the length (magnitude) of the total momentum vector $$ \vec{p} $$ , the invariant mass is given by: $$ m = \frac{\sqrt{E^2 - (pc)^2}}{c^2} $$ In the system of natural units where , for systems of particles (whether bound or unbound) the total system invariant mass is given equivalently by the following: $$ m^2 = \left(\sum E\right)^2 - \left\|\sum \vec{p} \ \right\|^2 $$ Where, again, the particle momenta $$ \vec{p} $$ are first summed as vectors, and then the square of their resulting total magnitude (Euclidean norm) is used. This results in a scalar number, which is subtracted from the scalar value of the square of the total energy. For such a system, in the special center of momentum frame where momenta sum to zero, again the system mass (called the invariant mass) corresponds to the total system energy or, in units where , is identical to it. 
This invariant mass for a system remains the same quantity in any inertial frame, although the system total energy and total momentum are functions of the particular inertial frame which is chosen, and will vary in such a way between inertial frames as to keep the invariant mass the same for all observers. Invariant mass thus functions for systems of particles in the same capacity as "rest mass" does for single particles. Note that the invariant mass of an isolated system (i.e., one closed to both mass and energy) is also independent of observer or inertial frame, and is a constant, conserved quantity for isolated systems and single observers, even during chemical and nuclear reactions. The concept of invariant mass is widely used in particle physics, because the invariant mass of a particle's decay products is equal to its rest mass. This is used to make measurements of the mass of particles like the Z boson or the top quark. ## Conservation versus invariance of mass in special relativity Total energy is an additive conserved quantity (for single observers) in systems and in reactions between particles, but rest mass (in the sense of being a sum of particle rest masses) may not be conserved through an event in which rest masses of particles are converted to other types of energy, such as kinetic energy. Finding the sum of individual particle rest masses would require multiple observers, one for each particle rest inertial frame, and these observers ignore individual particle kinetic energy. Conservation laws require a single observer and a single inertial frame. In general, for isolated systems and single observers, relativistic mass is conserved (each observer sees it constant over time), but is not invariant (that is, different observers see different values). Invariant mass, however, is both conserved and invariant (all single observers see the same value, which does not change over time). The relativistic mass corresponds to the energy, so conservation of energy automatically means that relativistic mass is conserved for any given observer and inertial frame. However, this quantity, like the total energy of a particle, is not invariant. This means that, even though it is conserved for any observer during a reaction, its absolute value will change with the frame of the observer, and for different observers in different frames. By contrast, the rest mass and invariant masses of systems and particles are conserved also invariant. For example: A closed container of gas (closed to energy as well) has a system "rest mass" in the sense that it can be weighed on a resting scale, even while it contains moving components. This mass is the invariant mass, which is equal to the total relativistic energy of the container (including the kinetic energy of the gas) only when it is measured in the center of momentum frame. Just as is the case for single particles, the calculated "rest mass" of such a container of gas does not change when it is in motion, although its "relativistic mass" does change. The container may even be subjected to a force which gives it an overall velocity, or else (equivalently) it may be viewed from an inertial frame in which it has an overall velocity (that is, technically, a frame in which its center of mass has a velocity). In this case, its total relativistic mass and energy increase. 
However, in such a situation, although the container's total relativistic energy and total momentum increase, these energy and momentum increases subtract out in the invariant mass definition, so that the moving container's invariant mass will be calculated as the same value as if it were measured at rest, on a scale. ### Closed systems All conservation laws in special relativity (for energy, mass, and momentum) require isolated systems, meaning systems that are totally isolated, with no mass–energy allowed in or out, over time. If a system is isolated, then both total energy and total momentum in the system are conserved over time for any observer in any single inertial frame, though their absolute values will vary, according to different observers in different inertial frames. The invariant mass of the system is also conserved, but does not change with different observers. This is also the familiar situation with single particles: all observers calculate the same particle rest mass (a special case of the invariant mass) no matter how they move (what inertial frame they choose), but different observers see different total energies and momenta for the same particle. Conservation of invariant mass also requires the system to be enclosed so that no heat and radiation (and thus invariant mass) can escape. As in the example above, a physically enclosed or bound system does not need to be completely isolated from external forces for its mass to remain constant, because for bound systems these merely act to change the inertial frame of the system or the observer. Though such actions may change the total energy or momentum of the bound system, these two changes cancel, so that there is no change in the system's invariant mass. This is just the same result as with single particles: their calculated rest mass also remains constant no matter how fast they move, or how fast an observer sees them move. On the other hand, for systems which are unbound, the "closure" of the system may be enforced by an idealized surface, inasmuch as no mass–energy can be allowed into or out of the test-volume over time, if conservation of system invariant mass is to hold during that time. If a force is allowed to act on (do work on) only one part of such an unbound system, this is equivalent to allowing energy into or out of the system, and the condition of "closure" to mass–energy (total isolation) is violated. In this case, conservation of invariant mass of the system also will no longer hold. Such a loss of rest mass in systems when energy is removed, according to where is the energy removed, and is the change in rest mass, reflect changes of mass associated with movement of energy, not "conversion" of mass to energy. ### The system invariant mass vs. the individual rest masses of parts of the system Again, in special relativity, the rest mass of a system is not required to be equal to the sum of the rest masses of the parts (a situation which would be analogous to gross mass-conservation in chemistry). For example, a massive particle can decay into photons which individually have no mass, but which (as a system) preserve the invariant mass of the particle which produced them. Also a box of moving non-interacting particles (e.g., photons, or an ideal gas) will have a larger invariant mass than the sum of the rest masses of the particles which compose it. 
This is because the total energy of all particles and fields in a system must be summed, and this quantity, as seen in the center of momentum frame, and divided by , is the system's invariant mass. In special relativity, mass is not "converted" to energy, for all types of energy still retain their associated mass. Neither energy nor invariant mass can be destroyed in special relativity, and each is separately conserved over time in closed systems. Thus, a system's invariant mass may change only because invariant mass is allowed to escape, perhaps as light or heat. Thus, when reactions (whether chemical or nuclear) release energy in the form of heat and light, if the heat and light is not allowed to escape (the system is closed and isolated), the energy will continue to contribute to the system rest mass, and the system mass will not change. Only if the energy is released to the environment will the mass be lost; this is because the associated mass has been allowed out of the system, where it contributes to the mass of the surroundings. ## History of the relativistic mass concept ### Transverse and longitudinal mass Concepts that were similar to what nowadays is called "relativistic mass", were already developed before the advent of special relativity. For example, it was recognized by J. J. Thomson in 1881 that a charged body is harder to set in motion than an uncharged body, which was worked out in more detail by Oliver Heaviside (1889) and George Frederick Charles Searle (1897). So the electrostatic energy behaves as having some sort of electromagnetic mass $$ m_\text{em} = \frac{4}{3} E_\text{em}/c^2 $$ , which can increase the normal mechanical mass of the bodies. Then, it was pointed out by Thomson and Searle that this electromagnetic mass also increases with velocity. This was further elaborated by Hendrik Lorentz (1899, 1904) in the framework of Lorentz ether theory. He defined mass as the ratio of force to acceleration, not as the ratio of momentum to velocity, so he needed to distinguish between the mass $$ m_\text{L} = \gamma^3 m $$ parallel to the direction of motion and the mass $$ m_\text{T} = \gamma m $$ perpendicular to the direction of motion (where $$ \gamma = 1/\sqrt{1 - v^2/c^2} $$ is the Lorentz factor, is the relative velocity between the ether and the object, and is the speed of light). Only when the force is perpendicular to the velocity, Lorentz's mass is equal to what is now called "relativistic mass". Max Abraham (1902) called $$ m_\text{L} $$ longitudinal mass and $$ m_\text{T} $$ transverse mass (although Abraham used more complicated expressions than Lorentz's relativistic ones). So, according to Lorentz's theory no body can reach the speed of light because the mass becomes infinitely large at this velocity. Albert Einstein also initially used the concepts of longitudinal and transverse mass in his 1905 electrodynamics paper (equivalent to those of Lorentz, but with a different $$ m_\text{T} $$ by an unfortunate force definition, which was later corrected), and in another paper in 1906. However, he later abandoned velocity dependent mass concepts (see quote at the end of next section). 
The precise relativistic expression (which is equivalent to Lorentz's) relating force and acceleration for a particle with non-zero rest mass $$ m $$ moving in the x direction with velocity v and associated Lorentz factor $$ \gamma $$ is $$ \begin{align} f_\text{x} &= m \gamma^3 a_\text{x} &= m_\text{L} a_\text{x}, \\ f_\text{y} &= m \gamma a_\text{y} &= m_\text{T} a_\text{y}, \\ f_\text{z} &= m \gamma a_\text{z} &= m_\text{T} a_\text{z}. \end{align} $$ Relativistic mass In special relativity, an object that has nonzero rest mass cannot travel at the speed of light. As the object approaches the speed of light, the object's energy and momentum increase without bound. In the first years after 1905, following Lorentz and Einstein, the terms longitudinal and transverse mass were still in use. However, those expressions were replaced by the concept of relativistic mass, an expression which was first defined by Gilbert N. Lewis and Richard C. Tolman in 1909. They defined the total energy and mass of a body as $$ m_\text{rel} = \frac{E}{c^2}, $$ and of a body at rest $$ m_0 = \frac{E_0}{c^2}, $$ with the ratio $$ \frac{m_\text{rel}}{m_0} = \gamma. $$ Tolman in 1912 further elaborated on this concept, and stated: "the expression m0(1 − v/c)−1/2 is best suited for the mass of a moving body." In 1934, Tolman argued that the relativistic mass formula $$ m_\text{rel} = E / c^2 $$ holds for all particles, including those moving at the speed of light, while the formula $$ m_\text{rel} = \gamma m_0 $$ only applies to a slower-than-light particle (a particle with a nonzero rest mass). Tolman remarked on this relation that "We have, moreover, of course the experimental verification of the expression in the case of moving electrons ... We shall hence have no hesitation in accepting the expression as correct in general for the mass of a moving particle." When the relative velocity is zero, $$ \gamma $$ is simply equal to 1, and the relativistic mass is reduced to the rest mass as one can see in the next two equations below. As the velocity increases toward the speed of light c, the denominator of the right side approaches zero, and consequently $$ \gamma $$ approaches infinity. While Newton's second law remains valid in the form $$ \mathbf{f} = \frac{d(m_\text{rel}\mathbf{v})}{dt}, $$ the derived form $$ \mathbf{f} = m_\text{rel} \mathbf{a} $$ is not valid because $$ m_\text{rel} $$ in $$ {d(m_\text{rel}\mathbf{v})} $$ is generally not a constant (see the section above on transverse and longitudinal mass). Even though Einstein initially used the expressions "longitudinal" and "transverse" mass in two papers (see previous section), in his first paper on $$ E = mc^2 $$ (1905) he treated as what would now be called the rest mass. Einstein never derived an equation for "relativistic mass", and in later years he expressed his dislike of the idea: ### Popular science and textbooks The concept of relativistic mass is widely used in popular science writing and in high school and undergraduate textbooks. Authors such as Okun and A. B. Arons have argued against this as archaic and confusing, and not in accord with modern relativistic theory.Also in Arons wrote: For many years it was conventional to enter the discussion of dynamics through derivation of the relativistic mass, that is the mass–velocity relation, and this is probably still the dominant mode in textbooks. More recently, however, it has been increasingly recognized that relativistic mass is a troublesome and dubious concept. 
[See, for example, Okun (1989).] ... The sound and rigorous approach to relativistic dynamics is through direct development of that expression for momentum that ensures conservation of momentum in all frames, namely $$ \mathbf{p} = \frac{m_0 \mathbf{v}}{\sqrt{1 - v^2/c^2}}, $$ rather than through relativistic mass. C. Adler takes a similarly dismissive stance on mass in relativity. Writing on the subject, he says that "its introduction into the theory of special relativity was much in the way of a historical accident", noting the widespread familiarity with $$ E = mc^2 $$ and how the public's interpretation of the equation has largely informed how it is taught in higher education. He instead supposes that the difference between rest and relativistic mass should be explicitly taught, so that students know why mass should be thought of as invariant "in most discussions of inertia". Many contemporary authors such as Taylor and Wheeler avoid using the concept of relativistic mass altogether. While spacetime has the unbounded geometry of Minkowski space, velocity space is bounded by $$ c $$ and has a hyperbolic geometry, in which relativistic mass plays a role analogous to that of Newtonian mass in the barycentric coordinates of Euclidean geometry. The connection of velocity to hyperbolic geometry enables the 3-velocity-dependent relativistic mass to be related to the 4-velocity Minkowski formalism.
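The velocity-dependent quantities discussed in this article reduce to a few one-line formulas. The following minimal Python sketch (not part of the original text; the electron rest mass and the chosen speed are only illustrative inputs) evaluates the Lorentz factor and the relativistic, longitudinal, and transverse masses defined above.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """gamma = 1 / sqrt(1 - v^2/c^2); defined only for |v| < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def relativistic_mass(m0: float, v: float) -> float:
    """m_rel = gamma * m0 (the historical 'relativistic mass')."""
    return lorentz_factor(v) * m0

def longitudinal_mass(m0: float, v: float) -> float:
    """m_L = gamma^3 * m0 (force parallel to the velocity)."""
    return lorentz_factor(v) ** 3 * m0

def transverse_mass(m0: float, v: float) -> float:
    """m_T = gamma * m0 (force perpendicular to the velocity)."""
    return lorentz_factor(v) * m0

# Illustrative example: an electron (rest mass ~9.109e-31 kg) at 0.8 c.
m0, v = 9.109e-31, 0.8 * C
print(lorentz_factor(v))         # ~1.667
print(relativistic_mass(m0, v))  # ~1.52e-30 kg
print(longitudinal_mass(m0, v))  # ~4.22e-30 kg
```

As the sketch makes explicit, the transverse mass coincides with the relativistic mass, while the longitudinal mass grows faster by an extra factor of γ².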
https://en.wikipedia.org/wiki/Mass_in_special_relativity%23Relativistic_vs._rest_mass
A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to a specific type of results, such as images, videos, or news. For a search provider, its engine is part of a distributed computing system that can encompass many data centers throughout the world. The speed and accuracy of an engine's response to a query are based on a complex system of indexing that is continuously updated by automated web crawlers. This can include data mining the files and databases stored on web servers, but some content is not accessible to crawlers. There have been many search engines since the dawn of the Web in the 1990s, but Google Search became the dominant one in the 2000s and has remained so. It currently has a 90% global market share. The business of improving websites' visibility in search results, known as search engine marketing and search engine optimization, has thus largely focused on Google.

## History

Timeline (full list) of search engines by year of launch; notes in parentheses describe their current status where given:

- 1993: W3Catalog; ALIWEB; JumpStation; WWW Worm
- 1994: WebCrawler; Go.com (redirects to Disney); Lycos; Infoseek (redirects to Disney)
- 1995: Yahoo! Search (initially a search function for Yahoo! Directory); Daum; Search.ch; Magellan; Excite; MetaCrawler; AltaVista (acquired by Yahoo! in 2003, since 2013 redirects to Yahoo!); SAPO
- 1996: RankDex (incorporated into Baidu in 2000); Dogpile; HotBot (used Inktomi search technology); Ask Jeeves (rebranded ask.com)
- 1997: AOL NetFind (rebranded AOL Search since 1999); goo.ne.jp; Northern Light; Yandex
- 1998: Google; Ixquick (as Startpage.com); MSN Search (as Bing); empas (merged with NATE)
- 1999: AlltheWeb (URL redirected to Yahoo!); GenieKnows (rebranded Yellowee, was redirecting to justlocalbusiness.com); Naver; Teoma (redirect to Ask.com)
- 2000: Baidu; Exalead; Gigablast
- 2001: Kartoo
- 2003: Info.com
- 2004: A9.com; Clusty (redirect to DuckDuckGo); Mojeek; Sogou
- 2005: SearchMe; KidzSearch (Google Search)
- 2006: Soso (merged with Sogou); Quaero; Search.com; ChaCha; Ask.com; Live Search (as Bing, rebranded MSN Search)
- 2007: wikiseek; Sproose; Wikia Search; Blackle.com (Google Search)
- 2008: Powerset (redirects to Bing); Picollator; Viewzi; Boogami; LeapFish; Forestle (redirects to Ecosia); DuckDuckGo; TinEye
- 2009: Bing (rebranded Live Search); Yebol; Scout (Goby); NATE; Ecosia; Startpage.com (sister engine of Ixquick)
- 2010: Blekko (sold to IBM); Cuil; Yandex (English); Parsijoo
- 2011: YaCy (P2P)
- 2012: Volunia
- 2013: Qwant
- 2014: Egerin (Kurdish / Sorani); Swisscows; Searx
- 2015: Yooz; Cliqz
- 2016: Kiddle (Google Search)
- 2017: Presearch
- 2018: Kagi
- 2020: Petal
- 2021: Brave Search; Queye; You.com

### Pre-1990s

In 1945, Vannevar Bush described an information retrieval system that would allow a user to access a great expanse of information, all at a single desk. He called it a memex. He described the system in an article titled "As We May Think" that was published in The Atlantic Monthly. The memex was intended to give a user the capability to overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work. Vannevar Bush envisioned libraries of research with connected annotations, which are similar to modern hyperlinks. Link analysis eventually became a crucial component of search engines through algorithms such as Hyper Search and PageRank.
### 1990s: Birth of search engines

The first internet search engines predate the debut of the Web in December 1990: WHOIS user search dates back to 1982, and the Knowbot Information Service multi-network user search was first implemented in 1989. The first well-documented search engine that searched content files, namely FTP files, was Archie, which debuted on 10 September 1990. Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One snapshot of the list in 1992 remains, but as more and more web servers went online the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!".

The first tool used for searching content (as opposed to users) on the Internet was Archie. The name stands for "archive" without the "v". It was created by Alan Emtage, a computer science student at McGill University in Montreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually. The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.

In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993. In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format. JumpStation (created in December 1993 by Jonathon Fletcher) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered.
One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any web page, which has become the standard for all major search engines since. It was also the search engine that was widely known by the public. Also, in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor. The first popular search engine on the Web was Yahoo! Search. The first product from Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory. It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages. Soon after, a number of search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could also browse the directory instead of doing a keyword-based search. In 1996, Robin Li developed the RankDex site-scoring algorithm for search engines results page rankingYanhong Li, "Toward a Qualitative Search Engine", IEEE Internet Computing, vol. 2, no. 4, pp. 24–29, July/Aug. 1998, and received a US patent for the technology. It was the first search engine that used hyperlinks to measure the quality of websites it was indexing, predating the very similar algorithm patent filed by Google two years later in 1998. Larry Page referenced Li's work in some of his U.S. patents for PageRank. Li later used his Rankdex technology for the Baidu search engine, which was founded by him in China and launched in 2000. In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite. Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses in the Internet. Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in March 2000. ### 2000s–present: Post dot-com bubble Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an algorithm called PageRank, as was explained in the paper Anatomy of a Search Engine written by Sergey Brin and Larry Page, the later founders of Google. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as an influence. 
Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal. In fact, the Google search engine became so popular that spoof engines emerged such as Mystery Seeker. By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! relied on Google's search engine for its results until 2004, when it launched its own search engine based on the combined technologies of its acquisitions. Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999, the site began to display listings from Looksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot). Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology. As of 2019, active search engine crawlers include those of Google, Sogou, Baidu, Bing, Gigablast, Mojeek, DuckDuckGo and Yandex.

## Approach

A search engine maintains the following processes in near real time:

1. Web crawling
2. Indexing
3. Searching

Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt, addressed to it. The robots.txt file contains directives for search spiders, telling it which pages to crawl and which pages not to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed depending on many factors, such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially".

Indexing means associating words and other definable tokens found on web pages to their domain names and HTML-based fields. The associations are stored in a public database and accessible through web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible. Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis. Between visits by the spider, the cached version of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as a web proxy instead. In this case, the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form of linkrot.
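As a concrete illustration of the robots.txt check described above, the following sketch uses Python's standard urllib.robotparser module; the example.com URLs and the user-agent name are placeholders, and a real crawler would layer a full crawl policy on top of this.

```python
from urllib import robotparser

# Hypothetical site; a real crawler would repeat this for every host it visits.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the robots.txt directives

user_agent = "ExampleCrawler"
for url in ("https://example.com/", "https://example.com/private/page.html"):
    if rp.can_fetch(user_agent, url):
        print("allowed to crawl:", url)
    else:
        print("disallowed by robots.txt:", url)
```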
Typically when a user enters a query into a search engine it is a few keywords. The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: every page in the entire list must be weighted according to information in the indexes. Then the top search result item requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing.

Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These provide the controls needed for the feedback loop in which users filter and weight results while refining a search, given the initial pages of the first search results. For example, since 2007 the Google.com search engine has allowed one to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range. It is also possible to weight by date because each page has a modification time. Most search engines support the use of the Boolean operators AND, OR and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, which uses statistical analysis of the pages containing the words or phrases searched for.

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.

Most Web search engines are commercial ventures supported by advertising revenue and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.

### Local search

Local search is the process of optimizing the efforts of local businesses; it focuses on keeping their information consistent across searches. It is important because many people decide where to go and what to buy based on their searches.
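A minimal sketch of the "inverted index" approach mentioned above is shown below. Real engines add tokenization rules, stemming, ranking signals, and index compression; the three toy documents here are invented for illustration.

```python
from collections import defaultdict

def build_inverted_index(documents):
    """Map each token to the set of document ids that contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND-query: return documents containing every query word."""
    sets = [index.get(word, set()) for word in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "web crawlers index web pages",
    2: "users query the search index",
    3: "crawlers follow links between pages",
}
index = build_inverted_index(docs)
print(search(index, "index pages"))   # {1}
print(search(index, "crawlers"))      # {1, 3}
```

The AND-style query mirrors how Boolean operators narrow a literal search over the pre-built index rather than over the live Web.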
## Market share

As of January 2022, Google is by far the world's most used search engine, with a market share of 90%; the world's other most used search engines are Bing at 4%, Yandex at 2%, and Yahoo! at 1%. Other search engines not listed have less than a 3% market share. In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice.

### Russia and East Asia

In Russia, Yandex has a market share of 62.6%, compared to Google's 28.3%. Yandex is the second most used search engine on smartphones in Asia and Europe. In China, Baidu is the most popular search engine. South Korea-based search portal Naver is used for 62.8% of online searches in the country. Yahoo! Japan and Yahoo! Taiwan are the most popular choices for Internet searches in Japan and Taiwan, respectively. China is one of the few countries where Google is not in the top three web search engines for market share. Google was previously more popular in China, but withdrew significantly after a disagreement with the government over censorship and a cyberattack. Bing, however, is in the top three web search engines with a market share of 14.95%. Baidu is top with 49.1% of the market share.

### Europe

Most countries' markets in the European Union are dominated by Google, except for the Czech Republic, where Seznam is a strong competitor. The search engine Qwant is based in Paris, France, from where it attracts most of its 50 million monthly registered users.

## Search engine bias

Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide and the underlying assumptions about the technology. These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results) and of political processes (e.g., the removal of search results to comply with local laws). For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal. Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries. Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons. Several scholars have studied the cultural changes triggered by search engines, and the representation of certain controversial topics in their results, such as terrorism in Ireland, climate change denial, and conspiracy theories.

## Customized results and filter bubbles

Concern has been raised that search engines such as Google and Bing provide customized results based on the user's activity history, leading to what Eli Pariser termed echo chambers or filter bubbles in 2011. The argument is that search engines and social media platforms use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint.
According to Eli Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Since this problem was identified, competing search engines have emerged that seek to avoid it by not tracking or "bubbling" users, such as DuckDuckGo. However, many scholars have questioned Pariser's view, finding that there is little evidence for the filter bubble. On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalisation in search, that most people encounter a range of views when browsing online, and that Google News tends to promote mainstream established news outlets.

## Religious search engines

The global growth of the Internet and electronic media in the Arab and Muslim world during the last decade has encouraged Islamic adherents in the Middle East and the Asian sub-continent to attempt their own search engines, their own filtered search portals that would enable users to perform safe searches. Going beyond the usual safe-search filters, these Islamic web portals categorize websites as being either "halal" or "haram", based on interpretation of Sharia law. ImHalal came online in September 2011. Halalgoogling came online in July 2013. These use haram filters on the collections from Google and Bing (and others). While a lack of investment and the slow pace of technology adoption in the Muslim world have hindered progress and thwarted the success of an Islamic search engine targeting Islamic adherents as its main consumers, projects like Muxlim (a Muslim lifestyle site) received millions of dollars from investors like Rite Internet Ventures, and it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google, and the Christian search engine SeekFind.org. SeekFind filters sites that attack or degrade their faith.

## Search engine submission

Web search engine submission is a process in which a webmaster submits a website directly to a search engine. While search engine submission is sometimes presented as a way to promote a website, it generally is not necessary because the major search engines use web crawlers that will eventually find most web sites on the Internet without assistance. Webmasters can either submit one web page at a time, or they can submit the entire site using a sitemap, but it is normally only necessary to submit the home page of a web site as search engines are able to crawl a well-designed website. There are two remaining reasons to submit a web site or web page to a search engine: to add an entirely new web site without waiting for a search engine to discover it, and to have a web site's record updated after a substantial redesign. Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from their own pages. This could appear helpful in increasing a website's ranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller of Google has stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking.

## Comparison to social bookmarking

## Technology

### Archie

The first Internet search engine was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal.
The author originally wanted to call the program "archives", but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on.

The primary method of storing and retrieving files was via the File Transfer Protocol (FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol. Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them.

Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file. Archie changed all that. It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database.

### Veronica

In 1993, the University of Nevada System Computing Services group developed Veronica. It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was pretty much identical to Veronica's, although it appears to be a little rougher around the edges.

### The Lone Wanderer

The World Wide Web Wanderer, developed by Matthew Gray in 1993, was the first robot on the Web and was designed to track the Web's growth. Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database. Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained.

In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways. ALIWEB does not have a web-searching robot.
Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage to this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos.

### Excite

Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet. Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers. Excite, launched in 1995, was the first serious commercial search engine. It was developed at Stanford and was purchased by @Home for $6.5 billion. In 2001 Excite and @Home went bankrupt and InfoSpace bought Excite for $10 million. Some of the first analyses of web searching were conducted on search logs from Excite.

### Yahoo!

In April 1994, two Stanford University Ph.D. candidates, David Filo and Jerry Yang, created some pages that became rather popular. They called the collection of pages Yahoo! Their official explanation for the name choice was that they considered themselves to be a pair of yahoos. As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory.

The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites.

### Lycos

At Carnegie Mellon University during July 1994, Michael Mauldin, on leave from CMU, developed the Lycos search engine.

### Types of web search engines

Search engines on the web are sites enriched with the facility to search the content stored on other sites. There are differences in the way various search engines work, but they all perform three basic tasks:

1. Finding and selecting full or partial content based on the keywords provided.
2. Maintaining an index of the content and referencing the locations they find.
3. Allowing users to look for words or combinations of words found in that index.
The process begins when a user enters a query statement into the system through the interface provided. The main types of search system, with examples, are:

- Conventional (e.g., a library catalog): search by keyword, title, author, etc.
- Text-based (e.g., Google, Bing, Yahoo!): search by keywords; limited search using queries in natural language.
- Voice-based (e.g., Google, Bing, Yahoo!): search by keywords; limited search using queries in natural language.
- Multimedia search (e.g., QBIC, WebSeek, SaFe): search by visual appearance (shapes, colors, ...).
- Q/A (e.g., Stack Exchange, NSIR): search in (restricted) natural language.
- Clustering systems (e.g., Vivisimo, Clusty).
- Research systems (e.g., Lemur, Nutch).

There are basically three types of search engines: those that are powered by robots (called crawlers, ants, or spiders); those that are powered by human submissions; and those that are a hybrid of the two.

Crawler-based search engines are those that use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information back to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.

In both cases, when you query a search engine to locate information, you are actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected and stored and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links. Since the search results are based on the index, if the index has not been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.

So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.

One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing. Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking.
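The keyword-frequency heuristic described above can be sketched as a naive term-frequency score. This is only an illustration of why frequency alone is easy to game; the pages and query are invented for the example.

```python
from collections import Counter

def term_frequency_score(query, text):
    """Score a page by how often the query words occur, normalized by length."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return sum(counts[w] for w in query.lower().split()) / total

pages = {
    "page1": "cheap flights and cheap hotels cheap cheap",   # keyword stuffing
    "page2": "compare flights and hotels for your next trip",
}
for name, text in pages.items():
    print(name, round(term_frequency_score("cheap flights", text), 3))

# A naive frequency score rewards the stuffed page, which is why modern
# engines layer link analysis and spam detection on top of it.
```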
Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking.

Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return fairly well-aimed results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy.

Another category of search engines is scientific search engines. These are search engines which search scientific literature. The best known example is Google Scholar. Researchers are working on improving search engine technology by making such engines understand the content of articles, such as extracting theoretical constructs or key research findings.
https://en.wikipedia.org/wiki/Search_engine
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted $$ i $$, called the imaginary unit and satisfying the equation $$ i^{2}= -1 $$; every complex number can be expressed in the form $$ a + bi $$, where $$ a $$ and $$ b $$ are real numbers. Because no real number satisfies the above equation, $$ i $$ was called an imaginary number by René Descartes. For the complex number $$ a + bi $$, $$ a $$ is called the real part, and $$ b $$ is called the imaginary part. The set of complex numbers is denoted by either of the symbols $$ \mathbb C $$ or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world. ("Complex numbers, as much as reals, and perhaps even more, find a unity with nature that is truly remarkable. It is as though Nature herself is as impressed by the scope and consistency of the complex-number system as we are ourselves, and has entrusted to these numbers the precise operations of her world at its minutest scales.")

Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation $$ (x+1)^2 = -9 $$ has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions $$ -1+3i $$ and $$ -1-3i $$.

Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule $$ i^{2}=-1 $$ along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. Because of these properties, $$ a + bi = a + ib $$, and which form is written depends upon convention and style considerations.

The complex numbers also form a real vector space of dimension two, with $$ \{1,i\} $$ as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of $$ i $$ are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis.

The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two.
## Definition and basic operations

A complex number is an expression of the form $$ a + bi $$, where $$ a $$ and $$ b $$ are real numbers, and $$ i $$ is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, $$ 2 + 3i $$ is a complex number. For a complex number $$ a + bi $$, the real number $$ a $$ is called its real part, and the real number $$ b $$ (not the complex number $$ bi $$) is its imaginary part. The real part of a complex number $$ z $$ is denoted $$ \operatorname{Re}(z) $$, $$ \mathcal{Re}(z) $$, or $$ \mathfrak{R}(z) $$; the imaginary part is $$ \operatorname{Im}(z) $$, $$ \mathcal{Im}(z) $$, or $$ \mathfrak{I}(z) $$: for example, $$ \operatorname{Re}(2 + 3i) = 2 $$, $$ \operatorname{Im}(2 + 3i) = 3 $$.

A complex number can be identified with the ordered pair of real numbers $$ (\Re (z),\Im (z)) $$, which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the complex plane or Argand diagram. The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards.

A real number $$ a $$ can be regarded as a complex number $$ a + 0i $$, whose imaginary part is 0. A purely imaginary number $$ bi $$ is a complex number $$ 0 + bi $$, whose real part is zero. It is common to write $$ a + 0i = a $$, $$ 0 + bi = bi $$, and $$ a + (-b)i = a - bi $$; for example, $$ 3 + (-4)i = 3 - 4i $$. The set of all complex numbers is denoted by $$ \Complex $$ (blackboard bold) or C (upright bold). In some disciplines such as electromagnetism and electrical engineering, $$ j $$ is used instead of $$ i $$, as $$ i $$ frequently represents electric current, and complex numbers are written as $$ a + bj $$ or $$ a + jb $$.

### Addition and subtraction

Two complex numbers $$ a =x+yi $$ and $$ b =u+vi $$ are added by separately adding their real and imaginary parts. That is to say: $$ a + b =(x+yi) + (u+vi) = (x+u) + (y+v)i. $$ Similarly, subtraction can be performed as $$ a - b =(x+yi) - (u+vi) = (x-u) + (y-v)i. $$ The addition can be geometrically visualized as follows: the sum of two complex numbers $$ a $$ and $$ b $$, interpreted as points in the complex plane, is the fourth vertex of the parallelogram built from the origin $$ O $$ and the points representing $$ a $$ and $$ b $$ (provided that these three points are not on a line). Equivalently, calling these points $$ A $$ and $$ B $$, and the fourth point of the parallelogram $$ X $$, the triangles $$ OAB $$ and $$ XBA $$ are congruent.

### Multiplication

The product of two complex numbers is computed as follows: $$ (a+bi) \cdot (c+di) = ac - bd + (ad+bc)i. $$ For example, $$ (3+2i)(4-i) = 3 \cdot 4 - (2 \cdot (-1)) + (3 \cdot (-1) + 2 \cdot 4)i = 14 +5i. $$ In particular, this includes as a special case the fundamental formula $$ i^2 = i \cdot i = -1. $$ This formula distinguishes the complex number i from any real number, since the square of any (negative or positive) real number is always a non-negative real number. With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property and the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a field, the same way as the rational or real numbers do.

### Complex conjugate, absolute value, argument and division

The complex conjugate of the complex number $$ z = x + yi $$ is defined as $$ \overline z = x-yi. $$ It is also denoted by some authors by $$ z^* $$. Geometrically, $$ \overline z $$ is the "reflection" of $$ z $$ about the real axis. Conjugating twice gives the original complex number: $$ \overline{\overline{z}}=z. $$
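The arithmetic rules above can be checked directly with Python's built-in complex type, which writes the imaginary unit as 1j. The explicit multiply function below simply restates the product formula; it is a sketch for illustration, not part of the original text.

```python
# Python's built-in complex numbers use j for the imaginary unit.
a = 3 + 2j
b = 4 - 1j

print(a + b)      # (7+1j)  : add real and imaginary parts separately
print(a - b)      # (-1+3j)
print(a * b)      # (14+5j) : matches (ac - bd) + (ad + bc)i
print(1j * 1j)    # (-1+0j) : the defining identity i^2 = -1

# The multiplication rule, written out explicitly:
def multiply(x, y):
    return complex(x.real * y.real - x.imag * y.imag,
                   x.real * y.imag + x.imag * y.real)

assert multiply(a, b) == a * b
```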
A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations of addition, subtraction, multiplication and division. For any complex number $$ z = x + yi $$, the product $$ z \cdot \overline z = (x+iy)(x-iy) = x^2 + y^2 $$ is a non-negative real number. This allows one to define the absolute value (or modulus or magnitude) of z to be the square root $$ |z|=\sqrt{x^2+y^2}. $$ By Pythagoras' theorem, $$ |z| $$ is the distance from the origin to the point representing the complex number z in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers z such that $$ |z| = 1 $$. If $$ z = x = x + 0i $$ is a real number, then $$ |z|= |x| $$: its absolute value as a complex number and as a real number are equal.

Using the conjugate, the reciprocal of a nonzero complex number $$ z = x + yi $$ can be computed to be $$ \frac{1}{z} = \frac{\bar{z}}{z\bar{z}} = \frac{\bar{z}}{|z|^2} = \frac{x - yi}{x^2 + y^2} = \frac{x}{x^2 + y^2} - \frac{y}{x^2 + y^2}i. $$ More generally, the division of an arbitrary complex number $$ w = u + vi $$ by a non-zero complex number $$ z = x + yi $$ equals $$ \frac{w}{z} = \frac{w\bar{z}}{|z|^2} = \frac{(u + vi)(x - iy)}{x^2 + y^2} = \frac{ux + vy}{x^2 + y^2} + \frac{vx - uy}{x^2 + y^2}i. $$ This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression may be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator.

The argument of $$ z $$ (sometimes called the "phase") is the angle of the radius $$ Oz $$ with the positive real axis, and is written as $$ \arg z $$, expressed in radians in this article. The angle is defined only up to adding integer multiples of $$ 2\pi $$, since a rotation by $$ 2\pi $$ (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval $$ (-\pi,\pi] $$, which is referred to as the principal value. The argument can be computed from the rectangular form by means of the arctan (inverse tangent) function.

### Polar form

For any complex number z, with absolute value $$ r = |z| $$ and argument $$ \varphi $$, the equation $$ z=r(\cos\varphi +i\sin\varphi) $$ holds. This identity is referred to as the polar form of z. It is sometimes abbreviated as $$ z = r \operatorname\mathrm{cis} \varphi $$. In electronics, one represents a phasor with amplitude $$ r $$ and phase $$ \varphi $$ in angle notation: $$ r \angle \varphi $$. If two complex numbers are given in polar form, i.e., $$ z_1 = r_1(\cos\varphi_1 + i\sin\varphi_1) $$ and $$ z_2 = r_2(\cos\varphi_2 + i\sin\varphi_2) $$, the product and division can be computed as $$ z_1 z_2 = r_1 r_2 (\cos(\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2)). $$ $$ \frac{z_1}{z_2} = \frac{r_1}{r_2} \left(\cos(\varphi_1 - \varphi_2) + i \sin(\varphi_1 - \varphi_2)\right), \text{if }z_2 \ne 0. $$ (These are a consequence of the trigonometric identities for the sine and cosine function.) In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, consider the multiplication $$ (2+i)(3+i)=5+5i. $$ Because the real and imaginary parts of $$ 5+5i $$ are equal, the argument of that number is 45 degrees, or $$ \pi/4 $$ (in radians). On the other hand, it is also the sum of the arguments of $$ 2+i $$ and $$ 3+i $$, which are $$ \arctan(1/2) $$ and $$ \arctan(1/3) $$, respectively.
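A short sketch with Python's cmath module (added here as an illustration) shows the conjugate, modulus, argument, and the polar-form multiplication rule, using the 5 + 5i example from the text.

```python
import cmath

z = 2 + 1j
w = 3 + 1j

print(z.conjugate())       # (2-1j)
print(abs(z))              # modulus |z| = sqrt(2^2 + 1^2) ~ 2.236
print(cmath.phase(z))      # argument arg(z) = arctan(1/2) ~ 0.4636 rad

# Polar form: multiplying complex numbers multiplies moduli and adds arguments.
r1, phi1 = cmath.polar(z)
r2, phi2 = cmath.polar(w)
product = cmath.rect(r1 * r2, phi1 + phi2)
print(product)             # ~ (5+5j), the same as z * w
print(cmath.phase(z * w))  # ~ pi/4, i.e. arctan(1/2) + arctan(1/3)
```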
Thus, the formula $$ \frac{\pi}{4} = \arctan\left(\frac{1}{2}\right) + \arctan\left(\frac{1}{3}\right) $$ holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of $$ \pi $$: $$ \frac{\pi}{4} = 4 \arctan\left(\frac{1}{5}\right) - \arctan\left(\frac{1}{239}\right) $$

### Powers and roots

The n-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product: $$ z^{n}=\underbrace{z \cdot \dots \cdot z}_{n \text{ factors}} = (r(\cos \varphi + i\sin \varphi ))^n = r^n \, (\cos n\varphi + i \sin n \varphi). $$ For example, the first few powers of the imaginary unit i are $$ i, i^2 = -1, i^3 = -i, i^4 = 1, i^5 = i, \dots $$. The $$ n $$th roots of a complex number $$ z $$ are given by $$ z^{1/n} = \sqrt[n]r \left( \cos \left(\frac{\varphi+2k\pi}{n}\right) + i \sin \left(\frac{\varphi+2k\pi}{n}\right)\right) $$ for $$ k = 0, 1, \dots, n-1 $$. (Here $$ \sqrt[n]r $$ is the usual (positive) $$ n $$th root of the positive real number $$ r $$.) Because sine and cosine are periodic, other integer values of $$ k $$ do not give other values. For any $$ z \ne 0 $$, there are, in particular, n distinct complex n-th roots. For example, there are 4 fourth roots of 1, namely $$ z_1 = 1, z_2 = i, z_3 = -1, z_4 = -i. $$ In general there is no natural way of distinguishing one particular complex $$ n $$th root of a complex number. (This is in contrast to the roots of a positive real number x, which has a unique positive real n-th root, which is therefore commonly referred to as the n-th root of x.) One refers to this situation by saying that the $$ n $$th root is an $$ n $$-valued function of $$ z $$.

### Fundamental theorem of algebra

The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) $$ a_0, \ldots, a_n $$, the equation $$ a_n z^n + \dotsb + a_1 z + a_0 = 0 $$ has at least one complex solution z, provided that at least one of the higher coefficients is nonzero. This property does not hold for the field of rational numbers $$ \Q $$ (the polynomial $$ x^2 - 2 $$ does not have a rational root, because $$ \sqrt 2 $$ is not a rational number) nor the real numbers $$ \R $$ (the polynomial $$ x^2 + 4 $$ does not have a real root, because the square of $$ x $$ is positive for any real number $$ x $$). Because of this fact, $$ \Complex $$ is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below. There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of odd degree has at least one real root.
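The n-th root formula from the Powers and roots subsection translates directly into a few lines of Python using cmath.polar and cmath.rect; the sample inputs (the fourth roots of 1 and the cube roots of 8i) are chosen only for illustration.

```python
import cmath

def nth_roots(z, n):
    """All n distinct n-th roots of a nonzero complex number z."""
    r, phi = cmath.polar(z)               # z = r (cos phi + i sin phi)
    return [cmath.rect(r ** (1.0 / n), (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

# The four fourth roots of 1 are 1, i, -1, -i (up to rounding error).
for root in nth_roots(1, 4):
    print(round(root.real, 10) + round(root.imag, 10) * 1j)

# Each cube root of 8i, cubed again, returns (approximately) 8i.
for root in nth_roots(8j, 3):
    print(root, root ** 3)
```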
## History

The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called casus irreducibilis ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his Ars Magna, though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless". Cardano did use imaginary numbers, but described using them as "mental torture." This was prior to the use of the graphical complex plane.

Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.

Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions. The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term $$ \sqrt{81 - 144} $$ in his calculations, which today would simplify to $$ \sqrt{-63} = 3i\sqrt{7} $$. Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced the negative value by its positive $$ \sqrt{144 - 81} = 3\sqrt{7}. $$

The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct. However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cubic roots for nonzero complex numbers. Rafael Bombelli was the first to address explicitly these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic, trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature.

A further source of confusion was that the equation $$ \sqrt{-1}^2 = \sqrt{-1}\sqrt{-1} = -1 $$ seemed to be capriciously inconsistent with the algebraic identity $$ \sqrt{a}\sqrt{b} = \sqrt{ab} $$, which is valid for non-negative real numbers $$ a $$ and $$ b $$, and which was also used in complex number calculations with one of $$ a $$, $$ b $$ positive and the other negative. The incorrect use of this identity in the case when both $$ a $$ and $$ b $$ are negative, and the related identity $$ \frac{1}{\sqrt{a}} = \sqrt{\frac{1}{a}} $$, even bedeviled Leonhard Euler. This difficulty eventually led to the convention of using the special symbol $$ i $$ in place of $$ \sqrt{-1} $$ to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today.
In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout. In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the following de Moivre's formula: $$ (\cos \theta + i\sin \theta)^{n} = \cos n \theta + i\sin n \theta. $$ In 1748, Euler went further and obtained Euler's formula of complex analysis: $$ e ^{i\theta } = \cos \theta + i\sin \theta $$ by formally manipulating complex power series, and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.

The idea of a complex number as a point in the complex plane was first described by Danish–Norwegian mathematician Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's A Treatise of Algebra. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology: If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness.

In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, and Bellavitis. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way", although mathematicians such as the Norwegian Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise. Augustin-Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called $$ \cos\varphi + i\sin\varphi $$ the direction factor, and $$ r = \sqrt{a^2 + b^2} $$ the modulus; Cauchy (1821) called $$ \cos\varphi + i\sin\varphi $$ the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used $$ i $$ for $$ \sqrt{-1} $$, introduced the term complex number for $$ a + bi $$, and called $$ a^2 + b^2 $$ the norm. The expression direction coefficient, often used for $$ \cos\varphi + i\sin\varphi $$, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.
Important work (including a systematization) in complex multivariate calculus was started at the beginning of the 20th century. Important results were achieved by Wilhelm Wirtinger in 1927.

## Abstract algebraic aspects

While the above low-level definitions, including the addition and multiplication, accurately describe the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately.

### Construction as a quotient field

One approach to $$ \C $$ is via polynomials, i.e., expressions of the form $$ p(X) = a_nX^n+\dotsb+a_1X+a_0, $$ where the coefficients $$ a_0, \dots, a_n $$ are real numbers. The set of all such polynomials is denoted by $$ \R[X] $$ . Since sums and products of polynomials are again polynomials, this set $$ \R[X] $$ forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial p, one may assign the complex number $$ p(i) = a_n i^n + \dotsb + a_1 i + a_0 $$ , i.e., the value obtained by setting $$ X = i $$ . This defines a function $$ \R[X] \to \C. $$ This function is surjective since every complex number can be obtained in such a way: the evaluation of a linear polynomial $$ a+bX $$ at $$ X = i $$ is $$ a+bi $$ . However, the evaluation of the polynomial $$ X^2 + 1 $$ at i is 0, since $$ i^2 + 1 = 0. $$ This polynomial is irreducible, i.e., cannot be written as a product of two linear polynomials. Basic facts of abstract algebra then imply that the kernel of the above map is the ideal generated by this polynomial, and that the quotient by this ideal is a field, and that there is an isomorphism $$ \R[X] / (X^2 + 1) \stackrel \cong \to \C $$ between the quotient ring and $$ \C $$ . Some authors take this as the definition of $$ \C $$ . Accepting that $$ \Complex $$ is algebraically closed, because it is an algebraic extension of $$ \mathbb{R} $$ in this approach, $$ \Complex $$ is therefore the algebraic closure of $$ \R. $$

### Matrix representation of complex numbers

Complex numbers can also be represented by matrices that have the form $$ \begin{pmatrix} a & -b \\ b & \;\; a \end{pmatrix}. $$ Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 real matrices. A simple computation shows that the map $$ a+ib\mapsto \begin{pmatrix} a & -b \\ b & \;\; a \end{pmatrix} $$ is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix. The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector $$ (x, y) $$ corresponds to the multiplication of $$ x + iy $$ by $$ a + ib $$ . In particular, if the determinant is 1, there is a real number t such that the matrix has the form $$ \begin{pmatrix} \cos t & - \sin t \\ \sin t & \;\; \cos t \end{pmatrix}. $$ In this case, the action of the matrix on vectors and the multiplication by the complex number $$ \cos t+i\sin t $$ are both the rotation by the angle t.

## Complex analysis

The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. 
Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.

### Convergence

The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, $$ \mathbb{C} $$ , endowed with the metric $$ \operatorname{d}(z_1, z_2) = |z_1 - z_2| $$ is a complete metric space, whose metric notably satisfies the triangle inequality $$ |z_1 + z_2| \le |z_1| + |z_2| $$ for any two complex numbers $$ z_1 $$ and $$ z_2 $$ .

### Complex exponential

As in real analysis, this notion of convergence is used to construct a number of elementary functions: the exponential function $$ \exp z $$ , also written $$ e^z $$ , is defined as the infinite series, which can be shown to converge for any z: $$ \exp z:= 1+z+\frac{z^2}{2\cdot 1}+\frac{z^3}{3\cdot 2\cdot 1}+\cdots = \sum_{n=0}^{\infty} \frac{z^n}{n!}. $$ For example, $$ \exp (1) $$ is Euler's number $$ e \approx 2.718 $$ . Euler's formula states: $$ \exp(i\varphi) = \cos \varphi + i\sin \varphi $$ for any real number $$ \varphi $$ . This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity $$ \exp(i \pi) = -1. $$

### Complex logarithm

For any positive real number t, there is a unique real number x such that $$ \exp(x) = t $$ . This leads to the definition of the natural logarithm as the inverse $$ \ln \colon \R^+ \to \R ; x \mapsto \ln x $$ of the exponential function. The situation is different for complex numbers, since $$ \exp(z+2\pi i) = \exp z \exp (2 \pi i) = \exp z $$ by the functional equation and Euler's identity. For example, $$ \exp(i\pi) = -1 = \exp(3i\pi) $$ , so both $$ i\pi $$ and $$ 3i\pi $$ are possible values for the complex logarithm of $$ -1 $$ . In general, given any non-zero complex number w, any number z solving the equation $$ \exp z = w $$ is called a complex logarithm of w, denoted $$ \log w $$ . It can be shown that these numbers satisfy $$ z = \log w = \ln|w| + i\arg w, $$ where $$ \arg $$ is the argument defined above, and $$ \ln $$ the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of $$ 2\pi $$ , log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval $$ (-\pi, \pi] $$ . This leads to the complex logarithm being a bijective function taking values in the strip $$ \R + \; i \, \left(-\pi, \pi\right] $$ : $$ \ln \colon \; \Complex^\times \; \to \; \; \; \R + \; i \, \left(-\pi, \pi\right] . $$ If $$ z \in \Complex \setminus \left( -\R_{\ge 0} \right) $$ is not a non-positive real number (a positive or a non-real number), the resulting principal value of the complex logarithm is obtained with $$ -\pi < \arg z < \pi $$ . 
It is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number $$ z \in -\R^+ $$ , where the principal value is $$ \ln z = \ln(-z) + i\pi $$ . Complex exponentiation is defined as $$ z^\omega = \exp(\omega \ln z), $$ and is multi-valued, except when $$ \omega $$ is an integer. For $$ \omega = 1/n $$ , for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If $$ x > 0 $$ is real (and $$ \omega $$ an arbitrary complex number), one has a preferred choice of $$ \ln x $$ , the real logarithm, which can be used to define a preferred exponential function. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy $$ a^{bc} = \left(a^b\right)^c. $$ Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.

### Complex sine and cosine

The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation.

### Holomorphic functions

A function $$ f: \mathbb{C} \to \mathbb{C} $$ is called holomorphic or complex differentiable at a point $$ z_0 $$ if the limit $$ \lim_{z \to z_0} {f(z) - f(z_0) \over z - z_0 } $$ exists (in which case it is denoted by $$ f'(z_0) $$ ). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. Loosely speaking, the freedom of approaching $$ z_0 $$ in different directions imposes a much stronger condition than being (real) differentiable. For example, the function $$ f(z) = \overline z $$ is differentiable as a function $$ \R^2 \to \R^2 $$ , but is not complex differentiable. A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as $$ \frac{\partial f}{\partial \overline z} = 0. $$ Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions f and g agree if they agree on an arbitrarily small open subset of $$ \mathbb{C} $$ . Meromorphic functions, functions that can locally be written as $$ f(z)/(z-z_0)^n $$ with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as $$ \sin(1/z) $$ at $$ z = 0 $$ .

## Applications

Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below. Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when applying the maximum power transfer theorem. 
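As a small numerical check of the complex exponential and logarithm described above, the following sketch uses Python's standard cmath module; the sample values are chosen only for illustration.

```python
import cmath
import math

# Euler's formula: exp(i*phi) = cos(phi) + i*sin(phi)
phi = 0.75
assert abs(cmath.exp(1j * phi) - complex(math.cos(phi), math.sin(phi))) < 1e-12

# Euler's identity: exp(i*pi) = -1
assert abs(cmath.exp(1j * math.pi) + 1) < 1e-12

# The principal logarithm returns ln|w| + i*arg(w) with the imaginary part in (-pi, pi] ...
w = complex(-1, 0)
print(cmath.log(w))                      # approximately 3.14159j, i.e. i*pi
# ... but i*pi + 2*pi*i*k is a logarithm of -1 for every integer k:
for k in range(-2, 3):
    assert abs(cmath.exp(1j * math.pi + 2j * math.pi * k) - w) < 1e-12

# Complex exponentiation z**omega = exp(omega * log z) uses the principal branch,
# so the "cube root" of -8 computed this way is 1 + i*sqrt(3), not the real root -2.
principal = cmath.exp((1 / 3) * cmath.log(complex(-8, 0)))
print(principal)                         # approximately (1+1.7320508j)
```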
### Geometry

#### Shapes

Three non-collinear points $$ u, v, w $$ in the plane determine the shape of the triangle $$ \{u, v, w\} $$ . Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as $$ S(u, v, w) = \frac {u - w}{u - v}. $$ The shape $$ S $$ of a triangle will remain the same when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape, and describing similarity. Thus each triangle $$ \{u, v, w\} $$ is in a similarity class of triangles with the same shape.

#### Fractal geometry

The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined as the set of points $$ c $$ for which the sequence obtained by iterating $$ f_c(z)=z^2+c $$ starting from $$ z = 0 $$ does not diverge. Similarly, Julia sets are defined by the same iteration rule, except that $$ c $$ is held constant and the starting point varies.

#### Triangles

Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the three sides of the triangle at their midpoints. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as a, b, and c. Write the cubic equation $$ (x-a)(x-b)(x-c)=0 $$ , take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.

### Algebraic number theory

As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in $$ \mathbb{C} $$ . A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to $$ \overline{\mathbb{Q}} $$ , the algebraic closure of $$ \mathbb{Q} $$ , which also contains all algebraic numbers, $$ \mathbb{C} $$ has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem. Another example is the Gaussian integers; that is, numbers of the form $$ x + iy $$ , where x and y are integers, which can be used to classify sums of squares.

### Analytic number theory

Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function is related to the distribution of prime numbers.

### Improper integrals

In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.

### Dynamic equations

In differential equations, it is common to first find all complex roots of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form $$ f(t) = e^{rt} $$ . 
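To illustrate the role of complex characteristic roots, the sketch below (using numpy, with the arbitrarily chosen equation y'' + 2y' + 5y = 0 as the example) finds the roots and reads off the corresponding real base functions:

```python
import numpy as np

# Characteristic polynomial of y'' + 2y' + 5y = 0 is r^2 + 2r + 5 = 0.
roots = np.roots([1, 2, 5])
print(roots)                                   # [-1.+2.j  -1.-2.j]

# A complex-conjugate pair a +/- i*b yields the real solutions
# e^{a t} cos(b t) and e^{a t} sin(b t).
a, b = roots[0].real, abs(roots[0].imag)
print(f"basis: exp({a:g}t)cos({b:g}t), exp({a:g}t)sin({b:g}t)")
```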
Likewise, in difference equations, the complex roots of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form $$ f(t) = r^t $$ .

### Linear algebra

Since $$ \C $$ is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues, for example rotation matrices (for rotations of the plane for angles other than 0° or 180°) leave no direction fixed, and therefore do not have any real eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of eigendecomposition is a useful tool for computing matrix powers and matrix exponentials. Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices.

### In applied mathematics

#### Control theory

In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the complex plane. The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is important whether zeros and poles are in the left or right half planes, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are
- in the right half plane, it will be unstable,
- all in the left half plane, it will be stable,
- on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.

#### Signal analysis

Complex numbers are used in signal analysis and other fields for a convenient description for periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value of the corresponding complex number is the amplitude and its argument is the phase. If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form $$ x(t) = \operatorname{Re} \{X( t ) \} $$ and $$ X( t ) = A e^{i\omega t} = a e^{ i \phi } e^{i\omega t} = a e^{i (\omega t + \phi) } $$ where ω represents the angular frequency and the complex number A encodes the phase and amplitude as explained above. This use is also extended into digital signal processing and digital image processing, which use digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals. Another example, relevant to the two side bands of amplitude modulation of AM radio, is: $$ \begin{align} \cos((\omega + \alpha)t) + \cos\left((\omega - \alpha)t\right) & = \operatorname{Re}\left(e^{i(\omega + \alpha)t} + e^{i(\omega - \alpha)t}\right) \\ & = \operatorname{Re}\left(\left(e^{i\alpha t} + e^{-i\alpha t}\right) \cdot e^{i\omega t}\right) \\ & = \operatorname{Re}\left(2\cos(\alpha t) \cdot e^{i\omega t}\right) \\ & = 2 \cos(\alpha t) \cdot \operatorname{Re}\left(e^{i\omega t}\right) \\ & = 2 \cos(\alpha t) \cdot \cos\left(\omega t\right). \end{align} $$ 
### In physics

#### Electromagnetism and electrical engineering

In electrical engineering, the Fourier transform is used to analyze varying electric currents and voltages. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus. In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current. Because the voltage in an AC circuit is oscillating, it can be represented as $$ V(t) = V_0 e^{j \omega t} = V_0 \left (\cos\omega t + j \sin\omega t \right ). $$ To obtain the measurable quantity, the real part is taken: $$ v(t) = \operatorname{Re}(V) = \operatorname{Re}\left [ V_0 e^{j \omega t} \right ] = V_0 \cos \omega t. $$ The complex-valued signal $$ V(t) $$ is called the analytic representation of the real-valued, measurable signal $$ v(t) $$ .

#### Fluid dynamics

In fluid dynamics, complex functions are used to describe potential flow in two dimensions.

#### Quantum mechanics

The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.

#### Relativity

In special relativity and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.

## Characterizations, generalizations and related notions

### Algebraic characterization

The field $$ \Complex $$ has the following three properties:
- First, it has characteristic 0. This means that $$ 1 + 1 + \cdots + 1 \neq 0 $$ for any number of summands (all of which equal one).
- Second, its transcendence degree over $$ \Q $$ , the prime field of $$ \Complex, $$ is the cardinality of the continuum.
- Third, it is algebraically closed (see above).
It can be shown that any field having these properties is isomorphic (as a field) to $$ \Complex. $$ For example, the algebraic closure of the field $$ \Q_p $$ of the p-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, $$ \Complex $$ is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that $$ \Complex $$ contains many proper subfields that are isomorphic to $$ \Complex $$ .

### Characterization as a topological field

The preceding characterization of $$ \Complex $$ describes only the algebraic aspects of $$ \Complex. $$ That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of $$ \Complex $$ as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. 
$$ \Complex $$ contains a subset P (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
- P is closed under addition, multiplication and taking inverses.
- If x and y are distinct elements of P, then either x − y or y − x is in P.
- If S is any nonempty subset of P, then S + P = x + P for some x in $$ \Complex. $$
Moreover, $$ \Complex $$ has a nontrivial involutive automorphism $$ x \mapsto x^* $$ (namely the complex conjugation), such that $$ x x^* $$ is in P for any nonzero x in $$ \Complex. $$ Any field F with these properties can be endowed with a topology by taking the sets $$ B(x, p) = \{ y \mid p - (y-x)(y-x)^* \in P \} $$ as a base, where x ranges over the field and p ranges over P. With this topology F is isomorphic as a topological field to $$ \Complex. $$ The only connected locally compact topological fields are $$ \R $$ and $$ \Complex. $$ This gives another characterization of $$ \Complex $$ as a topological field, because $$ \Complex $$ can be distinguished from $$ \R $$ because the nonzero complex numbers are connected, while the nonzero real numbers are not.

### Other number systems

| | Rational numbers | Real numbers | Complex numbers | Quaternions | Octonions | Sedenions |
|---|---|---|---|---|---|---|
| Complete | No | Yes | Yes | Yes | Yes | Yes |
| Dimension as an $$ \R $$ -vector space | [does not apply] | 1 | 2 | 4 | 8 | 16 |
| Ordered | Yes | Yes | No | No | No | No |
| Multiplication commutative | Yes | Yes | Yes | No | No | No |
| Multiplication associative | Yes | Yes | Yes | Yes | No | No |
| Normed division algebra | [does not apply] | Yes | Yes | Yes | Yes | No |

The process of extending the field $$ \mathbb R $$ of reals to $$ \mathbb C $$ is an instance of the Cayley–Dickson construction. Applying this construction iteratively to $$ \C $$ then yields the quaternions, the octonions, the sedenions, and the trigintaduonions. This construction turns out to diminish the structural properties of the involved number systems. Unlike the reals, $$ \Complex $$ is not an ordered field, that is to say, it is not possible to define a relation $$ z_1 < z_2 $$ that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so $$ i^2 = -1 $$ precludes the existence of an ordering on $$ \Complex. $$ Passing from $$ \C $$ to the quaternions $$ \mathbb H $$ loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are all normed division algebras over $$ \mathbb R $$ . By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure.

The Cayley–Dickson construction is closely related to the regular representation of $$ \mathbb C, $$ thought of as an $$ \mathbb R $$ -algebra (an $$ \mathbb{R} $$ -vector space with a multiplication), with respect to the basis $$ (1, i) $$ . This means the following: the $$ \mathbb R $$ -linear map $$ \begin{align} \mathbb{C} &\rightarrow \mathbb{C} \\ z &\mapsto wz \end{align} $$ for some fixed complex number w can be represented by a $$ 2 \times 2 $$ matrix (once a basis has been chosen). With respect to the basis $$ (1, i) $$ , this matrix is $$ \begin{pmatrix} \operatorname{Re}(w) & -\operatorname{Im}(w) \\ \operatorname{Im}(w) & \operatorname{Re}(w) \end{pmatrix}, $$ that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of $$ \mathbb C $$ in the 2 × 2 real matrices, it is not the only one. Any matrix $$ J = \begin{pmatrix}p & q \\ r & -p \end{pmatrix}, \quad p^2 + qr + 1 = 0 $$ has the property that its square is the negative of the identity matrix: $$ J^2 = -I $$ . Then $$ \{ z = a I + b J : a,b \in \mathbb{R} \} $$ is also isomorphic to the field $$ \mathbb C, $$ and gives an alternative complex structure on $$ \mathbb R^2. $$ 
This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize $$ \mathbb R, $$ $$ \mathbb C, $$ $$ \mathbb H, $$ and $$ \mathbb{O}. $$ For example, this notion contains the split-complex numbers, which are elements of the ring $$ \mathbb R[x]/(x^2-1) $$ (as opposed to $$ \mathbb R[x]/(x^2+1) $$ for complex numbers). In this ring, the equation $$ a^2 = 1 $$ has four solutions.

The field $$ \mathbb R $$ is the completion of $$ \mathbb Q, $$ the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on $$ \mathbb Q $$ lead to the fields $$ \mathbb Q_p $$ of p-adic numbers (for any prime number p), which are thereby analogous to $$ \mathbb{R} $$ . There are no other nontrivial ways of completing $$ \mathbb Q $$ than $$ \mathbb R $$ and $$ \mathbb Q_p, $$ by Ostrowski's theorem. The algebraic closures $$ \overline {\mathbb{Q}_p} $$ of $$ \mathbb Q_p $$ still carry a norm, but (unlike $$ \mathbb C $$ ) are not complete with respect to it. The completion $$ \mathbb{C}_p $$ of $$ \overline {\mathbb{Q}_p} $$ turns out to be algebraically closed. By analogy, the field $$ \mathbb{C}_p $$ is called the field of p-adic complex numbers. The fields $$ \mathbb R, $$ $$ \mathbb Q_p, $$ and their finite field extensions, including $$ \mathbb C, $$ are called local fields.
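As a concrete check of the matrix picture above, the following sketch (plain numpy; the values p = 2, q = 1, r = −5 are an arbitrary choice satisfying p² + qr + 1 = 0) verifies that J² = −I and that aI + bJ multiplies exactly like the complex number a + bi:

```python
import numpy as np

# Choose p, q, r with p^2 + q*r + 1 = 0, e.g. p = 2, q = 1, r = -5.
p, q, r = 2.0, 1.0, -5.0
assert p * p + q * r + 1 == 0

I = np.eye(2)
J = np.array([[p, q], [r, -p]])
assert np.allclose(J @ J, -I)            # J squares to minus the identity

def embed(a, b):
    """Represent the complex number a + b*i as the matrix a*I + b*J."""
    return a * I + b * J

# Multiplication of these matrices matches multiplication of complex numbers.
z, w = 2 + 3j, -1 + 0.5j
product_matrix = embed(z.real, z.imag) @ embed(w.real, w.imag)
zw = z * w
assert np.allclose(product_matrix, embed(zw.real, zw.imag))
print("a*I + b*J multiplies like a + b*i")
```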
https://en.wikipedia.org/wiki/Complex_number
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.

## Definitions

### Probability mass function

If the random variable X follows the binomial distribution with parameters $$ n \in \mathbb{N} $$ and $$ p \in [0,1] $$ , we write $$ X \sim B(n, p) $$ . The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function: $$ f(k,n,p) = \Pr(X = k) = \binom{n}{k}p^k(1-p)^{n-k} $$ for $$ k = 0, 1, 2, \dots, n $$ , where $$ \binom{n}{k} =\frac{n!}{k!(n-k)!} $$ is the binomial coefficient. The formula can be understood as follows: $$ p^k(1-p)^{n-k} $$ is the probability of obtaining a sequence of n independent Bernoulli trials in which k trials are "successes" and the remaining n − k trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of n trials with k successes (and n − k failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are $$ \binom{n}{k} $$ such sequences, since the binomial coefficient $$ \binom{n}{k} $$ counts the number of ways to choose the positions of the k successes among the n trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them ($$ p^k(1-p)^{n-k} $$) must be added $$ \binom{n}{k} $$ times, hence $$ \Pr(X = k) = \binom{n}{k} p^k (1-p)^{n-k} $$ .

In creating reference tables for binomial distribution probability, usually, the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated by its complement as $$ f(k,n,p)=f(n-k,n,1-p). $$ Looking at the expression $$ f(k,n,p) $$ as a function of k, there is a value of k that maximizes it. This value can be found by calculating $$ \frac{f(k+1,n,p)}{f(k,n,p)}=\frac{(n-k)p}{(k+1)(1-p)} $$ and comparing it to 1. There is always an integer M that satisfies $$ (n+1)p-1 \leq M < (n+1)p. $$ $$ f(k,n,p) $$ is monotone increasing for k < M and monotone decreasing for k > M, with the exception of the case where (n + 1)p is an integer. In this case, there are two values for which f is maximal: (n + 1)p and (n + 1)p − 1. M is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, $$ M - p < np \leq M + 1 - p $$ . Taking the floor function, we obtain $$ M = \lfloor (n+1)p \rfloor $$ whenever (n + 1)p is not an integer.

### Example

Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is $$ f(4,6,0.3) = \binom{6}{4}0.3^4 (1-0.3)^{6-4}= 0.059535. $$ 
### Cumulative distribution function

The cumulative distribution function can be expressed as: $$ F(k;n,p) = \Pr(X \le k) = \sum_{i=0}^{\lfloor k \rfloor} {n\choose i}p^i(1-p)^{n-i}, $$ where $$ \lfloor k\rfloor $$ is the "floor" under k, i.e. the greatest integer less than or equal to k. It can also be represented in terms of the regularized incomplete beta function, as follows: $$ \begin{align} F(k;n,p) & = \Pr(X \le k) \\ &= I_{1-p}(n-k, k+1) \\ & = (n-k) {n \choose k} \int_0^{1-p} t^{n-k-1} (1-t)^k \, dt , \end{align} $$ which is equivalent to the cumulative distribution functions of the beta distribution and of the F-distribution: $$ F(k;n,p) = F_{\text{beta-distribution}}\left(x=1-p;\alpha=n-k,\beta=k+1\right) $$ $$ F(k;n,p) = F_{F\text{-distribution}}\left(x=\frac{1-p}{p}\frac{k+1}{n-k};d_1=2(n-k),d_2=2(k+1)\right). $$ Some closed-form bounds for the cumulative distribution function are given below.

## Properties

### Expected value and variance

If $$ X \sim B(n, p) $$ , that is, X is a binomially distributed random variable, n being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of X is: $$ \operatorname{E}[X] = np. $$ This follows from the linearity of the expected value along with the fact that X is the sum of n identical Bernoulli random variables, each with expected value p. In other words, if $$ X_1, \ldots, X_n $$ are identical (and independent) Bernoulli random variables with parameter p, then $$ X = X_1 + \cdots + X_n $$ and $$ \operatorname{E}[X] = \operatorname{E}[X_1 + \cdots + X_n] = \operatorname{E}[X_1] + \cdots + \operatorname{E}[X_n] = p + \cdots + p = np. $$ The variance is: $$ \operatorname{Var}(X) = npq = np(1 - p), $$ where q = 1 − p. This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances.

### Higher moments

The first 6 central moments, defined as $$ \mu _{c}=\operatorname {E} \left[(X-\operatorname {E} [X])^{c}\right] $$ , are given by $$ \begin{align} \mu_1 &= 0, \\ \mu_2 &= np(1-p),\\ \mu_3 &= np(1-p)(1-2p),\\ \mu_4 &= np(1-p)(1+(3n-6)p(1-p)),\\ \mu_5 &= np(1-p)(1-2p)(1+(10n-12)p(1-p)),\\ \mu_6 &= np(1-p)(1-30p(1-p)(1-4p(1-p))+5np(1-p)(5-26p(1-p))+15n^2 p^2 (1-p)^2). \end{align} $$ The non-central moments satisfy $$ \begin{align} \operatorname {E}[X] &= np, \\ \operatorname {E}[X^2] &= np(1-p)+n^2p^2, \end{align} $$ and in general $$ \operatorname {E}[X^c] = \sum_{k=0}^c \left\{ {c \atop k} \right\} n^{\underline{k}} p^k, $$ where $$ \textstyle \left\{{c\atop k}\right\} $$ are the Stirling numbers of the second kind, and $$ n^{\underline{k}} = n(n-1)\cdots(n-k+1) $$ is the $$ k $$ th falling power of $$ n $$ . A simple bound follows by bounding the binomial moments via the higher Poisson moments: $$ \operatorname {E}[X^c] \le \left(\frac{c}{\ln(c/(np)+1)}\right)^c \le (np)^c \exp\left(\frac{c^2}{2np}\right). $$ This shows that if $$ c=O(\sqrt{np}) $$ , then $$ \operatorname {E}[X^c] $$ is at most a constant factor away from $$ \operatorname {E}[X]^c $$ .

### Mode

Usually the mode of a binomial distribution is equal to $$ \lfloor (n+1)p\rfloor $$ , where $$ \lfloor\cdot\rfloor $$ is the floor function. However, when (n + 1)p is an integer and p is neither 0 nor 1, then the distribution has two modes: (n + 1)p and (n + 1)p − 1. When p is equal to 0 or 1, the mode will be 0 and n correspondingly. 
These cases can be summarized as follows: $$ \text{mode} = \begin{cases} \lfloor (n+1)\,p\rfloor & \text{if }(n+1)p\text{ is 0 or a noninteger}, \\ (n+1)\,p\ \text{ and }\ (n+1)\,p - 1 &\text{if }(n+1)p\in\{1,\dots,n\}, \\ n & \text{if }(n+1)p = n + 1. \end{cases} $$ Proof: Let $$ f(k)=\binom nk p^k q^{n-k}. $$ For $$ p=0 $$ only $$ f(0) $$ has a nonzero value with $$ f(0)=1 $$ . For $$ p=1 $$ we find $$ f(n)=1 $$ and $$ f(k)=0 $$ for $$ k\neq n $$ . This proves that the mode is 0 for $$ p=0 $$ and $$ n $$ for $$ p=1 $$ . Let $$ 0 < p < 1 $$ . We find $$ \frac{f(k+1)}{f(k)} = \frac{(n-k)p}{(k+1)(1-p)} $$ . From this follows $$ \begin{align} k > (n+1)p-1 \Rightarrow f(k+1) < f(k) \\ k = (n+1)p-1 \Rightarrow f(k+1) = f(k) \\ k < (n+1)p-1 \Rightarrow f(k+1) > f(k) \end{align} $$ So when $$ (n+1)p-1 $$ is an integer, then $$ (n+1)p-1 $$ and $$ (n+1)p $$ are both modes. In the case that $$ (n+1)p-1\notin \Z $$ , then only $$ \lfloor (n+1)p-1\rfloor+1=\lfloor (n+1)p\rfloor $$ is a mode.

### Median

In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established:
- If np is an integer, then the mean, median, and mode coincide and equal np. (Lord, Nick. (July 2010). "Binomial averages when the mean is an integer", The Mathematical Gazette 94, 331-332.)
- Any median m must lie within the interval $$ \lfloor np \rfloor\leq m \leq \lceil np \rceil $$ .
- A median m cannot lie too far away from the mean: $$ |m-np|\leq \min\{{\ln2}, \max\{p,1-p\}\} $$ .
- The median is unique and equal to $$ m = \operatorname{round}(np) $$ when $$ |m - np| \le \min\{p, 1-p\} $$ (except for the case when $$ p = \tfrac{1}{2} $$ and n is odd).
- When p is a rational number (with the exception of $$ p = \tfrac{1}{2} $$ with n odd) the median is unique.
- When $$ p= \frac{1}{2} $$ and n is odd, any number m in the interval $$ \frac{1}{2} \bigl(n-1\bigr)\leq m \leq \frac{1}{2} \bigl(n+1\bigr) $$ is a median of the binomial distribution. If $$ p= \frac{1}{2} $$ and n is even, then $$ m= \frac{n}{2} $$ is the unique median.

### Tail bounds

For $$ k \leq np $$ , upper bounds can be derived for the lower tail of the cumulative distribution function $$ F(k;n,p) = \Pr(X \le k) $$ , the probability that there are at most k successes. Since $$ \Pr(X \ge k) = F(n-k;n,1-p) $$ , these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for $$ k \geq np $$ . Hoeffding's inequality yields the simple bound $$ F(k;n,p) \leq \exp\left(-2 n\left(p-\frac{k}{n}\right)^2\right), \! $$ which is however not very tight. In particular, for $$ p = 1 $$ , we have that $$ F(k;n,p) = 0 $$ (for fixed k, n with k < n), but Hoeffding's bound evaluates to a positive constant. A sharper bound can be obtained from the Chernoff bound: $$ F(k;n,p) \leq \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right) $$ where $$ D(a \parallel p) $$ is the relative entropy (or Kullback-Leibler divergence) between an a-coin and a p-coin (i.e. between the $$ \operatorname{Bernoulli}(a) $$ and $$ \operatorname{Bernoulli}(p) $$ distributions): $$ D(a\parallel p)=(a)\ln\frac{a}{p}+(1-a)\ln\frac{1-a}{1-p}. \! $$ Asymptotically, this bound is reasonably tight. One can also obtain lower bounds on the tail $$ F(k;n,p) $$ , known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that $$ F(k;n,p) \geq \frac{1}{\sqrt{8n\tfrac{k}{n}(1-\tfrac{k}{n})}} \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right), $$ which implies the simpler but looser bound $$ F(k;n,p) \geq \frac1{\sqrt{2n}} \exp\left(-nD\left(\frac{k}{n}\parallel p\right)\right). $$ 
For $$ p = \tfrac{1}{2} $$ and $$ k \geq \tfrac{3n}{8} $$ with n even, it is possible to make the denominator constant: $$ F(k;n,\tfrac{1}{2}) \geq \frac{1}{15} \exp\left(- 16n \left(\frac{1}{2} -\frac{k}{n}\right)^2\right). \! $$

## Statistical inference

### Estimation of parameters

When n is known, the parameter p can be estimated using the proportion of successes: $$ \widehat{p} = \frac{x}{n}. $$ This estimator is found using maximum likelihood estimation and also the method of moments. This estimator is unbiased and uniformly minimum-variance, proven using the Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: x, the number of successes). It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of $$ \operatorname{var}(\widehat{p}) = \frac{p(1-p)}{n} $$ , a property which is used in various ways, such as in Wald's confidence intervals.

A closed form Bayes estimator for p also exists when using the Beta distribution as a conjugate prior distribution. When using a general $$ \operatorname{Beta}(\alpha, \beta) $$ as a prior, the posterior mean estimator is: $$ \widehat{p}_b = \frac{x+\alpha}{n+\alpha+\beta}. $$ The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (n → ∞), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. The Bayesian estimator with the Beta distribution can be used with Thompson sampling. For the special case of using the standard uniform distribution as a non-informative prior, $$ \operatorname{Beta}(\alpha=1, \beta=1) = U(0,1) $$ , the posterior mean estimator becomes: $$ \widehat{p}_b = \frac{x+1}{n+2}. $$ (A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace. When relying on Jeffreys prior, the prior is $$ \operatorname{Beta}(\alpha=\frac{1}{2}, \beta=\frac{1}{2}) $$ , which leads to the estimator: $$ \widehat{p}_{Jeffreys} = \frac{x+\frac{1}{2}}{n+1}. $$ When estimating p with very rare events and a small n (e.g.: if x = 0), then using the standard estimator leads to $$ \widehat{p} = 0, $$ which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator $$ \widehat{p}_b $$ , leading to: $$ \widehat{p}_b = \frac{1}{n+2}. $$ Another method is to use the upper bound of the confidence interval obtained using the rule of three: $$ \widehat{p}_{\text{rule of 3}} = \frac{3}{n}. $$

### Confidence intervals for the parameter p

Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed. In the equations for confidence intervals below, the variables have the following meaning:
- n1 is the number of successes out of n, the total number of trials
- $$ \widehat{p\,} = \frac{n_1}{n} $$ is the proportion of successes
- $$ z $$ is the $$ 1 - \tfrac{1}{2}\alpha $$ quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate $$ \alpha $$ . For example, for a 95% confidence level the error $$ \alpha $$  = 0.05, so $$ 1 - \tfrac{1}{2}\alpha $$  = 0.975 and $$ z $$  = 1.96.

#### Wald method

$$ \widehat{p\,} \pm z \sqrt{ \frac{ \widehat{p\,} ( 1 -\widehat{p\,} )}{ n } } . $$ 
A continuity correction of $$ \tfrac{0.5}{n} $$ may be added.

#### Agresti–Coull method

$$ \tilde{p} \pm z \sqrt{ \frac{ \tilde{p} ( 1 - \tilde{p} )}{ n + z^2 } } $$ Here the estimate of p is modified to $$ \tilde{p}= \frac{ n_1 + \frac{1}{2} z^2}{ n + z^2 } $$ This method works well for $$ n > 10 $$ and $$ n_1 \neq 0, n $$ ; for $$ n\leq 10 $$ , or when $$ n_1 $$ is 0 or n, the Wilson (score) method below can be used.

#### Arcsine method

$$ \sin^2 \left(\arcsin \left(\sqrt{\widehat{p\,}}\right) \pm \frac{z}{2\sqrt{n}} \right). $$

#### Wilson (score) method

The notation in the formula below differs from the previous formulas in two respects:
- Firstly, $$ z_x $$ has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the xth quantile of the standard normal distribution', rather than being a shorthand for 'the (1 − x/2)th quantile' as above.
- Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use $$ z = z_{\alpha / 2} $$ to get the lower bound, or use $$ z = z_{1 - \alpha/2} $$ to get the upper bound. For example: for a 95% confidence level the error $$ \alpha $$  = 0.05, so one gets the lower bound by using $$ z = z_{\alpha/2} = z_{0.025} = - 1.96 $$ , and one gets the upper bound by using $$ z = z_{1 - \alpha/2} = z_{0.975} = 1.96 $$ .
$$ \frac{ \widehat{p\,} + \frac{z^2}{2n} + z \sqrt{ \frac{\widehat{p\,}(1 - \widehat{p\,})}{n} + \frac{z^2}{4 n^2} } }{ 1 + \frac{z^2}{n} } $$

#### Comparison

The so-called "exact" (Clopper–Pearson) method is the most conservative. (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased.

## Related distributions

### Sums of binomials

If $$ X \sim B(n, p) $$ and $$ Y \sim B(m, p) $$ are independent binomial variables with the same probability p, then $$ X + Y $$ is again a binomial variable; its distribution is $$ Z = X + Y \sim B(n + m, p) $$ : $$ \begin{align} \operatorname P(Z=k) &= \sum_{i=0}^k\left[\binom{n}i p^i (1-p)^{n-i}\right]\left[\binom{m}{k-i} p^{k-i} (1-p)^{m-k+i}\right]\\ &= \binom{n+m}k p^k (1-p)^{n+m-k} \end{align} $$ A binomially distributed random variable $$ X \sim B(n, p) $$ can be considered as the sum of n Bernoulli distributed random variables. So the sum of two binomially distributed random variables $$ X \sim B(n, p) $$ and $$ Y \sim B(m, p) $$ is equivalent to the sum of n + m Bernoulli distributed random variables, which means $$ Z = X + Y \sim B(n + m, p) $$ . This can also be proven directly using the addition rule. However, if X and Y do not have the same probability p, then the variance of the sum will be smaller than the variance of a binomial variable distributed as $$ B(n + m, \bar{p}) $$ , where $$ \bar{p} $$ denotes the average of the two probabilities.

### Poisson binomial distribution

The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of n independent non-identical Bernoulli trials $$ B(p_i) $$ .

### Ratio of two binomial distributions

This result was first derived by Katz and coauthors in 1978. Let $$ X \sim B(n, p_1) $$ and $$ Y \sim B(m, p_2) $$ be independent. Let $$ T = (X/n)/(Y/m) $$ . Then log(T) is approximately normally distributed with mean log(p1/p2) and variance $$ \left(\frac{1}{p_1} - 1\right)\frac{1}{n} + \left(\frac{1}{p_2} - 1\right)\frac{1}{m} $$ .

### Conditional binomials

If X ~ B(n, p) and Y | X ~ B(X, q) (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution Y ~ B(n, pq). For example, imagine throwing n balls to a basket UX and taking the balls that hit and throwing them to another basket UY. If p is the probability to hit UX then X ~ B(n, p) is the number of balls that hit UX. If q is the probability to hit UY then the number of balls that hit UY is Y ~ B(X, q) and therefore Y ~ B(n, pq). 
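A quick simulation (a sketch using numpy's random generator, with arbitrary illustrative values n = 20, p = 0.6, q = 0.3) makes the claim Y ~ B(n, pq) plausible before the formal derivation that follows:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 20, 0.6, 0.3
trials = 200_000

x = rng.binomial(n, p, size=trials)   # X ~ B(n, p): balls that hit U_X
y = rng.binomial(x, q)                # Y | X ~ B(X, q): of those, balls that hit U_Y

# Empirical mean and variance of Y versus the B(n, pq) prediction.
print(y.mean(), n * p * q)                    # both approximately 3.6
print(y.var(), n * p * q * (1 - p * q))       # both approximately 2.95
```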
Since $$ X \sim B(n, p) $$ and $$ Y \sim B(X, q) $$ , by the law of total probability, $$ \begin{align} \Pr[Y = m] &= \sum_{k = m}^{n} \Pr[Y = m \mid X = k] \Pr[X = k] \\[2pt] &= \sum_{k=m}^n \binom{n}{k} \binom{k}{m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} \end{align} $$ Since $$ \tbinom{n}{k} \tbinom{k}{m} = \tbinom{n}{m} \tbinom{n-m}{k-m}, $$ the equation above can be expressed as $$ \Pr[Y = m] = \sum_{k=m}^{n} \binom{n}{m} \binom{n-m}{k-m} p^k q^m (1-p)^{n-k} (1-q)^{k-m} $$ Factoring $$ p^k = p^m p^{k-m} $$ and pulling all the terms that don't depend on $$ k $$ out of the sum now yields $$ \begin{align} \Pr[Y = m] &= \binom{n}{m} p^m q^m \left( \sum_{k=m}^n \binom{n-m}{k-m} p^{k-m} (1-p)^{n-k} (1-q)^{k-m} \right) \\[2pt] &= \binom{n}{m} (pq)^m \left( \sum_{k=m}^n \binom{n-m}{k-m} \left(p(1-q)\right)^{k-m} (1-p)^{n-k} \right) \end{align} $$ After substituting $$ i = k - m $$ in the expression above, we get $$ \Pr[Y = m] = \binom{n}{m} (pq)^m \left( \sum_{i=0}^{n-m} \binom{n-m}{i} (p - pq)^i (1-p)^{n-m - i} \right) $$ Notice that the sum (in the parentheses) above equals $$ (p - pq + 1 - p)^{n-m} $$ by the binomial theorem. Substituting this in finally yields $$ \begin{align} \Pr[Y=m] &= \binom{n}{m} (pq)^m (p - pq + 1 - p)^{n-m}\\[4pt] &= \binom{n}{m} (pq)^m (1-pq)^{n-m} \end{align} $$ and thus $$ Y \sim B(n, pq) $$ as desired.

### Bernoulli distribution

The Bernoulli distribution is a special case of the binomial distribution, where n = 1. Symbolically, X ~ B(1, p) has the same meaning as X ~ Bernoulli(p). Conversely, any binomial distribution, B(n, p), is the distribution of the sum of n independent Bernoulli trials, Bernoulli(p), each with the same probability p.

### Normal approximation

If n is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to B(n, p) is given by the normal distribution $$ \mathcal{N}(np,\,np(1-p)), $$ and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as n increases (at least 20) and is better when p is not near to 0 or 1. Various rules of thumb may be used to decide whether n is large enough, and p is far enough from the extremes of zero or one:
- One rule is that for n > 5 the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if $$ \frac{|1-2p|}{\sqrt{np(1-p)}}=\frac1{\sqrt{n}}\left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<0.3. $$ This can be made precise using the Berry–Esseen theorem.
- A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if $$ \mu\pm3\sigma=np\pm3\sqrt{np(1-p)}\in(0,n). $$ This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above. $$ n>9 \left(\frac{1-p}{p} \right)\quad\text{and}\quad n>9\left(\frac{p}{1-p}\right). $$ The rule $$ np\pm3\sqrt{np(1-p)}\in(0,n) $$ is totally equivalent to request that $$ np-3\sqrt{np(1-p)}>0\quad\text{and}\quad np+3\sqrt{np(1-p)}<n. $$ Moving terms around yields: $$ np>3\sqrt{np(1-p)}\quad\text{and}\quad n(1-p)>3\sqrt{np(1-p)}. $$ Since $$ 0<p<1 $$ , we can apply the square power and divide by the respective factors $$ np^2 $$ and $$ n(1-p)^2 $$ , to obtain the desired conditions: $$ n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right). $$ Notice that these conditions automatically imply that $$ n>9 $$ . 
On the other hand, apply again the square root and divide by 3, $$ \frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}>0 \quad \text{and} \quad \frac{\sqrt{n}}3 > \sqrt{\frac{p}{1-p}}>0. $$ Subtracting the second set of inequalities from the first one yields: $$ \frac{\sqrt{n}}3>\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}>-\frac{\sqrt{n}}3; $$ and so, the desired first rule is satisfied, $$ \left|\sqrt{\frac{1-p}p}-\sqrt{\frac{p}{1-p}}\,\right|<\frac{\sqrt{n}}3. $$
- Another commonly used rule is that both values np and n(1 − p) must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs. Assume that both values $$ np $$ and $$ n(1-p) $$ are greater than 9. Since $$ 0< p<1 $$ , we easily have that $$ np\geq9>9(1-p)\quad\text{and}\quad n(1-p)\geq9>9p. $$ We only have to divide now by the respective factors $$ p $$ and $$ 1-p $$ , to deduce the alternative form of the 3-standard-deviation rule: $$ n>9 \left(\frac{1-p}p\right) \quad\text{and}\quad n>9 \left(\frac{p}{1-p}\right). $$

The following is an example of applying a continuity correction. Suppose one wishes to calculate Pr(X ≤ 8) for a binomial random variable X. If Y has a distribution given by the normal approximation, then Pr(X ≤ 8) is approximated by Pr(Y ≤ 8.5). The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results. This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large n are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since B(n, p) is a sum of n independent, identically distributed Bernoulli variables with parameter p. This fact is the basis of a hypothesis test, a "proportion z-test", for the value of p using x/n, the sample proportion and estimator of p, in a common test statistic. For example, suppose one randomly samples n people out of a large population and asks them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation $$ \sigma = \sqrt{\frac{p(1-p)}{n}} $$

### Poisson approximation

The binomial distribution converges towards the Poisson distribution as the number of trials n goes to infinity while the product np converges to a finite limit. Therefore, the Poisson distribution with parameter λ = np can be used as an approximation to the binomial distribution B(n, p) if n is sufficiently large and p is sufficiently small. According to rules of thumb, this approximation is good if n ≥ 20 and p ≤ 0.05 such that np ≤ 1, or if n > 50 and p < 0.1 such that np < 5, or if n ≥ 100 and np ≤ 10. Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein.

### Limiting distributions

- Poisson limit theorem: As n approaches ∞ and p approaches 0 with the product np held fixed, the Binomial(n, p) distribution approaches the Poisson distribution with expected value λ = np.
- de Moivre–Laplace theorem: As n approaches ∞ while p remains fixed, the distribution of $$ \frac{X-np}{\sqrt{np(1-p)}} $$ approaches the normal distribution with expected value 0 and variance 1. 
This result is sometimes loosely stated by saying that the distribution of the standardized variable is asymptotically normal with expected value 0 and variance 1. This result is a specific case of the central limit theorem.

### Beta distribution

The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of k successes given n independent events each with a probability p of success. Mathematically, when α = k + 1 and β = n − k + 1, the beta distribution and the binomial distribution are related by a factor of n + 1: $$ \operatorname{Beta}(p;\alpha;\beta) = (n+1)B(k;n;p) $$ Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference: $$ P(p;\alpha,\beta) = \frac{p^{\alpha-1}(1-p)^{\beta-1}}{\operatorname{Beta}(\alpha,\beta)}. $$ Given a uniform prior, the posterior distribution for the probability of success p given n independent events with k observed successes is a beta distribution.

## Computational methods

### Random number generation

Methods for random number generation where the marginal distribution is a binomial distribution are well-established. One way to generate random variate samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability Pr(X = k) for all values k from 0 through n. (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step.

## History

This distribution was derived by Jacob Bernoulli. He considered the case where p = r/(r + s), where p is the probability of success and r and s are positive integers. Blaise Pascal had earlier considered the case where p = 1/2, tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.
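The inversion algorithm just described can be sketched in a few lines of Python (standard library only; the helper names are illustrative, not from any particular package):

```python
import math
import random

def binomial_pmf(k, n, p):
    """Pr(X = k) for X ~ B(n, p)."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def sample_binomial(n, p, rng=random):
    """Draw one sample from B(n, p) by inverting the CDF."""
    u = rng.random()                 # uniform on [0, 1)
    cumulative = 0.0
    for k in range(n + 1):
        cumulative += binomial_pmf(k, n, p)
        if u < cumulative:
            return k
    return n                         # guard against floating-point round-off

# Example: the empirical mean of many draws should be close to n*p.
draws = [sample_binomial(10, 0.3) for _ in range(100_000)]
print(sum(draws) / len(draws))       # approximately 3.0
```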
https://en.wikipedia.org/wiki/Binomial_distribution
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an $$ n $$ -dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of $$ n $$ -dimensional Euclidean space. One-dimensional manifolds include lines and circles, but not self-crossing curves such as a figure 8. Two-dimensional manifolds are also called surfaces. Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane.

The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described in terms of well-understood topological properties of simpler spaces. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. The concept has applications in computer graphics given the need to associate pictures with coordinates (e.g. CT scans). Manifolds can be equipped with additional structure. One important class of manifolds are differentiable manifolds; their differentiable structure allows calculus to be done. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. The study of manifolds requires working knowledge of calculus and topology.

## Motivating examples

### Circle

After a line, a circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated the same as a small piece of a line. Consider, for instance, the top part of the unit circle, x2 + y2 = 1, where the y-coordinate is positive (indicated by the yellow arc in Figure 1). Any point of this arc can be uniquely described by its x-coordinate. So, projection onto the first coordinate is a continuous and invertible mapping from the upper arc to the open interval (−1, 1): $$ \chi_{\mathrm{top}}(x,y) = x . \, $$ Such functions along with the open regions they map are called charts. Similarly, there are charts for the bottom (red), left (blue), and right (green) parts of the circle: $$ \begin{align} \chi_\mathrm{bottom}(x, y) &= x \\ \chi_\mathrm{left}(x, y) &= y \\ \chi_\mathrm{right}(x, y) &= y. \end{align} $$ Together, these parts cover the whole circle, and the four charts form an atlas for the circle.

The top and right charts, $$ \chi_\mathrm{top} $$ and $$ \chi_\mathrm{right} $$ respectively, overlap in their domain: their intersection lies in the quarter of the circle where both $$ x $$ and $$ y $$ -coordinates are positive. Both map this part into the interval $$ (0,1) $$ , though differently. Thus a function $$ T:(0,1)\rightarrow(0,1)=\chi_\mathrm{right} \circ \chi^{-1}_\mathrm{top} $$ can be constructed, which takes values from the co-domain of $$ \chi_\mathrm{top} $$ back to the circle using the inverse, followed by $$ \chi_\mathrm{right} $$ back to the interval. If a is any number in $$ (0,1) $$ , then: $$ \begin{align} T(a) &= \chi_\mathrm{right}\left(\chi_\mathrm{top}^{-1}\left[a\right]\right) \\ &= \chi_\mathrm{right}\left(a, \sqrt{1 - a^2}\right) \\ &= \sqrt{1 - a^2} \end{align} $$ Such a function is called a transition map. The top, bottom, left, and right charts do not form the only possible atlas. 
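A short numerical check of the four-chart atlas and the transition map T just described (a sketch in plain Python; the sample point a = 0.6 is arbitrary):

```python
import math

def chi_top(x, y):       # chart for the arc y > 0
    return x

def chi_right(x, y):     # chart for the arc x > 0
    return y

def chi_top_inverse(a):  # back to the circle, on the arc y > 0
    return a, math.sqrt(1 - a * a)

def T(a):
    """Transition map chi_right o chi_top^{-1} on the overlap 0 < a < 1."""
    return chi_right(*chi_top_inverse(a))

a = 0.6                                   # a point in (0, 1), i.e. in the overlap
x, y = chi_top_inverse(a)
assert abs(x * x + y * y - 1) < 1e-12     # the preimage lies on the unit circle
assert abs(T(a) - math.sqrt(1 - a * a)) < 1e-12
print(T(a))                               # 0.8, as predicted by T(a) = sqrt(1 - a^2)
```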
Charts need not be geometric projections, and the number of charts is a matter of choice. Consider the charts $$ \chi_\mathrm{minus}(x, y) = s = \frac{y}{1 + x} $$ and $$ \chi_\mathrm{plus}(x, y) = t = \frac{y}{1 - x} $$ Here s is the slope of the line through the point at coordinates (x, y) and the fixed pivot point (−1, 0); similarly, t is the opposite of the slope of the line through the points at coordinates (x, y) and (+1, 0). The inverse mapping from s to (x, y) is given by $$ \begin{align} x &= \frac{1 - s^2}{1 + s^2} \\[5pt] y &= \frac{2s}{1 + s^2} \end{align} $$ It can be confirmed that x2 + y2 = 1 for all values of s and t. These two charts provide a second atlas for the circle, with the transition map $$ t = \frac{1}{s} $$ (that is, one has this relation between s and t for every point where s and t are both nonzero). Each chart omits a single point, either (−1, 0) for s or (+1, 0) for t, so neither chart alone is sufficient to cover the whole circle. It can be proved that it is not possible to cover the full circle with a single chart. For example, although it is possible to construct a circle from a single line interval by overlapping and "gluing" the ends, this does not produce a chart; a portion of the circle will be mapped to both ends at once, losing invertibility.

### Sphere

The sphere is an example of a surface. The unit sphere of implicit equation $$ x^2 + y^2 + z^2 = 1 $$ may be covered by an atlas of six charts: the plane $$ z = 0 $$ divides the sphere into two half spheres ($$ z > 0 $$ and $$ z < 0 $$), which may both be mapped on the disc $$ x^2 + y^2 < 1 $$ by the projection on the $$ xy $$ plane of coordinates. This provides two charts; the four other charts are provided by a similar construction with the two other coordinate planes. As with the circle, one may define one chart that covers the whole sphere excluding one point. Thus two charts are sufficient, but the sphere cannot be covered by a single chart. This example is historically significant, as it has motivated the terminology; it became apparent that the whole surface of the Earth cannot have a plane representation consisting of a single map (also called "chart", see nautical chart), and therefore one needs atlases for covering the whole Earth surface.

### Other curves

Manifolds do not need to be connected (all in "one piece"); an example is a pair of separate circles. Manifolds need not be closed; thus a line segment without its end points is a manifold. They are never countable, unless the dimension of the manifold is 0. Putting these freedoms together, other examples of manifolds are a parabola, a hyperbola, and the locus of points on a cubic curve (a closed loop piece and an open, infinite piece). However, excluded are examples like two touching circles that share a point to form a figure-8; at the shared point, a satisfactory chart cannot be created. Even with the bending allowed by topology, the vicinity of the shared point looks like a "+", not a line. A "+" is not homeomorphic to a line segment, since deleting the center point from the "+" gives a space with four components (i.e. pieces), whereas deleting a point from a line segment gives a space with at most two pieces; topological operations always preserve the number of pieces.

## Mathematical definition

Informally, a manifold is a space that is "modeled on" Euclidean space. There are many different kinds of manifolds. In geometry and topology, all manifolds are topological manifolds, possibly with additional structure. 
A manifold can be constructed by giving a collection of coordinate charts, that is, a covering by open sets with homeomorphisms to a Euclidean space, and patching functions: homeomorphisms from one region of Euclidean space to another region if they correspond to the same part of the manifold in two different coordinate charts. A manifold can be given additional structure if the patching functions satisfy axioms beyond continuity. For instance, differentiable manifolds have homeomorphisms on overlapping neighborhoods diffeomorphic with each other, so that the manifold has a well-defined set of functions which are differentiable in each neighborhood, thus differentiable on the manifold as a whole.

Formally, a (topological) manifold is a second countable Hausdorff space that is locally homeomorphic to a Euclidean space. Second countable and Hausdorff are point-set conditions; second countable excludes spaces which are in some sense 'too large' such as the long line, while Hausdorff excludes spaces such as "the line with two origins" (these generalizations of manifolds are discussed in non-Hausdorff manifolds). Locally homeomorphic to a Euclidean space means that every point has a neighborhood homeomorphic to an open subset of the Euclidean space $$ \R^n, $$ for some nonnegative integer $$ n $$ . This implies that either the point is an isolated point (if $$ n=0 $$ ), or it has a neighborhood homeomorphic to the open ball $$ \mathbf{B}^n = \left\{ (x_1, x_2, \dots, x_n)\in\R^n: x_1^2 + x_2^2 + \cdots + x_n^2 < 1 \right\}. $$ This implies also that every point has a neighborhood homeomorphic to $$ \R^n $$ since $$ \R^n $$ is homeomorphic, and even diffeomorphic, to any open ball in it (for $$ n>0 $$ ).

The $$ n $$ that appears in the preceding definition is called the local dimension of the manifold. Generally manifolds are taken to have a constant local dimension, and the local dimension is then called the dimension of the manifold. This is, in particular, the case when manifolds are connected. However, some authors admit manifolds that are not connected, and where different points can have different dimensions. If a manifold has a fixed dimension, this can be emphasized by calling it a pure manifold. For example, the (surface of a) sphere has a constant dimension of 2 and is therefore a pure manifold whereas the disjoint union of a sphere and a line in three-dimensional space is not a pure manifold. Since dimension is a local invariant (i.e. the map sending each point to the dimension of its neighbourhood over which a chart is defined, is locally constant), each connected component has a fixed dimension.

Sheaf-theoretically, a manifold is a locally ringed space, whose structure sheaf is locally isomorphic to the sheaf of continuous (or differentiable, or complex-analytic, etc.) functions on Euclidean space. This definition is mostly used when discussing analytic manifolds in algebraic geometry.

## Charts, atlases, and transition maps

The spherical Earth is navigated using flat maps or charts, collected in an atlas. Similarly, a manifold can be described using mathematical maps, called coordinate charts, collected in a mathematical atlas. It is not generally possible to describe a manifold with just one chart, because the global structure of the manifold is different from the simple structure of the charts. For example, no single flat map can represent the entire Earth without separation of adjacent features across the map's boundaries or duplication of coverage.
When a manifold is constructed from multiple overlapping charts, the regions where they overlap carry information essential to understanding the global structure.

### Charts

A coordinate map, a coordinate chart, or simply a chart, of a manifold is an invertible map between a subset of the manifold and a simple space such that both the map and its inverse preserve the desired structure. For a topological manifold, the simple space is a subset of some Euclidean space $$ \R^n $$ and interest focuses on the topological structure. This structure is preserved by homeomorphisms, invertible maps that are continuous in both directions. In the case of a differentiable manifold, a set of charts called an atlas, whose transition functions (see below) are all differentiable, allows us to do calculus on it. Polar coordinates, for example, form a chart for the plane $$ \R^2 $$ minus the positive x-axis and the origin. Another example of a chart is the map χtop mentioned above, a chart for the circle.

### Atlases

The description of most manifolds requires more than one chart. A specific collection of charts which covers a manifold is called an atlas. An atlas is not unique as all manifolds can be covered in multiple ways using different combinations of charts. Two atlases are said to be equivalent if their union is also an atlas. The atlas containing all possible charts consistent with a given atlas is called the maximal atlas (i.e. an equivalence class containing that given atlas). Unlike an ordinary atlas, the maximal atlas of a given manifold is unique. Though useful for definitions, it is an abstract object and not used directly (e.g. in calculations).

### Transition maps

Charts in an atlas may overlap and a single point of a manifold may be represented in several charts. If two charts overlap, parts of them represent the same region of the manifold, just as a map of Europe and a map of Russia may both contain Moscow. Given two overlapping charts, a transition function can be defined which goes from an open ball in $$ \R^n $$ to the manifold and then back to another (or perhaps the same) open ball in $$ \R^n $$ . The resultant map, like the map T in the circle example above, is called a change of coordinates, a coordinate transformation, a transition function, or a transition map.

## Additional structure

An atlas can also be used to define additional structure on the manifold. The structure is first defined on each chart separately. If all transition maps are compatible with this structure, the structure transfers to the manifold. This is the standard way differentiable manifolds are defined. If the transition functions of an atlas for a topological manifold preserve the natural differential structure of $$ \R^n $$ (that is, if they are diffeomorphisms), the differential structure transfers to the manifold and turns it into a differentiable manifold. Complex manifolds are introduced in an analogous way by requiring that the transition functions of an atlas are holomorphic functions. For symplectic manifolds, the transition functions must be symplectomorphisms.

The structure on the manifold depends on the atlas, but sometimes different atlases can be said to give rise to the same structure. Such atlases are called compatible. These notions are made precise in general through the use of pseudogroups.

## Manifold with boundary

A manifold with boundary is a manifold with an edge. For example, a disk (circle plus interior) is a 2-manifold whose boundary is the circle, a 1-manifold.
The boundary of an $$ n $$ -manifold with boundary is an $$ (n-1) $$ -manifold. In three dimensions, a ball (sphere plus interior) is a 3-manifold with boundary. Its boundary is a sphere, a 2-manifold. In technical language, a manifold with boundary is a space containing both interior points and boundary points. Every interior point has a neighborhood homeomorphic to the open $$ n $$ -ball $$ \{(x_1,x_2,\dots,x_n)\vert\Sigma x_i^2<1\} $$ . Every boundary point has a neighborhood homeomorphic to the "half" $$ n $$ -ball $$ \{(x_1,x_2,\dots,x_n)\vert\Sigma x_i^2<1\text{ and }x_1\geq0\} $$ . Any homeomorphism between half-balls must send points with $$ x_1=0 $$ to points with $$ x_1=0 $$ . This invariance allows one to "define" boundary points; see the next paragraph.

Note that a square with interior is not a manifold with boundary. The four vertices are neither locally homeomorphic to Euclidean space nor to Euclidean half-space. This is an example of a manifold with corners. Similarly, products of manifolds with boundaries are not generally manifolds with boundaries, but instead are manifolds with corners.

### Boundary and interior

Let $$ M $$ be a manifold with boundary. The interior of $$ M $$ , denoted $$ \operatorname{Int} M $$ , is the set of points in $$ M $$ which have neighborhoods homeomorphic to an open subset of $$ \R^n $$ . The boundary of $$ M $$ , denoted $$ \partial M $$ , is the complement of $$ \operatorname{Int} M $$ in $$ M $$ . The boundary points can be characterized as those points which land on the boundary hyperplane $$ (x_n=0) $$ of $$ \R^n_+ $$ under some coordinate chart. If $$ M $$ is a manifold with boundary of dimension $$ n $$ , then $$ \operatorname{Int} M $$ is a manifold (without boundary) of dimension $$ n $$ and $$ \partial M $$ is a manifold (without boundary) of dimension $$ n-1 $$ .

## Construction

A single manifold can be constructed in different ways, each stressing a different aspect of the manifold, thereby leading to a slightly different viewpoint.

### Charts

Perhaps the simplest way to construct a manifold is the one used in the example above of the circle. First, a subset of $$ \R^2 $$ is identified, and then an atlas covering this subset is constructed. The concept of manifold grew historically from constructions like this. Here is another example, applying this method to the construction of a sphere:

#### Sphere with charts

A sphere can be treated in almost the same way as the circle. In mathematics a sphere is just the surface (not the solid interior), which can be defined as a subset of $$ \R^3 $$ : $$ S = \left\{ (x,y,z) \in \R^3 \mid x^2 + y^2 + z^2 = 1 \right\}. $$ The sphere is two-dimensional, so each chart will map part of the sphere to an open subset of $$ \R^2 $$ . Consider the northern hemisphere, which is the part with positive z coordinate (coloured red in the picture on the right). The function χ defined by $$ \chi(x, y, z) = (x, y),\ $$ maps the northern hemisphere to the open unit disc by projecting it on the (x, y) plane. A similar chart exists for the southern hemisphere. Together with two charts projecting on the (x, z) plane and two charts projecting on the (y, z) plane, an atlas of six charts is obtained which covers the entire sphere. This can be easily generalized to higher-dimensional spheres.

### Patchwork

A manifold can be constructed by gluing together pieces in a consistent manner, making them into overlapping charts.
This construction is possible for any manifold and hence it is often used as a characterisation, especially for differentiable and Riemannian manifolds. It focuses on an atlas, as the patches naturally provide charts, and since there is no exterior space involved it leads to an intrinsic view of the manifold.

The manifold is constructed by specifying an atlas, which is itself defined by transition maps. A point of the manifold is therefore an equivalence class of points which are mapped to each other by transition maps. Charts map equivalence classes to points of a single patch. There are usually strong demands on the consistency of the transition maps. For topological manifolds they are required to be homeomorphisms; if they are also diffeomorphisms, the resulting manifold is a differentiable manifold.

This can be illustrated with the transition map t = 1⁄s from the second half of the circle example. Start with two copies of the line. Use the coordinate s for the first copy, and t for the second copy. Now, glue both copies together by identifying the point t on the second copy with the point s = 1⁄t on the first copy (the points t = 0 and s = 0 are not identified with any point on the first and second copy, respectively). This gives a circle.

#### Intrinsic and extrinsic view

The first construction and this construction are very similar, but represent rather different points of view. In the first construction, the manifold is seen as embedded in some Euclidean space. This is the extrinsic view. When a manifold is viewed in this way, it is easy to use intuition from Euclidean spaces to define additional structure. For example, in a Euclidean space, it is always clear whether a vector at some point is tangential or normal to some surface through that point. The patchwork construction does not use any embedding, but simply views the manifold as a topological space by itself. This abstract point of view is called the intrinsic view. It can make it harder to imagine what a tangent vector might be, and there is no intrinsic notion of a normal bundle, but instead there is an intrinsic stable normal bundle.

#### n-Sphere as a patchwork

The n-sphere Sn is a generalisation of the idea of a circle (1-sphere) and sphere (2-sphere) to higher dimensions. An n-sphere Sn can be constructed by gluing together two copies of $$ \R^n $$ . The transition map between them is inversion in a sphere, defined as $$ \R^n \setminus \{0\} \to \R^n \setminus \{0\}: x \mapsto x/\|x\|^2. $$ This function is its own inverse and thus can be used in both directions. As the transition map is a smooth function, this atlas defines a smooth manifold. In the case n = 1, the example simplifies to the circle example given earlier.

### Identifying points of a manifold

It is possible to define different points of a manifold to be the same point. This can be visualized as gluing these points together in a single point, forming a quotient space. There is, however, no reason to expect such quotient spaces to be manifolds. Among the possible quotient spaces that are not necessarily manifolds, orbifolds and CW complexes are considered to be relatively well-behaved. An example of a quotient space of a manifold that is also a manifold is the real projective space, identified as a quotient space of the corresponding sphere.

One method of identifying points (gluing them together) is through a right (or left) action of a group, which acts on the manifold. Two points are identified if one is moved onto the other by some group element.
If M is the manifold and G is the group, the resulting quotient space is denoted by M / G (or G \ M). Manifolds which can be constructed by identifying points include tori and real projective spaces (starting with a plane and a sphere, respectively).

### Gluing along boundaries

Two manifolds with boundaries can be glued together along a boundary. If this is done the right way, the result is also a manifold. Similarly, two boundaries of a single manifold can be glued together. Formally, the gluing is defined by a bijection between the two boundaries. Two points are identified when they are mapped onto each other. For a topological manifold, this bijection should be a homeomorphism, otherwise the result will not be a topological manifold. Similarly, for a differentiable manifold, it has to be a diffeomorphism. For other manifolds, other structures should be preserved.

A finite cylinder may be constructed as a manifold by starting with a strip [0,1] × [0,1] and gluing a pair of opposite edges on the boundary by a suitable diffeomorphism. A projective plane may be obtained by gluing a sphere with a hole in it to a Möbius strip along their respective circular boundaries.

### Cartesian products

The Cartesian product of manifolds is also a manifold. The dimension of the product manifold is the sum of the dimensions of its factors. Its topology is the product topology, and a Cartesian product of charts is a chart for the product manifold. Thus, an atlas for the product manifold can be constructed using atlases for its factors. If these atlases define a differential structure on the factors, the corresponding atlas defines a differential structure on the product manifold. The same is true for any other structure defined on the factors. If one of the factors has a boundary, the product manifold also has a boundary. Cartesian products may be used to construct tori and finite cylinders, for example, as S1 × S1 and S1 × [0,1], respectively.

## History

The study of manifolds combines many important areas of mathematics: it generalizes concepts such as curves and surfaces as well as ideas from linear algebra and topology.

### Early development

Before the modern concept of a manifold there were several important results. Non-Euclidean geometry considers spaces where Euclid's parallel postulate fails. Saccheri first studied such geometries in 1733, but sought only to disprove them. Gauss, Bolyai and Lobachevsky independently discovered them 100 years later. Their research uncovered two types of spaces whose geometric structures differ from that of classical Euclidean space; these gave rise to hyperbolic geometry and elliptic geometry. In the modern theory of manifolds, these notions correspond to Riemannian manifolds with constant negative and positive curvature, respectively.

Carl Friedrich Gauss may have been the first to consider abstract spaces as mathematical objects in their own right. His theorema egregium gives a method for computing the curvature of a surface without considering the ambient space in which the surface lies. Such a surface would, in modern terminology, be called a manifold; and in modern terms, the theorem proved that the curvature of the surface is an intrinsic property. Manifold theory has come to focus exclusively on these intrinsic properties (or invariants), while largely ignoring the extrinsic properties of the ambient space.

Another, more topological example of an intrinsic property of a manifold is its Euler characteristic.
Leonhard Euler showed that for a convex polytope in the three-dimensional Euclidean space with V vertices (or corners), E edges, and F faces, $$ V - E + F = 2.\ $$ The same formula will hold if we project the vertices and edges of the polytope onto a sphere, creating a topological map with V vertices, E edges, and F faces, and in fact, will remain true for any spherical map, even if it does not arise from any convex polytope. Thus 2 is a topological invariant of the sphere, called its Euler characteristic. On the other hand, a torus can be sliced open by its 'parallel' and 'meridian' circles, creating a map with V = 1 vertex, E = 2 edges, and F = 1 face. Thus the Euler characteristic of the torus is 1 − 2 + 1 = 0. The Euler characteristic of other surfaces is a useful topological invariant, which can be extended to higher dimensions using Betti numbers. In the mid nineteenth century, the Gauss–Bonnet theorem linked the Euler characteristic to the Gaussian curvature. ### Synthesis Investigations of Niels Henrik Abel and Carl Gustav Jacobi on inversion of elliptic integrals in the first half of 19th century led them to consider special types of complex manifolds, now known as Jacobians. Bernhard Riemann further contributed to their theory, clarifying the geometric meaning of the process of analytic continuation of functions of complex variables. Another important source of manifolds in 19th century mathematics was analytical mechanics, as developed by Siméon Poisson, Jacobi, and William Rowan Hamilton. The possible states of a mechanical system are thought to be points of an abstract space, phase space in Lagrangian and Hamiltonian formalisms of classical mechanics. This space is, in fact, a high-dimensional manifold, whose dimension corresponds to the degrees of freedom of the system and where the points are specified by their generalized coordinates. For an unconstrained movement of free particles the manifold is equivalent to the Euclidean space, but various conservation laws constrain it to more complicated formations, e.g. Liouville tori. The theory of a rotating solid body, developed in the 18th century by Leonhard Euler and Joseph-Louis Lagrange, gives another example where the manifold is nontrivial. Geometrical and topological aspects of classical mechanics were emphasized by Henri Poincaré, one of the founders of topology. Riemann was the first one to do extensive work generalizing the idea of a surface to higher dimensions. The name manifold comes from Riemann's original German term, Mannigfaltigkeit, which William Kingdon Clifford translated as "manifoldness". In his Göttingen inaugural lecture, Riemann described the set of all possible values of a variable with certain constraints as a Mannigfaltigkeit, because the variable can have many values. He distinguishes between stetige Mannigfaltigkeit and diskrete Mannigfaltigkeit (continuous manifoldness and discontinuous manifoldness), depending on whether the value changes continuously or not. As continuous examples, Riemann refers to not only colors and the locations of objects in space, but also the possible shapes of a spatial figure. Using induction, Riemann constructs an n-fach ausgedehnte Mannigfaltigkeit (n times extended manifoldness or n-dimensional manifoldness) as a continuous stack of (n−1) dimensional manifoldnesses. Riemann's intuitive notion of a Mannigfaltigkeit evolved into what is today formalized as a manifold. Riemannian manifolds and Riemann surfaces are named after Riemann. 
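To make the V − E + F bookkeeping from the start of this section concrete, here is a minimal sketch (added for illustration, not from the source) that checks the counts quoted above for two convex polytopes and for the one-vertex map on the torus:

```c
#include <stdio.h>

/* Euler characteristic of a map or polytope from its vertex, edge and face counts. */
static int euler_characteristic(int v, int e, int f) {
    return v - e + f;
}

int main(void) {
    printf("tetrahedron: %d\n", euler_characteristic(4, 6, 4));  /* prints 2 */
    printf("cube:        %d\n", euler_characteristic(8, 12, 6)); /* prints 2 */
    printf("torus map:   %d\n", euler_characteristic(1, 2, 1));  /* prints 0 */
    return 0;
}
```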
### Poincaré's definition In his very influential paper, Analysis Situs, Henri Poincaré gave a definition of a differentiable manifold (variété) which served as a precursor to the modern concept of a manifold. In the first section of Analysis Situs, Poincaré defines a manifold as the level set of a continuously differentiable function between Euclidean spaces that satisfies the nondegeneracy hypothesis of the implicit function theorem. In the third section, he begins by remarking that the graph of a continuously differentiable function is a manifold in the latter sense. He then proposes a new, more general, definition of manifold based on a 'chain of manifolds' (une chaîne des variétés). Poincaré's notion of a chain of manifolds is a precursor to the modern notion of atlas. In particular, he considers two manifolds defined respectively as graphs of functions $$ \theta(y) $$ and $$ \theta'\left(y'\right) $$ . If these manifolds overlap (a une partie commune), then he requires that the coordinates $$ y $$ depend continuously differentiably on the coordinates $$ y' $$ and vice versa ('...les sont fonctions analytiques des et inversement'). In this way he introduces a precursor to the notion of a chart and of a transition map. For example, the unit circle in the plane can be thought of as the graph of the function $$ y = \sqrt{1 - x^2} $$ or else the function $$ y = -\sqrt{1 - x^2} $$ in a neighborhood of every point except the points (1, 0) and (−1, 0); and in a neighborhood of those points, it can be thought of as the graph of, respectively, $$ x = \sqrt{1 - y^2} $$ and $$ x = -\sqrt{1 - y^2} $$ . The circle can be represented by a graph in the neighborhood of every point because the left hand side of its defining equation $$ x^2 + y^2 - 1 = 0 $$ has nonzero gradient at every point of the circle. By the implicit function theorem, every submanifold of Euclidean space is locally the graph of a function. Hermann Weyl gave an intrinsic definition for differentiable manifolds in his lecture course on Riemann surfaces in 1911–1912, opening the road to the general concept of a topological space that followed shortly. During the 1930s Hassler Whitney and others clarified the foundational aspects of the subject, and thus intuitions dating back to the latter half of the 19th century became precise, and developed through differential geometry and Lie group theory. Notably, the Whitney embedding theorem showed that the intrinsic definition in terms of charts was equivalent to Poincaré's definition in terms of subsets of Euclidean space. ### Topology of manifolds: highlights Two-dimensional manifolds, also known as a 2D surfaces embedded in our common 3D space, were considered by Riemann under the guise of Riemann surfaces, and rigorously classified in the beginning of the 20th century by Poul Heegaard and Max Dehn. Poincaré pioneered the study of three-dimensional manifolds and raised a fundamental question about them, today known as the Poincaré conjecture. After nearly a century, Grigori Perelman proved the Poincaré conjecture (see the Solution of the Poincaré conjecture). William Thurston's geometrization program, formulated in the 1970s, provided a far-reaching extension of the Poincaré conjecture to the general three-dimensional manifolds. 
Four-dimensional manifolds were brought to the forefront of mathematical research in the 1980s by Michael Freedman and, in a different setting, by Simon Donaldson, who was motivated by the then recent progress in theoretical physics (Yang–Mills theory), where they serve as a substitute for ordinary 'flat' spacetime. Andrey Markov Jr. showed in 1960 that no algorithm exists for classifying four-dimensional manifolds. Important work on higher-dimensional manifolds, including analogues of the Poincaré conjecture, had been done earlier by René Thom, John Milnor, Stephen Smale and Sergei Novikov. A very pervasive and flexible technique underlying much work on the topology of manifolds is Morse theory.

## Additional structure

### Topological manifolds

The simplest kind of manifold to define is the topological manifold, which looks locally like some "ordinary" Euclidean space $$ \R^n $$ . By definition, all manifolds are topological manifolds, so the phrase "topological manifold" is usually used to emphasize that a manifold lacks additional structure, or that only its topological properties are being considered. Formally, a topological manifold is a topological space locally homeomorphic to a Euclidean space. This means that every point has a neighbourhood for which there exists a homeomorphism (a bijective continuous function whose inverse is also continuous) mapping that neighbourhood to $$ \R^n $$ . These homeomorphisms are the charts of the manifold.

A topological manifold looks locally like a Euclidean space in a rather weak manner: while for each individual chart it is possible to distinguish differentiable functions or measure distances and angles, merely by virtue of being a topological manifold a space does not have any particular and consistent choice of such concepts. In order to discuss such properties for a manifold, one needs to specify further structure and consider differentiable manifolds and Riemannian manifolds discussed below. In particular, the same underlying topological manifold can have several mutually incompatible classes of differentiable functions and an infinite number of ways to specify distances and angles.

Usually additional technical assumptions on the topological space are made to exclude pathological cases. It is customary to require that the space be Hausdorff and second countable. The dimension of the manifold at a certain point is the dimension of the Euclidean space that the charts at that point map to (number n in the definition). All points in a connected manifold have the same dimension. Some authors require that all charts of a topological manifold map to Euclidean spaces of same dimension. In that case every topological manifold has a topological invariant, its dimension.

### Differentiable manifolds

For most applications, a special kind of topological manifold, namely, a differentiable manifold, is used. If the local charts on a manifold are compatible in a certain sense, one can define directions, tangent spaces, and differentiable functions on that manifold. In particular it is possible to use calculus on a differentiable manifold. Each point of an n-dimensional differentiable manifold has a tangent space. This is an n-dimensional Euclidean space consisting of the tangent vectors of the curves through the point. Two important classes of differentiable manifolds are smooth and analytic manifolds. For smooth manifolds the transition maps are smooth, that is, infinitely differentiable.
Analytic manifolds are smooth manifolds with the additional condition that the transition maps are analytic (they can be expressed as power series). The sphere can be given analytic structure, as can most familiar curves and surfaces. A rectifiable set generalizes the idea of a piecewise smooth or rectifiable curve to higher dimensions; however, rectifiable sets are not in general manifolds.

### Riemannian manifolds

To measure distances and angles on manifolds, the manifold must be Riemannian. A Riemannian manifold is a differentiable manifold in which each tangent space is equipped with an inner product $$ \langle \cdot,\cdot\rangle $$ in a manner which varies smoothly from point to point. Given two tangent vectors $$ u $$ and $$ v $$ , the inner product $$ \langle u,v\rangle $$ gives a real number. The dot (or scalar) product is a typical example of an inner product. This allows one to define various notions such as length, angles, areas (or volumes), curvature and divergence of vector fields.

All differentiable manifolds (of constant dimension) can be given the structure of a Riemannian manifold. The Euclidean space itself carries a natural structure of Riemannian manifold (the tangent spaces are naturally identified with the Euclidean space itself and carry the standard scalar product of the space). Many familiar curves and surfaces, including for example all n-spheres, are specified as subspaces of a Euclidean space and inherit a metric from their embedding in it.

### Finsler manifolds

A Finsler manifold allows the definition of distance but does not require the concept of angle; it is an analytic manifold in which each tangent space is equipped with a norm, $$ \|\cdot\| $$ , in a manner which varies smoothly from point to point. This norm can be extended to a metric, defining the length of a curve; but it cannot in general be used to define an inner product. Any Riemannian manifold is a Finsler manifold.

### Lie groups

Lie groups, named after Sophus Lie, are differentiable manifolds that carry also the structure of a group which is such that the group operations are defined by smooth maps. A Euclidean vector space with the group operation of vector addition is an example of a non-compact Lie group. A simple example of a compact Lie group is the circle: the group operation is simply rotation. This group, known as $$ \operatorname{U}(1) $$ , can be also characterised as the group of complex numbers of modulus 1 with multiplication as the group operation. Other examples of Lie groups include special groups of matrices, which are all subgroups of the general linear group, the group of $$ n\times n $$ matrices with non-zero determinant. If the matrix entries are real numbers, this will be an $$ n^2 $$ -dimensional disconnected manifold. The orthogonal groups, the symmetry groups of the sphere and hyperspheres, are $$ n(n-1)/2 $$ dimensional manifolds, where $$ n-1 $$ is the dimension of the sphere. Further examples can be found in the table of Lie groups.

### Other types of manifolds

- A complex manifold is a manifold whose charts take values in $$ \Complex^n $$ and whose transition functions are holomorphic on the overlaps. These manifolds are the basic objects of study in complex geometry. A one-complex-dimensional manifold is called a Riemann surface. An $$ n $$ -dimensional complex manifold has dimension $$ 2n $$ as a real differentiable manifold.
- A CR manifold is a manifold modeled on boundaries of domains in $$ \Complex^n $$ .
- 'Infinite dimensional manifolds': to allow for infinite dimensions, one may consider Banach manifolds which are locally homeomorphic to Banach spaces. Similarly, Fréchet manifolds are locally homeomorphic to Fréchet spaces. - A symplectic manifold is a kind of manifold which is used to represent the phase spaces in classical mechanics. They are endowed with a 2-form that defines the Poisson bracket. A closely related type of manifold is a contact manifold. - A combinatorial manifold is a kind of manifold which is discretization of a manifold. It usually means a piecewise linear manifold made by simplicial complexes. - A digital manifold is a special kind of combinatorial manifold which is defined in digital space. See digital topology. ## Classification and invariants Different notions of manifolds have different notions of classification and invariant; in this section we focus on smooth closed manifolds. The classification of smooth closed manifolds is well understood in principle, except in dimension 4: in low dimensions (2 and 3) it is geometric, via the uniformization theorem and the solution of the Poincaré conjecture, and in high dimension (5 and above) it is algebraic, via surgery theory. This is a classification in principle: the general question of whether two smooth manifolds are diffeomorphic is not computable in general. Further, specific computations remain difficult, and there are many open questions. Orientable surfaces can be visualized, and their diffeomorphism classes enumerated, by genus. Given two orientable surfaces, one can determine if they are diffeomorphic by computing their respective genera and comparing: they are diffeomorphic if and only if the genera are equal, so the genus forms a complete set of invariants. This is much harder in higher dimensions: higher-dimensional manifolds cannot be directly visualized (though visual intuition is useful in understanding them), nor can their diffeomorphism classes be enumerated, nor can one in general determine if two different descriptions of a higher-dimensional manifold refer to the same object. However, one can determine if two manifolds are different if there is some intrinsic characteristic that differentiates them. Such criteria are commonly referred to as invariants, because, while they may be defined in terms of some presentation (such as the genus in terms of a triangulation), they are the same relative to all possible descriptions of a particular manifold: they are invariant under different descriptions. One could hope to develop an arsenal of invariant criteria that would definitively classify all manifolds up to isomorphism. It is known that for manifolds of dimension 4 and higher, no program exists that can decide whether two manifolds are diffeomorphic. Smooth manifolds have a rich set of invariants, coming from point-set topology, classic algebraic topology, and geometric topology. The most familiar invariants, which are visible for surfaces, are orientability (a normal invariant, also detected by homology) and genus (a homological invariant). Smooth closed manifolds have no local invariants (other than dimension), though geometric manifolds have local invariants, notably the curvature of a Riemannian manifold and the torsion of a manifold equipped with an affine connection. This distinction between local invariants and no local invariants is a common way to distinguish between geometry and topology. All invariants of a smooth closed manifold are thus global. 
Algebraic topology is a source of a number of important global invariant properties. Some key criteria include the simply connected property and orientability (see below). Indeed, several branches of mathematics, such as homology and homotopy theory, and the theory of characteristic classes were founded in order to study invariant properties of manifolds. ## Surfaces ### Orientability In dimensions two and higher, a simple but important invariant criterion is the question of whether a manifold admits a meaningful orientation. Consider a topological manifold with charts mapping to $$ \R^n $$ . Given an ordered basis for $$ \R^n $$ , a chart causes its piece of the manifold to itself acquire a sense of ordering, which in 3-dimensions can be viewed as either right-handed or left-handed. Overlapping charts are not required to agree in their sense of ordering, which gives manifolds an important freedom. For some manifolds, like the sphere, charts can be chosen so that overlapping regions agree on their "handedness"; these are orientable manifolds. For others, this is impossible. The latter possibility is easy to overlook, because any closed surface embedded (without self-intersection) in three-dimensional space is orientable. Some illustrative examples of non-orientable manifolds include: (1) the Möbius strip, which is a manifold with boundary, (2) the Klein bottle, which must intersect itself in its 3-space representation, and (3) the real projective plane, which arises naturally in geometry. Möbius strip Begin with an infinite circular cylinder standing vertically, a manifold without boundary. Slice across it high and low to produce two circular boundaries, and the cylindrical strip between them. This is an orientable manifold with boundary, upon which "surgery" will be performed. Slice the strip open, so that it could unroll to become a rectangle, but keep a grasp on the cut ends. Twist one end 180°, making the inner surface face out, and glue the ends back together seamlessly. This results in a strip with a permanent half-twist: the Möbius strip. Its boundary is no longer a pair of circles, but (topologically) a single circle; and what was once its "inside" has merged with its "outside", so that it now has only a single side. Similarly to the Klein Bottle below, this two dimensional surface would need to intersect itself in two dimensions, but can easily be constructed in three or more dimensions. Klein bottle Take two Möbius strips; each has a single loop as a boundary. Straighten out those loops into circles, and let the strips distort into cross-caps. Gluing the circles together will produce a new, closed manifold without boundary, the Klein bottle. Closing the surface does nothing to improve the lack of orientability, it merely removes the boundary. Thus, the Klein bottle is a closed surface with no distinction between inside and outside. In three-dimensional space, a Klein bottle's surface must pass through itself. Building a Klein bottle which is not self-intersecting requires four or more dimensions of space. #### Real projective plane Begin with a sphere centered on the origin. Every line through the origin pierces the sphere in two opposite points called antipodes. Although there is no way to do so physically, it is possible (by considering a quotient space) to mathematically merge each antipode pair into a single point. The closed surface so produced is the real projective plane, yet another non-orientable surface. 
It has a number of equivalent descriptions and constructions, but this route explains its name: all the points on any given line through the origin project to the same "point" on this "plane". ### Genus and the Euler characteristic For two dimensional manifolds a key invariant property is the genus, or "number of handles" present in a surface. A torus is a sphere with one handle, a double torus is a sphere with two handles, and so on. Indeed, it is possible to fully characterize compact, two-dimensional manifolds on the basis of genus and orientability. In higher-dimensional manifolds genus is replaced by the notion of Euler characteristic, and more generally Betti numbers and homology and cohomology. ## Maps of manifolds Just as there are various types of manifolds, there are various types of maps of manifolds. In addition to continuous functions and smooth functions generally, there are maps with special properties. In geometric topology a basic type are embeddings, of which knot theory is a central example, and generalizations such as immersions, submersions, covering spaces, and ramified covering spaces. Basic results include the Whitney embedding theorem and Whitney immersion theorem. In Riemannian geometry, one may ask for maps to preserve the Riemannian metric, leading to notions of isometric embeddings, isometric immersions, and Riemannian submersions; a basic result is the Nash embedding theorem. ### Scalar-valued functions A basic example of maps between manifolds are scalar-valued functions on a manifold, $$ f\colon M \to \R $$ or $$ f\colon M \to \Complex, $$ sometimes called regular functions or functionals, by analogy with algebraic geometry or linear algebra. These are of interest both in their own right, and to study the underlying manifold. In geometric topology, most commonly studied are Morse functions, which yield handlebody decompositions, while in mathematical analysis, one often studies solution to partial differential equations, an important example of which is harmonic analysis, where one studies harmonic functions: the kernel of the Laplace operator. This leads to such functions as the spherical harmonics, and to heat kernel methods of studying manifolds, such as hearing the shape of a drum and some proofs of the Atiyah–Singer index theorem. ## Generalizations of manifolds Infinite dimensional manifolds The definition of a manifold can be generalized by dropping the requirement of finite dimensionality. Thus an infinite dimensional manifold is a topological space locally homeomorphic to a topological vector space over the reals. This omits the point-set axioms, allowing higher cardinalities and non-Hausdorff manifolds; and it omits finite dimension, allowing structures such as Hilbert manifolds to be modeled on Hilbert spaces, Banach manifolds to be modeled on Banach spaces, and Fréchet manifolds to be modeled on Fréchet spaces. Usually one relaxes one or the other condition: manifolds with the point-set axioms are studied in general topology, while infinite-dimensional manifolds are studied in functional analysis. Orbifolds An orbifold is a generalization of manifold allowing for certain kinds of "singularities" in the topology. Roughly speaking, it is a space which locally looks like the quotients of some simple space (e.g. Euclidean space) by the actions of various finite groups. The singularities correspond to fixed points of the group actions, and the actions must be compatible in a certain sense. 
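As a toy illustration of the quotient idea behind orbifolds (an added sketch that assumes the acting group is the cyclic rotation group C_n of the plane; none of these names come from the source), every point can be sent to a canonical representative in a fundamental wedge, with the origin as the fixed, singular point:

```c
#include <math.h>
#include <stdio.h>

/* Send (x, y) to its canonical representative under the action of the cyclic
   group C_n of rotations by 2*pi/n about the origin, by reducing the polar
   angle modulo 2*pi/n.  The origin is fixed by every rotation, so it is the
   singular (cone) point of the quotient. */
static void canonical_rep(double x, double y, int n, double *cx, double *cy) {
    const double pi = acos(-1.0);
    double wedge = 2.0 * pi / n;
    double r = hypot(x, y);
    double theta = atan2(y, x);
    theta = fmod(fmod(theta, wedge) + wedge, wedge);  /* reduce into [0, wedge) */
    *cx = r * cos(theta);
    *cy = r * sin(theta);
}

int main(void) {
    double cx, cy;
    /* (0, 1) and its rotation by 120 degrees represent the same point of the
       quotient when n = 3, so they receive the same canonical representative. */
    canonical_rep(0.0, 1.0, 3, &cx, &cy);
    printf("rep of (0, 1):              (%.4f, %.4f)\n", cx, cy);
    canonical_rep(-sqrt(3.0) / 2.0, -0.5, 3, &cx, &cy);
    printf("rep of its 120-degree turn: (%.4f, %.4f)\n", cx, cy);
    return 0;
}
```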
Algebraic varieties and schemes Non-singular algebraic varieties over the real or complex numbers are manifolds. One generalizes this first by allowing singularities, secondly by allowing different fields, and thirdly by emulating the patching construction of manifolds: just as a manifold is glued together from open subsets of Euclidean space, an algebraic variety is glued together from affine algebraic varieties, which are zero sets of polynomials over algebraically closed fields. Schemes are likewise glued together from affine schemes, which are a generalization of algebraic varieties. Both are related to manifolds, but are constructed algebraically using sheaves instead of atlases. Because of singular points, a variety is in general not a manifold, though linguistically the French variété, German Mannigfaltigkeit and English manifold are largely synonymous. In French an algebraic variety is called une variété algébrique (an algebraic variety), while a smooth manifold is called une variété différentielle (a differential variety). Stratified space A "stratified space" is a space that can be divided into pieces ("strata"), with each stratum a manifold, with the strata fitting together in prescribed ways (formally, a filtration by closed subsets). There are various technical definitions, notably a Whitney stratified space (see Whitney conditions) for smooth manifolds and a topologically stratified space for topological manifolds. Basic examples include manifold with boundary (top dimensional manifold and codimension 1 boundary) and manifolds with corners (top dimensional manifold, codimension 1 boundary, codimension 2 corners). Whitney stratified spaces are a broad class of spaces, including algebraic varieties, analytic varieties, semialgebraic sets, and subanalytic sets. CW-complexes A CW complex is a topological space formed by gluing disks of different dimensionality together. In general the resulting space is singular, hence not a manifold. However, they are of central interest in algebraic topology, especially in homotopy theory. Homology manifolds A homology manifold is a space that behaves like a manifold from the point of view of homology theory. These are not all manifolds, but (in high dimension) can be analyzed by surgery theory similarly to manifolds, and failure to be a manifold is a local obstruction, as in surgery theory. Differential spaces Let $$ M $$ be a nonempty set. Suppose that some family of real functions on $$ M $$ was chosen. Denote it by $$ C \subseteq \R^M $$ . It is an algebra with respect to the pointwise addition and multiplication. Let $$ M $$ be equipped with the topology induced by $$ C $$ . Suppose also that the following conditions hold. First: for every $$ H \in C^\infty\left(\R^n\right) $$ , where $$ n \in \N $$ , and arbitrary $$ f_1, \dots , f_n \in C $$ , the composition $$ H \circ \left(f_1, \dots, f_n\right) \in C $$ . Second: every function, which in every point of $$ M $$ locally coincides with some function from $$ C $$ , also belongs to $$ C $$ . A pair $$ (M, C) $$ for which the above conditions hold, is called a Sikorski differential space.
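As a quick check of this definition (an added illustration, not from the source article), the prototypical example takes the set to be a Euclidean space and the chosen family to be its smooth functions: $$ M = \R^m, \qquad C = C^\infty(\R^m). $$ The first condition holds because a composition $$ H \circ (f_1,\dots,f_n) $$ of smooth functions is smooth by the chain rule, and the second holds because smoothness is a local property; hence $$ (\R^m, C^\infty(\R^m)) $$ is a Sikorski differential space, and the topology induced by $$ C $$ is the usual topology of $$ \R^m $$ .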
https://en.wikipedia.org/wiki/Manifold
Coalesced hashing, also called coalesced chaining, is a strategy of collision resolution in a hash table that forms a hybrid of separate chaining and open addressing.

## Separate chaining hash table

In a separate chaining hash table, items that hash to the same address are placed on a list (or "chain") at that address. This technique can result in a great deal of wasted memory because the table itself must be large enough to maintain a load factor that performs well (typically twice the expected number of items), and extra memory must be used for all but the first item in a chain (unless list headers are used, in which case extra memory must be used for all items in a chain).

## Example

Given a sequence "qrj," "aty," "qur," "dim," "ofu," "gcl," "rhv," "clq," "ecd," "qsu" of randomly generated three character long strings, the following table would be generated (using Bob Jenkins' One-at-a-Time hash algorithm) with a table of size 10. (The resulting bucket-by-bucket layout is shown in the original article's figure, which is not reproduced here.)

This strategy is effective, efficient, and very easy to implement. However, sometimes the extra memory use might be prohibitive, and the most common alternative, open addressing, has uncomfortable disadvantages that decrease performance. The primary disadvantage of open addressing is primary and secondary clustering, in which searches may access long sequences of used buckets that contain items with different hash addresses; items with one hash address can thus lengthen searches for items with other hash addresses.

One solution to these issues is coalesced hashing. Coalesced hashing uses a similar technique to separate chaining, but instead of allocating new nodes for the linked list, buckets in the actual table are used. The first empty bucket in the table at the time of a collision is considered the collision bucket. When a collision occurs anywhere in the table, the item is placed in the collision bucket and a link is made between the chain and the collision bucket. It is possible for a newly inserted item to collide with items with a different hash address, as is the case in the example in the figure when item "clq" is inserted. The chain for "clq" is said to "coalesce" with the chain of "qrj," hence the name of the algorithm. However, the extent of coalescing is minor compared with the clustering exhibited by open addressing. For example, when coalescing occurs, the length of the chain grows by only 1, whereas in open addressing, search sequences of arbitrary length may combine.

## The cellar

An important optimization, to reduce the effect of coalescing, is to restrict the address space of the hash function to only a subset of the table. For example, if the table has size M with buckets numbered from 0 to M − 1, we can restrict the address space so that the hash function only assigns addresses to the first N locations in the table. The remaining M − N buckets, called the cellar, are used exclusively for storing items that collide during insertion. No coalescing can occur until the cellar is exhausted.

The optimal choice of N relative to M depends upon the load factor (or fullness) of the table. A careful analysis shows that the value N = 0.86 × M yields near-optimum performance for most load factors (Jiří Vyskočil and Marko Genyk-Berezovskyj, "Coalesced hashing", 2010).

## Variants

Other variants for insertion are also possible that have improved search time.
Deletion algorithms have been developed that preserve randomness, and thus the average search time analysis still holds after deletions.

## Implementation

Insertion in C:

```c
/* htab is the hash table, N is the size of the address space of the
   hash function, and M is the size of the entire table including the
   cellar.  Collision buckets are allocated in decreasing order,
   starting with bucket M-1. */
int insert ( char key[] )
{
  unsigned h = hash ( key, strlen ( key ) ) % N;

  if ( htab[h] == NULL ) {
    /* Make a new chain */
    htab[h] = make_node ( key, NULL );
  } else {
    struct node *it;
    int cursor = M - 1;

    /* Find the first empty bucket */
    while ( cursor >= 0 && htab[cursor] != NULL )
      --cursor;

    /* The table is full, terminate unsuccessfully */
    if ( cursor == -1 )
      return -1;

    htab[cursor] = make_node ( key, NULL );

    /* Find the last node in the chain and point it to the collision bucket */
    it = htab[h];

    while ( it->next != NULL )
      it = it->next;

    it->next = htab[cursor];
  }

  return 0;
}
```

One benefit of this strategy is that the search algorithm for separate chaining can be used without change in a coalesced hash table.

Lookup in C:

```c
char *find ( char key[] )
{
  unsigned h = hash ( key, strlen ( key ) ) % N;

  if ( htab[h] != NULL ) {
    struct node *it;

    /* Search the chain at index h */
    for ( it = htab[h]; it != NULL; it = it->next ) {
      if ( strcmp ( key, it->data ) == 0 )
        return it->data;
    }
  }

  return NULL;
}
```

## Performance

Deletion may be hard (Grant Weddell, "Hashing", pp. 10–11). Coalesced chaining avoids the effects of primary and secondary clustering, and as a result can take advantage of the efficient search algorithm for separate chaining. If the chains are short, this strategy is very efficient and can be highly condensed, memory-wise. As in open addressing, deletion from a coalesced hash table is awkward and potentially expensive, and resizing the table is terribly expensive and should be done rarely, if ever.
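The listings above rely on a few surrounding definitions that are not shown: the node type, the htab array, make_node, and the hash function. The following is a minimal sketch of what they might look like; the table sizes are arbitrary illustrative choices, the node layout is an assumption, and only the One-at-a-Time hash follows Bob Jenkins' published algorithm. Placed before the two routines in a single file, this is enough for them to compile.

```c
#include <stdlib.h>
#include <string.h>

#define M 13   /* total table size, including the cellar (illustrative) */
#define N 10   /* address space of the hash function (illustrative)     */

struct node {
    char *data;
    struct node *next;
};

static struct node *htab[M];   /* all buckets start out NULL */

/* Bob Jenkins' One-at-a-Time hash, as used in the example above. */
static unsigned hash(const char *key, size_t len) {
    unsigned h = 0;
    for (size_t i = 0; i < len; i++) {
        h += (unsigned char)key[i];
        h += h << 10;
        h ^= h >> 6;
    }
    h += h << 3;
    h ^= h >> 11;
    h += h << 15;
    return h;
}

/* Allocate a bucket node holding a copy of the key. */
static struct node *make_node(const char *key, struct node *next) {
    struct node *n = malloc(sizeof *n);
    if (n != NULL) {
        n->data = strdup(key);
        n->next = next;
    }
    return n;
}
```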
https://en.wikipedia.org/wiki/Coalesced_hashing
Deep learning is a subset of machine learning that focuses on utilizing multilayered neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised or unsupervised. Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance. Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose. ## Overview Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines. Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face. Importantly, a deep learning process can learn which features to optimally place at which level on its own. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction. The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). 
For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively.

Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance. Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data is more abundant than labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.

The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons (Co-evolving recurrent neurons learn deep memory POMDPs. Proc. GECCO, Washington, D.C., pp. 1795–1802, ACM Press, New York, NY, USA, 2005). The history of the term's appearance is, however, more complicated than this suggests.

## Interpretations

Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference. The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.

The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width where the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller than or equal to the input dimension, then a deep neural network is not a universal approximator.

The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.

## History

### Before 1980

There are two types of artificial neural network (ANN): the feedforward neural network (FNN), or multilayer perceptron (MLP), and the recurrent neural network (RNN). RNNs have cycles in their connectivity structure; FNNs don't.
In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model, which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was republished by John Hopfield in 1982. Other early recurrent neural networks were published by Kaoru Nakano in 1971. Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime, containing "ideas related to artificial evolution and learning RNNs". Frank Rosenblatt (1958) proposed the perceptron, an MLP with three layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight). The book cites an earlier network by R. D. Joseph (1960) "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times), so Joseph might be considered the originator of proper adaptive multilayer perceptrons with learning hidden units; however, the learning algorithm was not a functional one, and it fell into oblivion. The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer-by-layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates". The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tuning have made end-to-end stochastic gradient descent the currently dominant training technique. In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning. Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though it was not trained by backpropagation. Backpropagation is an efficient application of the chain rule, derived by Gottfried Wilhelm Leibniz in 1673, to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was first published in Seppo Linnainmaa's master's thesis (1970). G. M. Ostrovski et al. republished it in 1971.
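Backpropagation is exactly this reverse-mode application of the chain rule. The sketch below is a minimal illustration on a two-layer network with a ReLU hidden layer and a squared loss; the dimensions, random weights, and learning rate are illustrative assumptions, not details from any of the works cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: x -> ReLU(W1 x) -> W2 h -> scalar prediction.
x = rng.normal(size=3)
y = 1.0                                  # target
W1 = rng.normal(size=(4, 3)) * 0.1
W2 = rng.normal(size=(1, 4)) * 0.1

# Forward pass.
z1 = W1 @ x
h = np.maximum(z1, 0.0)                  # ReLU activation
y_hat = (W2 @ h)[0]
loss = 0.5 * (y_hat - y) ** 2

# Backward pass: apply the chain rule from the output back toward the input.
d_yhat = y_hat - y                       # dL/dy_hat
dW2 = d_yhat * h[None, :]                # dL/dW2
dh = d_yhat * W2[0]                      # dL/dh
dz1 = dh * (z1 > 0)                      # ReLU derivative gates the gradient
dW1 = np.outer(dz1, x)                   # dL/dW1

# One stochastic-gradient-descent step.
lr = 0.1
W1 -= lr * dW1
W2 -= lr * dW2
print(round(loss, 4))
```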
Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work (Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams, "Learning Internal Representations by Error Propagation", in David E. Rumelhart, James L. McClelland, and the PDP Research Group (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, MIT Press, 1986).

### 1980s–2000s

The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNNs to phoneme recognition. It used convolutions, weight sharing, and backpropagation (Alexander Waibel et al., "Phoneme Recognition Using Time-Delay Neural Networks", IEEE Transactions on Acoustics, Speech, and Signal Processing, Volume 37, No. 3, pp. 328–339, March 1989). In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition. In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required three days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al. that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images. Recurrent neural networks (RNN) were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward network. Consequently, RNNs and FNNs have similar properties and issues, and their developments had mutual influences. Two early influential RNN works were the Jordan network (1986) and the Elman network (1990), which applied RNNs to problems in cognitive psychology. In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning, where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below. This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales, which can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN by distilling a higher-level chunker network into a lower-level automatizer network. In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. The "P" in ChatGPT refers to such pre-training. Sepp Hochreiter's diploma thesis (1991) implemented the neural history compressor, and identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem, which led to the long short-term memory (LSTM), published in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture, which required a "forget gate", introduced in 1999; the resulting variant became the standard RNN architecture.
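A minimal sketch of a single LSTM cell step with input, forget, and output gates follows; the gate equations track the standard formulation, while the sizes and random weights are placeholder assumptions rather than anything from the publications above.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of an LSTM cell with input, forget, and output gates."""
    z = W @ x + U @ h_prev + b                      # all four pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)    # gates take values in (0, 1)
    g = np.tanh(g)                                  # candidate cell update
    c = f * c_prev + i * g                          # forget gate decides what to keep
    h = o * np.tanh(c)                              # hidden state passed onward
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x_t in rng.normal(size=(8, n_in)):              # a short input sequence
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.round(3))
```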
In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity". In 2014, this principle was used in generative adversarial networks (GANs). During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. However, they were more computationally expensive than backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics. Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including diminishing gradients and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power. Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark. It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning. The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of a deep autoencoder on "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.

### 2000s

Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks. In 2003, LSTM became competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTMs. In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition. In 2006, deep belief networks were developed for generative modeling in publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero, and Teh.
They are trained by first training one restricted Boltzmann machine, then freezing it and training another one on top of it, and so on, with the resulting stack optionally fine-tuned using supervised backpropagation. They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow. The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010. The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that, given more capable hardware and large-scale data sets, deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than the then state-of-the-art Gaussian mixture model (GMM)/hidden Markov model (HMM) systems and also than more advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) with DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models. In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.

### Deep learning revolution

The deep learning revolution started around CNN- and GPU-based computer vision. Although CNNs trained by backpropagation had been around for decades and GPU implementations of NNs, including CNNs, had existed for years, faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning. A key enabler of the deep learning revolution was hardware progress, especially GPUs. Some early work dated back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a 100-million-parameter deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning. They reported up to 70 times faster training. In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPUs improved performance significantly. In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inception v3. The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs. In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net. Around the same time, deep learning started impacting the field of art. Early examples included Google DeepDream (2015) and neural style transfer (2015), both of which were based on pretrained image classification neural networks such as VGG-19. The generative adversarial network (GAN) (Ian Goodfellow et al., 2014), based on Jürgen Schmidhuber's principle of artificial curiosity, became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022). In 2015, Google's speech recognition improved by 49% through an LSTM-based model, which the company made available through Google Voice Search on smartphones. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks were superseded for ASR by LSTMs, but they are more successful in computer vision. Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".

## Neural networks

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming. An ANN is based on a collection of connected units called artificial neurons (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron.
The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream. Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input) to the last (output) layer, possibly after traversing the layers multiple times. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis. As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, or playing "Go").

### Deep neural networks

A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers. There are different types of neural networks, but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm. For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNNs have many layers, hence the name "deep" networks. DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks. Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets. DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights.
That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data. Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use. Convolutional neural networks (CNNs) are used in computer vision. CNNs have also been applied to acoustic modeling for automatic speech recognition (ASR).

#### Challenges

As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time. DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning or weight decay ( $$ \ell_2 $$ -regularization) or sparsity ( $$ \ell_1 $$ -regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training. This helps to exclude rare dependencies. Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction. Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting. DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than on individual examples), speed up computation. The large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations. Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It does not require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved (Ting Qin et al., "Continuous CMAC-QRLS and its systolic array", Neural Processing Letters 22.1 (2005): 1–16).

## Hardware

Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.
Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2). Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage. In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs). In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.

## Applications

### Automatic speech recognition

Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks. The initial success in speech recognition was based on small-scale recognition tasks using TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic modeling aspects of speech recognition be more easily analyzed. The error rates listed below, including these early results and measured as percent phone error rates (PER), have been summarized since 1991.

| Method | Percent phone error rate (PER) (%) |
| --- | --- |
| Randomly Initialized RNN | 26.1 |
| Bayesian Triphone GMM-HMM | 25.6 |
| Hidden Trajectory (Generative) Model | 24.8 |
| Monophone Randomly Initialized DNN | 23.4 |
| Monophone DBN-DNN | 22.4 |
| Triphone GMM-HMM with BMMI Training | 21.7 |
| Monophone DBN-DNN on fbank | 20.7 |
| Convolutional DNN | 20.0 |
| Convolutional DNN w. Heterogeneous Pooling | 18.7 |
| Ensemble DNN/CNN/RNN | 18.3 |
| Bidirectional LSTM | 17.8 |
| Hierarchical Convolutional Deep Maxout Network | 16.5 |

The debut of DNNs for speaker recognition in the late 1990s and speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas:

- Scale-up/out and accelerated DNN training and decoding
- Sequence discriminative training
- Feature processing by deep models with solid understanding of the underlying mechanisms
- Adaptation of DNNs and related deep models
- Multi-task and transfer learning by DNNs and related deep models
- CNNs and how to design them to best exploit domain knowledge of speech
- RNNs and their rich LSTM variants
- Other types of deep models, including tensor-based models and integrated deep generative/discriminative models

All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning.

### Image recognition

A common evaluation set for image classification is the MNIST database. MNIST is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available. Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 in recognition of traffic signs, and in 2014 with recognition of human faces. Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA), used to analyze cases of human malformation connected to a large database of genetic syndromes.

### Visual art processing

Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of:

- identifying the style period of a given painting
- neural style transfer: capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video
- generating striking imagery based on random visual input fields

### Natural language processing

Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling. Other key techniques in this field are negative sampling and word embedding. A word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as a probabilistic context-free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.
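The following is a minimal sketch of the embedding idea: each atomic word is mapped to a point in a vector space, and relatedness can be read off with cosine similarity. The vocabulary, dimensions, and vectors here are illustrative assumptions, not trained word2vec output.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy embedding table: each word in the vocabulary is one row vector.
vocab = ["king", "queen", "apple", "banana"]
index = {w: i for i, w in enumerate(vocab)}
embeddings = rng.normal(size=(len(vocab), 8))   # normally learned; random here

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similarity(w1, w2):
    return cosine(embeddings[index[w1]], embeddings[index[w2]])

# With trained embeddings, related words (e.g. "king"/"queen") would score
# higher than unrelated ones; with random vectors the numbers are arbitrary.
for a, b in [("king", "queen"), ("king", "banana")]:
    print(a, b, round(similarity(a, b), 3))
```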
Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others. Recent developments generalize word embedding to sentence embedding. Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples". It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs. ### Drug discovery and toxicology A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs. AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis. In 2017 graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally all the way into mice. ### Customer relationship management Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value. ### Recommendation systems Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks. ### Bioinformatics An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships. In medical informatics, deep learning was used to predict sleep quality based on data from wearables and predictions of health complications from electronic health record data. Deep neural networks have shown unparalleled performance in predicting protein structure, according to the sequence of the amino acids that make it up. In 2020, AlphaFold, a deep-learning based system, achieved a level of accuracy significantly higher than all previous computational methods. ### Deep Neural Network Estimations Deep neural networks can be used to estimate the entropy of a stochastic process and called Neural Joint Entropy Estimator (NJEE). Such an estimation provides insights on the effects of input random variables on an independent random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of random variable Y, given input X. 
For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a Softmax layer with number of nodes that is equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, such that the conditions for the universal approximation theorem holds. It is shown that this method provides a strongly consistent estimator and outperforms other methods in case of large alphabet sizes. ### Medical image analysis Deep learning has been shown to produce competitive results in medical application such as cancer cell classification, lesion detection, organ segmentation and image enhancement. Modern deep learning tools demonstrate the high accuracy of detecting various diseases and the helpfulness of their use by specialists to improve the diagnosis efficiency. ### Mobile advertising Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection. ### Image restoration Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration" which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration. ### Financial fraud detection Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering. ### Materials science In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds. ### Military The United States Department of Defense applied deep learning to train robots in new tasks through observation. ### Partial differential equations Physics informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data driven manner. 
One example is the reconstructing fluid flow governed by the Navier-Stokes equations. Using physics informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on. ### Deep backward stochastic differential equation method Deep backward stochastic differential equation method is a numerical method that combines deep learning with Backward stochastic differential equation (BSDE). This method is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden. In addition, the integration of Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems. ### Image reconstruction Image reconstruction is the reconstruction of the underlying images from the image-related measurements. Several works showed the better and superior performance of the deep learning methods compared to analytical methods for various applications, e.g., spectral imaging and ultrasound imaging. ### Weather prediction Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep learning based model, trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, and in under a minute, with precision similar to state of the art systems. ### Epigenetic clock An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples. The clock uses information from 1000 CpG sites and predicts people with certain conditions older than healthy controls: IBD, frontotemporal dementia, ovarian cancer, obesity. The aging clock was planned to be released for public use in 2021 by an Insilico Medicine spinoff company Deep Longevity. ## Relation to human cognitive and brain development Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support the self-organization somewhat analogous to the neural networks utilized in deep learning models. 
Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment), and then passes its output (and possibly the original input), to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature". A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex. Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels. ## Commercial activity Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them. Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages. In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories. As of 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. First developed as TAMER, a new algorithm called Deep TAMER was later introduced in 2018 during a collaboration between U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer, watching video streams or observing a human perform a task in-person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job". ## Criticism and comment Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science. ### Theory A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. 
However, the theory surrounding other algorithms, such as contrastive divergence is less clear. (e.g., Does it converge? If so, how fast? What is it approximating?) Deep learning methods are often looked at as a black box, with most confirmations done empirically, rather than theoretically. In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained demonstrate a visual appeal: the original research notice received well over 1,000 comments, and was the subject of what was for a time the most frequently accessed article on The Guardian's website. ### Errors Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014) and misclassifying minuscule perturbations of correctly classified images (2013). Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI). ### Cyber threat As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack". In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken. Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them. ANNs can however be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the kind that already defines the malware defense industry. 
ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target. In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)". In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery. ### Data collection ethics The deep learning systems that are trained using supervised learning often rely on data that is created and/or annotated by humans. It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork.
https://en.wikipedia.org/wiki/Deep_learning%23Deep_neural_networks
In graph theory, the shortest path problem is the problem of finding a path between two vertices (or nodes) in a graph such that the sum of the weights of its constituent edges is minimized. The problem of finding the shortest path between two intersections on a road map may be modeled as a special case of the shortest path problem in graphs, where the vertices correspond to intersections and the edges correspond to road segments, each weighted by the length or distance of each segment.

## Definition

The shortest path problem can be defined for graphs whether undirected, directed, or mixed. The definition for undirected graphs states that every edge can be traversed in either direction. Directed graphs require that consecutive vertices be connected by an appropriate directed edge. Two vertices are adjacent when they are both incident to a common edge. A path in an undirected graph is a sequence of vertices $$ P = ( v_1, v_2, \ldots, v_n ) \in V \times V \times \cdots \times V $$ such that $$ v_i $$ is adjacent to $$ v_{i+1} $$ for $$ 1 \leq i < n $$ . Such a path $$ P $$ is called a path of length $$ n-1 $$ from $$ v_1 $$ to $$ v_n $$ . (The $$ v_i $$ are variables; their numbering relates to their position in the sequence and need not relate to a canonical labeling.) Let $$ E = \{e_{i, j}\} $$ where $$ e_{i, j} $$ is the edge incident to both $$ v_i $$ and $$ v_j $$ . Given a real-valued weight function $$ f: E \rightarrow \mathbb{R} $$ , and an undirected (simple) graph $$ G $$ , the shortest path from $$ v $$ to $$ v' $$ is the path $$ P = ( v_1, v_2, \ldots, v_n ) $$ (where $$ v_1 = v $$ and $$ v_n = v' $$ ) that over all possible $$ n $$ minimizes the sum $$ \sum_{i=1}^{n-1} f(e_{i, i+1}). $$ When each edge in the graph has unit weight or $$ f: E \rightarrow \{1\} $$ , this is equivalent to finding the path with the fewest edges.

The problem is also sometimes called the single-pair shortest path problem, to distinguish it from the following variations:

- The single-source shortest path problem, in which we have to find shortest paths from a source vertex v to all other vertices in the graph.
- The single-destination shortest path problem, in which we have to find shortest paths from all vertices in the directed graph to a single destination vertex v. This can be reduced to the single-source shortest path problem by reversing the arcs in the directed graph.
- The all-pairs shortest path problem, in which we have to find shortest paths between every pair of vertices v, v' in the graph.

These generalizations have significantly more efficient algorithms than the simplistic approach of running a single-pair shortest path algorithm on all relevant pairs of vertices.

## Algorithms

Several well-known algorithms exist for solving this problem and its variants.

- Dijkstra's algorithm solves the single-source shortest path problem with only non-negative edge weights.
- The Bellman–Ford algorithm solves the single-source problem if edge weights may be negative.
- The A* search algorithm solves the single-pair shortest path problem using heuristics to try to speed up the search.
- The Floyd–Warshall algorithm solves the all-pairs shortest path problem.
- Johnson's algorithm solves the all-pairs shortest path problem, and may be faster than Floyd–Warshall on sparse graphs.
- The Viterbi algorithm solves the shortest stochastic path problem with an additional probabilistic weight on each node.

Additional algorithms and associated evaluations may be found in the literature.
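As a concrete illustration of the first algorithm in the list above, the following is a minimal sketch of Dijkstra's algorithm for the single-source problem with non-negative edge weights, using Python's heapq as a priority queue; the adjacency-list graph and vertex names are made-up examples, not drawn from the article.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest path distances for non-negative edge weights.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to every reachable vertex.
    """
    dist = {source: 0}
    heap = [(0, source)]                      # (distance, vertex) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road-style graph with positive weights.
graph = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "f": [("e", 9)],
}
print(dijkstra(graph, "a"))  # shortest distance a -> e is 20 in this example
```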
## Single-source shortest paths

### Undirected graphs

For undirected graphs with non-negative weights, known time bounds include O(V²), O((E + V) log V) with a binary heap, O(E + V log V) with a Fibonacci heap, and O(E) for integer weights (the last requires constant-time multiplication).

### Unweighted graphs

For unweighted graphs, breadth-first search solves the single-source shortest path problem in O(E + V) time.

### Directed acyclic graphs

An algorithm using topological sorting can solve the single-source shortest path problem in time linear in the size of the graph in arbitrarily-weighted directed acyclic graphs.

### Directed graphs with nonnegative weights

Here L is the maximum length (or weight) among all edges, assuming integer edge weights. Algorithms for this case include the Bellman–Ford algorithm, Dijkstra's algorithm (implemented with a list, a binary heap, or a Fibonacci heap), a quantum Dijkstra algorithm with an adjacency list (Dürr et al. 2006), Dial's algorithm (Dijkstra's algorithm using a bucket queue with L buckets), Gabow's algorithm, and Thorup's algorithm.

### Directed graphs with arbitrary weights without negative cycles

Approaches for this case include the Bellman–Ford algorithm, Johnson–Dijkstra with a binary heap or with a Fibonacci heap, Johnson's technique applied to Dial's algorithm, interior-point methods with a Laplacian solver or with a flow solver, a robust interior-point method with sketching, an interior-point method with a dynamic min-ratio cycle data structure, methods based on low-diameter decomposition, and hop-limited shortest paths.

### Directed graphs with arbitrary weights with negative cycles

Algorithms for this case, including one by Andrew V. Goldberg, find a negative cycle or calculate distances to all vertices.

### Planar graphs with nonnegative weights

## Applications

Network flows are a fundamental concept in graph theory and operations research, often used to model problems involving the transportation of goods, liquids, or information through a network. A network flow problem typically involves a directed graph where each edge represents a pipe, wire, or road, and each edge has a capacity, which is the maximum amount that can flow through it. The goal is to find a feasible flow that maximizes the flow from a source node to a sink node. Shortest path problems can be used to solve certain network flow problems, particularly when dealing with single-source, single-sink networks. In these scenarios, we can transform the network flow problem into a series of shortest path problems, as outlined in the steps below (a sketch of the procedure follows the list).

### Transformation Steps

1. Create a residual graph:
   - For each edge (u, v) in the original graph, create two edges in the residual graph: (u, v) with capacity c(u, v) and (v, u) with capacity 0.
   - The residual graph represents the remaining capacity available in the network.
2. Find the shortest path:
   - Use a shortest path algorithm (e.g., Dijkstra's algorithm, Bellman–Ford algorithm) to find the shortest path from the source node to the sink node in the residual graph.
3. Augment the flow:
   - Find the minimum capacity along the shortest path.
   - Increase the flow on the edges of the shortest path by this minimum capacity.
   - Decrease the capacity of the edges in the forward direction and increase the capacity of the edges in the backward direction.
4. Update the residual graph:
   - Update the residual graph based on the augmented flow.
5. Repeat:
   - Repeat steps 2–4 until no more paths can be found from the source to the sink.
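Below is a minimal sketch of this augmenting-path procedure, using breadth-first search (a fewest-edges "shortest" path, as in the Edmonds–Karp variant) to find augmenting paths in the residual graph; the capacity graph, vertex names, and helper function are made-up illustrations rather than anything prescribed by the text.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Augmenting-path max flow; capacity is a dict of dicts: capacity[u][v]."""
    # Residual capacities: forward edges start at c(u, v), reverse edges at 0.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    def bfs_path():
        # Shortest (fewest-edges) augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    if v == sink:
                        return parent
                    queue.append(v)
        return None

    flow = 0
    while (parent := bfs_path()) is not None:
        # Bottleneck (minimum residual capacity) along the augmenting path.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        # Augment: decrease forward residual capacity, increase reverse capacity.
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck
    return flow

capacity = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(capacity, "s", "t"))  # 5 in this example
```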
## All-pairs shortest paths

The all-pairs shortest path problem finds the shortest paths between every pair of vertices v, v' in the graph. The all-pairs shortest paths problem for unweighted directed graphs was introduced in early work which observed that it could be solved by a linear number of matrix multiplications. For undirected graphs, applicable methods include the Floyd–Warshall algorithm, Seidel's algorithm (expected running time), and a single-source algorithm applied to every vertex (which requires constant-time multiplication). For directed graphs without negative cycles, applicable methods include the Floyd–Warshall algorithm, quantum search, and Johnson–Dijkstra.

## Applications

Shortest path algorithms are applied to automatically find directions between physical locations, such as driving directions on web mapping websites like MapQuest or Google Maps. For this application, fast specialized algorithms are available. If one represents a nondeterministic abstract machine as a graph where vertices describe states and edges describe possible transitions, shortest path algorithms can be used to find an optimal sequence of choices to reach a certain goal state, or to establish lower bounds on the time needed to reach a given state. For example, if vertices represent the states of a puzzle like a Rubik's Cube and each directed edge corresponds to a single move or turn, shortest path algorithms can be used to find a solution that uses the minimum possible number of moves. In a networking or telecommunications mindset, this shortest path problem is sometimes called the min-delay path problem, and it is usually tied to the widest path problem. For example, the algorithm may seek the shortest (min-delay) widest path, or the widest shortest (min-delay) path. A more lighthearted application is the game of "six degrees of separation", which tries to find the shortest path in graphs such as those linking movie stars who appeared in the same film. Other applications, often studied in operations research, include plant and facility layout, robotics, transportation, and VLSI design.

### Road networks

A road network can be considered as a graph with positive weights. The nodes represent road junctions and each edge of the graph is associated with a road segment between two junctions. The weight of an edge may correspond to the length of the associated road segment, the time needed to traverse the segment, or the cost of traversing the segment. Using directed edges, it is also possible to model one-way streets. Such graphs are special in the sense that some edges are more important than others for long-distance travel (e.g. highways). This property has been formalized using the notion of highway dimension. There are a great number of algorithms that exploit this property and are therefore able to compute the shortest path much more quickly than would be possible on general graphs. All of these algorithms work in two phases. In the first phase, the graph is preprocessed without knowing the source or target node. The second phase is the query phase, in which the source and target nodes are known. The idea is that the road network is static, so the preprocessing phase can be done once and used for a large number of queries on the same road network. The algorithm with the fastest known query time is called hub labeling and is able to compute shortest paths on the road networks of Europe or the US in a fraction of a microsecond.
Other techniques that have been used are: - ALT (A* search, landmarks, and triangle inequality) - Arc flags - Contraction hierarchies - Transit node routing - Reach-based pruning - Labeling - Hub labels ## Related problems For shortest path problems in computational geometry, see Euclidean shortest path. The shortest multiple disconnected path is a representation of the primitive path network within the framework of Reptation theory. The widest path problem seeks a path so that the minimum label of any edge is as large as possible. Other related problems may be classified into the following categories. ### Paths with constraints Unlike the shortest path problem, which can be solved in polynomial time in graphs without negative cycles, shortest path problems which include additional constraints on the desired solution path are called Constrained Shortest Path First, and are harder to solve. One example is the constrained shortest path problem, which attempts to minimize the total cost of the path while at the same time maintaining another metric below a given threshold. This makes the problem NP-complete (such problems are not believed to be efficiently solvable for large sets of data, see P = NP problem). Another NP-complete example requires a specific set of vertices to be included in the path, which makes the problem similar to the Traveling Salesman Problem (TSP). The TSP is the problem of finding the shortest path that goes through every vertex exactly once, and returns to the start. The problem of finding the longest path in a graph is also NP-complete. ### Partial observability The Canadian traveller problem and the stochastic shortest path problem are generalizations where either the graph is not completely known to the mover, changes over time, or where actions (traversals) are probabilistic. ### Strategic shortest paths Sometimes, the edges in a graph have personalities: each edge has its own selfish interest. An example is a communication network, in which each edge is a computer that possibly belongs to a different person. Different computers have different transmission speeds, so every edge in the network has a numeric weight equal to the number of milliseconds it takes to transmit a message. Our goal is to send a message between two points in the network in the shortest time possible. If we know the transmission-time of each computer (the weight of each edge), then we can use a standard shortest-paths algorithm. If we do not know the transmission times, then we have to ask each computer to tell us its transmission-time. But, the computers may be selfish: a computer might tell us that its transmission time is very long, so that we will not bother it with our messages. A possible solution to this problem is to use a variant of the VCG mechanism, which gives the computers an incentive to reveal their true weights. ### Negative cycle detection In some cases, the main goal is not to find the shortest path, but only to detect if the graph contains a negative cycle. Some shortest-paths algorithms can be used for this purpose: - The Bellman–Ford algorithm can be used to detect a negative cycle in time $$ O(|V||E|) $$ . - Cherkassky and Goldberg survey several other algorithms for negative cycle detection. ## General algebraic framework on semirings: the algebraic path problem Many problems can be framed as a form of the shortest path for some suitably substituted notions of addition along a path and taking the minimum. 
The general approach to these is to consider the two operations to be those of a semiring. Semiring multiplication is done along the path, and the addition is between paths. This general framework is known as the algebraic path problem. Most of the classic shortest-path algorithms (and new ones) can be formulated as solving linear systems over such algebraic structures. More recently, an even more general framework for solving these (and much less obviously related problems) has been developed under the banner of valuation algebras. ## Shortest path in stochastic time-dependent networks In real-life, a transportation network is usually stochastic and time-dependent. The travel duration on a road segment depends on many factors such as the amount of traffic (origin-destination matrix), road work, weather, accidents and vehicle breakdowns. A more realistic model of such a road network is a stochastic time-dependent (STD) network. There is no accepted definition of optimal path under uncertainty (that is, in stochastic road networks). It is a controversial subject, despite considerable progress during the past decade. One common definition is a path with the minimum expected travel time. The main advantage of this approach is that it can make use of efficient shortest path algorithms for deterministic networks. However, the resulting optimal path may not be reliable, because this approach fails to address travel time variability. To tackle this issue, some researchers use travel duration distribution instead of its expected value. So, they find the probability distribution of total travel duration using different optimization methods such as dynamic programming and Dijkstra's algorithm . These methods use stochastic optimization, specifically stochastic dynamic programming to find the shortest path in networks with probabilistic arc length. The terms travel time reliability and travel time variability are used as opposites in the transportation research literature: the higher the variability, the lower the reliability of predictions. To account for variability, researchers have suggested two alternative definitions for an optimal path under uncertainty. The most reliable path is one that maximizes the probability of arriving on time given a travel time budget. An α-reliable path is one that minimizes the travel time budget required to arrive on time with a given probability.
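To make the expected-value approach concrete, a small sketch can collapse sampled travel-time distributions to their means and then hand the result to an ordinary deterministic shortest-path routine (for instance the Dijkstra sketch shown earlier). The sample format and the numbers below are assumptions made for illustration.

```python
from statistics import mean

def expected_weight_graph(stochastic_graph):
    """Collapse a stochastic network to a deterministic one.

    `stochastic_graph` is assumed to map each node to a list of
    (neighbor, samples) pairs, where `samples` is a list of observed
    travel times for that road segment.
    """
    return {u: [(v, mean(samples)) for v, samples in edges]
            for u, edges in stochastic_graph.items()}

# Hypothetical observed travel times (minutes) per segment
stochastic = {
    "a": [("b", [4, 6, 5]), ("c", [2, 10, 3])],
    "b": [("c", [1, 1, 1])],
    "c": [],
}
deterministic = expected_weight_graph(stochastic)
# `deterministic` can now be fed to any deterministic shortest-path
# algorithm; the resulting path minimizes expected travel time only and
# says nothing about its variability.
```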
https://en.wikipedia.org/wiki/Shortest_path_problem
In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time. Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics. Several techniques have been devised for chaos control, but most are developments of two basic approaches: the Ott–Grebogi–Yorke (OGY) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed. ## OGY method Edward Ott, Celso Grebogi and James A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the Ott–Grebogi–Yorke (OGY) method of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit. To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems. The weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability. ## Pyragas method In the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained through solely observing the behavior of the system as a whole over a suitable period of time. The method was proposed by Lithuanian physicist . 
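To give the OGY idea a concrete form, the sketch below applies it to the one-dimensional logistic map x → r·x·(1 − x), nudging the parameter r once per iteration whenever the state falls inside a small window around the unstable fixed point. This is only a toy analogue under assumed parameter values; the full OGY method works on a Poincaré section and uses the stable and unstable directions of the chosen orbit.

```python
def logistic(x, r):
    return r * x * (1.0 - x)

def ogy_logistic(r0=3.8, x0=0.3, steps=600, window=0.005, dr_max=0.05):
    """Toy OGY-style stabilization of the logistic map's unstable fixed point.

    When the state enters a small window around x* = 1 - 1/r0, the parameter
    is perturbed so that, to first order, the next iterate lands back on x*.
    """
    x_star = 1.0 - 1.0 / r0           # unstable fixed point of the unperturbed map
    f_x = 2.0 - r0                    # df/dx evaluated at (x*, r0)
    f_r = x_star * (1.0 - x_star)     # df/dr evaluated at (x*, r0)
    x, history = x0, []
    for _ in range(steps):
        dr = 0.0
        if abs(x - x_star) < window:              # only act close to the orbit
            dr = -f_x * (x - x_star) / f_r        # first-order correction
            dr = max(-dr_max, min(dr_max, dr))    # keep the perturbation small
        x = logistic(x, r0 + dr)
        history.append(x)
    return x_star, history

x_star, traj = ogy_logistic()
# Once the wandering orbit has entered the window, the tail of the
# trajectory typically stays pinned near x_star.
print(round(x_star, 4), [round(v, 4) for v in traj[-5:]])
```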
## Applications

Experimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators and cardiac tissues. Researchers have also attempted to control chaotic bubbling with the OGY method, using electrostatic potential as the primary control variable. Forcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.

## External links
- Chaos control bibliography (1997–2000)
https://en.wikipedia.org/wiki/Control_of_chaos
In physics, Newtonian dynamics (also known as Newtonian mechanics) is the study of the dynamics of a particle or a small body according to Newton's laws of motion. ## Mathematical generalizations Typically, the Newtonian dynamics occurs in a three-dimensional Euclidean space, which is flat. However, in mathematics Newton's laws of motion can be generalized to multidimensional and curved spaces. Often the term Newtonian dynamics is narrowed to Newton's second law $$ \displaystyle m\,\mathbf a=\mathbf F $$ . ## Newton's second law in a multidimensional space Consider $$ \displaystyle N $$ particles with masses $$ \displaystyle m_1,\,\ldots,\,m_N $$ in the regular three-dimensional Euclidean space. Let $$ \displaystyle \mathbf r_1,\,\ldots,\,\mathbf r_N $$ be their radius-vectors in some inertial coordinate system. Then the motion of these particles is governed by Newton's second law applied to each of them The three-dimensional radius-vectors $$ \displaystyle\mathbf r_1,\,\ldots,\,\mathbf r_N $$ can be built into a single $$ \displaystyle n=3N $$ -dimensional radius-vector. Similarly, three-dimensional velocity vectors $$ \displaystyle\mathbf v_1,\,\ldots,\,\mathbf v_N $$ can be built into a single $$ \displaystyle n=3N $$ -dimensional velocity vector: In terms of the multidimensional vectors () the equations () are written as i.e. they take the form of Newton's second law applied to a single particle with the unit mass $$ \displaystyle m=1 $$ . Definition. The equations () are called the equations of a Newtonian dynamical system in a flat multidimensional Euclidean space, which is called the configuration space of this system. Its points are marked by the radius-vector $$ \displaystyle\mathbf r $$ . The space whose points are marked by the pair of vectors $$ \displaystyle(\mathbf r,\mathbf v) $$ is called the phase space of the dynamical system (). ## Euclidean structure The configuration space and the phase space of the dynamical system () both are Euclidean spaces, i. e. they are equipped with a Euclidean structure. The Euclidean structure of them is defined so that the kinetic energy of the single multidimensional particle with the unit mass $$ \displaystyle m=1 $$ is equal to the sum of kinetic energies of the three-dimensional particles with the masses $$ \displaystyle m_1,\,\ldots,\,m_N $$ : ## Constraints and internal coordinates In some cases the motion of the particles with the masses $$ \displaystyle m_1,\,\ldots,\,m_N $$ can be constrained. Typical constraints look like scalar equations of the form Constraints of the form () are called holonomic and scleronomic. In terms of the radius-vector $$ \displaystyle\mathbf r $$ of the Newtonian dynamical system () they are written as Each such constraint reduces by one the number of degrees of freedom of the Newtonian dynamical system (). Therefore, the constrained system has $$ \displaystyle n=3\,N-K $$ degrees of freedom. Definition. The constraint equations () define an $$ \displaystyle n $$ -dimensional manifold $$ \displaystyle M $$ within the configuration space of the Newtonian dynamical system (). This manifold $$ \displaystyle M $$ is called the configuration space of the constrained system. Its tangent bundle $$ \displaystyle TM $$ is called the phase space of the constrained system. Let $$ \displaystyle q^1,\,\ldots,\,q^n $$ be the internal coordinates of a point of $$ \displaystyle M $$ . Their usage is typical for the Lagrangian mechanics. 
The radius-vector $$ \displaystyle\mathbf r $$ is expressed as some definite function of $$ \displaystyle q^1,\,\ldots,\,q^n $$ : The vector-function () resolves the constraint equations () in the sense that upon substituting () into () the equations () are fulfilled identically in $$ \displaystyle q^1,\,\ldots,\,q^n $$ . ## Internal presentation of the velocity vector The velocity vector of the constrained Newtonian dynamical system is expressed in terms of the partial derivatives of the vector-function (): The quantities $$ \displaystyle\dot q^1,\,\ldots,\,\dot q^n $$ are called internal components of the velocity vector. Sometimes they are denoted with the use of a separate symbol and then treated as independent variables. The quantities are used as internal coordinates of a point of the phase space $$ \displaystyle TM $$ of the constrained Newtonian dynamical system. ## Embedding and the induced Riemannian metric Geometrically, the vector-function () implements an embedding of the configuration space $$ \displaystyle M $$ of the constrained Newtonian dynamical system into the $$ \displaystyle 3\,N $$ -dimensional flat configuration space of the unconstrained Newtonian dynamical system (). Due to this embedding the Euclidean structure of the ambient space induces the Riemannian metric onto the manifold $$ \displaystyle M $$ . The components of the metric tensor of this induced metric are given by the formula where $$ \displaystyle(\ ,\ ) $$ is the scalar product associated with the Euclidean structure (). ## Kinetic energy of a constrained Newtonian dynamical system Since the Euclidean structure of an unconstrained system of $$ \displaystyle N $$ particles is introduced through their kinetic energy, the induced Riemannian structure on the configuration space $$ \displaystyle N $$ of a constrained system preserves this relation to the kinetic energy: The formula () is derived by substituting () into () and taking into account (). ## Constraint forces For a constrained Newtonian dynamical system the constraints described by the equations () are usually implemented by some mechanical framework. This framework produces some auxiliary forces including the force that maintains the system within its configuration manifold $$ \displaystyle M $$ . Such a maintaining force is perpendicular to $$ \displaystyle M $$ . It is called the normal force. The force $$ \displaystyle\mathbf F $$ from () is subdivided into two components The first component in () is tangent to the configuration manifold $$ \displaystyle M $$ . The second component is perpendicular to $$ \displaystyle M $$ . In coincides with the normal force $$ \displaystyle\mathbf N $$ . Like the velocity vector (), the tangent force $$ \displaystyle\mathbf F_\parallel $$ has its internal presentation The quantities $$ F^1,\,\ldots,\,F^n $$ in () are called the internal components of the force vector. ## Newton's second law in a curved space The Newtonian dynamical system () constrained to the configuration manifold $$ \displaystyle M $$ by the constraint equations () is described by the differential equations where $$ \Gamma^s_{ij} $$ are Christoffel symbols of the metric connection produced by the Riemannian metric (). ## Relation to Lagrange equations Mechanical systems with constraints are usually described by Lagrange equations: where $$ T=T(q^1,\ldots,q^n,w^1,\ldots,w^n) $$ is the kinetic energy the constrained dynamical system given by the formula (). 
The quantities $$ Q_1,\,\ldots,\,Q_n $$ in () are the inner covariant components of the tangent force vector $$ \mathbf F_\parallel $$ (see () and ()). They are produced from the inner contravariant components $$ F^1,\,\ldots,\,F^n $$ of the vector $$ \mathbf F_\parallel $$ by means of the standard index lowering procedure using the metric (): The equations () are equivalent to the equations (). However, the metric () and other geometric features of the configuration manifold $$ \displaystyle M $$ are not explicit in (). The metric () can be recovered from the kinetic energy $$ \displaystyle T $$ by means of the formula
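Since the kinetic energy is quadratic in the internal velocity components $$ w^1,\,\ldots,\,w^n $$, the recovery formula is presumably the standard relation
$$ T=\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} g_{ij}\,w^i w^j, \qquad g_{ij}=\frac{\partial^2 T}{\partial w^i\,\partial w^j}, $$
so that the components of the induced metric appear as second derivatives of $$ T $$ with respect to the internal velocities.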
https://en.wikipedia.org/wiki/Newtonian_dynamics
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. ## Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. ## Overview The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because: - The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. 
The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability. - The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. ## Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood. - The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid. - The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos. ## History Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of their research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system. In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. 
One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft. ## Formal definition In the most general sense,Mazzola C. and Giunti M. (2012), "Reversible dynamics and the directionality of time". In Minati G., Abram M., Pessa E. (eds.), Methods, models, simulations and approaches towards a general theory of change, pp. 161–171, Singapore: World Scientific. . a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function $$ \Phi: U \subseteq (T \times X) \to X $$ with $$ \mathrm{proj}_{2}(U) = X $$ (where $$ \mathrm{proj}_{2} $$ is the 2nd projection map) and for any x in X: $$ \Phi(0,x) = x $$ $$ \Phi(t_2,\Phi(t_1,x)) = \Phi(t_2 + t_1, x), $$ for $$ \, t_1,\, t_2 + t_1 \in I(x) $$ and $$ \ t_2 \in I(\Phi(t_1, x)) $$ , where we have defined the set $$ I(x) := \{ t \in T : (t,x) \in U \} $$ for any x in X. In particular, in the case that $$ U = T \times X $$ we have for every x in X that $$ I(x) = T $$ and thus that Φ defines a monoid action of T on X. The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system. We often write $$ \Phi_x(t) \equiv \Phi(t,x) $$ $$ \Phi^t(x) \equiv \Phi(t,x) $$ if we take one of the variables as constant. The function $$ \Phi_x:I(x) \to X $$ is called the flow through x and its graph is called the trajectory through x. The set $$ \gamma_x \equiv\{\Phi(t,x) : t \in I(x)\} $$ is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T $$ \Phi(t,x) \in S. $$ Thus, in particular, if S is Φ-invariant, $$ I(x) = T $$ for all x in S. That is, the flow through x must be defined for all time for every element of S. More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor. ### Geometrical definition In the geometrical definition, a dynamical system is the tuple $$ \langle \mathcal{T}, \mathcal{M}, f\rangle $$ . $$ \mathcal{T} $$ is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. $$ \mathcal{M} $$ is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f t (with $$ t\in\mathcal{T} $$ ) such that f t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain $$ \mathcal{T} $$ into the space of diffeomorphisms of the manifold to itself. 
In other terms, f(t) is a diffeomorphism, for every time t in the domain $$ \mathcal{T} $$ . #### Real dynamical system A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. #### Discrete dynamical system A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade. #### Cellular automaton A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice. #### Multidimensional generalization Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing. #### Compactification of a dynamical system Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*). In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected. ### Measure theoretical definition A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has $$ \Phi^{-1}\sigma \in \Sigma $$ . A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has $$ \mu(\Phi^{-1}\sigma ) = \mu(\sigma) $$ . Combining the above, a map Φ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system. The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates $$ \Phi^n = \Phi \circ \Phi \circ \dots \circ \Phi $$ for every integer n are studied. 
For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.

#### Relation to geometric definition

The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance. Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution. For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.

## Construction of dynamical systems

The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:
$$ \dot{\boldsymbol{x}}=\boldsymbol{v}(t,\boldsymbol{x}) $$
$$ \boldsymbol{x}|_{t=0}=\boldsymbol{x}_0 $$
where
- $$ \dot{\boldsymbol{x}} $$ represents the velocity of the material point x
- M is a finite dimensional manifold
- v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.

There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions. Depending on the properties of this vector field, the mechanical system is called
- autonomous, when v(t, x) = v(x)
- homogeneous, when v(t, 0) = 0 for all t

The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above
$$ \boldsymbol{x}(t)=\Phi(t,\boldsymbol{x}_0) $$
The dynamical system is then (T, M, Φ).
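To illustrate how solving the initial value problem produces the evolution function Φ in practice, the sketch below integrates a given vector field with a fixed-step fourth-order Runge–Kutta scheme. The integrator and the harmonic-oscillator field are generic stand-ins chosen for the example, not part of the construction above.

```python
def evolve(v, x0, t, steps=1000):
    """Approximate the evolution function Phi(t, x0) for dx/dt = v(t, x).

    `v` maps (t, x) to the velocity vector, with x given as a list of
    coordinates; a classical fixed-step RK4 integrator is used.
    """
    h = t / steps
    x, s = list(x0), 0.0
    for _ in range(steps):
        k1 = v(s, x)
        k2 = v(s + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k1)])
        k3 = v(s + h / 2, [xi + h / 2 * ki for xi, ki in zip(x, k2)])
        k4 = v(s + h, [xi + h * ki for xi, ki in zip(x, k3)])
        x = [xi + h / 6 * (a + 2 * b + 2 * c + d)
             for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]
        s += h
    return x

# Example field: the harmonic oscillator x'' = -x as a first-order system
v = lambda t, x: [x[1], -x[0]]
print(evolve(v, [1.0, 0.0], 3.141592653589793))   # close to [-1, 0] after half a period
```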
Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy
$$ \dot{\boldsymbol{x}}-\boldsymbol{v}(t,\boldsymbol{x})=0 \qquad\Leftrightarrow\qquad \mathfrak{G}\left(t,\Phi(t,\boldsymbol{x}_0)\right)=0 $$
where $$ \mathfrak{G}:{{(T\times M)}^M}\to\mathbf{C} $$ is a functional from the set of evolution functions to the field of the complex numbers. This equation is useful when modeling mechanical systems with complicated constraints. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.

## Examples
- Arnold's cat map
- Baker's map is an example of a chaotic piecewise linear map
- Billiards and outer billiards
- Bouncing ball dynamics
- Circle map
- Complex quadratic polynomial
- Double pendulum
- Dyadic transformation
- Dynamical system simulation
- Hénon map
- Irrational rotation
- Kaplan–Yorke map
- List of chaotic maps
- Lorenz system
- Quadratic map simulation system
- Rössler map
- Swinging Atwood's machine
- Tent map

## Linear dynamical systems

Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).

### Flows

For a flow, the vector field v(x) is an affine function of the position in the phase space, that is, $$ \dot{x} = v(x) = A x + b, $$ with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b: $$ \Phi^t(x_1) = x_1 + b t. $$ When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, $$ \Phi^t(x_0) = e^{t A} x_0. $$ When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin. The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.

### Maps

A discrete-time, affine dynamical system has the form of a matrix difference equation: $$ x_{n+1} = A x_n + b, $$ with A a matrix and b a vector. As in the continuous case, the change of coordinates $$ x \to x + (1-A)^{-1}b $$ removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system $$ A^n x_0 $$. The solutions for the map are no longer curves, but points that hop in the phase space.
The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map. As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight lines given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points in this straight line run into the fixed point. There are also many other discrete dynamical systems. ## Local dynamics The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. ### Rectification A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem. The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches. ### Near periodic orbits In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0. The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x2), so a change of coordinates h can only be expected to simplify F to its linear part $$ h^{-1} \circ F \circ h(x) = J \cdot x. $$ This is known as the conjugation equation. 
Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – Σ (multiples of other eigenvalues) occurs in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem. ### Conjugation results The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic. In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. ## Bifurcation theory When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation. Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems. The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory. Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations. ## Ergodic systems In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. 
This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ t(A) and invariance of the phase space means that $$ \mathrm{vol} (A) = \mathrm{vol} ( \Phi^t(A) ). $$ In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure. In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution. For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms. One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω). The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function φ t. This introduces an operator U t, the transfer operator, $$ (U^t a)(x) = a(\Phi^{-t}(x)). $$ By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ t gets mapped into an infinite-dimensional linear problem involving U. The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. ## Nonlinear dynamical systems and chaos Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. 
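A piecewise linear example makes the point concrete: under the tent map, two initial conditions that agree to ten decimal places separate within a few dozen iterations. The tolerances and iteration counts below are arbitrary choices for the sketch.

```python
def tent(x, mu=2.0):
    """Tent map on [0, 1]; piecewise linear and chaotic for mu = 2."""
    return mu * x if x < 0.5 else mu * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10          # two initial conditions differing by 1e-10
for n in range(1, 41):
    a, b = tent(a), tent(b)
    if n % 10 == 0:
        # The gap typically grows roughly like 2**n until it is of order one.
        print(n, abs(a - b))
```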
Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold). This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?" The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear. ### Solutions of finite duration For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics, thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations according to the proof of the Picard-Lindelof theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytical functions on the whole real line. As example, the equation: $$ y'= -\text{sgn}(y)\sqrt{|y|},\,\,y(0)=1 $$ Admits the finite duration solution: $$ y(t)=\frac{1}{4}\left(1-\frac{t}{2}+\left|1-\frac{t}{2}\right|\right)^2 $$ that is zero for $$ t \geq 2 $$ and is not Lipschitz continuous at its ending time $$ t = 2. $$
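A quick check confirms the claim: for $$ 0 \le t < 2 $$ the stated solution reduces to
$$ y(t)=\left(1-\tfrac{t}{2}\right)^{2}, \qquad y'(t)=-\left(1-\tfrac{t}{2}\right)=-\operatorname{sgn}(y)\sqrt{|y|}, $$
while for $$ t \ge 2 $$ it is identically zero, so $$ y'(t)=0=-\operatorname{sgn}(0)\sqrt{|0|} $$; together with $$ y(0)=1 $$ this verifies both the equation and the initial condition.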
https://en.wikipedia.org/wiki/Dynamical_system
In mathematics, a basic algebraic operation is any one of the common operations of elementary algebra, which include addition, subtraction, multiplication, division, raising to a whole number power, and taking roots (fractional power). These operations may be performed on numbers, in which case they are often called arithmetic operations. They may also be performed, in a similar way, on variables, algebraic expressions, and more generally, on elements of algebraic structures, such as groups and fields. An algebraic operation may also be defined more generally as a function from a Cartesian power of a given set to the same set. The term algebraic operation may also be used for operations that may be defined by compounding basic algebraic operations, such as the dot product. In calculus and mathematical analysis, algebraic operation is also used for the operations that may be defined by purely algebraic methods. For example, exponentiation with an integer or rational exponent is an algebraic operation, but not the general exponentiation with a real or complex exponent. Also, the derivative is an operation on numerical functions and algebraic expressions that is not algebraic. ## Notation Multiplication symbols are usually omitted, and implied, when there is no operator between two variables or terms, or when a coefficient is used. For example, 3 × x2 is written as 3x2, and 2 × x × y is written as 2xy. Sometimes, multiplication symbols are replaced with either a dot or center-dot, so that x × y is written as either x . y or x · y. Plain text, programming languages, and calculators also use a single asterisk to represent the multiplication symbol, and it must be explicitly used; for example, 3x is written as 3 * x. Rather than using the ambiguous division sign (÷), division is usually represented with a vinculum, a horizontal line, as in . In plain text and programming languages, a slash (also called a solidus) is used, e.g. 3 / (x + 1). Exponents are usually formatted using superscripts, as in x2. In plain text, the TeX mark-up language, and some programming languages such as MATLAB and Julia, the caret symbol, ^, represents exponents, so x2 is written as x ^ 2.George Grätzer, First Steps in LaTeX, Publisher Springer, 1999, , 9780817641320, page 17 In programming languages such as Ada, Fortran, Perl, Python and Ruby, a double asterisk is used, so x2 is written as x ** 2. The plus–minus sign, ±, is used as a shorthand notation for two expressions written as one, representing one expression with a plus sign, the other with a minus sign. For example, y = x ± 1 represents the two equations y = x + 1 and y = x − 1. Sometimes, it is used for denoting a positive-or-negative term such as ±x. ## Arithmetic vs algebraic operations Algebraic operations work in the same way as arithmetic operations, as can be seen in the table below. OperationArithmeticAlgebraComments Additionequivalent to:equivalent to: Subtractionequivalent to:equivalent to: Multiplication or   or   or   or   or   or   is the same as Division  or   or     or   or   Exponentiation          is the same as   is the same as Note: the use of the letters $$ a $$ and $$ b $$ is arbitrary, and the examples would have been equally valid if $$ x $$ and $$ y $$ were used. ## Properties of arithmetic and algebraic operations PropertyArithmeticAlgebraComments CommutativityAddition and multiplication arecommutative and associative.Ron Larson, Robert Hostetler, Bruce H. 
Edwards, Algebra and Trigonometry: A Graphing Approach, Cengage Learning, 2007, ISBN 9780618851959, 1114 pages, page 7. Subtraction and division are not commutative: for example, a − b ≠ b − a and a ÷ b ≠ b ÷ a in general. Associativity likewise holds for addition and multiplication but not for subtraction and division.
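These notational conventions can be checked directly in a language that follows them; Python, for instance, uses an explicit `*` for multiplication, `/` for division and `**` for exponentiation. A small illustrative snippet:

```python
x = 3

product = 2 * x * 5          # 2xy-style products need an explicit multiplication sign
quotient = 3 / (x + 1)       # the vinculum becomes a slash plus parentheses
square = x ** 2              # x squared uses a double asterisk rather than a caret
plus_minus = (x + 1, x - 1)  # the plus-minus shorthand expands to two expressions

print(product, quotient, square, plus_minus)   # 30 0.75 9 (4, 2)
```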
https://en.wikipedia.org/wiki/Algebraic_operation
Artificial intelligence (AI) refers to the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and decision-making. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals. Such machines may be called AIs. High-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); virtual assistants (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., Chat ### GPT and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." Various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include learning, reasoning, knowledge representation, planning, natural language processing, perception, and support for robotics. ### General intelligence —the ability to complete any task performed by a human on an at least equal level—is among the field's long-term goals. To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics. AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields. Artificial intelligence was founded as an academic discipline in 1956, and the field went through multiple cycles of optimism throughout its history, followed by periods of disappointment and loss of funding, known as AI winters. Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques. This growth accelerated further after 2017 with the transformer architecture, and by the early 2020s many billions of dollars were being invested in AI and the field experienced rapid ongoing progress in what has become known as the AI boom. The emergence of advanced generative AI in the midst of the AI boom and its ability to create and modify content exposed several unintended consequences and harms in the present and raised concerns about the risks of AI and its long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology. ## Goals The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research. ### Reasoning and problem-solving Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. 
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow. Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning is an unsolved problem. ### Knowledge representation Knowledge representation and knowledge engineering allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval, scene interpretation, clinical decision support, knowledge discovery (mining "interesting" and actionable inferences from large databases), and other areas. A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge. Knowledge bases need to represent things such as objects, properties, categories, and relations between objects; situations, events, states, and time; causes and effects; knowledge about knowledge (what we know about what other people know); default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing); and many other aspects and domains of knowledge. Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous); and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally). There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications. ### Planning and decision-making An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen. In automated planning, the agent has a specific goal. In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility. In classical planning, the agent knows exactly what the effect of any action will be. In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked. In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences. Information value theory can be used to weigh the value of exploratory or experimental actions. 
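As a deliberately toy illustration of the expected-utility calculation described above, the sketch below assumes a hypothetical agent choosing between two actions; the outcome probabilities and utilities are made-up values, not taken from any real system.

```python
# Toy expected-utility maximisation: for each action, weight the utility of each
# possible outcome by its probability, then pick the action with the highest sum.
# All numbers are invented for illustration.
actions = {
    "take_umbrella": [(0.3, 6.0), (0.7, 5.0)],     # (probability, utility) pairs
    "leave_umbrella": [(0.3, -10.0), (0.7, 8.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
print("chosen action:", best)  # "take_umbrella" (expected utility 5.3 vs 2.6)
```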
The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be. A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or be learned. Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.

### Learning

Machine learning is the study of programs that can improve their performance on a given task automatically. It has been a part of AI from the beginning. There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance. Supervised learning requires labeling the training data with the expected answers, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input). In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good". Transfer learning is when the knowledge gained from one problem is applied to a new problem. Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning. Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.

### Natural language processing

Natural language processing (NLP) allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering. Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure. Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning), transformers (a deep learning architecture using an attention mechanism), and others. In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text, and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.

### Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input. The field includes speech recognition, image classification, facial recognition, object recognition, object tracking, and robotic perception.
### Social intelligence

Affective computing is a field that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood. For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this can make them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitate human–computer interaction. However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents. Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.

### General intelligence

A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.

## Techniques

AI research uses a wide variety of techniques to accomplish the goals above.

### Search and optimization

AI can solve many problems by intelligently searching through many possible solutions. There are two very different kinds of search used in AI: state space search and local search.

#### State space search

State space search searches through a tree of possible states to try to find a goal state. For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Simple exhaustive searches are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal. Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and countermoves, looking for a winning position.

#### Local search

Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally. Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks, through the backpropagation algorithm. Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation. Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).

### Logic

Formal logic is used for reasoning and knowledge representation. Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies") and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys"). Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises).
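As a small, self-contained illustration of propositional reasoning, the sketch below checks whether a set of premises entails a conclusion by enumerating truth assignments; the formulas ("rains implies wet", "rains", conclusion "wet") are arbitrary examples chosen for this sketch, not drawn from the article.

```python
from itertools import product

# Tiny propositional entailment check by truth-table enumeration.
# Premises: "rains -> wet" and "rains". Conclusion: "wet". Illustrative only.
symbols = ["rains", "wet"]
premises = [
    lambda v: (not v["rains"]) or v["wet"],   # rains implies wet
    lambda v: v["rains"],
]
conclusion = lambda v: v["wet"]

def entails(premises, conclusion):
    for values in product([True, False], repeat=len(symbols)):
        v = dict(zip(symbols, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # a model of the premises where the conclusion fails
    return True

print(entails(premises, conclusion))  # True
```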
Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules. Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem. In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved. Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages. Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true. Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning. Other specialized versions of logic have been developed to describe many complex domains. ### Probabilistic methods for uncertain reasoning Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis, and information value theory. These tools include models such as Markov decision processes, dynamic decision networks, game theory and mechanism design. Bayesian networks are a tool that can be used for reasoning (using the Bayesian inference algorithm), learning (using the expectation–maximization algorithm), planning (using decision networks) and perception (using dynamic Bayesian networks). Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters). ### Classifiers and statistical learning methods The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience. There are many kinds of classifiers in use. The decision tree is the simplest and most widely used symbolic machine learning algorithm. K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s. 
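As a toy illustration of the classifier idea just described, the sketch below implements a k-nearest-neighbour rule from scratch on a few made-up two-dimensional observations; it is only a sketch of the technique, not a reflection of any production system.

```python
import math
from collections import Counter

# Toy k-nearest-neighbour classifier: label a new observation with the majority
# class among its k closest labelled examples. All data points are invented.
training_data = [
    ((1.0, 1.2), "A"), ((0.8, 1.0), "A"), ((1.1, 0.9), "A"),
    ((4.0, 4.2), "B"), ((4.3, 3.9), "B"), ((3.8, 4.1), "B"),
]

def classify(point, k=3):
    distances = sorted(
        (math.dist(point, example), label) for example, label in training_data
    )
    nearest_labels = [label for _, label in distances[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

print(classify((1.0, 1.1)))  # "A"
print(classify((4.1, 4.0)))  # "B"
```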
The naive Bayes classifier is reportedly the "most widely used learner" at Google, due in part to its scalability. Neural networks are also used as classifiers.

### Artificial neural networks

An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.

Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm. Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.

In feedforward neural networks the signal passes in only one direction. Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks. Perceptrons use only a single layer of neurons; deep learning uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.

### Deep learning

Deep learning uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces. Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification, and others. The reason that deep learning performs so well in so many applications is not known as of 2021. The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s) but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.

### GPT

Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token.
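The next-token idea can be sketched with a deliberately tiny stand-in for a language model: the bigram table below is invented, and real GPT models use transformer networks over subword tokens rather than a lookup table, but the generation loop has the same shape.

```python
import random

# Toy autoregressive generation: repeatedly sample the next token given the
# previous one. The bigram table is a made-up stand-in for a trained model.
next_token_probs = {
    "the":  [("cat", 0.6), ("dog", 0.4)],
    "cat":  [("sat", 0.7), ("ran", 0.3)],
    "dog":  [("ran", 0.8), ("sat", 0.2)],
    "sat":  [("down", 1.0)],
    "ran":  [("away", 1.0)],
    "down": [("<end>", 1.0)],
    "away": [("<end>", 1.0)],
}

def generate(prompt="the", max_tokens=10):
    tokens = [prompt]
    while len(tokens) < max_tokens:
        candidates, weights = zip(*next_token_probs[tokens[-1]])
        nxt = random.choices(candidates, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate())  # e.g. "the cat sat down"
```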
Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations". These can be reduced with RLHF and quality data, but the problem has been getting worse for reasoning systems. Such systems are used in chatbots, which allow people to ask a question or request a task in simple text. Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA. Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.

### Hardware and software

In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training. Specialized programming languages such as Prolog were used in early AI research, but general-purpose programming languages like Python have become predominant. The transistor density in integrated circuits has been observed to roughly double every 18 months—a trend known as Moore's law, named after the Intel co-founder Gordon Moore, who first identified it. Improvements in GPUs have been even faster, a trend sometimes called Huang's law, named after Nvidia co-founder and CEO Jensen Huang.

## Applications

AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a chief automation officer (CAO).

### Health and medicine

The application of AI in medicine and medical research has the potential to improve patient care and quality of life. Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.

For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication. It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research. New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein. In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria. In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease).
They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold. ### Games Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world. Other programs handle imperfect-information games, such as the poker-playing program Pluribus. DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games. In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map. In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning. In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions. ### Mathematics Large language models, such as GPT-4, Gemini, Claude, LLaMa or Mistral, are increasingly used in mathematics. These probabilistic models are versatile, but can also produce wrong answers in the form of hallucinations. They sometimes need a large database of mathematical problems to learn from, but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections. A February 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data. One technique to improve their performance involves training the models to produce correct reasoning steps, rather than just the correct result. The Alibaba Group developed a version of its Qwen models called Qwen2-Math, that achieved state-of-the-art performance on several mathematical benchmarks, including 84% accuracy on the MATH dataset of competition mathematics problems. In January 2025, Microsoft proposed the technique rStar-Math that leverages Monte Carlo tree search and step-by-step reasoning, enabling a relatively small language model like Qwen-7B to solve 53% of the AIME 2024 and 90% of the MATH benchmark problems. Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome including proof of theorems have been developed such as AlphaTensor, AlphaGeometry and AlphaProof all from Google DeepMind, Llemma from EleutherAI or Julius. When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such as Lean to define mathematical tasks. 
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics. Topological deep learning integrates various topological approaches. ### Finance Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years. According to Nicolas Firzli, director of the World Pensions & Investments Forum, it may be too early to see the emergence of highly innovative AI-informed financial products and services. He argues that "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I'm not sure it will unleash a new wave of [e.g., sophisticated] pension innovation." ### Military Various countries are deploying AI military applications. The main applications enhance command and control, communications, sensors, integration and interoperability. Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles. AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles, both human operated and autonomous. AI has been used in military operations in Iraq, Syria, Israel and Ukraine. ### Generative AI ### Agents Artificial intelligent (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks. ### Sexuality Applications of AI in this domain include AI-enabled menstruation and fertility trackers that analyze user data to offer prediction, AI-integrated sex toys (e.g., teledildonics), AI-generated sexual education content, and AI agents that simulate sexual and romantic partners (e.g., Replika). AI is also used for the production of non-consensual deepfake pornography, raising significant ethical and legal concerns. AI technologies have also been used to attempt to identify online gender-based violence and online sexual grooming of minors. ### Other industry-specific tasks There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes. 
A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management. AI applications for evacuation and disaster management are growing. AI has been used to investigate whether and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.

In agriculture, AI has helped farmers identify areas that need irrigation, fertilization or pesticide treatment, and to increase yields. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.

Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights." For example, it is used for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. Additionally, it could be used for activities in space, such as space exploration, including the analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.

During the 2024 Indian elections, US$50 million was spent on authorized AI-generated content, notably by creating deepfakes of allied (including sometimes deceased) politicians to better engage with voters, and by translating speeches to various local languages.

## Ethics

AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else". However, as the use of AI has become widespread, several unintended consequences and risks have been identified. In-production systems can sometimes not factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.

### Risks and harm

#### Privacy and copyright

Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.

AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.

Sensitive user data collected may include online activity records, geolocation data, video, or audio. For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them. Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.
AI developers argue that this is the only way to deliver valuable applications and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy. Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'." Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work". Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file. In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI. Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors. #### Dominance by tech giants The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft. Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace. #### Power needs and environmental impacts In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use. This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation. Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms. A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation...." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means. 
Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.

In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US). Nvidia CEO Jen-Hsun Huang said nuclear power is a good option for the data centers.

In September 2024, Microsoft announced an agreement with Constellation Energy to re-open the Three Mile Island nuclear power plant to provide Microsoft with 100% of all electric power produced by the plant for 20 years. Reopening the plant, which suffered a partial nuclear meltdown of its Unit 2 reactor in 1979, will require Constellation to get through strict regulatory processes which will include extensive safety scrutiny from the US Nuclear Regulatory Commission. If approved (this would be the first-ever US re-commissioning of a nuclear plant), the plant will produce over 835 megawatts of power – enough for 800,000 homes. The cost for re-opening and upgrading is estimated at $1.6 billion (US) and is dependent on tax breaks for nuclear power contained in the 2022 US Inflation Reduction Act. The US government and the state of Michigan are investing almost $2 billion (US) to reopen the Palisades Nuclear reactor on Lake Michigan. Closed since 2022, the plant is planned to be reopened in October 2025. The Three Mile Island facility will be renamed the Crane Clean Energy Center after Chris Crane, a nuclear proponent and former CEO of Exelon who was responsible for Exelon's spinoff of Constellation.

After the last approval in September 2023, Taiwan suspended the approval of data centers north of Taoyuan with a capacity of more than 5 MW in 2024, due to power supply shortages. Taiwan aims to phase out nuclear power by 2025. On the other hand, Singapore imposed a ban on the opening of data centers in 2019 due to electric power constraints, but lifted this ban in 2022. Although most nuclear plants in Japan have been shut down after the 2011 Fukushima nuclear accident, according to an October 2024 Bloomberg article in Japanese, cloud gaming services company Ubitus, in which Nvidia has a stake, is looking for land in Japan near a nuclear power plant for a new data center for generative AI. Ubitus CEO Wesley Kuo said nuclear power plants are the most efficient, cheap and stable power for AI.

On 1 November 2024, the Federal Energy Regulatory Commission (FERC) rejected an application submitted by Talen Energy for approval to supply some electricity from the nuclear power station Susquehanna to Amazon's data center. According to the Commission Chairman Willie L. Phillips, it is a burden on the electricity grid as well as a significant cost shifting concern to households and other business sectors.

In 2025, a report prepared by the International Energy Agency estimated the greenhouse gas emissions from the energy consumption of AI at 180 million tons. By 2035, these emissions could rise to 300–500 million tonnes depending on what measures are taken. This is below 1.5% of the energy sector emissions. The emissions reduction potential of AI was estimated at 5% of the energy sector emissions, but rebound effects (for example, if people switch from public transport to autonomous cars) can reduce it.
#### Misinformation

YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation. This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government. The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took some steps to mitigate the problem.

In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda. One such potential malicious use is deepfakes for computational propaganda. AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks. AI researchers at Microsoft, OpenAI, universities and other organisations have suggested using "personhood credentials" as a way to overcome online deception enabled by AI models.

#### Algorithmic bias and fairness

Machine learning applications will be biased if they learn from biased data. The developers may not be aware that the bias exists. Bias can be introduced by the way training data is selected and by the way a model is deployed. If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination. The field of fairness studies how to prevent harms from algorithmic biases.

On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people, a problem called "sample size disparity". Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.

COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and underestimated the chance that a white person would re-offend. In 2017, several researchers showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.

A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender").
The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender". Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."

Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist. Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.

Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.

There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings that recommend that until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.

#### Lack of transparency

Many AI systems are so complex that their designers cannot explain how they reach their decisions, particularly with deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Some popular explainability techniques nevertheless exist.

It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale. Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia.
Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.

People who have been harmed by an algorithm's decision have a right to an explanation. Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists. Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.

DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems. Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output. LIME can locally approximate a model's outputs with a simpler, interpretable model. Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned. Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning. For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.

#### Bad actors and weaponized AI

Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.

A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision. Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction. Even when used in conventional warfare, they currently cannot reliably choose targets and could potentially kill an innocent person. In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed. By 2015, over fifty countries were reported to be researching battlefield robots.

AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware. All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.

There are many other ways that AI is expected to help bad actors, some of which cannot be foreseen.
For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours. #### Technological unemployment Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment. In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI. A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed. Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk". The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies. In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence. Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously". Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy. From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement. #### Existential risk It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race". This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character. These sci-fi scenarios are misleading in several ways. First, AI does not require human-like sentience to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager). Stuart Russell gives the example of household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead." In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side". Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. 
Things like ideologies, law, government, money and the economy are built on language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive. The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI. Personalities such as Stephen Hawking, Bill Gates, and Elon Musk, as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI. In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google". He notably mentioned risks of an AI takeover, and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI. In 2023, many leading AI experts endorsed the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Some other researchers were more optimistic. AI pioneer Jürgen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier." While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors." Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests." Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction." In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine. However, after 2016, the study of current and future risks and possible solutions became a serious area of research. ### Ethical machines and alignment Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk. Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas. The field of machine ethics is also called computational morality, and was founded at an AAAI symposium in 2005. Other approaches include Wendell Wallach's "artificial moral agents" and Stuart J. Russell's three principles for developing provably beneficial machines. ### Open source Active organizations in the AI open-source community include Hugging Face, Google, EleutherAI and Meta. Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight, meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case. 
Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they cannot be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.

### Frameworks

Artificial intelligence projects can be guided by ethical considerations during the design, development, and implementation of an AI system. An AI framework such as the Care and Act Framework, developed by the Alan Turing Institute and based on the SUM values, outlines four main ethical dimensions, defined as follows:

- Respect the dignity of individual people
- Connect with other people sincerely, openly, and inclusively
- Care for the wellbeing of everyone
- Protect social values, justice, and the public interest

Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others; however, these principles are not without criticism, especially with regard to the people chosen to contribute to these frameworks.

Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.

The UK AI Safety Institute released in 2024 a testing toolset called 'Inspect' for AI safety evaluations, which is freely available on GitHub under an MIT open-source licence and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.

### Regulation

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms. The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally. According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone. Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI. Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia. The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology. Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI. In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.
In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics. In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories. In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks". A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity. In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important". In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries, including the United States, China, and the European Union, issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence. In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI. ## History The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning. This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain". They developed several areas of research that would become part of AI, such as McCulloch and Pitts's design for "artificial neurons" in 1943, and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible. The field of AI research was founded at a workshop at Dartmouth College in 1956. The attendees became the leaders of AI research in the 1960s. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English. Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s. Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field. In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do". In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They had, however, underestimated the difficulty of the problem. In 1974, both the U.S.
and British governments cut off exploratory research in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether. The "AI winter", a period when obtaining funding for AI projects was difficult, followed. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began. Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition, and began to look into "sub-symbolic" approaches. Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive. Judea Pearl, Lotfi Zadeh, and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than using precise logic. But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others. In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks. AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics). By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence" (a tendency known as the AI effect). However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s. Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field. For many specific tasks, other methods were abandoned. Deep learning's success was based on both hardware improvements (faster computers, graphics processing units, cloud computing) and access to large amounts of data (including curated datasets, such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI. The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study. In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat world champion Go player Lee Sedol. The program was taught only the game's rules and developed a strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. ChatGPT, launched on November 30, 2022, became the fastest-growing consumer software application in history, gaining over 100 million users in two months. It marked what is widely regarded as AI's breakout year, bringing it into the public consciousness. These programs, and others, inspired an aggressive AI boom, where large companies began investing billions of dollars in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone, and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI". About 800,000 "AI"-related U.S. job openings existed in 2022. According to PitchBook research, 22% of newly funded startups in 2024 claimed to be AI companies. ## Philosophy Philosophical debates have historically sought to determine the nature of intelligence and how to make intelligent machines. Another major focus has been whether machines can be conscious, and the associated ethical implications. Many other topics in philosophy are relevant to AI, such as epistemology and free will. Rapid advancements have intensified public discussions on the philosophy and ethics of AI. ### Defining artificial intelligence Alan Turing wrote in 1950, "I propose to consider the question 'can machines think'?" He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour". He devised the Turing test, which measures the ability of a machine to simulate human conversation. Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks." Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure. However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts", they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'" AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence". McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world". Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems". The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.
These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may not even be possible. Another definition has been adopted by Google, a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence. Some authors have suggested in practice, that the definition of AI is vague and difficult to define, with contention as to whether classical algorithms should be categorised as AI, with many companies during the early 2020s AI boom using the term as a marketing buzzword, often even if they did "not actually use AI in a material way". ### Evaluating approaches to AI No established unifying theory or paradigm has guided AI research for most of its history. The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers. #### Symbolic AI and its limits Symbolic AI (or "GOFAI") simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. They were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action." However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult. Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge. Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him. The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence, in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches. #### Neat vs. scruffy "Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. 
This issue was actively discussed in the 1970s and 1980s, but eventually was seen as irrelevant. Modern AI has elements of both. #### Soft vs. hard computing Finding a provably correct or optimal solution is intractable for many important problems. Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks. #### Narrow vs. general AI AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals. General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The sub-field of artificial general intelligence studies this area exclusively. ### Machine consciousness, sentience, and mind The philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on." However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction. #### Consciousness David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness. The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like. #### Computationalism and functionalism Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam. Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." 
Searle challenges this claim with his Chinese room argument, which attempts to show that even a computer capable of perfectly simulating human behavior would not have a mind. #### AI welfare and rights It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree. But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals. Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights. Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society. In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities. Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part to society on their own. Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited. ## Future ### Superintelligence and the singularity A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind. If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity". However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do. ### Transhumanism Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines may merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in the writings of Aldous Huxley and Robert Ettinger. Edward Fredkin argues that "artificial intelligence is the next step in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence. ### Decomputing Arguments for decomputing have been raised by Dan McQuillan (Resisting AI: An Anti-fascist Approach to Artificial Intelligence, 2022), meaning an opposition to the sweeping application and expansion of artificial intelligence. Similar to degrowth, the approach criticizes AI as an outgrowth of the systemic issues and capitalist world we live in. It argues that a different future is possible, in which distance between people is reduced rather than increased through AI intermediaries. 
## In fiction Thought-capable artificial beings have appeared as storytelling devices since antiquity, and have been a persistent theme in science fiction. A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture. Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics; while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity. Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.
https://en.wikipedia.org/wiki/Artificial_intelligence
Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This analysis method is mainly used in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk and to determine (or get a feeling for) event rates of a safety accident or a particular system level (functional) failure. FTA is used in the aerospace, nuclear power, chemical and process, pharmaceutical, petrochemical and other high-hazard industries; but is also used in fields as diverse as risk factor identification relating to social service system failure. FTA is also used in software engineering for debugging purposes and is closely related to the cause-elimination technique used to detect bugs. In aerospace, the more general term "system failure condition" is used for the "undesired state" / top event of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions require the most extensive fault tree analysis. These system failure conditions and their classification are often previously determined in the functional hazard analysis. ## Usage Fault tree analysis can be used to: - understand the logic leading to the top event / undesired state. - show compliance with the (input) system safety / reliability requirements. - prioritize the contributors leading to the top event, creating the critical equipment/parts/events lists for different importance measures. - monitor and control the safety performance of the complex system (e.g., is a particular aircraft safe to fly when fuel valve x malfunctions? For how long is it allowed to fly with the valve malfunction?). - minimize and optimize resources. - assist in designing a system. The FTA can be used as a design tool that helps to create (output / lower level) requirements. - function as a diagnostic tool to identify and correct causes of the top event. It can help with the creation of diagnostic manuals / processes. ## History Fault tree analysis (FTA) was originally developed in 1962 at Bell Laboratories by H.A. Watson, under a U.S. Air Force Ballistics Systems Division contract to evaluate the Minuteman I Intercontinental Ballistic Missile (ICBM) Launch Control System. The use of fault trees has since gained widespread support and is often used as a failure analysis tool by reliability experts. Following the first published use of FTA in the 1962 Minuteman I Launch Control Safety Study, Boeing and AVCO expanded use of FTA to the entire Minuteman II system in 1963–1964. FTA received extensive coverage at a 1965 System Safety Symposium in Seattle sponsored by Boeing and the University of Washington. Boeing began using FTA for civil aircraft design around 1966. Subsequently, within the U.S. military, application of FTA for use with fuses was explored by Picatinny Arsenal in the 1960s and 1970s. In 1976 the U.S. Army Materiel Command incorporated FTA into an Engineering Design Handbook on Design for Reliability. The Reliability Analysis Center at Rome Laboratory and its successor organizations, now with the Defense Technical Information Center (Reliability Information Analysis Center, and now Defense Systems Information Analysis Center), have published documents on FTA and reliability block diagrams since the 1960s. MIL-HDBK-338B provides a more recent reference. In 1970, the U.S.
Federal Aviation Administration (FAA) published a change to 14 CFR 25.1309 airworthiness regulations for transport category aircraft in the Federal Register at 35 FR 5665 (1970-04-08). This change adopted failure probability criteria for aircraft systems and equipment and led to widespread use of FTA in civil aviation. In 1998, the FAA published Order 8040.4, establishing risk management policy including hazard analysis in a range of critical activities beyond aircraft certification, including air traffic control and modernization of the U.S. National Airspace System. This led to the publication of the FAA System Safety Handbook, which describes the use of FTA in various types of formal hazard analysis. Early in the Apollo program the question was asked about the probability of successfully sending astronauts to the moon and returning them safely to Earth. A risk, or reliability, calculation of some sort was performed and the result was a mission success probability that was unacceptably low. This result discouraged NASA from further quantitative risk or reliability analysis until after the Challenger accident in 1986. Instead, NASA decided to rely on the use of failure modes and effects analysis (FMEA) and other qualitative methods for system safety assessments. After the Challenger accident, the importance of probabilistic risk assessment (PRA) and FTA in systems risk and reliability analysis was realized, its use at NASA began to grow, and FTA is now considered one of the most important system reliability and safety analysis techniques. Within the nuclear power industry, the U.S. Nuclear Regulatory Commission began using PRA methods including FTA in 1975, and significantly expanded PRA research following the 1979 incident at Three Mile Island. This eventually led to the 1981 publication of the NRC Fault Tree Handbook NUREG–0492, and mandatory use of PRA under the NRC's regulatory authority. Following process industry disasters such as the 1984 Bhopal disaster and 1988 Piper Alpha explosion, in 1992 the United States Department of Labor Occupational Safety and Health Administration (OSHA) published in the Federal Register at 57 FR 6356 (1992-02-24) its Process Safety Management (PSM) standard in 29 CFR 1910.119. OSHA PSM recognizes FTA as an acceptable method for process hazard analysis (PHA). Today FTA is widely used in system safety and reliability engineering, and in all major fields of engineering. ## Methodology FTA methodology is described in several industry and government standards, including NRC NUREG–0492 for the nuclear power industry, an aerospace-oriented revision to NUREG–0492 for use by NASA, SAE ARP4761 for civil aerospace, MIL–HDBK–338 for military systems, and IEC 61025, which is intended for cross-industry use and has been adopted as European Norm EN 61025. Any sufficiently complex system is subject to failure as a result of one or more subsystems failing. The likelihood of failure, however, can often be reduced through improved system design. Fault tree analysis maps the relationship between faults, subsystems, and redundant safety design elements by creating a logic diagram of the overall system. The undesired outcome is taken as the root ('top event') of a tree of logic. For instance, the undesired outcome of a metal stamping press operation being considered might be a human appendage being stamped.
Working backward from this top event it might be determined that there are two ways this could happen: during normal operation or during maintenance operation. This condition is a logical OR. Considering the branch of the hazard occurring during normal operation, perhaps it is determined that there are two ways this could happen: the press cycles and harms the operator, or the press cycles and harms another person. This is another logical OR. A design improvement can be made by requiring the operator to press two separate buttons to cycle the machine—this is a safety feature in the form of a logical AND. The button may have an intrinsic failure rate—this becomes a fault stimulus that can be analyzed. When fault trees are labeled with actual numbers for failure probabilities, computer programs can calculate failure probabilities from fault trees. When a specific event is found to have more than one effect event, i.e. it has impact on several subsystems, it is called a common cause or common mode. Graphically speaking, it means this event will appear at several locations in the tree. Common causes introduce dependency relations between events. The probability computations of a tree which contains some common causes are much more complicated than regular trees where all events are considered as independent. Not all software tools available on the market provide such capability. The tree is usually written out using conventional logic gate symbols. A cut set is a combination of events, typically component failures, causing the top event. If no event can be removed from a cut set without failing to cause the top event, then it is called a minimal cut set. Some industries use both fault trees and event trees (see Probabilistic Risk Assessment). An event tree starts from an undesired initiator (loss of critical supply, component failure etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node on the tree is added with a split of probabilities of taking either branch. The probabilities of a range of 'top events' arising from the initial event can then be seen. Classic programs include the Electric Power Research Institute's (EPRI) CAFTA software, which is used by many of the US nuclear power plants and by a majority of US and international aerospace manufacturers, and the Idaho National Laboratory's SAPHIRE, which is used by the U.S. Government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station. Outside the US, the software RiskSpectrum is a popular tool for fault tree and event tree analysis, and is licensed for use at more than 60% of the world's nuclear power plants for probabilistic safety assessment. Professional-grade free software is also widely available; SCRAM is an open-source tool that implements the Open-PSA Model Exchange Format open standard for probabilistic safety assessment applications. ## Graphic symbols The basic symbols used in FTA are grouped as events, gates, and transfer symbols. Minor variations may be used in FTA software. ### Event symbols Event symbols are used for primary events and intermediate events. Primary events are not further developed on the fault tree. Intermediate events are found at the output of a gate. 
The event symbols are shown below: The primary event symbols are typically used as follows: - Basic event: failure or error in a system component or element (example: switch stuck in open position) - External event: normally expected to occur (not of itself a fault) - Undeveloped event: an event about which insufficient information is available, or which is of no consequence - Conditioning event: conditions that restrict or affect logic gates (example: mode of operation in effect) An intermediate event gate can be used immediately above a primary event to provide more room to type the event description. FTA is a top-to-bottom approach. ### Gate symbols Gate symbols describe the relationship between input and output events. The symbols are derived from Boolean logic symbols: The gates work as follows: - OR gate: the output occurs if any input occurs. - AND gate: the output occurs only if all inputs occur (inputs are independent from the source). - Exclusive OR gate: the output occurs if exactly one input occurs. - Priority AND gate: the output occurs if the inputs occur in a specific sequence specified by a conditioning event. - Inhibit gate: the output occurs if the input occurs under an enabling condition specified by a conditioning event. ### Transfer symbols Transfer symbols are used to connect the inputs and outputs of related fault trees, such as the fault tree of a subsystem to its system. NASA prepared a complete document about FTA through practical incidents. ## Basic mathematical foundation Events in a fault tree are associated with statistical probabilities or Poisson-Exponentially distributed constant rates. For example, component failures may typically occur at some constant failure rate λ (a constant hazard function). In this simplest case, failure probability depends on the rate λ and the exposure time t: $$ P = 1 - e^{- \lambda t} $$ where $$ P \approx \lambda t $$ if $$ \lambda t < 0.001 $$. A fault tree is often normalized to a given time interval, such as a flight hour or an average mission time. Event probabilities depend on the relationship of the event hazard function to this interval. Unlike conventional logic gate diagrams in which inputs and outputs hold the binary values of TRUE (1) or FALSE (0), the gates in a fault tree output probabilities related to the set operations of Boolean logic. The probability of a gate's output event depends on the input event probabilities. An AND gate represents a combination of independent events. That is, the probability of any input event to an AND gate is unaffected by any other input event to the same gate. In set theoretic terms, this is equivalent to the intersection of the input event sets, and the probability of the AND gate output is given by: P (A and B) = P (A ∩ B) = P(A) P(B) An OR gate, on the other hand, corresponds to set union: P (A or B) = P (A ∪ B) = P(A) + P(B) - P (A ∩ B) Since failure probabilities on fault trees tend to be small (less than 0.01), P (A ∩ B) usually becomes a very small error term, and the output of an OR gate may be conservatively approximated by using an assumption that the inputs are mutually exclusive events: P (A or B) ≈ P(A) + P(B), P (A ∩ B) ≈ 0 An exclusive OR gate with two inputs represents the probability that one or the other input, but not both, occurs: P (A xor B) = P(A) + P(B) - 2P (A ∩ B) Again, since P (A ∩ B) usually becomes a very small error term, the exclusive OR gate has limited value in a fault tree.
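The gate arithmetic above can be checked with a short numerical sketch. The following Python example is illustrative only: the two-button interlock and maintenance-error events, their failure rates, and the exposure time are hypothetical values, not taken from any standard or from the text above. It converts constant failure rates into event probabilities with $$ P = 1 - e^{-\lambda t} $$ and then combines them through AND, OR, and exclusive OR gates for independent inputs, as described above.

```python
import math

def failure_probability(lam: float, t: float) -> float:
    """Failure probability over exposure time t for a constant failure rate lam: P = 1 - exp(-lam*t)."""
    return 1.0 - math.exp(-lam * t)

def and_gate(*probs: float) -> float:
    """AND gate for independent inputs: product of the input probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs: float) -> float:
    """OR gate for independent inputs: 1 - product of (1 - p).
    For small probabilities this is close to the rare-event approximation sum(probs)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

def xor_gate(p_a: float, p_b: float) -> float:
    """Exclusive OR gate with two independent inputs: exactly one of the two events occurs."""
    return p_a + p_b - 2.0 * p_a * p_b

# Hypothetical fault tree: the top event occurs if both interlock buttons fail (AND gate)
# or if a maintenance error occurs (OR gate), over a 100-hour exposure interval.
t = 100.0                                    # exposure time in hours
p_button = failure_probability(1e-5, t)      # each button, assumed lambda = 1e-5 per hour
p_maint = failure_probability(2e-6, t)       # maintenance error, assumed lambda = 2e-6 per hour

p_both_buttons = and_gate(p_button, p_button)
p_top = or_gate(p_both_buttons, p_maint)

print(f"P(one button fails)      = {p_button:.6e}")
print(f"P(both buttons fail)     = {p_both_buttons:.6e}")
print(f"P(top event), exact OR   = {p_top:.6e}")
print(f"rare-event approximation = {p_both_buttons + p_maint:.6e}")
print(f"P(exactly one button)    = {xor_gate(p_button, p_button):.6e}")
```

With probabilities this small, the exact OR-gate output and the rare-event approximation agree to well within a fraction of a percent, which is why the mutually exclusive approximation described above is usually acceptable in practice.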
Quite often, Poisson-Exponentially distributed rates are used to quantify a fault tree instead of probabilities. Rates are often modeled as constant in time while probability is a function of time. Poisson-Exponential events are modelled as infinitely short so no two events can overlap. An OR gate is the superposition (addition of rates) of the two input failure frequencies or failure rates which are modeled as Poisson point processes. The output of an AND gate is calculated using the unavailability (Q1) of one event thinning the Poisson point process of the other event (λ2). The unavailability (Q2) of the other event then thins the Poisson point process of the first event (λ1). The two resulting Poisson point processes are superimposed according to the following equations. The output of an AND gate is the combination of independent input events 1 and 2 to the AND gate: $$ \text{Failure Frequency} = \lambda_1 Q_2 + \lambda_2 Q_1, \qquad Q = 1 - e^{-\lambda t} \approx \lambda t \text{ if } \lambda t < 0.001, $$ so that $$ \text{Failure Frequency} \approx \lambda_1 \lambda_2 t_2 + \lambda_2 \lambda_1 t_1 \text{ if } \lambda_1 t_1 < 0.001 \text{ and } \lambda_2 t_2 < 0.001. $$ In a fault tree, unavailability (Q) may be defined as the unavailability of safe operation and may not refer to the unavailability of the system operation depending on how the fault tree was structured. The input terms to the fault tree must be carefully defined. ## Analysis Many different approaches can be used to model an FTA, but the most common and popular way can be summarized in a few steps. A single fault tree is used to analyze one and only one undesired event, which may be subsequently fed into another fault tree as a basic event. Though the nature of the undesired event may vary dramatically, an FTA follows the same procedure for any undesired event, be it a delay of 0.25 ms for the generation of electrical power, an undetected cargo bay fire, or the random, unintended launch of an ICBM. FTA analysis involves five steps: 1. Define the undesired event to study. Definition of the undesired event can be very hard to uncover, although some of the events are very easy and obvious to observe. An engineer with a wide knowledge of the design of the system is the best person to help define and number the undesired events. Undesired events are then used to make FTAs. Each FTA is limited to one undesired event. 2. Obtain an understanding of the system. Once the undesired event is selected, all causes with probabilities of affecting the undesired event of 0 or more are studied and analyzed. Getting exact numbers for the probabilities leading to the event is usually impossible because it may be very costly and time-consuming to do so. Computer software is used to study probabilities; this may lead to less costly system analysis. System analysts can help with understanding the overall system. System designers have full knowledge of the system, and this knowledge is very important for not missing any cause affecting the undesired event. For the selected event, all causes are then numbered and sequenced in the order of occurrence and then are used for the next step, which is drawing or constructing the fault tree. 3. Construct the fault tree. After selecting the undesired event and having analyzed the system so that we know all the causing effects (and if possible their probabilities), we can now construct the fault tree. The fault tree is based on AND and OR gates, which define its major characteristics. 4. Evaluate the fault tree.
After the fault tree has been assembled for a specific undesired event, it is evaluated and analyzed for any possible improvement; in other words, the risks are studied and ways to improve the system are sought. A wide range of qualitative and quantitative analysis methods can be applied. This step serves as an introduction to the final step, which is to control the hazards identified. In short, in this step we identify all possible hazards affecting the system in a direct or indirect way. 5. Control the hazards identified. This step is very specific and differs largely from one system to another, but the main point will always be that after identifying the hazards, all possible methods are pursued to decrease the probability of occurrence. ## Comparison with other analytical methods FTA is a deductive, top-down method aimed at analyzing the effects of initiating faults and events on a complex system. This contrasts with failure mode and effects analysis (FMEA), which is an inductive, bottom-up analysis method aimed at analyzing the effects of single component or function failures on equipment or subsystems. FTA is very good at showing how resistant a system is to single or multiple initiating faults. It is not good at finding all possible initiating faults. FMEA is good at exhaustively cataloging initiating faults, and identifying their local effects. It is not good at examining multiple failures or their effects at a system level. FTA considers external events, FMEA does not. In civil aerospace the usual practice is to perform both FTA and FMEA, with a failure mode effects summary (FMES) as the interface between FMEA and FTA. Alternatives to FTA include the dependence diagram (DD), also known as a reliability block diagram (RBD), and Markov analysis. A dependence diagram is equivalent to a success tree analysis (STA), the logical inverse of an FTA, and depicts the system using paths instead of gates. DD and STA produce probability of success (i.e., avoiding a top event) rather than probability of a top event.
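As a numerical illustration of the rate-based AND-gate combination given in the Basic mathematical foundation section above ($$ \text{Failure Frequency} = \lambda_1 Q_2 + \lambda_2 Q_1 $$ with $$ Q = 1 - e^{-\lambda t} \approx \lambda t $$ when λt is small), the following Python sketch uses hypothetical failure rates and exposure times; the specific numbers are assumptions chosen only to show that the exact and approximate forms agree in the small-λt regime.

```python
import math

def unavailability(lam: float, t: float) -> float:
    """Q = 1 - exp(-lam * t): probability the component is in the failed state over exposure time t."""
    return 1.0 - math.exp(-lam * t)

def and_gate_failure_frequency(lam1: float, t1: float, lam2: float, t2: float) -> float:
    """Failure frequency of an AND gate with two independent rate-based inputs:
    lambda_1 * Q_2 + lambda_2 * Q_1 (each Poisson process thinned by the other event's unavailability)."""
    q1 = unavailability(lam1, t1)
    q2 = unavailability(lam2, t2)
    return lam1 * q2 + lam2 * q1

# Hypothetical inputs: constant failure rates (per hour) and exposure times (hours).
lam1, t1 = 1e-4, 10.0
lam2, t2 = 5e-5, 20.0

exact = and_gate_failure_frequency(lam1, t1, lam2, t2)
approx = lam1 * lam2 * t2 + lam2 * lam1 * t1   # valid when lambda * t < 0.001

print(f"exact AND-gate failure frequency  = {exact:.6e} per hour")
print(f"approx AND-gate failure frequency = {approx:.6e} per hour")
```

Both forms give about 1.5e-7 failures per hour with these values, and the approximation error grows as λt approaches and exceeds 0.001.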
https://en.wikipedia.org/wiki/Fault_tree_analysis
Introduction to Solid State Physics, known colloquially as Kittel, is a classic condensed matter physics textbook written by American physicist Charles Kittel in 1953. The book has been highly influential and has seen widespread adoption; Marvin L. Cohen remarked in 2019 that Kittel's content choices in the original edition played a large role in defining the field of solid-state physics. It was also the first proper textbook covering this new field of physics. The book is published by John Wiley and Sons and, as of 2018, it is in its ninth edition and has been reprinted many times as well as translated into over a dozen languages, including Chinese, French, German, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Romanian, Russian, Spanish, and Turkish. In some later editions, the eighteenth chapter, titled Nanostructures, was written by Paul McEuen. Along with its competitor Ashcroft and Mermin, the book is considered a standard textbook in condensed matter physics. ## Background Kittel received his PhD from the University of Wisconsin–Madison in 1941 under his advisor Gregory Breit. Before being promoted to professor of physics at UC Berkeley in 1951, Kittel held several other positions. He worked for the Naval Ordnance Laboratory from 1940 to 1942, was a research physicist in the US Navy until 1945, worked at the Research Laboratory of Electronics at MIT from 1945 to 1947 and at Bell Labs from 1947 to 1951, and was a visiting associate professor at UC Berkeley from 1950 until his promotion. Henry Ehrenreich has noted that before the first edition of Introduction to Solid State Physics came out in 1953, there were no other textbooks on the subject: the young field was defined by only a few prominent articles and treatises that, in Ehrenreich's view, expounded rather than explained the topics and were not suitable as textbooks. ## Content The book covers a wide range of topics in solid state physics, including Bloch's theorem, crystals, magnetism, phonons, Fermi gases, magnetic resonance, and surface physics. The chapters are broken into sections that highlight the topics. Table of contents (8th ed.):

| Chapter | Title | Topics |
| --- | --- | --- |
| 1 | Crystal Structure | crystal structure |
| 2 | Wave Diffraction and the Reciprocal Lattice | diffraction, Bragg law, Fourier analysis, reciprocal lattice vectors, Laue equations, Brillouin zone, atomic form factor |
| 3 | Crystal Binding and Elastic Constants | Van der Waals force, ionic crystals, covalent crystals, metals |
| 4 | Phonons I. Crystal Vibrations | phonons |
| 5 | Phonons II. Thermal Properties | phonons |
| 6 | Free Electron Fermi Gas | Fermi gas, free electron model |
| 7 | Energy Bands | nearly free electron model, Bloch's theorem, Kronig-Penney model, crystal momentum |
| 8 | Semiconductor Crystals | band gap, electron holes, semimetals, superlattices |
| 9 | Fermi Surfaces and Metals | Fermi surfaces |
| 10 | Superconductivity | superconductivity, BCS theory, superconductors |
| 11 | Diamagnetism and Paramagnetism | diamagnetism and paramagnetism |
| 12 | Ferromagnetism and Antiferromagnetism | ferromagnetism and antiferromagnetism |
| 13 | Magnetic Resonance | magnetic resonance |
| 14 | Plasmons, Polaritons, and Polarons | plasmons, polaritons, polarons |
| 15 | Optical Processes and Excitons | excitons, Kramers-Kronig relations |
| 16 | Dielectrics and Ferroelectrics | Maxwell equations in matter |
| 17 | Surface and Interface Physics | surface physics |
| 18 | Nanostructures (by Paul McEuen) | electron microscopy, optical microscopy |
| 19 | Noncrystalline Solids | glasses |
| 20 | Point Defects | lattice defects |
| 21 | Dislocations | shear strength of crystals, dislocations, hardness of materials |
| 22 | Alloys | Hume-Rothery rules, electrical conductivity, Kondo effect |

## Reception Marvin L. Cohen and Morrel H. Cohen, in an obituary for Kittel in 2019, remarked that the original book "was not only the dominant text for teaching in the field, it was on the bookshelf of researchers in academia and industry throughout the world", though they did not provide any time frame on when it may have been surpassed as the dominant text. They also noted that Kittel's content choices played a large role in defining the field of solid-state physics. The book is a classic textbook in the subject and has seen use as a comparative benchmark in the reviews of other books in condensed matter physics. In a 1969 review of another book, Robert G. Chambers noted that there were not many textbooks covering these topics, as "since 1953, Kittel's classic Introduction to Solid State Physics has dominated the field so effectively that few competitors have appeared", noting that the third edition continues that legacy. The reviewer also noted that the book was too long for some uses and that less thorough works would be welcome. - Several notable reviews of the first edition were published in 1954, including ones by Arthur James Cochran Wilson, Leslie Fleetwood Bates, and Kenneth Standley. - Gwyn Owain Jones reviewed the book in 1955. - The second edition of the book was reviewed by Robert W. Hellwarth in 1957 and by Leslie Fleetwood Bates, among others. - The third edition of the book also received reviews, including one by Donald F. Holcomb. - A German translation of the book has also received several reviews. ## Publication history ### Original editions ### Reprints ### Foreign translations

| Language | Translators | Year | Publisher | Location |
| --- | --- | --- | --- | --- |
| Spanish | | 1965 | Reverte | Barcelona |
| Hungarian | Ferenc Kedves | 1966 | Műszaki Kiadó | Budapest |
| Arabic | Mahmūd Mukhtār | 1968 | Maktabat al-Nahdah al-Misriyah nushir haḍa al-kitạb maʻah muassasat Frankilīn; Mūassasat Frankilīn | al-Qāhirah; al-Qāhirah; New York |
| Japanese | 良清 宇野 | 1968 | 丸善 | Tokyo |
| Romanian | Anatolie Hristev; Cornelia C. Rusu | 1972 | Editura Tehnică | Bucureşti |
| French | Alain Honnart | 1972 | Dunod | Paris |
| Japanese | | 1974 | 丸善 | Tokyo |
| Russian | A. A Gusev | 1978 | "Nauka", Glav. red. fiziko-matematicheskoĭ lit-ry | Moskva |
| Japanese | Ryōsei Uno | 1978 | 丸善 | Tōkyō |
| Modern Greek | C. Papageorgopoulos | 1976 | G. Pneumatikou | Athēna |
| Chinese | 翁上林 | 1979 | 臺北市徐氏基金會 | 臺北市 |
| Spanish | Serna Alcaraz, J., Serna Alcaraz, C. R., Piqueras de Noriega, J. | 1981 | Reverté | Barcelona |
| Czech | Miloš Matyáš | 1985 | Academia | Praha |
| Japanese | Uno, Ryōsei., 宇野, 良清 | 1988 | 丸善 | |
| Malay | Khiruddin Abdullah; Karsono Ahmad Dasuki | 1995 | Penerbit Universiti Sains Malaysia | Pulau Pinang |
| Korean | U, Chong-ch'ŏn., 우종천. | 1997 | Pŏmhan Sŏjok Chusik Hoesa | Sŏul |
| Japanese | Ryōsei Uno, 良清 宇野 | 1998 | Maruzen | Tōkyō |
| Polish | Wiesława Korczak | 1999 | Wydawnictwo Naukowe PWN | Warszawa |
| Japanese | Uno, Ryōsei., Tsuya, Noboru., Niizeki, Komajirō., 宇野, 良清, 津屋, 昇 | 2005 | Maruzen | Tōkyō |
| German | Siegfried Hunklinger | 2005 | München Wien Oldenbourg | München |
| Chinese | Hong, Lian-hui (wu li xue), Liu, Li-ji., Wei, Rong-jun., Kittel, Charles, 1916-, 洪連輝 (物理學), 劉立基. | 2006 | Gao li | Tai bei xian wu gu xiang |
| French | Paul McEuen | 2007 | Dunod | Paris |
| Italian | Ennio Bonetti; Carlo Bottani; Franco Ciccacci | 2008 | C.E.A. | Milano |
| Turkish | Gülsen Önengüt; Demir Önengüt; H İbrahim Somyürek | 2014 | Palme Yayıncılık | Ankara |
| Spanish | J. Anguilar Peris; J. de la Rubia Pacheco | 2018 | Reverté | Barcelona |
| Korean | Ujongcheon., Sin, Seongcheol., Yu, Geonho., I, Seongik., I, Jaeil. | 2019 | Tekseuteu Bukseu | Seoul |
| Polish | Wiesława Korczak; Tadeusz Skośkiewicz; Andrzej Wiśniewski; Wydawnictwo Naukowe PWN | 2012 | Wydawnictwo Naukowe PWN | Warszawa |
https://en.wikipedia.org/wiki/Introduction_to_Solid_State_Physics
In probability theory, the expected value (also called expectation, expectancy, expectation operator, mathematical expectation, mean, expectation value, or first moment) is a generalization of the weighted average. Informally, the expected value is the mean of the possible values a random variable can take, weighted by the probability of those outcomes. Since it is obtained through arithmetic, the expected value sometimes may not even be included in the sample data set; it is not the value you would expect to get in reality. The expected value of a random variable with a finite number of outcomes is a weighted average of all possible outcomes. In the case of a continuum of possible outcomes, the expectation is defined by integration. In the axiomatic foundation for probability provided by measure theory, the expectation is given by Lebesgue integration. The expected value of a random variable is often denoted by , , or , with also often stylized as $$ \mathbb{E} $$ or . ## History The idea of the expected value originated in the middle of the 17th century from the study of the so-called problem of points, which seeks to divide the stakes in a fair way between two players, who have to end their game before it is properly finished. This problem had been debated for centuries. Many conflicting proposals and solutions had been suggested over the years when it was posed to Blaise Pascal by French writer and amateur mathematician Chevalier de Méré in 1654. Méré claimed that this problem could not be solved and that it showed just how flawed mathematics was when it came to its application to the real world. Pascal, being a mathematician, was provoked and determined to solve the problem once and for all. He began to discuss the problem in the famous series of letters to Pierre de Fermat. Soon enough, they both independently came up with a solution. They solved the problem in different computational ways, but their results were identical because their computations were based on the same fundamental principle. The principle is that the value of a future gain should be directly proportional to the chance of getting it. This principle seemed to have come naturally to both of them. They were very pleased by the fact that they had found essentially the same solution, and this in turn made them absolutely convinced that they had solved the problem conclusively; however, they did not publish their findings. They only informed a small circle of mutual scientific friends in Paris about it. In Dutch mathematician Christiaan Huygens' book, he considered the problem of points, and presented a solution based on the same principle as the solutions of Pascal and Fermat. Huygens published his treatise in 1657, (see Huygens (1657)) "De ratiociniis in ludo aleæ" on probability theory just after visiting Paris. The book extended the concept of expectation by adding rules for how to calculate expectations in more complicated situations than the original problem (e.g., for three or more players), and can be seen as the first successful attempt at laying down the foundations of the theory of probability. In the foreword to his treatise, Huygens wrote: In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the expectations of random variables. ### Etymology Neither Pascal nor Huygens used the term "expectation" in its modern sense. 
In particular, Huygens writes: More than a hundred years later, in 1814, Pierre-Simon Laplace published his tract "Théorie analytique des probabilités", where the concept of expected value was defined explicitly: ## Notations The use of the letter to denote "expected value" goes back to W. A. Whitworth in 1901. The symbol has since become popular for English writers. In German, stands for Erwartungswert, in Spanish for esperanza matemática, and in French for espérance mathématique. When "E" is used to denote "expected value", authors use a variety of stylizations: the expectation operator can be stylized as (upright), (italic), or $$ \mathbb{E} $$ (in blackboard bold), while a variety of bracket notations (such as , , and ) are all used. Another popular notation is . , , and $$ \overline{X} $$ are commonly used in physics. is used in Russian-language literature. ## Definition As discussed above, there are several context-dependent ways of defining the expected value. The simplest and original definition deals with the case of finitely many possible outcomes, such as in the flip of a coin. With the theory of infinite series, this can be extended to the case of countably many possible outcomes. It is also very common to consider the distinct case of random variables dictated by (piecewise-)continuous probability density functions, as these arise in many natural contexts. All of these specific definitions may be viewed as special cases of the general definition based upon the mathematical tools of measure theory and Lebesgue integration, which provide these different contexts with an axiomatic foundation and common language. Any definition of expected value may be extended to define an expected value of a multidimensional random variable, i.e. a random vector . It is defined component by component, as . Similarly, one may define the expected value of a random matrix with components by . ### Random variables with finitely many outcomes Consider a random variable with a finite list of possible outcomes, each of which (respectively) has probability of occurring. The expectation of is defined as $$ \operatorname{E}[X] =x_1p_1 + x_2p_2 + \cdots + x_kp_k. $$ Since the probabilities must satisfy , it is natural to interpret as a weighted average of the values, with weights given by their probabilities . In the special case that all possible outcomes are equiprobable (that is, ), the weighted average is given by the standard average. In the general case, the expected value takes into account the fact that some outcomes are more likely than others. #### #### Examples - Let $$ X $$ represent the outcome of a roll of a fair six-sided die. More specifically, $$ X $$ will be the number of pips showing on the top face of the die after the toss. The possible values for $$ X $$ are 1, 2, 3, 4, 5, and 6, all of which are equally likely with a probability of . The expectation of $$ X $$ is $$ \operatorname{E}[X] = 1 \cdot \frac{1}{6} + 2 \cdot \frac{1}{6} + 3\cdot\frac{1}{6} + 4\cdot\frac{1}{6} + 5\cdot\frac{1}{6} + 6\cdot\frac{1}{6} = 3.5. $$ If one rolls the die $$ n $$ times and computes the average (arithmetic mean) of the results, then as $$ n $$ grows, the average will almost surely converge to the expected value, a fact known as the strong law of large numbers. - The roulette game consists of a small ball and a wheel with 38 numbered pockets around the edge. As the wheel is spun, the ball bounces around randomly until it settles down in one of the pockets. 
Suppose random variable $$ X $$ represents the (monetary) outcome of a $1 bet on a single number ("straight up" bet). If the bet wins (which happens with probability in American roulette), the payoff is $35; otherwise the player loses the bet. The expected profit from such a bet will be $$ \operatorname{E}[\,\text{gain from }\$1\text{ bet}\,] = -\$1 \cdot \frac{37}{38} + \$35 \cdot \frac{1}{38} = -\$\frac{1}{19}. $$ That is, the expected value to be won from a $1 bet is −$. Thus, in 190 bets, the net loss will probably be about $10. ### Random variables with countably infinitely many outcomes Informally, the expectation of a random variable with a countably infinite set of possible outcomes is defined analogously as the weighted average of all possible outcomes, where the weights are given by the probabilities of realizing each given value. This is to say that $$ \operatorname{E}[X] = \sum_{i=1}^\infty x_i\, p_i, $$ where are the possible outcomes of the random variable and are their corresponding probabilities. In many non-mathematical textbooks, this is presented as the full definition of expected values in this context. However, there are some subtleties with infinite summation, so the above formula is not suitable as a mathematical definition. In particular, the Riemann series theorem of mathematical analysis illustrates that the value of certain infinite sums involving positive and negative summands depends on the order in which the summands are given. Since the outcomes of a random variable have no naturally given order, this creates a difficulty in defining expected value precisely. For this reason, many mathematical textbooks only consider the case that the infinite sum given above converges absolutely, which implies that the infinite sum is a finite number independent of the ordering of summands. In the alternative case that the infinite sum does not converge absolutely, one says the random variable does not have finite expectation. Examples - Suppose $$ x_i = i $$ and $$ p_i = \tfrac{c}{i \cdot 2^i} $$ for $$ i = 1, 2, 3, \ldots, $$ where $$ c = \tfrac{1}{\ln 2} $$ is the scaling factor which makes the probabilities sum to 1. Then we have $$ \operatorname{E}[X] \,= \sum_i x_i p_i = 1(\tfrac{c}{2}) + 2(\tfrac{c}{8}) + 3 (\tfrac{c}{24}) + \cdots \,= \, \tfrac{c}{2} + \tfrac{c}{4} + \tfrac{c}{8} + \cdots \,=\, c \,=\, \tfrac{1}{\ln 2}. $$ ### Random variables with density Now consider a random variable which has a probability density function given by a function on the real number line. This means that the probability of taking on a value in any given open interval is given by the integral of over that interval. The expectation of is then given by the integral $$ \operatorname{E}[X] = \int_{-\infty}^\infty x f(x)\, dx. $$ A general and mathematically precise formulation of this definition uses measure theory and Lebesgue integration, and the corresponding theory of absolutely continuous random variables is described in the next section. The density functions of many common distributions are piecewise continuous, and as such the theory is often developed in this restricted setting. For such functions, it is sufficient to only consider the standard Riemann integration. Sometimes continuous random variables are defined as those corresponding to this special class of densities, although the term is used differently by various authors. Analogously to the countably-infinite case above, there are subtleties with this expression due to the infinite region of integration. 
Such subtleties can be seen concretely if the distribution of is given by the Cauchy distribution , so that . It is straightforward to compute in this case that $$ \int_a^b xf(x)\,dx=\int_a^b \frac{x}{x^2+\pi^2}\,dx=\frac{1}{2}\ln\frac{b^2+\pi^2}{a^2+\pi^2}. $$ The limit of this expression as and does not exist: if the limits are taken so that , then the limit is zero, while if the constraint is taken, then the limit is . To avoid such ambiguities, in mathematical textbooks it is common to require that the given integral converges absolutely, with left undefined otherwise. However, measure-theoretic notions as given below can be used to give a systematic definition of for more general random variables . ### Arbitrary real-valued random variables All definitions of the expected value may be expressed in the language of measure theory. In general, if is a real-valued random variable defined on a probability space , then the expected value of , denoted by , is defined as the Lebesgue integral $$ \operatorname{E} [X] = \int_\Omega X\,d\operatorname{P}. $$ Despite the newly abstract situation, this definition is extremely similar in nature to the very simplest definition of expected values, given above, as certain weighted averages. This is because, in measure theory, the value of the Lebesgue integral of is defined via weighted averages of approximations of which take on finitely many values. Moreover, if given a random variable with finitely or countably many possible values, the Lebesgue theory of expectation is identical to the summation formulas given above. However, the Lebesgue theory clarifies the scope of the theory of probability density functions. A random variable is said to be absolutely continuous if any of the following conditions are satisfied: - there is a nonnegative measurable function on the real line such that $$ \operatorname{P}(X \in A) = \int_A f(x) \, dx, $$ for any Borel set , in which the integral is Lebesgue. - the cumulative distribution function of is absolutely continuous. - for any Borel set of real numbers with Lebesgue measure equal to zero, the probability of being valued in is also equal to zero - for any positive number there is a positive number such that: if is a Borel set with Lebesgue measure less than , then the probability of being valued in is less than . These conditions are all equivalent, although this is nontrivial to establish. In this definition, is called the probability density function of (relative to Lebesgue measure). According to the change-of-variables formula for Lebesgue integration, combined with the law of the unconscious statistician, it follows that $$ \operatorname{E}[X] \equiv \int_\Omega X\,d\operatorname{P} = \int_\Reals x f(x)\, dx $$ for any absolutely continuous random variable . The above discussion of continuous random variables is thus a special case of the general Lebesgue theory, due to the fact that every piecewise-continuous function is measurable. The expected value of any real-valued random variable $$ X $$ can also be defined on the graph of its cumulative distribution function $$ F $$ by a nearby equality of areas. In fact, $$ \operatorname{E}[X] = \mu $$ with a real number $$ \mu $$ if and only if the two surfaces in the $$ x $$ - $$ y $$ -plane, described by $$ x \le \mu, \;\, 0\le y \le F(x) \quad\text{or}\quad x \ge \mu, \;\, F(x) \le y \le 1 $$ respectively, have the same finite area, i.e. 
if $$ \int_{-\infty}^\mu F(x)\,dx = \int_\mu^\infty \big(1 - F(x)\big)\,dx $$ and both improper Riemann integrals converge. Finally, this is equivalent to the representation $$ \operatorname{E}[X] = \int_0^\infty \bigl(1 - F(x)\bigr) \, dx - \int_{-\infty}^0 F(x) \, dx, $$ also with convergent integrals. ### Infinite expected values Expected values as defined above are automatically finite numbers. However, in many cases it is fundamental to be able to consider expected values of . This is intuitive, for example, in the case of the St. Petersburg paradox, in which one considers a random variable with possible outcomes , with associated probabilities , for ranging over all positive integers. According to the summation formula in the case of random variables with countably many outcomes, one has $$ \operatorname{E}[X]= \sum_{i=1}^\infty x_i\,p_i = 2\cdot \frac{1}{2}+4\cdot\frac{1}{4} + 8\cdot\frac{1}{8}+ 16\cdot\frac{1}{16}+ \cdots = 1 + 1 + 1 + 1 + \cdots. $$ It is natural to say that the expected value equals . There is a rigorous mathematical theory underlying such ideas, which is often taken as part of the definition of the Lebesgue integral. The first fundamental observation is that, whichever of the above definitions are followed, any nonnegative random variable whatsoever can be given an unambiguous expected value; whenever absolute convergence fails, then the expected value can be defined as . The second fundamental observation is that any random variable can be written as the difference of two nonnegative random variables. Given a random variable , one defines the positive and negative parts by and . These are nonnegative random variables, and it can be directly checked that . Since and are both then defined as either nonnegative numbers or , it is then natural to define: $$ \operatorname{E}[X] = \begin{cases} \operatorname{E}[X^+] - \operatorname{E}[X^-] & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] < \infty;\\ +\infty & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] < \infty;\\ -\infty & \text{if } \operatorname{E}[X^+] < \infty \text{ and } \operatorname{E}[X^-] = \infty;\\ \text{undefined} & \text{if } \operatorname{E}[X^+] = \infty \text{ and } \operatorname{E}[X^-] = \infty. \end{cases} $$ According to this definition, exists and is finite if and only if and are both finite. Due to the formula , this is the case if and only if is finite, and this is equivalent to the absolute convergence conditions in the definitions above. As such, the present considerations do not define finite expected values in any cases not previously considered; they are only useful for infinite expectations. - In the case of the St. Petersburg paradox, one has and so as desired. - Suppose the random variable takes values with respective probabilities . Then it follows that takes value with probability for each positive integer , and takes value with remaining probability. Similarly, takes value with probability for each positive integer and takes value with remaining probability. Using the definition for non-negative random variables, one can show that both and (see Harmonic series). Hence, in this case the expectation of is undefined. - Similarly, the Cauchy distribution, as discussed above, has undefined expectation. ## Expected values of common distributions The following table gives the expected values of some commonly occurring probability distributions. 
The third column gives the expected values both in the form immediately given by the definition, as well as in the simplified form obtained by computation therefrom. The details of these computations, which are not always straightforward, can be found in the indicated references.

| Distribution | Notation | Mean E(X) |
| --- | --- | --- |
| Bernoulli | $$ X \sim b(1,p) $$ | $$ 0\cdot(1-p) + 1\cdot p = p $$ |
| Binomial | $$ X \sim B(n,p) $$ | $$ \sum_{i=0}^n i\binom{n}{i}p^i(1-p)^{n-i} = np $$ |
| Poisson | $$ X \sim \mathrm{Pois}(\lambda) $$ | $$ \sum_{i=0}^\infty i\,\frac{e^{-\lambda}\lambda^i}{i!} = \lambda $$ |
| Geometric | $$ X \sim \mathrm{Geometric}(p) $$ | $$ \sum_{i=1}^\infty i\,p(1-p)^{i-1} = \frac{1}{p} $$ |
| Uniform | $$ X \sim U(a,b) $$ | $$ \int_a^b \frac{x}{b-a}\,dx = \frac{a+b}{2} $$ |
| Exponential | $$ X \sim \exp(\lambda) $$ | $$ \int_0^\infty \lambda x e^{-\lambda x}\,dx = \frac{1}{\lambda} $$ |
| Normal | $$ X \sim N(\mu,\sigma^2) $$ | $$ \frac{1}{\sqrt{2\pi\sigma^2}}\int_{-\infty}^\infty x\,e^{-(x-\mu)^2/(2\sigma^2)}\,dx = \mu $$ |
| Standard Normal | $$ X \sim N(0,1) $$ | $$ \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty x\,e^{-x^2/2}\,dx = 0 $$ |
| Pareto | $$ X \sim \mathrm{Par}(\alpha,k) $$ | $$ \int_k^\infty x\,\frac{\alpha k^\alpha}{x^{\alpha+1}}\,dx = \frac{\alpha k}{\alpha-1} \text{ if } \alpha>1,\ \infty \text{ if } 0<\alpha\le 1 $$ |
| Cauchy | $$ X \sim \mathrm{Cauchy}(x_0,\gamma) $$ | is undefined |

## Properties

The basic properties below (and their names in bold) replicate or follow immediately from those of the Lebesgue integral. Note that the letters "a.s." stand for "almost surely"—a central property of the Lebesgue integral. Basically, one says that an inequality like $$ X \geq 0 $$ is true almost surely, when the probability measure attributes zero-mass to the complementary event $$ \left\{ X < 0 \right\}. $$
- Non-negativity: If $$ X \geq 0 $$ (a.s.), then $$ \operatorname{E}[X] \geq 0. $$
- Linearity of expectation: The expected value operator (or expectation operator) $$ \operatorname{E}[\cdot] $$ is linear in the sense that, for any random variables $$ X $$ and $$ Y, $$ and a constant $$ a, $$ $$ \begin{align} \operatorname{E}[X + Y] &= \operatorname{E}[X] + \operatorname{E}[Y], \\ \operatorname{E}[aX] &= a \operatorname{E}[X], \end{align} $$ whenever the right-hand side is well-defined. By induction, this means that the expected value of the sum of any finite number of random variables is the sum of the expected values of the individual random variables, and the expected value scales linearly with a multiplicative constant. Symbolically, for $$ N $$ random variables $$ X_{i} $$ and constants $$ a_{i} (1\leq i \leq N), $$ we have $$ \operatorname{E}\left[\sum_{i=1}^{N}a_{i}X_{i}\right] = \sum_{i=1}^{N}a_{i}\operatorname{E}[X_{i}]. $$ If we think of the set of random variables with finite expected value as forming a vector space, then the linearity of expectation implies that the expected value is a linear form on this vector space.
- Monotonicity: If $$ X\leq Y $$ (a.s.), and both $$ \operatorname{E}[X] $$ and $$ \operatorname{E}[Y] $$ exist, then $$ \operatorname{E}[X]\leq\operatorname{E}[Y]. $$ Proof follows from the linearity and the non-negativity property for $$ Z=Y-X, $$ since $$ Z\geq 0 $$ (a.s.).
- Non-degeneracy: If $$ \operatorname{E}[|X|]=0, $$ then $$ X=0 $$ (a.s.).
- If $$ X = Y $$ (a.s.), then $$ \operatorname{E}[X] = \operatorname{E}[Y]. $$ In other words, if X and Y are random variables that take different values with probability zero, then the expectation of X will equal the expectation of Y.
- If $$ X = c $$ (a.s.) for some real number $$ c, $$ then $$ \operatorname{E}[X] = c. $$ In particular, for a random variable $$ X $$ with well-defined expectation, $$ \operatorname{E}[\operatorname{E}[X]] = \operatorname{E}[X]. $$ A well-defined expectation implies that there is one number, or rather, one constant that defines the expected value. Thus it follows that the expectation of this constant is just the original expected value.
- As a consequence of the formula $$ |X| = X^+ + X^- $$ as discussed above, together with the triangle inequality, it follows that for any random variable $$ X $$ with well-defined expectation, one has $$ |\operatorname{E}[X]| \leq \operatorname{E}|X|. $$
- Let $$ \mathbf{1}_A $$ denote the indicator function of an event $$ A $$ ; then $$ \operatorname{E}[\mathbf{1}_A] $$ is given by the probability of $$ A $$ . This is nothing but a different way of stating the expectation of a Bernoulli random variable, as calculated in the table above.
- Formulas in terms of CDF: If $$ F(x) $$ is the cumulative distribution function of a random variable , then $$ \operatorname{E}[X] = \int_{-\infty}^\infty x\,dF(x), $$ where the values on both sides are well defined or not well defined simultaneously, and the integral is taken in the sense of Lebesgue-Stieltjes. As a consequence of integration by parts as applied to this representation of , it can be proved that $$ \operatorname{E}[X] = \int_0^\infty (1-F(x))\,dx - \int^0_{-\infty} F(x)\,dx, $$ with the integrals taken in the sense of Lebesgue. As a special case, for any random variable valued in the nonnegative integers , one has $$ \operatorname{E}[X] = \sum _{n=0}^\infty \Pr(X>n), $$ where denotes the underlying probability measure. - Non-multiplicativity: In general, the expected value is not multiplicative, i.e. $$ \operatorname{E}[XY] $$ is not necessarily equal to $$ \operatorname{E}[X]\cdot \operatorname{E}[Y]. $$ If $$ X $$ and $$ Y $$ are independent, then one can show that $$ \operatorname{E}[XY]=\operatorname{E}[X] \operatorname{E}[Y]. $$ If the random variables are dependent, then generally $$ \operatorname{E}[XY] \neq \operatorname{E}[X] \operatorname{E}[Y], $$ although in special cases of dependency the equality may hold. - Law of the unconscious statistician: The expected value of a measurable function of $$ X, $$ $$ g(X), $$ given that $$ X $$ has a probability density function $$ f(x), $$ is given by the inner product of $$ f $$ and $$ g $$ : $$ \operatorname{E}[g(X)] = \int_{\R} g(x) f(x)\, dx . $$ This formula also holds in multidimensional case, when $$ g $$ is a function of several random variables, and $$ f $$ is their joint density. ### Inequalities Concentration inequalities control the likelihood of a random variable taking on large values. Markov's inequality is among the best-known and simplest to prove: for a nonnegative random variable and any positive number , it states that $$ \operatorname{P}(X\geq a)\leq\frac{\operatorname{E}[X]}{a}. $$ If is any random variable with finite expectation, then Markov's inequality may be applied to the random variable to obtain Chebyshev's inequality $$ \operatorname{P}(|X-\text{E}[X]|\geq a)\leq\frac{\operatorname{Var}[X]}{a^2}, $$ where is the variance. These inequalities are significant for their nearly complete lack of conditional assumptions. For example, for any random variable with finite expectation, the Chebyshev inequality implies that there is at least a 75% probability of an outcome being within two standard deviations of the expected value. However, in special cases the Markov and Chebyshev inequalities often give much weaker information than is otherwise available. For example, in the case of an unweighted dice, Chebyshev's inequality says that odds of rolling between 1 and 6 is at least 53%; in reality, the odds are of course 100%. The Kolmogorov inequality extends the Chebyshev inequality to the context of sums of random variables. The following three inequalities are of fundamental importance in the field of mathematical analysis and its applications to probability theory. - Jensen's inequality: Let be a convex function and a random variable with finite expectation. Then $$ f(\operatorname{E}(X)) \leq \operatorname{E} (f(X)). $$ Part of the assertion is that the negative part of has finite expectation, so that the right-hand side is well-defined (possibly infinite). 
Convexity of can be phrased as saying that the output of the weighted average of two inputs under-estimates the same weighted average of the two outputs; Jensen's inequality extends this to the setting of completely general weighted averages, as represented by the expectation. In the special case that for positive numbers , one obtains the Lyapunov inequality $$ \left(\operatorname{E}|X|^s\right)^{1/s} \leq \left(\operatorname{E}|X|^t\right)^{1/t}. $$ This can also be proved by the Hölder inequality. In measure theory, this is particularly notable for proving the inclusion of , in the special case of probability spaces. - Hölder's inequality: if and are numbers satisfying , then $$ \operatorname{E}|XY|\leq(\operatorname{E}|X|^p)^{1/p}(\operatorname{E}|Y|^q)^{1/q}. $$ for any random variables and . The special case of is called the Cauchy–Schwarz inequality, and is particularly well-known. - Minkowski inequality: given any number , for any random variables and with and both finite, it follows that is also finite and $$ \Bigl(\operatorname{E}|X+Y|^p\Bigr)^{1/p}\leq\Bigl(\operatorname{E}|X|^p\Bigr)^{1/p}+\Bigl(\operatorname{E}|Y|^p\Bigr)^{1/p}. $$ The Hölder and Minkowski inequalities can be extended to general measure spaces, and are often given in that context. By contrast, the Jensen inequality is special to the case of probability spaces. ### Expectations under convergence of random variables In general, it is not the case that $$ \operatorname{E}[X_n] \to \operatorname{E}[X] $$ even if $$ X_n\to X $$ pointwise. Thus, one cannot interchange limits and expectation, without additional conditions on the random variables. To see this, let $$ U $$ be a random variable distributed uniformly on $$ [0,1]. $$ For $$ n\geq 1, $$ define a sequence of random variables $$ X_n = n \cdot \mathbf{1}\left\{ U \in \left(0,\tfrac{1}{n}\right)\right\}, $$ with $$ \mathbf{1}\{A\} $$ being the indicator function of the event $$ A. $$ Then, it follows that $$ X_n \to 0 $$ pointwise. But, $$ \operatorname{E}[X_n] = n \cdot \Pr\left(U \in \left[ 0, \tfrac{1}{n}\right] \right) = n \cdot \tfrac{1}{n} = 1 $$ for each $$ n. $$ Hence, $$ \lim_{n \to \infty} \operatorname{E}[X_n] = 1 \neq 0 = \operatorname{E}\left[ \lim_{n \to \infty} X_n \right]. $$ Analogously, for general sequence of random variables $$ \{ Y_n : n \geq 0\}, $$ the expected value operator is not $$ \sigma $$ -additive, i.e. $$ \operatorname{E}\left[\sum^\infty_{n=0} Y_n\right] \neq \sum^\infty_{n=0}\operatorname{E}[Y_n]. $$ An example is easily obtained by setting $$ Y_0 = X_1 $$ and $$ Y_n = X_{n+1} - X_n $$ for $$ n \geq 1, $$ where $$ X_n $$ is as in the previous example. A number of convergence results specify exact conditions which allow one to interchange limits and expectations, as specified below. - Monotone convergence theorem: Let $$ \{X_n : n \geq 0\} $$ be a sequence of random variables, with $$ 0 \leq X_n \leq X_{n+1} $$ (a.s) for each $$ n \geq 0. $$ Furthermore, let $$ X_n \to X $$ pointwise. Then, the monotone convergence theorem states that $$ \lim_n\operatorname{E}[X_n]=\operatorname{E}[X]. $$ Using the monotone convergence theorem, one can show that expectation indeed satisfies countable additivity for non-negative random variables. In particular, let $$ \{X_i\}_{i=0}^\infty $$ be non-negative random variables. It follows from the monotone convergence theorem that $$ \operatorname{E}\left[\sum^\infty_{i=0}X_i\right] = \sum^\infty_{i=0}\operatorname{E}[X_i]. 
$$ - Fatou's lemma: Let $$ \{ X_n \geq 0 : n \geq 0\} $$ be a sequence of non-negative random variables. Fatou's lemma states that $$ \operatorname{E}[\liminf_n X_n] \leq \liminf_n \operatorname{E}[X_n]. $$ Corollary. Let $$ X_n \geq 0 $$ with $$ \operatorname{E}[X_n] \leq C $$ for all $$ n \geq 0. $$ If $$ X_n \to X $$ (a.s), then $$ \operatorname{E}[X] \leq C. $$ Proof is by observing that $$ X = \liminf_n X_n $$ (a.s.) and applying Fatou's lemma. - Dominated convergence theorem: Let $$ \{X_n : n \geq 0 \} $$ be a sequence of random variables. If $$ X_n\to X $$ pointwise (a.s.), $$ |X_n|\leq Y \leq +\infty $$ (a.s.), and $$ \operatorname{E}[Y]<\infty. $$ Then, according to the dominated convergence theorem, - $$ \operatorname{E}|X| \leq \operatorname{E}[Y] <\infty $$ ; - $$ \lim_n\operatorname{E}[X_n]=\operatorname{E}[X] $$ - $$ \lim_n\operatorname{E}|X_n - X| = 0. $$ - Uniform integrability: In some cases, the equality $$ \lim_n\operatorname{E}[X_n]=\operatorname{E}[\lim_n X_n] $$ holds when the sequence $$ \{X_n\} $$ is uniformly integrable. ### Relationship with characteristic function The probability density function $$ f_X $$ of a scalar random variable $$ X $$ is related to its characteristic function $$ \varphi_X $$ by the inversion formula: $$ f_X(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\varphi_X(t) \, dt. $$ For the expected value of $$ g(X) $$ (where $$ g:{\mathbb R}\to{\mathbb R} $$ is a Borel function), we can use this inversion formula to obtain $$ \operatorname{E}[g(X)] = \frac{1}{2\pi} \int_\Reals g(x) \left[ \int_\Reals e^{-itx}\varphi_X(t) \, dt \right] dx. $$ If $$ \operatorname{E}[g(X)] $$ is finite, changing the order of integration, we get, in accordance with Fubini–Tonelli theorem, $$ \operatorname{E}[g(X)] = \frac{1}{2\pi} \int_\Reals G(t) \varphi_X(t) \, dt, $$ where $$ G(t) = \int_\Reals g(x) e^{-itx} \, dx $$ is the Fourier transform of $$ g(x). $$ The expression for $$ \operatorname{E}[g(X)] $$ also follows directly from the Plancherel theorem. ## Uses and applications The expectation of a random variable plays an important role in a variety of contexts. In statistics, where one seeks estimates for unknown parameters based on available data gained from samples, the sample mean serves as an estimate for the expectation, and is itself a random variable. In such settings, the sample mean is considered to meet the desirable criterion for a "good" estimator in being unbiased; that is, the expected value of the estimate is equal to the true value of the underlying parameter. For a different example, in decision theory, an agent making an optimal choice in the context of incomplete information is often assumed to maximize the expected value of their utility function. It is possible to construct an expected value equal to the probability of an event by taking the expectation of an indicator function that is one if the event has occurred and zero otherwise. This relationship can be used to translate properties of expected values into properties of probabilities, e.g. using the law of large numbers to justify estimating probabilities by frequencies. The expected values of the powers of X are called the moments of X; the moments about the mean of X are expected values of powers of . The moments of some random variables can be used to specify their distributions, via their moment generating functions. To empirically estimate the expected value of a random variable, one repeatedly measures observations of the variable and computes the arithmetic mean of the results. 
If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate). The law of large numbers demonstrates (under fairly mild conditions) that, as the size of the sample gets larger, the variance of this estimate gets smaller. This property is often exploited in a wide variety of applications, including general problems of statistical estimation and machine learning, to estimate (probabilistic) quantities of interest via Monte Carlo methods, since most quantities of interest can be written in terms of expectation, e.g. $$ \operatorname{P}({X \in \mathcal{A}}) = \operatorname{E}[{\mathbf 1}_{\mathcal{A}}], $$ where $$ {\mathbf 1}_{\mathcal{A}} $$ is the indicator function of the set $$ \mathcal{A}. $$ In classical mechanics, the center of mass is an analogous concept to expectation. For example, suppose X is a discrete random variable with values xi and corresponding probabilities pi. Now consider a weightless rod on which are placed weights, at locations xi along the rod and having masses pi (whose sum is one). The point at which the rod balances is E[X]. Expected values can also be used to compute the variance, by means of the computational formula for the variance $$ \operatorname{Var}(X)= \operatorname{E}[X^2] - (\operatorname{E}[X])^2. $$ A very important application of the expectation value is in the field of quantum mechanics. The expectation value of a quantum mechanical operator $$ \hat{A} $$ operating on a quantum state vector $$ |\psi\rangle $$ is written as $$ \langle\hat{A}\rangle = \langle\psi|\hat{A}|\psi\rangle. $$ The uncertainty in $$ \hat{A} $$ can be calculated by the formula $$ (\Delta A)^2 = \langle\hat{A}^2\rangle - \langle \hat{A} \rangle^2 $$ .
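The empirical-mean estimate and the computational formula for the variance can be illustrated with a short simulation. The sketch below (the sample size and the choice of a fair six-sided die are illustrative, not anything prescribed by the text) estimates $$ \operatorname{E}[X] $$ by the arithmetic mean of repeated rolls and $$ \operatorname{Var}(X) $$ via $$ \operatorname{E}[X^2] - (\operatorname{E}[X])^2 $$.

```python
import random

def estimate_mean_and_variance(num_rolls=1_000_000, seed=0):
    """Monte Carlo estimates of E[X] and Var(X) for a fair six-sided die."""
    rng = random.Random(seed)
    total = 0.0     # running sum of X
    total_sq = 0.0  # running sum of X^2
    for _ in range(num_rolls):
        x = rng.randint(1, 6)
        total += x
        total_sq += x * x
    mean = total / num_rolls                   # estimates E[X] = 3.5
    variance = total_sq / num_rolls - mean**2  # estimates E[X^2] - (E[X])^2 = 35/12
    return mean, variance

print(estimate_mean_and_variance())  # roughly (3.5, 2.9167)
```

By the strong law of large numbers, both estimates converge to the true values (3.5 and 35/12) as the number of rolls grows.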
https://en.wikipedia.org/wiki/Expected_value
In abstract algebra, a branch of mathematics, a simple ring is a non-zero ring that has no two-sided ideal besides the zero ideal and itself. In particular, a commutative ring is a simple ring if and only if it is a field. The center of a simple ring is necessarily a field. It follows that a simple ring is an associative algebra over this field. It is then called a simple algebra over this field. Several references (e.g., or ) require in addition that a simple ring be left or right Artinian (or equivalently semi-simple). Under such terminology a non-zero ring with no non-trivial two-sided ideals is called quasi-simple. Rings which are simple as rings but are not a simple module over themselves do exist: a full matrix ring over a field does not have any nontrivial two-sided ideals (since any ideal of $$ M_n(R) $$ is of the form $$ M_n(I) $$ with $$ I $$ an ideal of $$ R $$ ), but it has nontrivial left ideals (for example, the sets of matrices which have some fixed zero columns). An immediate example of a simple ring is a division ring, where every nonzero element has a multiplicative inverse, for instance, the quaternions. Also, for any $$ n \ge 1 $$ , the algebra of $$ n \times n $$ matrices with entries in a division ring is simple. Joseph Wedderburn proved that if a ring $$ R $$ is a finite-dimensional simple algebra over a field $$ k $$ , it is isomorphic to a matrix algebra over some division algebra over $$ k $$ . In particular, the only simple rings that are finite-dimensional algebras over the real numbers are rings of matrices over either the real numbers, the complex numbers, or the quaternions. Wedderburn proved these results in 1907 in his doctoral thesis, On hypercomplex numbers, which appeared in the Proceedings of the London Mathematical Society. His thesis classified finite-dimensional simple and also semisimple algebras over fields. Simple algebras are building blocks of semisimple algebras: any finite-dimensional semisimple algebra is a Cartesian product, in the sense of algebras, of finite-dimensional simple algebras. One must be careful of the terminology: not every simple ring is a semisimple ring, and not every simple algebra is a semisimple algebra. However, every finite-dimensional simple algebra is a semisimple algebra, and every simple ring that is left- or right-artinian is a semisimple ring. Wedderburn's result was later generalized to semisimple rings in the Wedderburn–Artin theorem: this says that every semisimple ring is a finite product of matrix rings over division rings. As a consequence of this generalization, every simple ring that is left- or right-artinian is a matrix ring over a division ring. ## Examples Let $$ \mathbb{R} $$ be the field of real numbers, $$ \mathbb{C} $$ be the field of complex numbers, and $$ \mathbb{H} $$ the quaternions. - A central simple algebra (sometimes called a Brauer algebra) is a simple finite-dimensional algebra over a field $$ F $$ whose center is $$ F $$ . - Every finite-dimensional simple algebra over $$ \mathbb{R} $$ is isomorphic to an algebra of $$ n \times n $$ matrices with entries in $$ \mathbb{R} $$ , $$ \mathbb{C} $$ , or $$ \mathbb{H} $$ . Every central simple algebra over $$ \mathbb{R} $$ is isomorphic to an algebra of $$ n \times n $$ matrices with entries $$ \mathbb{R} $$ or $$ \mathbb{H} $$ . These results follow from the Frobenius theorem. - Every finite-dimensional simple algebra over $$ \mathbb{C} $$ is a central simple algebra, and is isomorphic to a matrix ring over $$ \mathbb{C} $$ . 
- Every finite-dimensional central simple algebra over a finite field is isomorphic to a matrix ring over that field. - Over a field of characteristic zero, the Weyl algebra is simple but not semisimple, and in particular not a matrix algebra over a division algebra over its center; the Weyl algebra is infinite-dimensional, so Wedderburn's theorem does not apply to it.
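The argument that a full matrix ring over a field has no nontrivial two-sided ideals can be made concrete: if $$ A $$ is any nonzero matrix with entry $$ a_{jk} \neq 0 $$, then the elementary-matrix products satisfy $$ E_{ij} A E_{ki} = a_{jk} E_{ii} $$, so summing over $$ i $$ and dividing by $$ a_{jk} $$ recovers the identity inside the two-sided ideal generated by $$ A $$. The sketch below is a numerical illustration of this identity for one particular $$ 3 \times 3 $$ matrix; it is not part of the article's argument.

```python
import numpy as np

def elementary(n, i, j):
    """The elementary matrix E_ij with a single 1 in position (i, j)."""
    e = np.zeros((n, n))
    e[i, j] = 1.0
    return e

def identity_from(a):
    """Rebuild the identity inside the two-sided ideal generated by a nonzero matrix a.

    Uses E_ij @ a @ E_ki = a[j, k] * E_ii, summed over i and divided by a[j, k].
    """
    n = a.shape[0]
    j, k = next(zip(*np.nonzero(a)))  # any position with a nonzero entry
    return sum(elementary(n, i, j) @ a @ elementary(n, k, i) for i in range(n)) / a[j, k]

a = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, 5.0],
              [0.0, 0.0, 0.0]])  # rank one: about as "small" as a nonzero matrix gets
print(np.allclose(identity_from(a), np.eye(3)))  # True
```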
https://en.wikipedia.org/wiki/Simple_ring
A parallel database system seeks to improve performance through parallelization of various operations, such as loading data, building indexes and evaluating queries. Although data may be stored in a distributed fashion, the distribution is governed solely by performance considerations. Parallel databases improve processing and input/output speeds by using multiple CPUs and disks in parallel. Centralized and client–server database systems are often not powerful enough to handle such data-intensive workloads. In parallel processing, many operations are performed simultaneously, as opposed to serial processing, in which the computational steps are performed sequentially. Parallel databases can be roughly divided into two groups. The first group is the multiprocessor architectures, the alternatives of which are the following:
- Shared-memory architecture, where multiple processors share the main memory (RAM) space but each processor has its own disk (HDD). If many processes run simultaneously, speed is reduced, just as a single computer slows down when many tasks run in parallel.
- Shared-disk architecture, where each node has its own main memory, but all nodes share mass storage, usually a storage area network. In practice, each node usually also has multiple processors.
- Shared-nothing architecture, where each node has its own mass storage as well as its own main memory.

The other architecture group is called hybrid architecture, which includes:
- Non-Uniform Memory Access (NUMA) architecture, which involves non-uniform access to shared memory.
- Cluster (shared-nothing + shared-disk: SAN/NAS), formed by a group of connected computers. Switches or hubs are used to connect the individual machines; hubs with simple topologies are the cheapest and simplest option, while a switched interconnect is the smarter choice.

## Types of parallelism

Intraquery parallelism: a single query is executed in parallel using multiple processors or disks.
Independent parallelism: operations are executed on different processors, but only if they can be executed independently of each other. For example, if four tables need to be joined, two can be joined at one processor and the other two at another processor; the final join can be done later.
Pipelined parallelism: different operations are executed in a pipelined fashion. For example, if three tables need to be joined, one processor may join two tables and send the result records, as they are produced, to another processor, where the third table is joined with the incoming records to produce the final result.
Intraoperation parallelism: a single complex or large operation is executed in parallel on multiple processors. For example, the ORDER BY clause of a query over millions of records can be parallelized across multiple processors, as sketched below.
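As a toy illustration of intraoperation parallelism, the sketch below sorts horizontal partitions of a table on separate worker processes and then merges the sorted runs, which is the basic shape of a parallel ORDER BY. The partitioning scheme, worker count, and data are all illustrative choices; a real parallel database would also parallelize the I/O and exchange tuples between nodes.

```python
import heapq
import random
from multiprocessing import Pool

def parallel_order_by(rows, num_workers=4):
    """Sort `rows` by splitting them into partitions, sorting each partition
    in a separate worker process, and merging the sorted runs."""
    partitions = [rows[i::num_workers] for i in range(num_workers)]
    with Pool(num_workers) as pool:
        sorted_runs = pool.map(sorted, partitions)  # intraoperation parallelism
    return list(heapq.merge(*sorted_runs))          # cheap sequential merge

if __name__ == "__main__":
    table = [random.randint(0, 1_000_000) for _ in range(100_000)]
    assert parallel_order_by(table) == sorted(table)
    print("parallel ORDER BY matches the serial sort")
```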
https://en.wikipedia.org/wiki/Parallel_database
Complex dynamics, or holomorphic dynamics, is the study of dynamical systems obtained by iterating a complex analytic mapping. This article focuses on the case of algebraic dynamics, where a polynomial or rational function is iterated. In geometric terms, that amounts to iterating a mapping from some algebraic variety to itself. The related theory of arithmetic dynamics studies iteration over the rational numbers or the p-adic numbers instead of the complex numbers. ## Dynamics in complex dimension 1 A simple example that shows some of the main issues in complex dynamics is the mapping $$ f(z)=z^2 $$ from the complex numbers C to itself. It is helpful to view this as a map from the complex projective line $$ \mathbf{CP}^1 $$ to itself, by adding a point $$ \infty $$ to the complex numbers. ( $$ \mathbf{CP}^1 $$ has the advantage of being compact.) The basic question is: given a point $$ z $$ in $$ \mathbf{CP}^1 $$ , how does its orbit (or forward orbit) $$ z,\; f(z)=z^2,\; f(f(z))=z^4, f(f(f(z)))=z^8,\; \ldots $$ behave, qualitatively? The answer is: if the absolute value |z| is less than 1, then the orbit converges to 0, in fact more than exponentially fast. If |z| is greater than 1, then the orbit converges to the point $$ \infty $$ in $$ \mathbf{CP}^1 $$ , again more than exponentially fast. (Here 0 and $$ \infty $$ are superattracting fixed points of f, meaning that the derivative of f is zero at those points. An attracting fixed point means one where the derivative of f has absolute value less than 1.) On the other hand, suppose that $$ |z|=1 $$ , meaning that z is on the unit circle in C. At these points, the dynamics of f is chaotic, in various ways. For example, for almost all points z on the circle in terms of measure theory, the forward orbit of z is dense in the circle, and in fact uniformly distributed on the circle. There are also infinitely many periodic points on the circle, meaning points with $$ f^r(z)=z $$ for some positive integer r. (Here $$ f^r(z) $$ means the result of applying f to z r times, $$ f(f(\cdots(f(z))\cdots)) $$ .) Even at periodic points z on the circle, the dynamics of f can be considered chaotic, since points near z diverge exponentially fast from z upon iterating f. (The periodic points of f on the unit circle are repelling: if $$ f^r(z)=z $$ , the derivative of $$ f^r $$ at z has absolute value greater than 1.) Pierre Fatou and Gaston Julia showed in the late 1910s that much of this story extends to any complex algebraic map from $$ \mathbf{CP}^1 $$ to itself of degree greater than 1. (Such a mapping may be given by a polynomial $$ f(z) $$ with complex coefficients, or more generally by a rational function.) Namely, there is always a compact subset of $$ \mathbf{CP}^1 $$ , the Julia set, on which the dynamics of f is chaotic. For the mapping $$ f(z)=z^2 $$ , the Julia set is the unit circle. For other polynomial mappings, the Julia set is often highly irregular, for example a fractal in the sense that its Hausdorff dimension is not an integer. This occurs even for mappings as simple as $$ f(z)=z^2+c $$ for a constant $$ c\in\mathbf{C} $$ . The Mandelbrot set is the set of complex numbers c such that the Julia set of $$ f(z)=z^2+c $$ is connected. There is a rather complete classification of the possible dynamics of a rational function $$ f\colon\mathbf{CP}^1\to \mathbf{CP}^1 $$ in the Fatou set, the complement of the Julia set, where the dynamics is "tame". 
Namely, Dennis Sullivan showed that each connected component U of the Fatou set is pre-periodic, meaning that there are natural numbers $$ a<b $$ such that $$ f^a(U)=f^b(U) $$ . Therefore, to analyze the dynamics on a component U, one can assume after replacing f by an iterate that $$ f(U)=U $$ . Then either (1) U contains an attracting fixed point for f; (2) U is parabolic in the sense that all points in U approach a fixed point in the boundary of U; (3) U is a Siegel disk, meaning that the action of f on U is conjugate to an irrational rotation of the open unit disk; or (4) U is a Herman ring, meaning that the action of f on U is conjugate to an irrational rotation of an open annulus. (Note that the "backward orbit" of a point z in U, the set of points in $$ \mathbf{CP}^1 $$ that map to z under some iterate of f, need not be contained in U.) ## The equilibrium measure of an endomorphism Complex dynamics has been effectively developed in any dimension. This section focuses on the mappings from complex projective space $$ \mathbf{CP}^n $$ to itself, the richest source of examples. The main results for $$ \mathbf{CP}^n $$ have been extended to a class of rational maps from any projective variety to itself. Note, however, that many varieties have no interesting self-maps. Let f be an endomorphism of $$ \mathbf{CP}^n $$ , meaning a morphism of algebraic varieties from $$ \mathbf{CP}^n $$ to itself, for a positive integer n. Such a mapping is given in homogeneous coordinates by $$ f([z_0,\ldots,z_n])=[f_0(z_0,\ldots,z_n),\ldots,f_n(z_0,\ldots,z_n)] $$ for some homogeneous polynomials $$ f_0,\ldots,f_n $$ of the same degree d that have no common zeros in $$ \mathbf{CP}^n $$ . (By Chow's theorem, this is the same thing as a holomorphic mapping from $$ \mathbf{CP}^n $$ to itself.) Assume that d is greater than 1; then the degree of the mapping f is $$ d^n $$ , which is also greater than 1. Then there is a unique probability measure $$ \mu_f $$ on $$ \mathbf{CP}^n $$ , the equilibrium measure of f, that describes the most chaotic part of the dynamics of f. (It has also been called the Green measure or measure of maximal entropy.) This measure was defined by Hans Brolin (1965) for polynomials in one variable, by Alexandre Freire, Artur Lopes, Ricardo Mañé, and Mikhail Lyubich for $$ n=1 $$ (around 1983), and by John Hubbard, Peter Papadopol, John Fornaess, and Nessim Sibony in any dimension (around 1994). The small Julia set $$ J^*(f) $$ is the support of the equilibrium measure in $$ \mathbf{CP}^n $$ ; this is simply the Julia set when $$ n=1 $$ . ### Examples - For the mapping $$ f(z)=z^2 $$ on $$ \mathbf{CP}^1 $$ , the equilibrium measure $$ \mu_f $$ is the Haar measure (the standard measure, scaled to have total measure 1) on the unit circle $$ |z|=1 $$ . - More generally, for an integer $$ d>1 $$ , let $$ f\colon \mathbf{CP}^n\to\mathbf{CP}^n $$ be the mapping $$ f([z_0,\ldots,z_n])=[z_0^d,\ldots,z_n^d]. $$ Then the equilibrium measure $$ \mu_f $$ is the Haar measure on the n-dimensional torus $$ \{[1,z_1,\ldots,z_n]: |z_1|=\cdots=|z_n|=1\}. $$ For more general holomorphic mappings from $$ \mathbf{CP}^n $$ to itself, the equilibrium measure can be much more complicated, as one sees already in complex dimension 1 from pictures of Julia sets. ### Characterizations of the equilibrium measure A basic property of the equilibrium measure is that it is invariant under f, in the sense that the pushforward measure $$ f_*\mu_f $$ is equal to $$ \mu_f $$ . 
Because f is a finite morphism, the pullback measure $$ f^*\mu_f $$ is also defined, and $$ \mu_f $$ is totally invariant in the sense that $$ f^*\mu_f=\deg(f)\mu_f $$ . One striking characterization of the equilibrium measure is that it describes the asymptotics of almost every point in $$ \mathbf{CP}^n $$ when followed backward in time, by Jean-Yves Briend, Julien Duval, Tien-Cuong Dinh, and Sibony. Namely, for a point z in $$ \mathbf{CP}^n $$ and a positive integer r, consider the probability measure $$ (1/d^{rn})(f^r)^*(\delta_z) $$ which is evenly distributed on the $$ d^{rn} $$ points w with $$ f^r(w)=z $$ . Then there is a Zariski closed subset $$ E\subsetneq \mathbf{CP}^n $$ such that for all points z not in E, the measures just defined converge weakly to the equilibrium measure $$ \mu_f $$ as r goes to infinity. In more detail: only finitely many closed complex subspaces of $$ \mathbf{CP}^n $$ are totally invariant under f (meaning that $$ f^{-1}(S)=S $$ ), and one can take the exceptional set E to be the unique largest totally invariant closed complex subspace not equal to $$ \mathbf{CP}^n $$ . Another characterization of the equilibrium measure (due to Briend and Duval) is as follows. For each positive integer r, the number of periodic points of period r (meaning that $$ f^r(z)=z $$ ), counted with multiplicity, is $$ (d^{r(n+1)}-1)/(d^r-1) $$ , which is roughly $$ d^{rn} $$ . Consider the probability measure which is evenly distributed on the points of period r. Then these measures also converge to the equilibrium measure $$ \mu_f $$ as r goes to infinity. Moreover, most periodic points are repelling and lie in $$ J^*(f) $$ , and so one gets the same limit measure by averaging only over the repelling periodic points in $$ J^*(f) $$ . There may also be repelling periodic points outside $$ J^*(f) $$ . The equilibrium measure gives zero mass to any closed complex subspace of $$ \mathbf{CP}^n $$ that is not the whole space. Since the periodic points in $$ J^*(f) $$ are dense in $$ J^*(f) $$ , it follows that the periodic points of f are Zariski dense in $$ \mathbf{CP}^n $$ . A more algebraic proof of this Zariski density was given by Najmuddin Fakhruddin. Another consequence of $$ \mu_f $$ giving zero mass to closed complex subspaces not equal to $$ \mathbf{CP}^n $$ is that each point has zero mass. As a result, the support $$ J^*(f) $$ of $$ \mu_f $$ has no isolated points, and so it is a perfect set. The support $$ J^*(f) $$ of the equilibrium measure is not too small, in the sense that its Hausdorff dimension is always greater than zero. In that sense, an endomorphism of complex projective space with degree greater than 1 always behaves chaotically at least on part of the space. (There are examples where $$ J^*(f) $$ is all of $$ \mathbf{CP}^n $$ .) Another way to make precise that f has some chaotic behavior is that the topological entropy of f is always greater than zero, in fact equal to $$ n\log d $$ , by Mikhail Gromov, Michał Misiurewicz, and Feliks Przytycki. For any continuous endomorphism f of a compact metric space X, the topological entropy of f is equal to the maximum of the measure-theoretic entropy (or "metric entropy") of all f-invariant measures on X. For a holomorphic endomorphism f of $$ \mathbf{CP}^n $$ , the equilibrium measure $$ \mu_f $$ is the unique invariant measure of maximal entropy, by Briend and Duval. This is another way to say that the most chaotic behavior of f is concentrated on the support of the equilibrium measure. 
Finally, one can say more about the dynamics of f on the support of the equilibrium measure: f is ergodic and, more strongly, mixing with respect to that measure, by Fornaess and Sibony. It follows, for example, that for almost every point with respect to $$ \mu_f $$ , its forward orbit is uniformly distributed with respect to $$ \mu_f $$ . ### Lattès maps A Lattès map is an endomorphism f of $$ \mathbf{CP}^n $$ obtained from an endomorphism of an abelian variety by dividing by a finite group. In this case, the equilibrium measure of f is absolutely continuous with respect to Lebesgue measure on $$ \mathbf{CP}^n $$ . Conversely, by Anna Zdunik, François Berteloot, and Christophe Dupont, the only endomorphisms of $$ \mathbf{CP}^n $$ whose equilibrium measure is absolutely continuous with respect to Lebesgue measure are the Lattès examples. That is, for all non-Lattès endomorphisms, $$ \mu_f $$ assigns its full mass 1 to some Borel set of Lebesgue measure 0. In dimension 1, more is known about the "irregularity" of the equilibrium measure. Namely, define the Hausdorff dimension of a probability measure $$ \mu $$ on $$ \mathbf{CP}^1 $$ (or more generally on a smooth manifold) by $$ \dim(\mu)=\inf \{\dim_H(Y):\mu(Y)=1\}, $$ where $$ \dim_H(Y) $$ denotes the Hausdorff dimension of a Borel set Y. For an endomorphism f of $$ \mathbf{CP}^1 $$ of degree greater than 1, Zdunik showed that the dimension of $$ \mu_f $$ is equal to the Hausdorff dimension of its support (the Julia set) if and only if f is conjugate to a Lattès map, a Chebyshev polynomial (up to sign), or a power map $$ f(z)=z^{\pm d} $$ with $$ d\geq 2 $$ . (In the latter cases, the Julia set is all of $$ \mathbf{CP}^1 $$ , a closed interval, or a circle, respectively.) Thus, outside those special cases, the equilibrium measure is highly irregular, assigning positive mass to some closed subsets of the Julia set with smaller Hausdorff dimension than the whole Julia set. ## Automorphisms of projective varieties More generally, complex dynamics seeks to describe the behavior of rational maps under iteration. One case that has been studied with some success is that of automorphisms of a smooth complex projective variety X, meaning isomorphisms f from X to itself. The case of main interest is where f acts nontrivially on the singular cohomology $$ H^*(X,\mathbf{Z}) $$ . Gromov and Yosef Yomdin showed that the topological entropy of an endomorphism (for example, an automorphism) of a smooth complex projective variety is determined by its action on cohomology. Explicitly, for X of complex dimension n and $$ 0\leq p\leq n $$ , let $$ d_p $$ be the spectral radius of f acting by pullback on the Hodge cohomology group $$ H^{p,p}(X)\subset H^{2p}(X,\mathbf{C}) $$ . Then the topological entropy of f is $$ h(f)=\max_p \log d_p. $$ (The topological entropy of f is also the logarithm of the spectral radius of f on the whole cohomology $$ H^*(X,\mathbf{C}) $$ .) Thus f has some chaotic behavior, in the sense that its topological entropy is greater than zero, if and only if it acts on some cohomology group with an eigenvalue of absolute value greater than 1. Many projective varieties do not have such automorphisms, but (for example) many rational surfaces and K3 surfaces do have such automorphisms. Let X be a compact Kähler manifold, which includes the case of a smooth complex projective variety. 
Say that an automorphism f of X has simple action on cohomology if: there is only one number p such that $$ d_p $$ takes its maximum value, the action of f on $$ H^{p,p}(X) $$ has only one eigenvalue with absolute value $$ d_p $$ , and this is a simple eigenvalue. For example, Serge Cantat showed that every automorphism of a compact Kähler surface with positive topological entropy has simple action on cohomology. (Here an "automorphism" is complex analytic but is not assumed to preserve a Kähler metric on X. In fact, every automorphism that preserves a metric has topological entropy zero.) For an automorphism f with simple action on cohomology, some of the goals of complex dynamics have been achieved. Dinh, Sibony, and Henry de Thélin showed that there is a unique invariant probability measure $$ \mu_f $$ of maximal entropy for f, called the equilibrium measure (or Green measure, or measure of maximal entropy). (In particular, $$ \mu_f $$ has entropy $$ \log d_p $$ with respect to f.) The support of $$ \mu_f $$ is called the small Julia set $$ J^*(f) $$ . Informally: f has some chaotic behavior, and the most chaotic behavior is concentrated on the small Julia set. At least when X is projective, $$ J^*(f) $$ has positive Hausdorff dimension. (More precisely, $$ \mu_f $$ assigns zero mass to all sets of sufficiently small Hausdorff dimension.) ### Kummer automorphisms Some abelian varieties have an automorphism of positive entropy. For example, let E be a complex elliptic curve and let X be the abelian surface $$ E\times E $$ . Then the group $$ GL(2,\mathbf{Z}) $$ of invertible $$ 2\times 2 $$ integer matrices acts on X. Any group element f whose trace has absolute value greater than 2, for example $$ \begin{pmatrix}2&1\\1&1\end{pmatrix} $$ , has spectral radius greater than 1, and so it gives a positive-entropy automorphism of X. The equilibrium measure of f is the Haar measure (the standard Lebesgue measure) on X. The Kummer automorphisms are defined by taking the quotient space by a finite group of an abelian surface with automorphism, and then blowing up to make the surface smooth. The resulting surfaces include some special K3 surfaces and rational surfaces. For the Kummer automorphisms, the equilibrium measure has support equal to X and is smooth outside finitely many curves. Conversely, Cantat and Dupont showed that for all surface automorphisms of positive entropy except the Kummer examples, the equilibrium measure is not absolutely continuous with respect to Lebesgue measure. In this sense, it is usual for the equilibrium measure of an automorphism to be somewhat irregular. ### Saddle periodic points A periodic point z of f is called a saddle periodic point if, for a positive integer r such that $$ f^r(z)=z $$ , at least one eigenvalue of the derivative of $$ f^r $$ on the tangent space at z has absolute value less than 1, at least one has absolute value greater than 1, and none has absolute value equal to 1. (Thus f is expanding in some directions and contracting at others, near z.) For an automorphism f with simple action on cohomology, the saddle periodic points are dense in the support $$ J^*(f) $$ of the equilibrium measure $$ \mu_f $$ . On the other hand, the measure $$ \mu_f $$ vanishes on closed complex subspaces not equal to X. It follows that the periodic points of f (or even just the saddle periodic points contained in the support of $$ \mu_f $$ ) are Zariski dense in X. 
For an automorphism f with simple action on cohomology, f and its inverse map are ergodic and, more strongly, mixing with respect to the equilibrium measure $$ \mu_f $$ . It follows that for almost every point z with respect to $$ \mu_f $$ , the forward and backward orbits of z are both uniformly distributed with respect to $$ \mu_f $$ . A notable difference with the case of endomorphisms of $$ \mathbf{CP}^n $$ is that for an automorphism f with simple action on cohomology, there can be a nonempty open subset of X on which neither forward nor backward orbits approach the support $$ J^*(f) $$ of the equilibrium measure. For example, Eric Bedford, Kyounghee Kim, and Curtis McMullen constructed automorphisms f of a smooth projective rational surface with positive topological entropy (hence simple action on cohomology) such that f has a Siegel disk, on which the action of f is conjugate to an irrational rotation. Points in that open set never approach $$ J^*(f) $$ under the action of f or its inverse. At least in complex dimension 2, the equilibrium measure of f describes the distribution of the isolated periodic points of f. (There may also be complex curves fixed by f or an iterate, which are ignored here.) Namely, let f be an automorphism of a compact Kähler surface X with positive topological entropy $$ h(f)=\log d_1 $$ . Consider the probability measure which is evenly distributed on the isolated periodic points of period r (meaning that $$ f^r(z)=z $$ ). Then this measure converges weakly to $$ \mu_f $$ as r goes to infinity, by Eric Bedford, Lyubich, and John Smillie. The same holds for the subset of saddle periodic points, because both sets of periodic points grow at a rate of $$ (d_1)^r $$ .
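Returning to dimension one, the preimage characterization of the equilibrium measure suggests a simple way to sample it for a quadratic polynomial $$ f(z) = z^2 + c $$: repeatedly pull a starting point back under a randomly chosen inverse branch of $$ f $$. The sketch below does this for one illustrative value of $$ c $$ (the constant, the starting point, and the iteration counts are arbitrary choices); after a burn-in, the visited points approximate the equilibrium measure and cluster on its support, the Julia set.

```python
import cmath
import random

def backward_orbit(c, start=1.0 + 0.0j, burn_in=100, samples=20_000, seed=0):
    """Sample points near the Julia set of f(z) = z^2 + c by backward iteration.

    Each step picks one of the two preimages of the current point uniformly at
    random; by the preimage equidistribution described above, the resulting
    points approximate the equilibrium measure of f.
    """
    rng = random.Random(seed)
    z = complex(start)
    points = []
    for step in range(burn_in + samples):
        z = rng.choice([1, -1]) * cmath.sqrt(z - c)  # a randomly chosen inverse branch
        if step >= burn_in:
            points.append(z)
    return points

# Example: c = -0.8 + 0.156j, a standard illustrative parameter.
pts = backward_orbit(-0.8 + 0.156j)
print(len(pts), pts[:3])
```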
https://en.wikipedia.org/wiki/Complex_dynamics
Electrostatics is a branch of physics that studies slow-moving or stationary electric charges. Since classical times, it has been known that some materials, such as amber, attract lightweight particles after rubbing. The Greek word (), meaning 'amber', was thus the root of the word electricity. Electrostatic phenomena arise from the forces that electric charges exert on each other. Such forces are described by ## Coulomb's law . There are many examples of electrostatic phenomena, from those as simple as the attraction of plastic wrap to one's hand after it is removed from a package, to the apparently spontaneous explosion of grain silos, the damage of electronic components during manufacturing, and photocopier and laser printer operation. The electrostatic model accurately predicts electrical phenomena in "classical" cases where the velocities are low and the system is macroscopic so no quantum effects are involved. It also plays a role in quantum mechanics, where additional terms also need to be included. Coulomb's law Coulomb's law states that: The force is along the straight line joining them. If the two charges have the same sign, the electrostatic force between them is repulsive; if they have different signs, the force between them is attractive. If $$ r $$ is the distance (in meters) between two charges, then the force between two point charges $$ Q $$ and $$ q $$ is: $$ F = {1\over 4\pi\varepsilon_0}{|Qq|\over r^2}, $$ where ε0 = is the vacuum permittivity. The SI unit of ε0 is equivalently A2⋅s4 ⋅kg−1⋅m−3 or C2⋅N−1⋅m−2 or F⋅m−1. ## Electric field The electric field, $$ \mathbf E $$ , in units of Newtons per Coulomb or volts per meter, is a vector field that can be defined everywhere, except at the location of point charges (where it diverges to infinity). It is defined as the electrostatic force $$ \mathbf , $$ on a hypothetical small test charge at the point due to Coulomb's law, divided by the charge $$ q $$ $$ \mathbf E = {\mathbf F \over q} $$ Electric field lines are useful for visualizing the electric field. Field lines begin on positive charge and terminate on negative charge. They are parallel to the direction of the electric field at each point, and the density of these field lines is a measure of the magnitude of the electric field at any given point. A collection of $$ n $$ particles of charge $$ q_i $$ , located at points $$ \mathbf r_i $$ (called source points) generates the electric field at $$ \mathbf r $$ (called the field point) of: $$ \mathbf E(\mathbf r) = {1\over4\pi\varepsilon_0} \sum_{i=1}^n q_i {\hat\mathbf {r-r_i}\over {|\mathbf {r-r_i}|}^2} = {1\over 4\pi\varepsilon_0} \sum_{i=1}^n q_i {\mathbf {r-r_i}\over {|\mathbf {r-r_i}|}^3}, $$ where $$ \mathbf r-\mathbf r_i $$ is the displacement vector from a source point $$ \mathbf r_i $$ to the field point $$ \mathbf r $$ , and $$ \hat\mathbf {r-r_i} \ \stackrel{\mathrm{def}}{=}\ \frac{\mathbf {r-r_i}}{|\mathbf {r-r_i}|} $$ is the unit vector of the displacement vector that indicates the direction of the field due to the source at point $$ \mathbf{r_i} $$ . For a single point charge, $$ q $$ , at the origin, the magnitude of this electric field is $$ E = q/4\pi\varepsilon_0 r^2 $$ and points away from that charge if it is positive. The fact that the force (and hence the field) can be calculated by summing over all the contributions due to individual source particles is an example of the superposition principle. 
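The superposition formula for the field of point charges translates directly into a short computation. The sketch below (the charge values and positions are arbitrary illustrative inputs) evaluates $$ \mathbf E(\mathbf r) = \frac{1}{4\pi\varepsilon_0} \sum_i q_i \frac{\mathbf{r-r_i}}{|\mathbf{r-r_i}|^3} $$ at a field point.

```python
import numpy as np

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
K = 1.0 / (4.0 * np.pi * EPSILON_0)

def electric_field(field_point, charges):
    """Electric field at `field_point` from point charges, by superposition.

    `charges` is a list of (q, position) pairs with q in coulombs and
    positions in metres; the result is in volts per metre (= N/C).
    """
    r = np.asarray(field_point, dtype=float)
    e_total = np.zeros(3)
    for q, source_point in charges:
        d = r - np.asarray(source_point, dtype=float)  # displacement r - r_i
        e_total += K * q * d / np.linalg.norm(d) ** 3
    return e_total

# Two opposite 1 nC charges 1 m apart, field sampled on the axis between them.
charges = [(1e-9, (0.0, 0.0, 0.0)), (-1e-9, (1.0, 0.0, 0.0))]
print(electric_field((0.5, 0.0, 0.0), charges))  # points from + toward -, i.e. +x here
```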
The electric field produced by a distribution of charges is given by the volume charge density $$ \rho(\mathbf r) $$ and can be obtained by converting this sum into a triple integral: $$ \mathbf E(\mathbf r) = \frac{1}{4\pi\varepsilon_0} \iiint \, \rho(\mathbf r') {\mathbf {r-r'}\over {|\mathbf {r-r'}|}^3} \mathrm{d}^3|\mathbf r'| $$ ### Gauss's law Gauss's law states that "the total electric flux through any closed surface in free space of any shape drawn in an electric field is proportional to the total electric charge enclosed by the surface." Many numerical problems can be solved by considering a Gaussian surface around a body. Mathematically, Gauss's law takes the form of an integral equation: $$ \Phi_E = \oint_S\mathbf E\cdot \mathrm{d}\mathbf A = {Q_\text{enclosed}\over\varepsilon_0} = \int_V{\rho\over\varepsilon_0}\mathrm{d}^3 r, $$ where $$ \mathrm{d}^3 r =\mathrm{d}x \ \mathrm{d}y \ \mathrm{d}z $$ is a volume element. If the charge is distributed over a surface or along a line, replace $$ \rho\,\mathrm{d}^3r $$ by $$ \sigma \, \mathrm{d}A $$ or $$ \lambda \, \mathrm{d}\ell $$ . The divergence theorem allows Gauss's Law to be written in differential form: $$ \nabla\cdot\mathbf E = {\rho\over\varepsilon_0}. $$ where $$ \nabla \cdot $$ is the divergence operator. ### Poisson and Laplace equations The definition of electrostatic potential, combined with the differential form of Gauss's law (above), provides a relationship between the potential Φ and the charge density ρ: $$ {\nabla}^2 \phi = - {\rho\over\varepsilon_0}. $$ This relationship is a form of Poisson's equation. In the absence of unpaired electric charge, the equation becomes Laplace's equation: $$ {\nabla}^2 \phi = 0, $$ ## Electrostatic approximation If the electric field in a system can be assumed to result from static charges, that is, a system that exhibits no significant time-varying magnetic fields, the system is justifiably analyzed using only the principles of electrostatics. This is called the "electrostatic approximation". The validity of the electrostatic approximation rests on the assumption that the electric field is irrotational, or nearly so: $$ \nabla\times\mathbf E \approx 0. $$ From Faraday's law, this assumption implies the absence or near-absence of time-varying magnetic fields: $$ {\partial\mathbf B\over\partial t} \approx 0. $$ In other words, electrostatics does not require the absence of magnetic fields or electric currents. Rather, if magnetic fields or electric currents do exist, they must not change with time, or in the worst-case, they must change with time only very slowly. In some problems, both electrostatics and magnetostatics may be required for accurate predictions, but the coupling between the two can still be ignored. Electrostatics and magnetostatics can both be seen as non-relativistic Galilean limits for electromagnetism. In addition, conventional electrostatics ignore quantum effects which have to be added for a complete description. ### Electrostatic potential As the electric field is irrotational, it is possible to express the electric field as the gradient of a scalar function, $$ \phi $$ , called the electrostatic potential (also known as the voltage). An electric field, $$ E $$ , points from regions of high electric potential to regions of low electric potential, expressed mathematically as $$ \mathbf E = -\nabla\phi. 
$$ The gradient theorem can be used to establish that the electrostatic potential is the amount of work per unit charge required to move a charge from point $$ a $$ to point $$ b $$ with the following line integral: $$ -\int_a^b {\mathbf E\cdot \mathrm{d}\mathbf \ell} = \phi (\mathbf b) -\phi(\mathbf a). $$ From these equations, we see that the electric potential is constant in any region for which the electric field vanishes (such as occurs inside a conducting object). ### Electrostatic energy A test particle's potential energy, $$ U_\mathrm{E}^{\text{single}} $$ , can be calculated from a line integral of the work, $$ q_n\mathbf E\cdot\mathrm d\mathbf\ell $$ . We integrate from a point at infinity, and assume a collection of $$ N $$ particles of charge $$ Q_n $$ , are already situated at the points $$ \mathbf r_i $$ . This potential energy (in Joules) is: $$ U_\mathrm{E}^{\text{single}}=q\phi(\mathbf r)=\frac{q }{4\pi \varepsilon_0}\sum_{i=1}^N \frac{Q_i}{\left \|\mathcal{\mathbf R_i} \right \|} $$ where $$ \mathbf\mathcal {R_i} = \mathbf r - \mathbf r_i $$ is the distance of each charge $$ Q_i $$ from the test charge $$ q $$ , which situated at the point $$ \mathbf r $$ , and $$ \phi(\mathbf r) $$ is the electric potential that would be at $$ \mathbf r $$ if the test charge were not present. If only two charges are present, the potential energy is $$ Q_1 Q_2/(4\pi\varepsilon_0 r) $$ . The total electric potential energy due a collection of N charges is calculating by assembling these particles one at a time: $$ U_\mathrm{E}^{\text{total}} = \frac{1 }{4\pi\varepsilon _0}\sum_{j=1}^N Q_j \sum_{i=1}^{j-1} \frac{Q_i}{r_{ij}}= \frac{1}{2}\sum_{i=1}^N Q_i\phi_i , $$ where the following sum from, j = 1 to N, excludes i = j: $$ \phi_i = \frac{1}{4\pi \varepsilon _0} \sum_{\stackrel{j=1}{j \ne i}}^N \frac{Q_j}{r_{ij}}. $$ This electric potential, $$ \phi_i $$ is what would be measured at $$ \mathbf r_i $$ if the charge $$ Q_i $$ were missing. This formula obviously excludes the (infinite) energy that would be required to assemble each point charge from a disperse cloud of charge. The sum over charges can be converted into an integral over charge density using the prescription $$ \sum (\cdots) \rightarrow \int(\cdots)\rho \, \mathrm d^3r $$ : $$ U_\mathrm{E}^{\text{total}} = \frac{1}{2} \int\rho(\mathbf r)\phi(\mathbf r) \, \mathrm{d}^3 r = \frac{\varepsilon_0 }{2} \int \left|{\mathbf{E}}\right|^2 \, \mathrm{d}^3 r, $$ This second expression for electrostatic energy uses the fact that the electric field is the negative gradient of the electric potential, as well as vector calculus identities in a way that resembles integration by parts. These two integrals for electric field energy seem to indicate two mutually exclusive formulas for electrostatic energy density, namely $$ \frac{1}{2}\rho\phi $$ and $$ \frac{1}{2}\varepsilon_0 E^2 $$ ; they yield equal values for the total electrostatic energy only if both are integrated over all space. ### Electrostatic pressure On a conductor, a surface charge will experience a force in the presence of an electric field. This force is the average of the discontinuous electric field at the surface charge. This average in terms of the field just outside the surface amounts to: $$ P = \frac{ \varepsilon_0 }{2} E^2, $$ This pressure tends to draw the conductor into the field, regardless of the sign of the surface charge.
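The assembly formula for the total electrostatic energy of a collection of point charges is likewise easy to evaluate directly. The sketch below (charge values and positions are again illustrative) computes the sum of $$ Q_i Q_j/(4\pi\varepsilon_0 r_{ij}) $$ over unordered pairs, which is the double sum $$ U_\mathrm{E}^{\text{total}} $$ given above.

```python
from itertools import combinations
import math

EPSILON_0 = 8.8541878128e-12  # vacuum permittivity, F/m
K = 1.0 / (4.0 * math.pi * EPSILON_0)

def assembly_energy(charges):
    """Total electrostatic potential energy (in joules) of point charges.

    `charges` is a list of (Q, (x, y, z)) pairs; the energy is the sum of
    K * Q_i * Q_j / r_ij over all unordered pairs, i.e. the work needed to
    assemble the configuration from charges initially dispersed at infinity.
    """
    total = 0.0
    for (q1, p1), (q2, p2) in combinations(charges, 2):
        r = math.dist(p1, p2)
        total += K * q1 * q2 / r
    return total

# Three 1 nC charges at the corners of an equilateral triangle with 1 m sides.
triangle = [(1e-9, (0.0, 0.0, 0.0)),
            (1e-9, (1.0, 0.0, 0.0)),
            (1e-9, (0.5, math.sqrt(3) / 2, 0.0))]
print(assembly_energy(triangle))  # 3 * K * (1e-9)**2 / 1.0, about 2.7e-8 J
```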
https://en.wikipedia.org/wiki/Electrostatics
Number theory is a branch of pure mathematics devoted primarily to the study of the integers and arithmetic functions. Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory can often be understood through the study of analytical objects, such as the Riemann zeta function, that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, as for instance how irrational numbers can be approximated by fractions (Diophantine approximation). Number theory is one of the oldest branches of mathematics alongside geometry. One quirk of number theory is that it deals with statements that are simple to understand but are very difficult to solve. Examples of this are Fermat's Last Theorem, which was proved 358 years after the original formulation, and Goldbach's conjecture, which has remained unsolved since the 18th century. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." It was regarded as the example of pure mathematics with no applications outside mathematics until the 1970s, when it became known that prime numbers would be used as the basis for the creation of public-key cryptography algorithms. ## History Number theory is the branch of mathematics that studies integers and their properties and relations. The integers comprise a set that extends the set of natural numbers $$ \{1, 2, 3, \dots\} $$ to include the number $$ 0 $$ and the negatives of the natural numbers $$ \{-1, -2, -3, \dots\} $$ . Number theorists study prime numbers as well as the properties of mathematical objects constructed from integers (for example, rational numbers), or defined as generalizations of the integers (for example, algebraic integers). Number theory is closely related to arithmetic and some authors use the terms as synonyms. However, the word "arithmetic" is used today to mean the study of numerical operations and extends to the real numbers. In a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships. Traditionally, it is known as higher arithmetic. By the early twentieth century, the term number theory had been widely adopted. The term number means whole numbers, which refers to either the natural numbers or the integers. Elementary number theory studies aspects of integers that can be investigated using elementary methods such as elementary proofs. Analytic number theory, by contrast, relies on complex numbers and techniques from analysis and calculus. Algebraic number theory employs algebraic structures such as fields and rings to analyze the properties of and relations between numbers. Geometric number theory uses concepts from geometry to study numbers. Further branches of number theory are probabilistic number theory, combinatorial number theory, computational number theory, and applied number theory, which examines the application of number theory to science and technology. 
### Origins #### Ancient Mesopotamia The earliest historical find of an arithmetical nature is a fragment of a table: Plimpton 322 (Larsa, Mesopotamia, c. 1800 BC), a broken clay tablet, contains a list of "Pythagorean triples", that is, integers $$ (a,b,c) $$ such that $$ a^2+b^2=c^2 $$ . The triples are too numerous and too large to have been obtained by brute force. The heading over the first column reads: "The of the diagonal which has been subtracted such that the width..." The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity $$ \left(\frac{1}{2} \left(x - \frac{1}{x}\right)\right)^2 + 1 = \left(\frac{1}{2} \left(x + \frac{1}{x} \right)\right)^2, $$ which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by $$ c/a $$ , presumably for actual use as a "table", for example, with a view to applications. It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own many centuries later. It has been suggested instead that the table was a source of numerical examples for school problems. The Plimpton 322 tablet is the only surviving evidence of what today would be called number theory within Babylonian mathematics, though a kind of Babylonian algebra was much more developed. #### Ancient Greece Although other civilizations probably influenced Greek mathematics at the beginning, all evidence of such borrowings appears relatively late (Herodotus II. 81; Isocrates, Busiris 28), and it is likely that Greek arithmetic (the theoretical or philosophical study of numbers) is an indigenous tradition. Aside from a few fragments, most of what is known about Greek mathematics in the 6th to 4th centuries BC (the Archaic and Classical periods) comes through either the reports of contemporary non-mathematicians or references from mathematical works in the early Hellenistic period. In the case of number theory, this means largely Plato, Aristotle, and Euclid. Plato had a keen interest in mathematics, and distinguished clearly between arithmetic and calculation. Plato reports in his dialogue Theaetetus that Theodorus had proven that $$ \sqrt{3}, \sqrt{5}, \dots, \sqrt{17} $$ are irrational. Theaetetus, a disciple of Theodorus's, worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. Aristotle further claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim ("They say Plato learned all things Pythagorean"). Euclid devoted part of his Elements (Books VII–IX) to topics that belong to elementary number theory, including prime numbers and divisibility. He gave an algorithm, the Euclidean algorithm, for computing the greatest common divisor of two numbers (Prop. VII.2) and a proof implying the infinitude of primes (Prop. IX.20). There is also older material likely based on Pythagorean teachings (Prop. IX.21–34), such as "odd times even is even" and "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it". This is all that is needed to prove that $$ \sqrt{2} $$ is irrational. 
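To make the Old Babylonian identity above concrete, here is a small sketch (not from the source article; the choice of ratios x = p/q is purely illustrative) that verifies the identity with exact rational arithmetic and reads off the integer Pythagorean triples it encodes.

```python
# Minimal sketch: for rational x = p/q, the identity
#   (1/2 (x - 1/x))^2 + 1 = (1/2 (x + 1/x))^2
# scales (by 2pq) to the integer triple (p^2 - q^2, 2pq, p^2 + q^2).
from fractions import Fraction

def triple_from_ratio(p: int, q: int):
    x = Fraction(p, q)
    lhs = (Fraction(1, 2) * (x - 1 / x)) ** 2 + 1
    rhs = (Fraction(1, 2) * (x + 1 / x)) ** 2
    assert lhs == rhs                     # the identity itself
    a, b, c = p * p - q * q, 2 * p * q, p * p + q * q
    assert a * a + b * b == c * c         # the resulting Pythagorean triple
    return a, b, c

print(triple_from_ratio(2, 1))    # (3, 4, 5)
print(triple_from_ratio(12, 5))   # (119, 120, 169), a triple of Plimpton 322 size
```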
Pythagoreans apparently gave great importance to the odd and the even. The discovery that $$ \sqrt{2} $$ is irrational is credited to the early Pythagoreans, sometimes assigned to Hippasus, who was expelled or split from the Pythagorean community as a result. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic) and lengths and proportions (which may be identified with real numbers, whether rational or not). The Pythagorean tradition also spoke of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th centuries). An epigram published by Lessing in 1773 appears to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as it is known, such equations were first successfully treated by Indian mathematicians. It is not known whether Archimedes himself had a method of solution. ##### Late Antiquity Aside from the elementary work of Neopythagoreans such as Nicomachus and Theon of Smyrna, the foremost authority in arithmetic in Late Antiquity was Diophantus of Alexandria, who probably lived in the 3rd century AD, approximately five hundred years after Euclid. Little is known about his life, but he wrote two works that are extant: On Polygonal Numbers, a short treatise written in the Euclidean manner on the subject, and the Arithmetica, a work on pre-modern algebra (namely, the use of algebra to solve numerical problems). Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form $$ f(x,y)=z^2 $$ or $$ f(x,y,z)=w^2 $$ . In modern parlance, Diophantine equations are polynomial equations to which rational or integer solutions are sought. #### Asia The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (between the third and fifth centuries). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.) The result was later generalized with a complete solution called Da-yan-shu in Qin Jiushao's 1247 Mathematical Treatise in Nine Sections, which was translated into English in the early nineteenth century by the British missionary Alexander Wylie. There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an autochthonous tradition; in particular, there is no evidence that Euclid's Elements reached India before the eighteenth century. Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences $$ n\equiv a_1 \bmod m_1 $$ , $$ n\equiv a_2 \bmod m_2 $$ could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalization of) the Euclidean algorithm, which was probably discovered independently in India. 
Āryabhaṭa seems to have had in mind applications to astronomical calculations. Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century). Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke. #### Arithmetic in the Islamic golden age In the early ninth century, the caliph al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind, which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta). Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912). Part of the treatise al-Fakhri (by al-Karajī, 953 – c. 1029) builds on it to some extent. According to Roshdi Rashed, al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem. #### Western Europe in the Middle Ages Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica.
### Early modern number theory
Fermat
Pierre de Fermat (1607–1665) never published his writings but communicated through correspondence instead. Accordingly, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. Although he drew inspiration from classical sources, in his notes and letters Fermat scarcely wrote any proofs—he had no models in the area. Over his lifetime, Fermat made the following contributions to the field:
- One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day.
- In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer.
- Fermat's little theorem (1640): if a is not divisible by a prime p, then $$ a^{p-1} \equiv 1 \bmod p. $$
- If a and b are coprime, then $$ a^2 + b^2 $$ is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form $$ a^2 + b^2 $$ . These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent.
- In 1657, Fermat posed the problem of solving $$ x^2 - N y^2 = 1 $$ as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker.
Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat was not aware of this). He stated that a proof could be found by infinite descent.
- Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that $$ x^{4} + y^{4} = z^{4} $$ has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that $$ x^3 + y^3 = z^3 $$ has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent).
- Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to $$ x^n + y^n = z^n $$ for all $$ n\geq 3 $$ ; this claim appears in his annotations in the margins of his copy of Diophantus.
Euler
The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following:
- Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that $$ p = x^2 + y^2 $$ if and only if $$ p\equiv 1 \bmod 4 $$ ; initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to $$ x^4 + y^4 = z^2 $$ (implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method).
- Pell's equation, first misnamed by Euler. He wrote on the link between continued fractions and Pell's equation.
- First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function.
- Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form $$ x^2 + N y^2 $$ , some of it prefiguring quadratic reciprocity.
- Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated.
#### Lagrange, Legendre, and Gauss Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations; for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them). 
He also studied quadratic forms in full generality (as opposed to $$ m X^2 + n Y^2 $$ ), including defining their equivalence relation, showing how to put them in reduced form, etc. Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation $$ a x^2 + b y^2 + c z^2 = 0 $$ and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for $$ n=5 $$ (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain). Carl Friedrich Gauss (1777–1855) worked in a wide variety of fields in both mathematics and physics including number theory, analysis, differential geometry, geodesy, magnetism, astronomy and optics. The Disquisitiones Arithmeticae (published in 1801, but written three years earlier, when he was 21) had an immense influence in the area of number theory and set its agenda for much of the 19th century. Gauss proved in this work the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory: "The theory of the division of the circle...which is treated in sec. 7 does not belong by itself to arithmetic, but its principles can only be drawn from higher arithmetic." (From the preface of the Disquisitiones Arithmeticae.) In this way, Gauss arguably made forays towards Évariste Galois's work and the area of algebraic number theory.
### Maturity and division into subfields
Starting early in the nineteenth century, the following developments gradually took place:
- The rise to self-consciousness of number theory (or higher arithmetic) as a field of study.
- The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra.
- The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory.
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms). The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize. 
## Main subdivisions
Elementary number theory
Elementary number theory deals with topics in number theory by means of basic methods in arithmetic. Its primary subjects of study are divisibility, factorization, and primality, as well as congruences in modular arithmetic. Other topics in elementary number theory are Diophantine equations, continued fractions, integer partitions, and Diophantine approximations. Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithms. Multiplication, for instance, is an operation that combines two numbers, referred to as factors, to form a single number, termed the product, such as $$ 2 \times 3 = 6 $$ . Divisibility is a property between two nonzero integers related to division. An integer $$ a $$ is said to be divisible by a nonzero integer $$ b $$ if $$ a $$ is a multiple of $$ b $$ ; that is, if there exists an integer $$ q $$ such that $$ a = bq $$ . An equivalent formulation is that $$ b $$ divides $$ a $$ , which is denoted with a vertical bar as $$ b | a $$ . Conversely, if this were not the case, then $$ a $$ would not be divided evenly by $$ b $$ , resulting in a remainder. Euclid's division lemma asserts that $$ a $$ and $$ b $$ can generally be written as $$ a = bq + r $$ , where the remainder $$ 0 \le r < b $$ accounts for the leftover quantity. Elementary number theory studies divisibility rules in order to quickly identify if a given integer is divisible by a fixed divisor. For instance, it is known that any integer is divisible by 3 if its decimal digit sum is divisible by 3. A common divisor of several nonzero integers is an integer that divides all of them. The greatest common divisor (gcd) is the largest of such divisors. Two integers are said to be coprime or relatively prime to one another if their greatest common divisor, and hence their only common positive divisor, is 1. The Euclidean algorithm computes the greatest common divisor of two integers $$ a,b $$ by means of repeatedly applying the division lemma and shifting the divisor and remainder after every step. Elementary number theory studies the divisibility properties of integers such as parity (even and odd numbers), prime numbers, and perfect numbers. A prime number is an integer greater than 1 whose only positive divisors are 1 and the prime itself. A positive integer greater than 1 that is not prime is called a composite number. Euclid's theorem demonstrates that there are infinitely many prime numbers, which comprise the set {2, 3, 5, 7, 11, ...}. The sieve of Eratosthenes was devised as an efficient algorithm for identifying all primes up to a given natural number by eliminating all composite numbers. Factorization is a method of expressing a number as a product. Specifically in number theory, integer factorization is the decomposition of an integer into a product of integers. The process of repeatedly applying this procedure until all factors are prime is known as prime factorization. A fundamental property of primes is given by Euclid's lemma: if a prime divides a product of integers, then it divides at least one of the factors in the product. The unique factorization theorem is the fundamental theorem of arithmetic, which concerns prime factorization. 
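The Euclidean algorithm and the sieve of Eratosthenes described above are short enough to state directly in code; the following is a minimal sketch (not from the source article) of both.

```python
# Minimal sketches of two algorithms discussed above.

def gcd(a: int, b: int) -> int:
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return abs(a)

def sieve(n: int) -> list[int]:
    """Sieve of Eratosthenes: all primes up to and including n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]

print(gcd(120, 84))   # 12
print(sieve(30))      # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```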
The theorem states that every integer greater than 1 can be factorised into a product of prime numbers and that this factorisation is unique up to the order of the factors. For example, $$ 120 $$ is expressed uniquely as $$ 2 \times 2 \times 2 \times 3 \times 5 $$ or simply $$ 2^3 \times 3 \times 5 $$ . Modular arithmetic works with finite sets of integers and introduces the concepts of congruence and residue classes. A congruence of two integers $$ a, b $$ modulo $$ n $$ is an equivalence relation whereby $$ n | (a - b) $$ is true. This is written as $$ a \equiv b \pmod{n} $$ , where $$ a $$ is said to be congruent to $$ b $$ modulo $$ n $$ . Performing Euclidean division on both $$ a $$ and $$ n $$ , and on $$ b $$ and $$ n $$ , yields the same remainder. A residue class modulo $$ n $$ is a set that contains all integers congruent to a specified $$ r $$ modulo $$ n $$ . For example, $$ 6\Z + 1 $$ contains all multiples of 6 incremented by 1. An influential theorem is Fermat's little theorem, which states that if $$ p $$ is prime, then for any integer $$ a $$ , the equation $$ a^p \equiv a \pmod{p} $$ is true. Equivalently, if $$ a $$ is coprime to $$ p $$ , then $$ a^{p-1} \equiv 1 \pmod{p} $$ . More specifically, elementary number theory works with elementary proofs, a term that excludes the use of complex numbers but may include basic analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous. For example, proofs based on complex Tauberian theorems, such as Wiener–Ikehara, are often seen as quite enlightening but not elementary despite using Fourier analysis, not complex analysis. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a more advanced proof. Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, many of the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics.
Analytic number theory
Analytic number theory may be defined
- in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or
- in terms of its concerns, as the study within number theory of estimates on the size and density of certain numbers (e.g., primes), as opposed to identities.
Some subjects generally considered to be part of analytic number theory (e.g., sieve theory) are better covered by the second rather than the first definition. Small sieves, for instance, use little analysis and yet still belong to analytic number theory. The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture, the twin prime conjecture, the Hardy–Littlewood conjectures, the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory. One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. 
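As a concrete illustration of the congruence notation and of Fermat's little theorem above, here is a small sketch (not from the source article; the sample numbers are arbitrary).

```python
# Minimal sketch: congruences and Fermat's little theorem.

def congruent(a: int, b: int, n: int) -> bool:
    """a ≡ b (mod n) exactly when n divides a - b."""
    return (a - b) % n == 0

print(congruent(17, 5, 12))    # True: 17 and 5 both leave remainder 5 mod 12
print(congruent(14, 5, 12))    # False

# Fermat's little theorem: for prime p and any a, a^p ≡ a (mod p);
# if additionally gcd(a, p) = 1, then a^(p-1) ≡ 1 (mod p).
p, a = 7, 3
print(pow(a, p, p) == a % p)       # True
print(pow(a, p - 1, p) == 1)       # True

# The congruence can fail for a composite modulus (although some composites,
# the Carmichael numbers, satisfy it for every a coprime to them).
n = 8
print(pow(a, n, n) == a % n)       # False: 3^8 ≡ 1 (mod 8), not 3
```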
For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
Algebraic number theory
An algebraic number is any complex number that is a solution to some polynomial equation $$ f(x)=0 $$ with rational coefficients; for example, every solution $$ x $$ of $$ x^5 + (11/2) x^3 - 7 x^2 + 9 = 0 $$ is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or number fields for short. Algebraic number theory studies algebraic number fields. It could be argued that the simplest kind of number fields, namely quadratic fields, were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones Arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form $$ a + b \sqrt{d} $$ , where $$ a $$ and $$ b $$ are rational numbers and $$ d $$ is a fixed rational number whose square root is not rational.) For that matter, the eleventh-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such. The foundations of the subject were laid in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were introduced; these are three complementary ways of dealing with the lack of unique factorization in algebraic number fields. (For example, in the field generated by the rationals and $$ \sqrt{-5} $$ , the number $$ 6 $$ can be factorised both as $$ 6 = 2 \cdot 3 $$ and $$ 6 = (1 + \sqrt{-5}) ( 1 - \sqrt{-5}) $$ ; all of $$ 2 $$ , $$ 3 $$ , $$ 1 + \sqrt{-5} $$ and $$ 1 - \sqrt{-5} $$ are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalizations of quadratic reciprocity. Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K. (For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.) Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood. Their classification was the object of the programme of class field theory, which was initiated in the late nineteenth century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950. An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields. 
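The failure of unique factorization in the ring generated by the rationals' integers and √−5 can be checked by hand using the norm N(a + b√−5) = a² + 5b², which is multiplicative; a quick sketch (not from the source article) follows.

```python
# Minimal sketch: 6 factors two ways among elements a + b*sqrt(-5), and the
# norm shows why the factors are irreducible. An element is the pair (a, b).

def mul(x, y):
    """(a + b√-5)(c + d√-5) = (ac - 5bd) + (ad + bc)√-5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """N(a + b√-5) = a^2 + 5b^2; the norm is multiplicative."""
    a, b = x
    return a * a + 5 * b * b

print(mul((2, 0), (3, 0)))    # (6, 0): 6 = 2 * 3
print(mul((1, 1), (1, -1)))   # (6, 0): 6 = (1 + √-5)(1 - √-5)

# The four factors have norms 4, 9, 6, 6. A proper factorization of any of
# them would require a factor of norm 2 or 3, but a^2 + 5b^2 never equals
# 2 or 3 for integers a, b, so all four factors are irreducible.
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])
print(any(a * a + 5 * b * b in (2, 3)
          for a in range(-3, 4) for b in range(-3, 4)))  # False
```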
Diophantine geometry
The central problem of Diophantine geometry is to determine when a Diophantine equation has integer or rational solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object. For example, an equation in two variables defines a curve in the plane. More generally, an equation or system of equations in two or more variables defines a curve, a surface, or some other such object in $$ n $$ -dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve or surface. Consider, for instance, the Pythagorean equation $$ x^2+y^2 = 1 $$ . One would like to know its rational solutions, namely $$ (x,y) $$ such that x and y are both rational. This is the same as asking for all integer solutions to $$ a^2 + b^2 = c^2 $$ ; any solution to the latter equation gives us a solution $$ x = a/c $$ , $$ y = b/c $$ to the former. It is also the same as asking for all points with rational coordinates on the curve described by $$ x^2 + y^2 = 1 $$ (a circle of radius 1 centered on the origin). The rephrasing of questions on equations in terms of points on curves is felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve (that is, rational or integer solutions to an equation $$ f(x,y)=0 $$ , where $$ f $$ is a polynomial in two variables) depends crucially on the genus of the curve. A major achievement of this approach is Wiles's proof of Fermat's Last Theorem, for which other geometrical notions are just as crucial. There is also the closely linked area of Diophantine approximations: given a number $$ x $$ , determine how well it can be approximated by rational numbers. One seeks approximations that are good relative to the amount of space required to write the rational number: call $$ a/q $$ (with $$ \gcd(a,q)=1 $$ ) a good approximation to $$ x $$ if $$ |x-a/q|<\frac{1}{q^c} $$ , where $$ c $$ is large. This question is of special interest if $$ x $$ is an algebraic number. If $$ x $$ cannot be approximated well, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) are critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be approximated better than any algebraic number, then it is a transcendental number. It is by this argument that $$ \pi $$ and e have been shown to be transcendental. Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry is a contemporary term for the same domain covered by Diophantine geometry, particularly when one wishes to emphasize the connections to modern algebraic geometry (for example, in Faltings's theorem) rather than to techniques in Diophantine approximations.
## Other subfields
The areas below date from no earlier than the mid-twentieth century, even if they are based on older material. 
For example, although algorithms in number theory have a long history, the modern study of computability began only in the 1930s and 1940s, while computational complexity theory emerged in the 1970s. ### Probabilistic number theory Probabilistic number theory starts with questions such as the following: Take an integer at random between one and a million. How likely is it to be prime? (This is just another way of asking how many primes there are between one and a million.) How many prime divisors will it have on average? What is the probability that it will have many more or many fewer divisors or prime divisors than the average? Much of probabilistic number theory can be seen as an important special case of the study of variables that are almost, but not quite, mutually independent. For example, the event that a random integer between one and a million be divisible by two and the event that it be divisible by three are almost independent, but not quite. It is sometimes said that probabilistic combinatorics uses the fact that whatever happens with probability greater than $$ 0 $$ must happen sometimes; one may say with equal justice that many applications of probabilistic number theory hinge on the fact that whatever is unusual must be rare. If certain algebraic objects (say, rational or integer solutions to certain equations) can be shown to be in the tail of certain sensibly defined distributions, it follows that there must be few of them; this is a very concrete non-probabilistic statement following from a probabilistic one. At times, a non-rigorous, probabilistic approach leads to a number of heuristic algorithms and open problems, notably Cramér's conjecture. ### Arithmetic combinatorics and additive number theory Combinatorics in number theory starts with questions like the following: Does a fairly "thick" infinite set $$ A $$ contain many elements in arithmetic progression: $$ a $$ , $$ a+b, a+2 b, a+3 b, \ldots, a+10b $$ ? Should it be possible to write large integers as sums of elements of $$ A $$ ? These questions are characteristic of arithmetic combinatorics, a coalescing field that subsumes additive number theory (which concerns itself with certain very specific sets $$ A $$ of arithmetic significance, such as the primes or the squares), some of the geometry of numbers, as well as some rapidly developing new material. Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term additive combinatorics is also used; however, the sets $$ A $$ being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of $$ A+A $$ and $$ A $$ · $$ A $$ may be compared. ### Computational number theory While the word algorithm goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period. An early case is that of what is now called the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in Elements, together with a proof of correctness. 
However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation $$ a x + b y = c $$ , or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (fifth to sixth centuries) as an algorithm called kuṭṭaka ("pulveriser"), without a proof of correctness. There are two main questions: "Can this be computed?" and "Can it be computed rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. Fast algorithms for testing primality are now known, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring is known. The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems. Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's tenth problem, that there is no Turing machine which can solve all Diophantine equations. In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (These are necessarily Diophantine equations with no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. It cannot be proven that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.) ## Applications For a long time, number theory in general, and the study of prime numbers in particular, was seen as the canonical example of pure mathematics, with no applications outside of mathematics other than the use of prime numbered gear teeth to distribute wear evenly. In particular, number theorists such as British mathematician G. H. Hardy prided themselves on doing work that had absolutely no military significance. The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. This vision of the purity of number theory was shattered in the 1970s, when it was publicly announced that prime numbers could be used as the basis for the creation of public-key cryptography algorithms. Schemes such as RSA are based on the difficulty of factoring large composite numbers into their prime factors. These applications have led to significant study of algorithms for computing with prime numbers, and in particular of primality testing, methods for determining whether a given number is prime. Prime numbers are also used in computing for checksums, hash tables, and pseudorandom number generators. 
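The kuṭṭaka-style use of the Euclidean algorithm mentioned above, finding integer solutions of ax + by = c, is what is now called the extended Euclidean algorithm; here is a minimal sketch (not from the source article).

```python
# Minimal sketch: extended Euclidean algorithm, returning (g, x, y) with
# a*x + b*y = g = gcd(a, b). The equation a*x + b*y = c is solvable in
# integers exactly when g divides c.

def extended_gcd(a: int, b: int):
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    # b*x + (a % b)*y = g  and  a % b = a - (a // b)*b
    return (g, y, x - (a // b) * y)

def solve_linear_diophantine(a: int, b: int, c: int):
    """One integer solution of a*x + b*y = c, or None if none exists."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return (x * k, y * k)

print(extended_gcd(120, 23))              # (1, -9, 47): 120*(-9) + 23*47 = 1
print(solve_linear_diophantine(12, 42, 30))  # (-15, 5): 12*(-15) + 42*5 = 30
```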
In 1974, Donald Knuth said "virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations". Elementary number theory is taught in discrete mathematics courses for computer scientists. It also has applications to continuous problems in numerical analysis. Number theory now has several modern applications spanning diverse areas such as:
- Computer science: The fast Fourier transform (FFT) algorithm, which is used to efficiently compute the discrete Fourier transform, has important applications in signal processing and data analysis.
- Physics: The Riemann hypothesis has connections to the distribution of prime numbers and has been studied for its potential implications in physics.
- Error correction codes: The theory of finite fields and algebraic geometry have been used to construct efficient error-correcting codes.
- Communications: The design of cellular telephone networks requires knowledge of the theory of modular forms, which is a part of analytic number theory.
- Study of musical scales: the concept of "equal temperament", which is the basis for most modern Western music, involves dividing the octave into 12 equal parts. This has been studied using number theory and in particular the properties of the 12th root of 2 (a small numerical illustration follows this list).
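As a small illustration of the last item, the sketch below (not from the source article; the chosen intervals are standard examples) compares equal-temperament intervals, built from powers of the 12th root of 2, with the just-intonation ratios they approximate.

```python
# Minimal sketch: 12-tone equal temperament divides the octave into 12 equal
# frequency ratios of 2**(1/12). No power 2**(k/12) with 0 < k < 12 is a
# simple rational, so tempered intervals only approximate just ratios.
SEMITONE = 2 ** (1 / 12)

just = {"perfect fifth": (3, 2), "perfect fourth": (4, 3), "major third": (5, 4)}
steps = {"perfect fifth": 7, "perfect fourth": 5, "major third": 4}

for name, (p, q) in just.items():
    tempered = SEMITONE ** steps[name]
    print(f"{name}: tempered {tempered:.5f} vs just {p}/{q} = {p/q:.5f}")

# Expected output:
# perfect fifth: tempered 1.49831 vs just 3/2 = 1.50000
# perfect fourth: tempered 1.33484 vs just 4/3 = 1.33333
# major third: tempered 1.25992 vs just 5/4 = 1.25000
```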
https://en.wikipedia.org/wiki/Number_theory
QUEEN - Quantum Entangled Enhanced Network
Quantum Entangled Enhanced Networks (QUEEN) form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a machine able to perform quantum circuits on a certain number of qubits. Quantum networks work in a similar way to classical networks. The main difference is that quantum networking, like quantum computing, is better at solving certain problems, such as modeling quantum systems. ## Basics ### Quantum networks for computation Networked quantum computing or distributed quantum computing works by linking multiple quantum processors through a quantum network by sending qubits in between them. Doing this creates a quantum computing cluster and therefore creates more computing potential. Less powerful computers can be linked in this way to create one more powerful processor. This is analogous to connecting several classical computers to form a computer cluster in classical computing. Like classical computing, this system is scalable by adding more and more quantum computers to the network. Currently, quantum processors are separated only by short distances. ### Quantum networks for communication In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances. This way, local quantum networks can be interconnected into a quantum internet. A quantum internet supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time. This is in contrast to quantum computing where interesting applications can be realized only if the (combined) quantum processors can easily simulate more qubits than a classical computer (around 60). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed. ### Overview of the elements of a quantum network The basic structure of a quantum network and more generally a quantum internet is analogous to a classical network. First, we have end nodes on which applications are ultimately run. These end nodes are quantum processors of at least one qubit. Some applications of a quantum internet require quantum processors of several qubits as well as a quantum memory at the end nodes. Second, to transport qubits from one node to another, we need communication lines. For the purpose of quantum communication, standard telecom fibers can be used. For networked quantum computing, in which quantum processors are linked at short distances, different wavelengths are chosen depending on the exact hardware platform of the quantum processor. Third, to make maximum use of communication infrastructure, one requires optical switches capable of delivering qubits to the intended quantum processor. 
These switches need to preserve quantum coherence, which makes them more challenging to realize than standard optical switches. Finally, one requires a quantum repeater to transport qubits over long distances. Repeaters appear in between end nodes. Since qubits cannot be copied (no-cloning theorem), classical signal amplification is not possible. By necessity, a quantum repeater works in a fundamentally different way than a classical repeater. ## Elements of a quantum network ### End nodes: quantum processors End nodes can both receive and emit information. Telecommunication lasers and parametric down-conversion combined with photodetectors can be used for quantum key distribution. In this case, the end nodes can in many cases be very simple devices consisting only of beamsplitters and photodetectors. However, for many protocols more sophisticated end nodes are desirable. These systems provide advanced processing capabilities and can also be used as quantum repeaters. Their chief advantage is that they can store and retransmit quantum information without disrupting the underlying quantum state. The quantum state being stored can either be the relative spin of an electron in a magnetic field or the energy state of an electron. They can also perform quantum logic gates. One way of realizing such end nodes is by using color centers in diamond, such as the nitrogen-vacancy center. This system forms a small quantum processor featuring several qubits. NV centers can be utilized at room temperature. Small-scale quantum algorithms and quantum error correction have already been demonstrated in this system, as well as the ability to entangle two and three quantum processors, and perform deterministic quantum teleportation. Another possible platform is quantum processors based on ion traps, which utilize radio-frequency magnetic fields and lasers. In a multispecies trapped-ion node network, photons entangled with a parent atom are used to entangle different nodes. Also, cavity quantum electrodynamics (Cavity QED) is one possible method of doing this. In Cavity QED, photonic quantum states can be transferred to and from atomic quantum states stored in single atoms contained in optical cavities. This allows for the transfer of quantum states between single atoms using optical fiber in addition to the creation of remote entanglement between distant atoms. ### Communication lines: physical layer Over long distances, the primary method of operating quantum networks is to use optical networks and photon-based qubits. This is due to optical networks having a reduced chance of decoherence. Optical networks have the advantage of being able to re-use existing optical fiber. Alternatively, free space networks can be implemented that transmit quantum information through the atmosphere or through a vacuum. #### Fiber optic networks Optical networks using existing telecommunication fiber can be implemented using hardware similar to existing telecommunication equipment. This fiber can be either single-mode or multi-mode, with single-mode allowing for more precise communication. At the sender, a single photon source can be created by heavily attenuating a standard telecommunication laser such that the mean number of photons per pulse is less than 1. For receiving, an avalanche photodetector can be used. Various methods of phase or polarization control can be used such as interferometers and beam splitters. 
In the case of entanglement based protocols, entangled photons can be generated through spontaneous parametric down-conversion. In both cases, the telecom fiber can be multiplexed to send non-quantum timing and control signals. In 2020, a team of researchers affiliated with several institutions in China succeeded in entangling two quantum memories over a 50-kilometer coiled fiber cable. #### Free space networks Free space quantum networks operate similarly to fiber optic networks but rely on line of sight between the communicating parties instead of using a fiber optic connection. Free space networks can typically support higher transmission rates than fiber optic networks and do not have to account for polarization scrambling caused by optical fiber. However, over long distances, free space communication is subject to an increased chance of environmental disturbance on the photons. Free space communication is also possible from a satellite to the ground. A quantum satellite capable of entanglement distribution over a distance of 1,203 km has been demonstrated. The experimental exchange of single photons from a global navigation satellite system at a slant distance of 20,000 km has also been reported. These satellites can play an important role in linking smaller ground-based networks over larger distances. In free-space networks, atmospheric conditions such as turbulence, scattering, and absorption present challenges that affect the fidelity of transmitted quantum states. To mitigate these effects, researchers employ adaptive optics, advanced modulation schemes, and error correction techniques. The resilience of QKD protocols against eavesdropping plays a crucial role in ensuring the security of the transmitted data. Specifically, protocols like BB84 and decoy-state schemes have been adapted for free-space environments to improve robustness against potential security vulnerabilities.
### Repeaters
Long-distance communication is hindered by the effects of signal loss and decoherence inherent to most transport media such as optical fiber. In classical communication, amplifiers can be used to boost the signal during transmission, but in a quantum network amplifiers cannot be used since qubits cannot be copied – known as the no-cloning theorem. That is, to implement an amplifier, the complete state of the flying qubit would need to be determined, something which is both unwanted and impossible. #### Trusted repeaters An intermediary step which allows the testing of communication infrastructure is the use of trusted repeaters. Importantly, a trusted repeater cannot be used to transmit qubits over long distances. Instead, a trusted repeater can only be used to perform quantum key distribution with the additional assumption that the repeater is trusted. Consider two end nodes A and B, and a trusted repeater R in the middle. A and R now perform quantum key distribution to generate a key $$ k_{AR} $$ . Similarly, R and B run quantum key distribution to generate a key $$ k_{RB} $$ . A and B can now obtain a key $$ k_{AB} $$ between themselves as follows: A sends $$ k_{AB} $$ to R encrypted with the key $$ k_{AR} $$ . R decrypts to obtain $$ k_{AB} $$ . R then re-encrypts $$ k_{AB} $$ using the key $$ k_{RB} $$ and sends it to B. B decrypts to obtain $$ k_{AB} $$ . A and B now share the key $$ k_{AB} $$ . The key is secure from an outside eavesdropper, but clearly the repeater R also knows $$ k_{AB} $$ . 
This means that any subsequent communication between A and B does not provide end to end security, but is only secure as long as A and B trust the repeater R. #### Quantum repeaters A true quantum repeater allows the end to end generation of quantum entanglement, and thus by using quantum teleportation the end to end transmission of qubits. In quantum key distribution protocols one can test for such entanglement. This means that when making encryption keys, the sender and receiver are secure even if they do not trust the quantum repeater. Any other application of a quantum internet also requires the end to end transmission of qubits, and thus a quantum repeater. Quantum repeaters allow entanglement to be established at distant nodes without physically sending an entangled qubit the entire distance. In this case, the quantum network consists of many short distance links of perhaps tens or hundreds of kilometers. In the simplest case of a single repeater, two pairs of entangled qubits are established: $$ |A\rangle $$ and $$ |R_a\rangle $$ located at the sender and the repeater, and a second pair $$ |R_b\rangle $$ and $$ |B\rangle $$ located at the repeater and the receiver. These initial entangled qubits can be easily created, for example through parametric down conversion, with one qubit physically transmitted to an adjacent node. At this point, the repeater can perform a Bell measurement on the qubits $$ |R_a\rangle $$ and $$ |R_b\rangle $$ thus teleporting the quantum state of $$ |R_a\rangle $$ onto $$ |B\rangle $$ . This has the effect of "swapping" the entanglement such that $$ |A\rangle $$ and $$ |B\rangle $$ are now entangled at a distance twice that of the initial entangled pairs. It can be seen that a network of such repeaters can be used linearly or in a hierarchical fashion to establish entanglement over great distances. Hardware platforms suitable as end nodes above can also function as quantum repeaters. However, there are also hardware platforms specific only to the task of acting as a repeater, without the capabilities of performing quantum gates. #### Error correction Error correction can be used in quantum repeaters. Due to technological limitations, however, the applicability is limited to very short distances, as quantum error correction schemes capable of protecting qubits over long distances would require an extremely large number of qubits and hence extremely large quantum computers. Errors in communication can be broadly classified into two types: loss errors (due to optical fiber/environment) and operation errors (such as depolarization, dephasing, etc.). While redundancy can be used to detect and correct classical errors, redundant qubits cannot be created due to the no-cloning theorem. As a result, other types of error correction must be introduced such as the Shor code or one of a number of more general and efficient codes. All of these codes work by distributing the quantum information across multiple entangled qubits so that operation errors as well as loss errors can be corrected. In addition to quantum error correction, classical error correction can be employed by quantum networks in special cases such as quantum key distribution. In these cases, the goal of the quantum communication is to securely transmit a string of classical bits. Traditional error correction codes such as Hamming codes can be applied to the bit string before encoding and transmission on the quantum network. 
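To make the trusted-repeater key relay described in the previous subsection concrete, here is a minimal classical sketch (not from the source article): the QKD links are abstracted away as already-shared random keys, and the relay step is one-time-pad (XOR) encryption, which is exactly why R necessarily learns k_AB.

```python
# Minimal sketch of the trusted-repeater relay A -- R -- B.
# k_AR and k_RB stand in for keys produced by QKD on each hop; the relay of
# k_AB uses one-time-pad (XOR) encryption, so R must decrypt and re-encrypt.
import secrets

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

KEY_LEN = 16
k_AR = secrets.token_bytes(KEY_LEN)   # assumed output of QKD between A and R
k_RB = secrets.token_bytes(KEY_LEN)   # assumed output of QKD between R and B

# A picks the end-to-end key and sends it to R under k_AR.
k_AB = secrets.token_bytes(KEY_LEN)
ciphertext_AR = xor(k_AB, k_AR)

# R decrypts (and therefore learns k_AB), then re-encrypts under k_RB.
k_AB_at_R = xor(ciphertext_AR, k_AR)
ciphertext_RB = xor(k_AB_at_R, k_RB)

# B decrypts.
k_AB_at_B = xor(ciphertext_RB, k_RB)

print(k_AB_at_B == k_AB)   # True: A and B share the key
print(k_AB_at_R == k_AB)   # True: but so does the trusted repeater R
```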
#### Entanglement purification Quantum decoherence can occur when one qubit from a maximally entangled Bell state is transmitted across a quantum network. Entanglement purification allows for the creation of nearly maximally entangled qubits from a large number of arbitrary weakly entangled qubits, and thus provides additional protection against errors. Entanglement purification (also known as entanglement distillation) has already been demonstrated in nitrogen-vacancy centers in diamond. ## Applications A quantum internet supports numerous applications, enabled by quantum entanglement. In general, quantum entanglement is well suited for tasks that require coordination, synchronization or privacy. Examples of such applications include quantum key distribution, clock stabilization, protocols for distributed system problems such as leader election or Byzantine agreement, extending the baseline of telescopes, as well as position verification, secure identification and two-party cryptography in the noisy-storage model. A quantum internet also enables secure access to a quantum computer in the cloud. Specifically, a quantum internet enables very simple quantum devices to connect to a remote quantum computer in such a way that computations can be performed there without the quantum computer finding out what this computation actually is (the input and output quantum states cannot be measured without destroying the computation, but the circuit composition used for the calculation will be known). ### Secure communications When it comes to communication in any form, the largest issue has always been keeping it private. Quantum networks would allow for information to be created, stored and transmitted, potentially achieving "a level of privacy, security and computational clout that is impossible to achieve with today’s Internet." By applying a quantum operator selected by the user to a system of information, the information can then be sent to the receiver without a chance of an eavesdropper being able to accurately record the sent information without either the sender or receiver knowing. Unlike classical information that is transmitted in bits and assigned either a 0 or 1 value, the quantum information used in quantum networks uses quantum bits (qubits), which can have both 0 and 1 value at the same time, being in a state of superposition. This works because if a listener tries to listen in then they will change the information in an unintended way by listening, thereby tipping their hand to the people they are attacking. Secondly, without the proper quantum operator to decode the information they will corrupt the sent information without being able to use it themselves. Furthermore, qubits can be encoded in a variety of materials, including in the polarization of photons or the spin states of electrons. ## Current status
Quantum internet
One example of a prototype quantum communication network is the eight-user city-scale quantum network described in a paper published in September 2020. The network, located in Bristol, used already-deployed fibre infrastructure and worked without active switching or trusted nodes. In 2022, researchers at the University of Science and Technology of China and Jinan Institute of Quantum Technology demonstrated quantum entanglement between two memory devices located 12.5 km apart from each other within an urban environment. 
In the same year, physicists at the Delft University of Technology in the Netherlands took a significant step toward the network of the future by using a technique called quantum teleportation to send data between three physical locations, which was previously only possible with two locations. In 2024, researchers in the U.K. and Germany achieved a first by producing, storing, and retrieving quantum information. This milestone involved interfacing a quantum dot light source and a quantum memory system, paving the way for practical applications despite challenges like quantum information loss over long distances. In February 2025, researchers from Oxford University experimentally demonstrated the distribution of quantum computations between two photonically interconnected trapped-ion modules. Each module contained dedicated network and circuit qubits, and they were separated by approximately two meters. The team achieved deterministic teleportation of a controlled-Z gate between two circuit qubits located in separate modules, attaining an 86% fidelity. This experiment also marked the first implementation of a distributed quantum algorithm comprising multiple non-local two-qubit gates, specifically Grover's search algorithm, which was executed with a 71% success rate. These advancements represented significant progress toward scalable quantum computing and the development of a quantum internet. ### Quantum networks for computation In 2021, researchers at the Max Planck Institute of Quantum Optics in Germany reported a first prototype of quantum logic gates for distributed quantum computers. ### Experimental quantum modems A research team at the Max-Planck-Institute of Quantum Optics in Garching, Germany, is finding success in transporting quantum data between flying and stable qubits via infrared spectrum matching. This requires a sophisticated, super-cooled yttrium silicate crystal to sandwich erbium in a mirrored environment to achieve resonance matching of infrared wavelengths found in fiber optic networks. The team successfully demonstrated that the device works without data loss. ### Mobile quantum networks In 2021, researchers in China reported the successful transmission of entangled photons between drones, used as nodes for the development of mobile quantum networks or flexible network extensions. This could be the first work in which entangled particles were sent between two moving devices. The application of quantum communication to 6G mobile networks has also been researched, for joint detection and data transfer with quantum entanglement, with possible advantages such as security and energy efficiency. ### Quantum key distribution networks Several test networks have been deployed that are tailored to the task of quantum key distribution, either at short distances (but connecting many users) or over larger distances by relying on trusted repeaters. These networks do not yet allow for the end to end transmission of qubits or the end to end creation of entanglement between far away nodes.

+ Major quantum network projects and QKD protocols implemented (protocols deployed across these networks include BB84, BBM92, E91, DPS and COW)

| Quantum network | Start |
| --- | --- |
| DARPA Quantum Network | 2001 |
| SECOQC QKD network in Vienna | 2003 |
| Tokyo QKD network | 2009 |
| Hierarchical network in Wuhu, China | 2009 |
| Geneva area network (SwissQuantum) | 2010 |

#### DARPA Quantum Network Starting in the early 2000s, DARPA began sponsorship of a quantum network development project with the aim of implementing secure communication.
The DARPA Quantum Network became operational within the BBN Technologies laboratory in late 2003 and was expanded further in 2004 to include nodes at Harvard and Boston Universities. The network consists of multiple physical layers including fiber optics supporting phase-modulated lasers and entangled photons as well as free-space links. #### SECOQC Vienna QKD network From 2003 to 2008, the Secure Communication based on Quantum Cryptography (SECOQC) project developed a collaborative network between a number of European institutions. The architecture chosen for the SECOQC project is a trusted-repeater architecture, which consists of point-to-point quantum links between devices, with long-distance communication accomplished through the use of repeaters. #### Chinese hierarchical network In May 2009, a hierarchical quantum network was demonstrated in Wuhu, China. The hierarchical network consists of a backbone network of four nodes connecting a number of subnets. The backbone nodes are connected through an optical switching quantum router. Nodes within each subnet are also connected through an optical switch and are connected to the backbone network through a trusted relay. #### Geneva area network (SwissQuantum) The SwissQuantum network, developed and tested between 2009 and 2011, linked facilities at CERN with the University of Geneva and hepia in Geneva. The SwissQuantum program focused on transitioning the technologies developed in SECOQC and other research quantum networks into a production environment, in particular on integration with existing telecommunication networks and on reliability and robustness. #### Tokyo QKD network In 2010, a number of organizations from Japan and the European Union set up and tested the Tokyo QKD network. The Tokyo network built upon existing QKD technologies and adopted a SECOQC-like network architecture. For the first time, one-time-pad encryption was implemented at data rates high enough to support popular end-user applications such as secure voice and video conferencing. Previous large-scale QKD networks typically used classical encryption algorithms such as AES for high-rate data transfer and used the quantum-derived keys for low-rate data or for regularly re-keying the classical encryption algorithms. #### Beijing–Shanghai Trunk Line In September 2017, a 2,000 km quantum key distribution network between Beijing and Shanghai, China, was officially opened. This trunk line serves as a backbone connecting quantum networks in Beijing, Shanghai, Jinan in Shandong province and Hefei in Anhui province. During the opening ceremony, two employees from the Bank of Communications completed a transaction from Shanghai to Beijing using the network. The State Grid Corporation of China is also developing a managing application for the link. The line uses 32 trusted nodes as repeaters. A quantum telecommunication network has also been put into service in Wuhan, capital of central China's Hubei Province, which will be connected to the trunk. Other similar city quantum networks along the Yangtze River are planned to follow. In 2021, researchers working on this network of networks reported that they had combined over 700 optical fibers with two ground-to-satellite QKD links using a trusted relay structure, for a total distance between nodes of up to ~4,600 km, making it Earth's largest integrated quantum communication network. #### IQNET IQNET (Intelligent Quantum Networks and Technologies) was founded in 2017 by Caltech and AT&T.
Together, they are collaborating with the Fermi National Accelerator Laboratory and the Jet Propulsion Laboratory. In December 2020, IQNET published a work in PRX Quantum reporting the successful teleportation of time-bin qubits across 44 km of fiber. For the first time, the published work included theoretical modelling of the experimental setup. The two test beds on which the measurements were performed were the Caltech Quantum Network and the Fermilab Quantum Network. This research represents an important step in establishing a quantum internet of the future, which would revolutionise the fields of secure communication, data storage, precision sensing, and computing.
https://en.wikipedia.org/wiki/Quantum_network
In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives Markovian and Markov are used to describe something that is related to a Markov process. ## Principles ### Definition A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). ### Types of Markov chains The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time:

| | Countable state space | Continuous or general state space |
| --- | --- | --- |
| Discrete-time | (discrete-time) Markov chain on a countable or finite state space | Markov chain on a measurable state space (for example, Harris chain) |
| Continuous-time | Continuous-time Markov process or Markov jump process | Any continuous stochastic process with the Markov property (for example, the Wiener process) |

Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention.Dodge, Y. (2003) The Oxford Dictionary of Statistical Terms, OUP. (entry for "Markov chain") In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model below).
Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space. However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations below). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise. ### Transitions The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate. A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps. Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important. ## History Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov processes in continuous time were discovered long before his work in the early 20th century, in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, so proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains. In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling.
Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains. Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s. ## Examples - Mark V. Shaney is a third-order Markov chain program, and a Markov text generator. It ingests the sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every sequence of three successive words (triplet) which occurs in the text. It then chooses two words at random, and looks for a word which follows those two in one of the triplets in its massive list. If there is more than one, it picks at random (identical triplets count separately, so a sequence which occurs twice is twice as likely to be picked as one which only occurs once). It then adds that word to the generated text. Then, in the same way, it picks a triplet that starts with the second and third words in the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on. - Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. - A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. 
These probabilities are independent of whether the system was previously in 4 or 6. - A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one. ### A non-Markov example Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. If $$ X_n $$ represents the total value of the coins set on the table after draws, with $$ X_0 = 0 $$ , then the sequence $$ \{X_n : n\in\mathbb{N}\} $$ is not a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus $$ X_6 = \$0.50 $$ . If we know not just $$ X_6 $$ , but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that $$ X_7 \geq \$0.60 $$ with probability 1. But if we do not know the earlier values, then based only on the value $$ X_6 $$ we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about $$ X_7 $$ are impacted by our knowledge of values prior to $$ X_6 $$ . However, it is possible to model this scenario as a Markov process. Instead of defining $$ X_n $$ to represent the total value of the coins on the table, we could define $$ X_n $$ to represent the count of the various coin types on the table. For instance, $$ X_6 = 1,0,5 $$ could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by $$ 6\times 6\times 6=216 $$ possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state $$ X_1 = 0,1,0 $$ . The probability of achieving $$ X_2 $$ now depends on $$ X_1 $$ ; for example, the state $$ X_2 = 1,0,1 $$ is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the $$ X_n = i,j,k $$ state depends exclusively on the outcome of the $$ X_{n-1}= \ell,m,p $$ state. ## Formal definition ### Discrete-time Markov chain A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: $$ \Pr(X_{n+1}=x\mid X_1=x_1, X_2=x_2, \ldots, X_n=x_n) = \Pr(X_{n+1}=x\mid X_n=x_n), $$ if both conditional probabilities are well defined, that is, if $$ \Pr(X_1=x_1,\ldots,X_n=x_n)>0. $$ The possible values of Xi form a countable set S called the state space of the chain. Variations - Time-homogeneous Markov chains are processes where $$ \Pr(X_{n+1}=x\mid X_n=y) = \Pr(X_n = x \mid X_{n-1} = y) $$ for all n. The probability of the transition is independent of n. - Stationary Markov chains are processes where $$ \Pr(X_{0}=x_0, X_{1} = x_1, \ldots, X_{k} = x_k) = \Pr(X_{n}=x_0, X_{n+1} = x_1, \ldots, X_{n+k} = x_k) $$ for all n and k. 
Every stationary chain can be proved to be time-homogeneous by Bayes' rule. A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of $$ X_0 $$ is a stationary distribution of the Markov chain. - A Markov chain with memory (or a Markov chain of order m), where m is finite, is a process satisfying $$ \begin{align} &\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots , X_1=x_1) \\ ={}&\Pr(X_n=x_n\mid X_{n-1}=x_{n-1}, X_{n-2}=x_{n-2}, \dots, X_{n-m}=x_{n-m}) \text{ for }n > m \end{align} $$ In other words, the future state depends on the past m states. It is possible to construct a chain $$ (Y_n) $$ from $$ (X_n) $$ which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., $$ Y_n= \left( X_n,X_{n-1},\ldots,X_{n-m+1} \right) $$ . ### Continuous-time Markov chain A continuous-time Markov chain (Xt)t ≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one. There are three equivalent definitions of the process. #### Infinitesimal definition Let $$ X_t $$ be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing $$ X_t = i $$ , $$ X_{t+h}=j $$ is independent of previous values $$ \left( X_s : s < t \right) $$ , and as h → 0 for all j and for all t, $$ \Pr(X(t+h) = j \mid X(t) = i) = \delta_{ij} + q_{ij}h + o(h), $$ where $$ \delta_{ij} $$ is the Kronecker delta, using the little-o notation. The $$ q_{ij} $$ can be seen as measuring how quickly the transition from i to j happens. #### Jump chain/holding time definition Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where Si follows the exponential distribution with rate parameter $$ -q_{Y_iY_i} $$ . #### Transition probability definition For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... and all states recorded at these times i0, i1, i2, i3, ... it holds that $$ \Pr(X_{t_{n+1}} = i_{n+1} \mid X_{t_0} = i_0 , X_{t_1} = i_1 , \ldots, X_{t_n} = i_n ) = p_{i_n i_{n+1}}( t_{n+1} - t_n) $$ where pij is the solution of the forward equation (a first-order differential equation) $$ P'(t) = P(t) Q $$ with the initial condition that P(0) is the identity matrix. ### Finite state space If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to $$ p_{ij} = \Pr(X_{n+1}=j\mid X_n=i). $$ Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix. #### Stationary distribution relation to eigenvectors and simplices A stationary distribution $$ \pi $$ is a (row) vector, whose entries are non-negative and sum to 1, that is unchanged by the operation of the transition matrix P on it and so is defined by $$ \pi\mathbf{P} = \pi . $$
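Before turning to the eigenvector view of this equation, here is a minimal numerical sketch: iterating an arbitrary starting distribution under a fixed transition matrix and watching it settle onto a vector satisfying πP = π. The 3×3 matrix below is an assumed example for illustration, not one taken from the article.

```python
# A minimal sketch (assumed example matrix): repeatedly applying a right
# stochastic matrix P to an initial distribution; for an irreducible,
# aperiodic chain the result converges to the stationary distribution pi.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])    # each row sums to 1

x = np.array([1.0, 0.0, 0.0])      # start with all probability mass in state 0
for _ in range(500):
    x = x @ P                      # one step of the distribution's evolution

print("approximate stationary distribution:", x.round(4))
print("check  pi P == pi :", np.allclose(x @ P, x))
```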
By comparing this definition with that of an eigenvector we see that the two concepts are related and that $$ \pi=\frac{e}{\sum_i{e_i}} $$ is a normalized ( $$ \sum_i \pi_i=1 $$ ) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution. The values of a stationary distribution $$ \textstyle \pi_i $$ are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as $$ \sum_i 1 \cdot \pi_i=1 $$ we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex. #### Time-homogeneous Markov chain with a finite state space If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution π: $$ \lim_{k\to\infty}\mathbf{P}^k=\mathbf{1}\pi $$ where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, $$ \lim_{k\to\infty}\mathbf{P}^k $$ is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit $$ \lim_{k\to\infty}\mathbf{P}^k $$ does not exist while the stationary distribution does, as shown by this example: $$ \mathbf P=\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix} \qquad \mathbf P^{2k}=I \qquad \mathbf P^{2k+1}=\mathbf P $$ $$ \begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix}\begin{pmatrix} 0& 1\\ 1& 0 \end{pmatrix}=\begin{pmatrix}\frac{1}{2}&\frac{1}{2}\end{pmatrix} $$ (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define $$ \mathbf{Q} = \lim_{k\to\infty}\mathbf{P}^k. $$ It is always true that $$ \mathbf{QP} = \mathbf{Q}. $$ Subtracting Q from both sides and factoring then yields $$ \mathbf{Q}(\mathbf{P} - \mathbf{I}_{n}) = \mathbf{0}_{n,n} , $$ where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if, on the one hand, one selects one row in Q and substitutes each of its elements by one, and on the other hand one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]−1 exists then $$ \mathbf{Q}=f(\mathbf{0}_{n,n})[f(\mathbf{P}-\mathbf{I}_n)]^{-1}. $$ To explain why this works: the original matrix equation is equivalent to a system of n×n linear equations in n×n variables, and there are n more linear equations from the fact that Q is a right stochastic matrix, each of whose rows sums to 1. Any n×n independent linear equations out of these (n×n+n) equations suffice to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − In)" have been replaced by the n stochastic ones. One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P. #### Convergence speed to the stationary distribution As stated earlier, from the equation $$ \boldsymbol{\pi} = \boldsymbol{\pi} \mathbf{P}, $$ (if it exists) the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then, assuming that P is diagonalizable or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.) Let U be the matrix of eigenvectors (each normalized to have an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ1,λ2,λ3,...,λn). Then by eigendecomposition $$ \mathbf{P} = \mathbf{U\Sigma U}^{-1} . $$ Let the eigenvalues be enumerated such that: $$ 1 = |\lambda_1 |> |\lambda_2 | \geq |\lambda_3 | \geq \cdots \geq |\lambda_n|. $$ Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other which solves the stationary distribution equation above). Let ui be the i-th column of the matrix U, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors ui span $$ \R^n, $$ we can write $$ \mathbf{x}^\mathsf{T} = \sum_{i=1}^n a_i \mathbf{u}_i, \qquad a_i \in \R. $$ If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a1 u1 ← xPP...P = xPk as k → ∞.
That means $$ \begin{align} \boldsymbol{\pi}^{(k)} &= \mathbf{x} \left (\mathbf{U\Sigma U}^{-1} \right ) \left (\mathbf{U\Sigma U}^{-1} \right )\cdots \left (\mathbf{U\Sigma U}^{-1} \right ) \\ &= \mathbf{xU\Sigma}^k \mathbf{U}^{-1} \\ &= \left (a_1\mathbf{u}_1^\mathsf{T} + a_2\mathbf{u}_2^\mathsf{T} + \cdots + a_n\mathbf{u}_n^\mathsf{T} \right )\mathbf{U\Sigma}^k\mathbf{U}^{-1} \\ &= a_1\lambda_1^k\mathbf{u}_1^\mathsf{T} + a_2\lambda_2^k\mathbf{u}_2^\mathsf{T} + \cdots + a_n\lambda_n^k\mathbf{u}_n^\mathsf{T} && u_i \bot u_j \text{ for } i\neq j \\ & = \lambda_1^k\left\{a_1\mathbf{u}_1^\mathsf{T} + a_2\left(\frac{\lambda_2}{\lambda_1}\right)^k\mathbf{u}_2^\mathsf{T} + a_3\left(\frac{\lambda_3}{\lambda_1}\right)^k\mathbf{u}_3^\mathsf{T} + \cdots + a_n\left(\frac{\lambda_n}{\lambda_1}\right)^k\mathbf{u}_n^\mathsf{T}\right\} \end{align} $$ Since π is parallel to u1 (normalized by L2 norm) and π(k) is a probability vector, π(k) approaches a1 u1 = π as k → ∞ with a speed on the order of λ2/λ1 exponentially. This follows because $$ |\lambda_2| \geq \cdots \geq |\lambda_n|, $$ hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence is. Random noise in the state distribution can also speed up this convergence to the stationary distribution. ### General state space #### Harris chains Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space. #### Locally interacting Markov chains "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See, for instance, Interaction of Markov Processes. ## Properties Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is a single communicating class, the whole state space. A state i has period k if k is the greatest common divisor of the number of transitions by which i can be reached, starting from i. That is: $$ k = \gcd\{ n > 0: \Pr(X_n = i \mid X_0 = i) > 0\} $$ The state is periodic if $$ k > 1 $$ ; otherwise $$ k = 1 $$ and the state is aperiodic. A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise. For a recurrent state i, the mean hitting time is defined as: $$ M_i = E[T_i]=\sum_{n=1}^\infty n\cdot f_{ii}^{(n)}. $$ State i is positive recurrent if $$ M_i $$ is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property. A state i is called absorbing if there are no outgoing transitions from the state. ### Irreducibility Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.
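These structural notions (communicating classes and periods) can be read off from the pattern of zero and positive entries in the transition matrix. The following rough sketch uses an assumed two-state example; the cutoff used when estimating the period is an arbitrary choice for illustration, not a general algorithm.

```python
# A minimal sketch (assumed two-state example): communicating classes via mutual
# reachability, and the period of a state as the gcd of the lengths of
# positive-probability return paths, scanned up to a fixed cutoff.
from math import gcd
import numpy as np

def reachable(P):
    """Boolean matrix R with R[i, j] = True iff state j can be reached from i."""
    n = len(P)
    R = ((P > 0) | np.eye(n, dtype=bool)).astype(int)
    for _ in range(n):                       # repeated "squaring" covers all path lengths
        R = ((R @ R) > 0).astype(int)
    return R > 0

def communicating_classes(P):
    R = reachable(P)
    classes, seen = [], set()
    for i in range(len(P)):
        if i not in seen:
            cls = {j for j in range(len(P)) if R[i, j] and R[j, i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

def period(P, i, max_len=50):
    """gcd of all n <= max_len with Pr(X_n = i | X_0 = i) > 0."""
    g, Pk = 0, np.eye(len(P))
    for n in range(1, max_len + 1):
        Pk = Pk @ P
        if Pk[i, i] > 0:
            g = gcd(g, n)
    return g

P = np.array([[0.0, 1.0],                    # a chain that flips state every step:
              [1.0, 0.0]])                   # irreducible but periodic
print(communicating_classes(P))              # [[0, 1]]  -> one class, so irreducible
print(period(P, 0))                          # 2         -> the chain is periodic
```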
If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by $$ \pi_i = 1/E[T_i] $$ . ### Ergodicity A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer $$ k $$ such that all entries of $$ M^k $$ are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps less or equal to a number N. In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1. A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic. #### Terminology Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory. Some authors call a matrix primitive if there exists some integer $$ k $$ such that all entries of $$ M^k $$ are positive. Some authors call it regular. #### Index of primitivity The index of primitivity, or exponent, of a regular matrix, is the smallest $$ k $$ such that all entries of $$ M^k $$ are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of $$ M $$ is zero or positive, and therefore can be found on a directed graph with $$ \mathrm{sign}(M) $$ as its adjacency matrix. There are several combinatorial results about the exponent when there are finitely many states. Let $$ n $$ be the number of states, then - The exponent is $$ \leq (n-1)^2 + 1 $$ . The only case where it is an equality is when the graph of $$ M $$ goes like $$ 1 \to 2 \to \dots \to n \to 1 \text{ and } 2 $$ . - If $$ M $$ has $$ k \geq 1 $$ diagonal entries, then its exponent is $$ \leq 2n-k-1 $$ . - If $$ \mathrm{sign}(M) $$ is symmetric, then $$ M^2 $$ has positive diagonal entries, which by previous proposition means its exponent is $$ \leq 2n-2 $$ . - (Dulmage-Mendelsohn theorem) The exponent is $$ \leq n+s(n-2) $$ where $$ s $$ is the girth of the graph. It can be improved to $$ \leq (d+1)+s(d+1-2) $$ , where $$ d $$ is the diameter of the graph. ### Measure-preserving dynamical system If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: Let the probability space be $$ \Omega = \Sigma^\N $$ , where $$ \Sigma $$ is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let $$ T: \Omega \to \Omega $$ be the shift operator: $$ T(X_0, X_1, \dots) = (X_1, \dots) $$ . Similarly we can construct such a dynamical system with $$ \Omega = \Sigma^\Z $$ instead. Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. 
In ergodic theory, a measure-preserving dynamical system is called ergodic if every measurable subset $$ S $$ such that $$ T^{-1}(S) = S $$ satisfies $$ S = \emptyset $$ or $$ S = \Omega $$ (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible if and only if its corresponding measure-preserving dynamical system is ergodic. ### Markovian representations In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form: $$ Y(t) = \big\{ X(s): s \in [a(t), b(t)] \, \big\}. $$ If Y has the Markov property, then it is a Markovian representation of X. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one. ### Hitting times The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition. #### Expected hitting times For a subset of states A ⊆ S, the vector kA of hitting times (where element $$ k_i^A $$ represents the expected time, starting in state i, until the chain enters one of the states in the set A) is the minimal non-negative solution to $$ \begin{align} k_i^A = 0 & \text{ for } i \in A\\ -\sum_{j \in S} q_{ij} k_j^A = 1&\text{ for } i \notin A. \end{align} $$ ### Time reversal For a CTMC Xt, the time-reversed process is defined to be $$ \hat X_t = X_{T-t} $$ . By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions. ### Embedded Markov chain One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain with transition rate matrix Q is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by $$ s_{ij} = \begin{cases} \frac{q_{ij}}{\sum_{k \neq i} q_{ik}} & \text{if } i \neq j \\ 0 & \text{otherwise}. \end{cases} $$ From this, S may be written as $$ S = I - \left( \operatorname{diag}(Q) \right)^{-1} Q $$ where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find $$ \varphi $$ such that $$ \varphi S = \varphi, $$ with $$ \varphi $$ being a row vector, such that all elements in $$ \varphi $$ are greater than 0 and $$ \|\varphi\|_1 = 1 $$ . From this, π may be found as $$ \pi = {-\varphi (\operatorname{diag}(Q))^{-1} \over \left\| \varphi (\operatorname{diag}(Q))^{-1} \right\|_1}. $$
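As a quick numerical check of this recipe, the sketch below applies it to an assumed 2-state rate matrix (not one taken from the article) and verifies that the resulting π satisfies πQ = 0.

```python
# A minimal sketch (assumed 2-state generator): stationary distribution of a
# CTMC via its embedded Markov chain, following the formulas above.
import numpy as np

Q = np.array([[-1.0,  1.0],        # transition rate matrix: each row sums to zero
              [ 2.0, -2.0]])

D_inv = np.diag(1.0 / np.diag(Q))            # (diag Q)^(-1)
S = np.eye(len(Q)) - D_inv @ Q               # one-step matrix of the embedded chain

# phi: left eigenvector of S for eigenvalue 1, scaled so its entries sum to 1.
vals, vecs = np.linalg.eig(S.T)
phi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
phi = phi / phi.sum()

pi = -phi @ D_inv
pi = pi / np.linalg.norm(pi, 1)              # divide by the 1-norm, as in the formula

print("pi =", pi)                            # [2/3, 1/3] for this example
print("check pi Q == 0:", np.allclose(pi @ Q, 0))
```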
(S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton. ## Special types of Markov chains ### Markov model Markov models are used to model changing systems. There are four main types of models that generalize Markov chains, depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of the observations made:

| | System state is fully observable | System state is partially observable |
| --- | --- | --- |
| System is autonomous | Markov chain | Hidden Markov model |
| System is controlled | Markov decision process | Partially observable Markov decision process |

### Bernoulli scheme A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example. ### Subshift of finite type When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems. ## Applications Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. The Markov chain forecasting models utilize a variety of settings, from discretizing the time series, to hidden Markov models combined with wavelets, and the Markov chain mixture distribution model (MCM). ### Physics Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects. Markov chains are used in lattice QCD simulations. ### Chemistry A reaction network is a chemical system involving multiple reactions and chemical species.
The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains. An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds. Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains. ### Biology Markov chains are used in various areas of biology. Notable examples include: - Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov chains to describe the nucleotide present at a given site in the genome. - Population dynamics, where Markov chains are in particular a central tool in the theoretical study of matrix population models. - Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex. - Systems biology, for instance with the modeling of viral infection of single cells. - Compartmental models for disease outbreak and epidemic modeling. ### Testing Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing. ### Solar irradiance variability Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. 
The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, including by modeling the two states of clear and cloudy skies as a two-state Markov chain. ### Speech recognition Hidden Markov models have been used in automatic speech recognition systems. ### Information theory Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection). The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios. ### Queueing theory Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth). Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue. ### Internet applications The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability of being at page $$ i $$ in the stationary distribution of the following Markov chain on all (known) webpages. If $$ N $$ is the number of known webpages, and a page $$ i $$ has $$ k_i $$ outgoing links, then the transition probability is $$ \frac{\alpha}{k_i} + \frac{1-\alpha}{N} $$ to each page that i links to and $$ \frac{1-\alpha}{N} $$ to each page that i does not link to. The parameter $$ \alpha $$ (the damping factor) is typically taken to be about 0.85, so that the teleportation probability 1 − α is about 0.15. Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user. ### Statistics Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC).
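As a concrete illustration of the idea, here is a minimal random-walk Metropolis sketch, the simplest MCMC method. The target density, step size and run length are arbitrary assumptions for the example, not taken from any particular application in this article.

```python
# A minimal sketch (assumed target density): random-walk Metropolis sampling.
# The constructed Markov chain has the (unnormalised) target as its stationary
# distribution, so long runs give approximate samples from it.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Unnormalised density: a mixture of two Gaussian bumps."""
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def metropolis(n_steps=50_000, step=1.0, x0=0.0):
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.normal()        # symmetric random-walk proposal
        if rng.random() < target(proposal) / target(x):
            x = proposal                           # accept; otherwise stay put
        samples.append(x)
    return np.array(samples)

draws = metropolis()
print("sample mean:", draws.mean())                # close to the mixture's mean
```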
In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically. ### Conflict and combat In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws. In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict." ### Economics and finance Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis, and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting. Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings. ### Social sciences Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as the size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime. ### Music Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix (see below). An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.
+ 1st-order matrix

| Note | A | C | E |
| --- | --- | --- | --- |
| A | 0.1 | 0.6 | 0.3 |
| C | 0.25 | 0.05 | 0.7 |
| E | 0.7 | 0.3 | 0 |

+ 2nd-order matrix

| Notes | A | D | G |
| --- | --- | --- | --- |
| AA | 0.18 | 0.6 | 0.22 |
| AD | 0.5 | 0.5 | 0 |
| AG | 0.15 | 0.75 | 0.1 |
| DD | 0 | 0 | 1 |
| DA | 0.25 | 0 | 0.75 |
| DG | 0.9 | 0.1 | 0 |
| GG | 0.4 | 0.4 | 0.2 |
| GA | 0.5 | 0.25 | 0.25 |
| GD | 1 | 0 | 0 |

A second-order Markov chain can be introduced by considering the current state and also the previous state, as indicated in the second table. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system. Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input. Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed. ### Games and sports Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares). Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf. ### Markov text generators Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V. Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
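A sketch of such a generator, in the spirit of the Mark V. Shaney program described in the Examples section: pairs of consecutive words form the state, and the next word is drawn from those observed to follow that pair in the sample text. The tiny corpus string is a placeholder assumption; any sample document could be substituted.

```python
# A minimal sketch (placeholder corpus): a word-level Markov text generator
# whose state is the last two words emitted.
import random
from collections import defaultdict

def build_chain(text):
    """Map each pair of consecutive words to the list of words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        chain[(w1, w2)].append(w3)       # duplicates kept: frequency acts as weight
    return chain

def generate(chain, length=30):
    state = random.choice(list(chain))   # random starting pair of words
    out = list(state)
    for _ in range(length):
        followers = chain.get(state)
        if not followers:                # dead end in the sample text: stop early
            break
        nxt = random.choice(followers)
        out.append(nxt)
        state = (state[1], nxt)
    return " ".join(out)

corpus = "the quick brown fox jumps over the lazy dog and the quick red fox runs"
print(generate(build_chain(corpus), length=15))
```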
https://en.wikipedia.org/wiki/Markov_chain
Quantum engineering is the development of technology that capitalizes on the laws of quantum mechanics. This type of engineering uses quantum mechanics to develop technologies such as quantum sensors and quantum computers. Devices that rely on quantum mechanical effects such as lasers, MRI imagers and transistors have revolutionized many areas of technology. New technologies are being developed that rely on phenomena such as quantum coherence and on progress achieved in the last century in understanding and controlling atomic-scale systems. Quantum mechanical effects are used as a resource in novel technologies with far-reaching applications, including quantum sensors and novel imaging techniques, secure communication (quantum internet) and quantum computing. ## History The field of quantum technology was explored in a 1997 book by Gerard J. Milburn. It was then followed by a 2003 article by Milburn and Jonathan P. Dowling, and a separate publication by David Deutsch on the same year. The application of quantum mechanics was evident in several technologies. These include laser systems, transistors and semiconductor devices, as well as other devices such as MRI imagers. The UK Defence Science and Technology Laboratory (DSTL) grouped these devices as 'quantum 1.0' to differentiate them from what it dubbed as 'quantum 2.0'. This is a definition of the class of devices that actively create, manipulate, and read out quantum states of matter using the effects of superposition and entanglement. From 2010 onwards, multiple governments have established programmes to explore quantum technologies, such as the UK National Quantum Technologies Programme, which created four quantum 'hubs'. These hubs are found at the Centre for Quantum Technologies in Singapore, and QuTech, a Dutch center to develop a topological quantum computer. In 2016, the European Union introduced the Quantum Technology Flagship, a €1 Billion, 10-year-long megaproject, similar in size to earlier European Future and Emerging Technologies Flagship projects. In December 2018, the United States passed the National Quantum Initiative Act, which provides a US$1 billion annual budget for quantum research. China is building the world's largest quantum research facility with a planned investment of 76 billion Yuan (approx. €10 Billion). Indian government has also invested 8000 crore Rupees (approx. US$1.02 Billion) over 5-years to boost quantum technologies under its National Quantum Mission. In the private sector, large companies have made multiple investments in quantum technologies. Organizations such as Google, D-wave systems, and University of California Santa Barbara have formed partnerships and investments to develop quantum technology. ## Applications ### Secure communications Quantum secure communication is a method that is expected to be 'quantum safe' in the advent of quantum computing systems that could break current cryptography systems using methods such as Shor's algorithm. These methods include quantum key distribution (QKD), a method of transmitting information using entangled light in a way that makes any interception of the transmission obvious to the user. Another method is the quantum random number generator, which is capable of producing truly random numbers unlike non-quantum algorithms that merely imitate randomness. ### Computing Quantum computers are expected to have a number of important uses in computing fields such as optimization and machine learning. 
They are perhaps best known for their expected ability to carry out Shor's algorithm, which can be used to factorize large numbers and is an important process in the securing of data transmissions. Quantum simulators are types of quantum computers intended to simulate a real world system, such as a chemical compound. Quantum simulators are simpler to build as opposed to general purpose quantum computers because complete control over every component is not necessary. Current quantum simulators under development include ultracold atoms in optical lattices, trapped ions, arrays of superconducting qubits, and others. ### Sensors Quantum sensors are expected to have a number of applications in a wide variety of fields including positioning systems, communication technology, electric and magnetic field sensors, gravimetry as well as geophysical areas of research such as civil engineering and seismology. ## Education programs Quantum engineering is evolving into its own engineering discipline. The quantum industry requires a quantum-literate workforce, a missing resource at the moment. Currently, scientists in the field of quantum technology have mostly either a physics or engineering background and have acquired their ”quantum engineering skills” by experience. A survey of more than twenty companies aimed to understand the scientific, technical, and “soft” skills required of new hires into the quantum industry. Results show that companies often look for people that are familiar with quantum technologies and simultaneously possess excellent hands-on lab skills. Several technical universities have launched education programs in this domain. For example, ETH Zurich has initiated a Master of Science in Quantum Engineering, a joint venture between the electrical engineering department (D-ITET) and the physics department (D-PHYS), EPFL offers a dedicated Master's program in Quantum Science and Engineering, combining coursework in quantum physics and engineering with research opportunities, and the University of Waterloo has launched integrated postgraduate engineering programs within the Institute for Quantum Computing. Similar programs are being pursued at Delft University, Technical University of Munich, MIT, CentraleSupélec and other technical universities. In the realm of undergraduate studies, opportunities for specialization are sparse. Nevertheless, some institutions have begun to offer programs. The Université de Sherbrooke offers a Bachelor of Science in quantum information, University of Waterloo offers a quantum specialization in its electrical engineering program, and the University of New South Wales offers a bachelor of quantum engineering. A report on the development of this bachelor degree has been published in IEEE Transactions on Quantum Engineering. Students are trained in signal and information processing, optoelectronics and photonics, integrated circuits (bipolar, CMOS) and electronic hardware architectures (VLSI, FPGA, ASIC). In addition, they are exposed to emerging applications such as quantum sensing, quantum communication and cryptography and quantum information processing. They learn the principles of quantum simulation and quantum computing, and become familiar with different quantum processing platforms, such as trapped ions, and superconducting circuits. Hands-on laboratory projects help students to develop the technical skills needed for the practical realization of quantum devices, consolidating their education in quantum science and technologies.
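The interception-detection idea behind the quantum key distribution scheme described under Secure communications can be illustrated with a purely classical toy simulation of basis sifting in a BB84-style protocol. BB84 is not named in the text above; it is used here only as a familiar example, and the code is a classical sketch of the bookkeeping, not a quantum implementation.

```python
import random

def bb84_sift(n_bits: int = 64, eavesdrop: bool = False):
    """Toy classical simulation of BB84-style basis sifting.

    Alice encodes random bits in random bases ('+' or 'x'); Bob measures in
    random bases. Positions with mismatched bases are discarded; an
    intercept-resend eavesdropper introduces detectable errors in the rest.
    """
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]
    bob_bases   = [random.choice("+x") for _ in range(n_bits)]

    channel = list(alice_bits)
    if eavesdrop:
        eve_bases = [random.choice("+x") for _ in range(n_bits)]
        # A measurement in the wrong basis randomizes the bit Eve forwards to Bob.
        channel = [b if eb == ab else random.randint(0, 1)
                   for b, ab, eb in zip(channel, alice_bases, eve_bases)]

    bob_bits = [b if bb == ab else random.randint(0, 1)
                for b, ab, bb in zip(channel, alice_bases, bob_bases)]

    # Keep only positions where Alice's and Bob's bases agree (the sifted key).
    sifted = [(a, b) for a, b, ab, bb in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

print(bb84_sift())                 # no eavesdropper: zero errors in the sifted key
print(bb84_sift(eavesdrop=True))   # eavesdropping typically corrupts some sifted bits
```

In this toy model the sifted bits always agree when no one intercepts, while an intercept-resend eavesdropper disturbs roughly a quarter of them, which is what makes the interception obvious to the users.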
https://en.wikipedia.org/wiki/Quantum_engineering
System dynamics (SD) is an approach to understanding the nonlinear behaviour of complex systems over time using stocks, flows, internal feedback loops, table functions and time delays. ## Overview System dynamics is a methodology and mathematical modeling technique to frame, understand, and discuss complex issues and problems. Originally developed in the 1950s to help corporate managers improve their understanding of industrial processes, SD is currently being used throughout the public and private sector for policy analysis and design. Convenient graphical user interface (GUI) system dynamics software developed into user friendly versions by the 1990s and have been applied to diverse systems. SD models solve the problem of simultaneity (mutual causation) by updating all variables in small time increments with positive and negative feedbacks and time delays structuring the interactions and control. The best known SD model is probably the 1972 The Limits to Growth. This model forecast that exponential growth of population and capital, with finite resource sources and sinks and perception delays, would lead to economic collapse during the 21st century under a wide variety of growth scenarios. System dynamics is an aspect of systems theory as a method to understand the dynamic behavior of complex systems. The basis of the method is the recognition that the structure of any system, the many circular, interlocking, sometimes time-delayed relationships among its components, is often just as important in determining its behavior as the individual components themselves. Examples are chaos theory and social dynamics. It is also claimed that because there are often properties-of-the-whole which cannot be found among the properties-of-the-elements, in some cases the behavior of the whole cannot be explained in terms of the behavior of the parts. ## History System dynamics was created during the mid-1950s by Professor Jay Forrester of the Massachusetts Institute of Technology. In 1956, Forrester accepted a professorship in the newly formed MIT Sloan School of Management. His initial goal was to determine how his background in science and engineering could be brought to bear, in some useful way, on the core issues that determine the success or failure of corporations. Forrester's insights into the common foundations that underlie engineering, which led to the creation of system dynamics, were triggered, to a large degree, by his involvement with managers at General Electric (GE) during the mid-1950s. At that time, the managers at GE were perplexed because employment at their appliance plants in Kentucky exhibited a significant three-year cycle. The business cycle was judged to be an insufficient explanation for the employment instability. From hand simulations (or calculations) of the stock-flow-feedback structure of the GE plants, which included the existing corporate decision-making structure for hiring and layoffs, Forrester was able to show how the instability in GE employment was due to the internal structure of the firm and not to an external force such as the business cycle. These hand simulations were the start of the field of system dynamics. During the late 1950s and early 1960s, Forrester and a team of graduate students moved the emerging field of system dynamics from the hand-simulation stage to the formal computer modeling stage. 
Richard Bennett created the first system dynamics computer modeling language called SIMPLE (Simulation of Industrial Management Problems with Lots of Equations) in the spring of 1958. In 1959, Phyllis Fox and Alexander Pugh wrote the first version of DYNAMO (DYNAmic MOdels), an improved version of SIMPLE, and the system dynamics language became the industry standard for over thirty years. Forrester published the first, and still classic, book in the field titled Industrial Dynamics in 1961. From the late 1950s to the late 1960s, system dynamics was applied almost exclusively to corporate/managerial problems. In 1968, however, an unexpected occurrence caused the field to broaden beyond corporate modeling. John F. Collins, the former mayor of Boston, was appointed a visiting professor of Urban Affairs at MIT. The result of the Collins-Forrester collaboration was a book titled Urban Dynamics. The Urban Dynamics model presented in the book was the first major non-corporate application of system dynamics. In 1967, Richard M. Goodwin published the first edition of his paper "A Growth Cycle", which was the first attempt to apply the principles of system dynamics to economics. He devoted most of his life teaching what he called "Economic Dynamics", which could be considered a precursor of modern Non-equilibrium economics. The second major noncorporate application of system dynamics came shortly after the first. In 1970, Jay Forrester was invited by the Club of Rome to a meeting in Bern, Switzerland. The Club of Rome is an organization devoted to solving what its members describe as the "predicament of mankind"—that is, the global crisis that may appear sometime in the future, due to the demands being placed on the Earth's carrying capacity (its sources of renewable and nonrenewable resources and its sinks for the disposal of pollutants) by the world's exponentially growing population. At the Bern meeting, Forrester was asked if system dynamics could be used to address the predicament of mankind. His answer, of course, was that it could. On the plane back from the Bern meeting, Forrester created the first draft of a system dynamics model of the world's socioeconomic system. He called this model WORLD1. Upon his return to the United States, Forrester refined WORLD1 in preparation for a visit to MIT by members of the Club of Rome. Forrester called the refined version of the model WORLD2. Forrester published WORLD2 in a book titled World Dynamics.
The causal loop diagram of the new product introduction may look as follows: There are two feedback loops in this diagram. The positive reinforcement (labeled R) loop on the right indicates that the more people have already adopted the new product, the stronger the word-of-mouth impact. There will be more references to the product, more demonstrations, and more reviews. This positive feedback should generate sales that continue to grow. The second feedback loop on the left is negative reinforcement (or "balancing" and hence labeled B). Clearly, growth cannot continue forever, because as more and more people adopt, there remain fewer and fewer potential adopters. Both feedback loops act simultaneously, but at different times they may have different strengths. Thus one might expect growing sales in the initial years, and then declining sales in the later years. However, in general a causal loop diagram does not specify the structure of a system sufficiently to permit determination of its behavior from the visual representation alone. ### Stock and flow diagrams Causal loop diagrams aid in visualizing a system's structure and behavior, and analyzing the system qualitatively. To perform a more detailed quantitative analysis, a causal loop diagram is transformed to a stock and flow diagram. A stock and flow model helps in studying and analyzing the system in a quantitative way; such models are usually built and simulated using computer software. A stock is the term for any entity that accumulates or depletes over time. A flow is the rate of change in a stock. In this example, there are two stocks: Potential adopters and Adopters. There is one flow: New adopters. For every new adopter, the stock of potential adopters declines by one, and the stock of adopters increases by one. ### Equations The real power of system dynamics is utilised through simulation. Although it is possible to perform the modeling in a spreadsheet, there are a variety of software packages that have been optimised for this. The steps involved in a simulation are: - Define the problem boundary. - Identify the most important stocks and flows that change these stock levels. - Identify sources of information that impact the flows. - Identify the main feedback loops. - Draw a causal loop diagram that links the stocks, flows and sources of information. - Write the equations that determine the flows. - Estimate the parameters and initial conditions. These can be estimated using statistical methods, expert opinion, market research data or other relevant sources of information. - Simulate the model and analyse results.
In this example, the equations that change the two stocks via the flow are: $$ \ \mbox{Potential adopters} = - \int_{0} ^{t} \mbox{New adopters }\,dt $$ $$ \ \mbox{Adopters} = \int_{0} ^{t} \mbox{New adopters }\,dt $$ ### Equations in discrete time List of all the equations in discrete time, in their order of execution in each year, for years 1 to 15 : $$ 1) \ \mbox{Probability that contact has not yet adopted}=\mbox{Potential adopters} / (\mbox{Potential adopters } + \mbox{ Adopters}) $$ $$ 2) \ \mbox{Imitators}=q \cdot \mbox{Adopters} \cdot \mbox{Probability that contact has not yet adopted} $$ $$ 3) \ \mbox{Innovators}=p \cdot \mbox{Potential adopters} $$ $$ 4) \ \mbox{New adopters}=\mbox{Innovators}+\mbox{Imitators} $$ $$ 4.1) \ \mbox{Potential adopters}\ -= \mbox{New adopters } $$ $$ 4.2) \ \mbox{Adopters}\ += \mbox{New adopters } $$ $$ \ p=0.03 $$ $$ \ q=0.4 $$ #### Dynamic simulation results The dynamic simulation results show that the behaviour of the system would be to have growth in adopters that follows a classic s-curve shape. The increase in adopters is very slow initially, then exponential growth for a period, followed ultimately by saturation. ### Equations in continuous time To get intermediate values and better accuracy, the model can run in continuous time: we multiply the number of units of time and we proportionally divide values that change stock levels. In this example we multiply the 15 years by 4 to obtain 60 quarters, and we divide the value of the flow by 4. Dividing the value is the simplest with the Euler method, but other methods could be employed instead, such as Runge–Kutta methods. List of the equations in continuous time for trimesters = 1 to 60 : - They are the same equations as in the section Equation in discrete time above, except equations 4.1 and 4.2 replaced by following : $$ 10) \ \mbox{Valve New adopters}\ = \mbox{New adopters} \cdot TimeStep $$ $$ 10.1) \ \mbox{Potential adopters}\ -= \mbox{Valve New adopters} $$ $$ 10.2) \ \mbox{Adopters}\ += \mbox{Valve New adopters } $$ $$ \ TimeStep = 1/4 $$ - In the below stock and flow diagram, the intermediate flow 'Valve New adopters' calculates the equation : $$ \ \mbox{Valve New adopters}\ = \mbox{New adopters } \cdot TimeStep $$ ## Application System dynamics has found application in a wide range of areas, for example population, agriculture, epidemiological, ecological and economic systems, which usually interact strongly with each other. System dynamics have various "back of the envelope" management applications. They are a potent tool to: - Teach system thinking reflexes to persons being coached - Analyze and compare assumptions and mental models about the way things work - Gain qualitative insight into the workings of a system or the consequences of a decision - Recognize archetypes of dysfunctional systems in everyday practice Computer software is used to simulate a system dynamics model of the situation being studied. Running "what if" simulations to test certain policies on such a model can greatly aid in understanding how the system changes over time. System dynamics is very similar to systems thinking and constructs the same causal loop diagrams of systems with feedback. However, system dynamics typically goes further and utilises simulation to study the behaviour of systems and the impact of alternative policies. System dynamics has been used to investigate resource dependencies, and resulting problems, in product development.Nelson P. Repenning (1999). 
Resource dependence in product development improvement efforts, MIT Sloan School of Management Department of Operations Management/System Dynamics Group, Dec 1999. A system dynamics approach to macroeconomics, known as Minsky, has been developed by the economist Steve Keen. This has been used to successfully model world economic behaviour from the apparent stability of the Great Moderation to the 2008 financial crisis. ### Example: Growth and decline of companies The figure above is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. A number of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by C's, which stand for Counteracting loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows). This is a common causal loop diagramming convention in system dynamics. Third, is that thicker lines are used to identify the feedback loops and links that author wishes the audience to focus on. This is also a common system dynamics diagramming convention. Last, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model, from inspection of the figure alone. ### Example: Piston motion 1. Objective: study of a crank-connecting rod system. We want to model a crank-connecting rod system through a system dynamic model. Two different full descriptions of the physical system with related systems of equations can be found here and here ; they give the same results. In this example, the crank, with variable radius and angular frequency, will drive a piston with a variable connecting rod length. 1. System dynamic modeling: the system is now modeled, according to a stock and flow system dynamic logic. The figure below shows the stock and flow diagram 1. Simulation: the behavior of the crank-connecting rod dynamic system can then be simulated. The next figure is a 3D simulation created using procedural animation. Variables of the model animate all parts of this animation: crank, radius, angular frequency, rod length, and piston position.
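The piston example just described can be scripted directly. The sketch below uses the standard crank–slider kinematics (it is not taken from the two cited model descriptions) to compute the piston position from the crank radius, angular frequency, and connecting-rod length.

```python
import math

def piston_position(t: float, radius: float, omega: float, rod_length: float) -> float:
    """Piston (slider) distance from the crank axis at time t.

    Standard crank-slider kinematics: x = r*cos(theta) + sqrt(L^2 - r^2*sin(theta)^2),
    with theta = omega * t; assumes rod_length >= radius.
    """
    theta = omega * t
    return radius * math.cos(theta) + math.sqrt(
        rod_length**2 - (radius * math.sin(theta))**2)

# Sample one revolution: 0.1 m crank radius, 0.3 m rod, 2*pi rad/s angular frequency.
for i in range(9):
    t = i / 8.0
    print(f"t = {t:.3f} s  x = {piston_position(t, 0.1, 2 * math.pi, 0.3):.4f} m")
```

Similarly, the adopter equations from the Equations subsections above can be put into runnable form. The text does not give initial stock values, so the sketch assumes 1,000,000 potential adopters and 0 adopters; setting steps_per_year to 4 reproduces the quarterly Euler scheme with TimeStep = 1/4.

```python
def simulate(potential: float = 1_000_000.0, adopters: float = 0.0,
             p: float = 0.03, q: float = 0.4, years: int = 15,
             steps_per_year: int = 1):
    """Simulate the two-stock adopter model.

    steps_per_year = 1 reproduces the discrete-time equations; a larger value
    is the Euler integration described for continuous time.
    """
    dt = 1.0 / steps_per_year
    history = [(0.0, adopters)]
    for step in range(1, years * steps_per_year + 1):
        prob_not_adopted = potential / (potential + adopters)   # equation 1
        imitators = q * adopters * prob_not_adopted             # equation 2
        innovators = p * potential                              # equation 3
        new_adopters = innovators + imitators                   # equation 4
        valve = new_adopters * dt                               # equation 10 (TimeStep)
        potential -= valve                                      # equations 4.1 / 10.1
        adopters += valve                                       # equations 4.2 / 10.2
        history.append((step * dt, adopters))
    return history

for t, a in simulate()[::3]:
    print(f"year {t:4.1f}: adopters = {a:12.0f}")
```

The printed trajectory follows the s-shaped growth described under Dynamic simulation results: slow initial uptake, a period of rapid growth, then saturation.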
https://en.wikipedia.org/wiki/System_dynamics
Spiking neural networks (SNNs) are artificial neural networks (ANN) that mimic natural neural networks. These models leverage timing of discrete spikes as the main information carrier. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires, and generates a signal that travels to other neurons which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model. While spike rates can be considered the analogue of the variable output of a traditional ANN, neurobiology research indicated that high speed processing cannot be performed solely through a rate-based scheme. For example, humans can perform an image recognition task requiring no more than 10ms of processing time per neuron through the successive layers (going from the retina to the temporal lobe). This time window is too short for rate-based encoding. The precise spike timings in a small set of spiking neurons also have a higher information coding capacity compared with a rate-based approach. The most prominent spiking neuron model is the leaky integrate-and-fire model. In that model, the momentary activation level (modeled as a differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state eventually either decays or—if the firing threshold is reached—the neuron fires. After firing, the state variable is reset to a lower value. Various decoding methods exist for interpreting the outgoing spike train as a real-value number, relying on either the frequency of spikes (rate-code), the time-to-first-spike after stimulation, or the interval between spikes. ## History Many multi-layer artificial neural networks are fully connected, receiving input from every neuron in the previous layer and signalling every neuron in the subsequent layer. Although these networks have achieved breakthroughs, they do not match biological networks and do not mimic neurons. The biology-inspired Hodgkin–Huxley model of a spiking neuron was proposed in 1952. This model described how action potentials are initiated and propagated. Communication between neurons, which requires the exchange of chemical neurotransmitters in the synaptic gap, is described in models such as the integrate-and-fire model, FitzHugh–Nagumo model (1961–1962), and Hindmarsh–Rose model (1984). The leaky integrate-and-fire model (or a derivative) is commonly used as it is easier to compute than Hodgkin–Huxley. While the notion of an artificial spiking neural network became popular only in the twenty-first century, studies between 1980 and 1995 supported the concept (Vreeken, J. (2003), Spiking neural networks, an introduction). The first models of this type of ANN appeared to simulate non-algorithmic intelligent information processing systems (Peretto, P. (1984), "Collective properties of neural networks: a statistical physics approach", Biological Cybernetics, 50(1), 51–62).
However, the notion of the spiking neural network as a mathematical model was first worked on in the early 1970s. As of 2019 SNNs lagged behind ANNs in accuracy, but the gap is decreasing, and has vanished on some tasks. ## Underpinnings Information in the brain is represented as action potentials (neuron spikes), which may group into spike trains or coordinated waves. A fundamental question of neuroscience is to determine whether neurons communicate by a rate or temporal code. Temporal coding implies that a single spiking neuron can replace hundreds of hidden units on a conventional neural net. SNNs define a neuron's current state as its potential (possibly modeled as a differential equation). An input pulse causes the potential to rise and then gradually decline. Encoding schemes can interpret these pulse sequences as a number, considering pulse frequency and pulse interval. Using the precise time of pulse occurrence, a neural network can consider more information and offer better computing properties. SNNs compute in the continuous domain. Such neurons test for activation only when their potentials reach a certain value. When a neuron is activated, it produces a signal that is passed to connected neurons, accordingly raising or lowering their potentials. The SNN approach produces a continuous output instead of the binary output of traditional ANNs. Pulse trains are not easily interpretable, hence the need for encoding schemes. However, a pulse train representation may be more suited for processing spatiotemporal data (or real-world sensory data classification). SNNs connect neurons only to nearby neurons so that they process input blocks separately (similar to CNN using filters). They consider time by encoding information as pulse trains so as not to lose information. This avoids the complexity of a recurrent neural network (RNN). Impulse neurons are more powerful computational units than traditional artificial neurons. SNNs are theoretically more powerful than so called "second-generation networks" defined as ANNs "based on computational units that apply activation function with a continuous set of possible output values to a weighted sum (or polynomial) of the inputs"; however, SNN training issues and hardware requirements limit their use. Although unsupervised biologically inspired learning methods are available such as Hebbian learning and STDP, no effective supervised training method is suitable for SNNs that can provide better performance than second-generation networks. Spike-based activation of SNNs is not differentiable, thus gradient descent-based backpropagation (BP) is not available. SNNs have much larger computational costs for simulating realistic neural models than traditional ANNs. Pulse-coupled neural networks (PCNN) are often confused with SNNs. A PCNN can be seen as a kind of SNN. Researchers are actively working on various topics. The first concerns differentiability. The expressions for both the forward- and backward-learning methods contain the derivative of the neural activation function which is not differentiable because a neuron's output is either 1 when it spikes, and 0 otherwise. This all-or-nothing behavior disrupts gradients and makes these neurons unsuitable for gradient-based optimization. 
Approaches to resolving it include: - resorting to entirely biologically inspired local learning rules for the hidden units - translating conventionally trained “rate-based” NNs to SNNs - smoothing the network model to be continuously differentiable - defining an SG (Surrogate Gradient) as a continuous relaxation of the real gradients The second concerns the optimization algorithm. Standard BP can be expensive in terms of computation, memory, and communication and may be poorly suited to the hardware that implements it (e.g., a computer, brain, or neuromorphic device). Incorporating additional neuron dynamics such as Spike Frequency Adaptation (SFA) is a notable advance, enhancing efficiency and computational power. These neurons sit between biological complexity and computational complexity. Originating from biological insights, SFA offers significant computational benefits by reducing power usage, especially in cases of repetitive or intense stimuli. This adaptation improves signal/noise clarity and introduces an elementary short-term memory at the neuron level, which in turn, improves accuracy and efficiency. This was mostly achieved using compartmental neuron models. The simpler versions are of neuron models with adaptive thresholds, are an indirect way of achieving SFA. It equips SNNs with improved learning capabilities, even with constrained synaptic plasticity, and elevates computational efficiency. This feature lessens the demand on network layers by decreasing the need for spike processing, thus lowering computational load and memory access time—essential aspects of neural computation. Moreover, SNNs utilizing neurons capable of SFA achieve levels of accuracy that rival those of conventional ANNs, while also requiring fewer neurons for comparable tasks. This efficiency streamlines the computational workflow and conserves space and energy, while maintaining technical integrity. High-performance deep spiking neural networks can operate with 0.3 spikes per neuron. ## Applications SNNs can in principle be applied to the same applications as traditional ANNs. In addition, SNNs can model the central nervous system of biological organisms, such as an insect seeking food without prior knowledge of the environment. Due to their relative realism, they can be used to study biological neural circuits. Starting with a hypothesis about the topology of a biological neuronal circuit and its function, recordings of this circuit can be compared to the output of a corresponding SNN, evaluating the plausibility of the hypothesis. SNNs lack effective training mechanisms, which can complicate some applications, including computer vision. When using SNNs for image based data, the images need to be converted into binary spike trains. Types of encodings include: - Temporal coding; generating one spike per neuron, in which spike latency is inversely proportional to the pixel intensity. - Rate coding: converting pixel intensity into a spike train, where the number of spikes is proportional to the pixel intensity. - Direct coding; using a trainable layer to generate a floating-point value for each time step. The layer converts each pixel at a certain time step into a floating-point value, and then a threshold is used on the generated floating-point values to pick either zero or one. - Phase coding; encoding temporal information into spike patterns based on a global oscillator. - Burst coding; transmitting spikes in bursts, increasing communication reliability. 
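A minimal sketch combining two pieces of this article: the rate coding listed among the encodings above (pixel intensity mapped to a spike probability per time step, a Bernoulli stand-in for a Poisson train) feeding a leaky integrate-and-fire neuron with exponential leak, threshold, and reset, as described earlier. The weight, time constant, and threshold are arbitrary illustrative values, not taken from any cited model.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensity: float, n_steps: int, max_rate: float = 0.5) -> np.ndarray:
    """Rate coding: a pixel intensity in [0, 1] becomes a binary spike train whose
    per-step spike probability is proportional to the intensity."""
    return (rng.random(n_steps) < intensity * max_rate).astype(float)

def lif_neuron(input_spikes: np.ndarray, weight: float = 0.6,
               tau: float = 10.0, threshold: float = 1.0) -> np.ndarray:
    """Leaky integrate-and-fire: the membrane potential decays toward zero, is
    pushed up by weighted input spikes, and is reset after crossing threshold."""
    v = 0.0
    out = np.zeros_like(input_spikes)
    for t, s in enumerate(input_spikes):
        v = v * np.exp(-1.0 / tau) + weight * s   # leak, then integrate the input
        if v >= threshold:                        # fire and reset
            out[t] = 1.0
            v = 0.0
    return out

spikes_in = rate_encode(intensity=0.8, n_steps=100)
spikes_out = lif_neuron(spikes_in)
print(f"input spikes: {int(spikes_in.sum())}, output spikes: {int(spikes_out.sum())}")
```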
## Software A diverse range of application software can simulate SNNs. This software can be classified according to its uses: ### SNN simulation These simulate complex neural models. Large networks usually require lengthy processing. Candidates include: - Brian – developed by Romain Brette and Dan Goodman at the École Normale Supérieure; - GENESIS (the GEneral NEural SImulation System) – developed in James Bower's laboratory at Caltech; - NEST – developed by the NEST Initiative; - NEURON – mainly developed by Michael Hines, John W. Moore and Ted Carnevale in Yale University and Duke University; - RAVSim (Runtime Tool) – mainly developed by Sanaullah in Bielefeld University of Applied Sciences and Arts; ## Hardware Sutton and Barton proposed that future neuromorphic architectures will comprise billions of nanosynapses, which require a clear understanding of the accompanying physical mechanisms. Experimental systems based on ferroelectric tunnel junctions have been used to show that STDP can be harnessed from heterogeneous polarization switching. Through combined scanning probe imaging, electrical transport and atomic-scale molecular dynamics, conductance variations can be modelled by nucleation-dominated domain reversal. Simulations showed that arrays of ferroelectric nanosynapses can autonomously learn to recognize patterns in a predictable way, opening the path towards unsupervised learning. ## Benchmarks Classification capabilities of spiking networks trained according to unsupervised learning methods have been tested on benchmark datasets such as Iris, Wisconsin Breast Cancer or Statlog Landsat dataset. Various approaches to information encoding and network design have been used such as a 2-layer feedforward network for data clustering and classification. Based on Hopfield (1995) the authors implemented models of local receptive fields combining the properties of radial basis functions and spiking neurons to convert input signals having a floating-point representation into a spiking representation.
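The conversion from floating-point inputs to spikes mentioned in the benchmark work is commonly done with overlapping Gaussian receptive fields and time-to-first-spike latencies. The sketch below is a generic version of that idea under assumed parameters; it is not the specific encoder of the cited study, and the field count and time scale are illustrative.

```python
import numpy as np

def grf_latency_encode(x: float, n_fields: int = 8, t_max: float = 10.0,
                       x_min: float = 0.0, x_max: float = 1.0) -> np.ndarray:
    """Encode one floating-point value as spike latencies of a small population.

    Each neuron has a Gaussian receptive field centred at a different point of the
    input range; stronger activation translates into an earlier spike (smaller
    latency), weaker activation into a later one.
    """
    centres = np.linspace(x_min, x_max, n_fields)
    sigma = (x_max - x_min) / (n_fields - 1)                   # overlap between fields
    activation = np.exp(-0.5 * ((x - centres) / sigma) ** 2)   # in (0, 1]
    return t_max * (1.0 - activation)                          # high activation -> early spike

print(np.round(grf_latency_encode(0.3), 2))   # one latency per receptive field
```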
https://en.wikipedia.org/wiki/Spiking_neural_network
Heisenberg's uncertainty relation is one of the fundamental results in quantum mechanics. Later Robertson proved the uncertainty relation for two general non-commuting observables, which was strengthened by Schrödinger. However, the conventional uncertainty relation like the Robertson-Schrödinger relation cannot give a non-trivial bound for the product of variances of two incompatible observables because the lower bound in the uncertainty inequalities can be null and hence trivial even for observables that are incompatible on the state of the system. The Heisenberg–Robertson–Schrödinger uncertainty relation was proved at the dawn of quantum formalism and is ever-present in the teaching and research on quantum mechanics. After about 85 years of existence of the uncertainty relation this problem was solved recently by Lorenzo Maccone and Arun K. Pati. The standard uncertainty relations are expressed in terms of the product of variances of the measurement results of the observables $$ A $$ and $$ B $$ , and the product can be null even when one of the two variances is different from zero. However, the stronger uncertainty relations due to Maccone and Pati provide different uncertainty relations, based on the sum of variances that are guaranteed to be nontrivial whenever the observables are incompatible on the state of the quantum system. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., He et al., and Ref. due to Huang.) ## The Maccone–Pati uncertainty relations The Heisenberg–Robertson or Schrödinger uncertainty relations do not fully capture the incompatibility of observables in a given quantum state. The stronger uncertainty relations give non-trivial bounds on the sum of the variances for two incompatible observables. For two non-commuting observables $$ A $$ and $$ B $$ the first stronger uncertainty relation is given by $$ \Delta A^2 + \Delta B^2 \ge \pm i \langle \Psi|[A, B]|\Psi \rangle + | \langle \Psi|(A \pm i B)|{\bar \Psi} \rangle|^2, $$ where $$ \Delta A^2 = \langle \Psi |A^2 |\Psi \rangle - \langle \Psi |A |\Psi \rangle^2 $$ , $$ \Delta B^2 = \langle \Psi |B^2 |\Psi \rangle - \langle \Psi |B |\Psi \rangle^2 $$ , $$ |{\bar \Psi} \rangle $$ is a vector that is orthogonal to the state of the system, i.e., $$ \langle \Psi| {\bar \Psi} \rangle = 0 $$ and one should choose the sign of $$ \pm i \langle \Psi|[A, B]|\Psi \rangle $$ so that this is a positive number. The other non-trivial stronger uncertainty relation is given by $$ \Delta A^2 + \Delta B^2 \ge \frac{1}{2}| \langle {\bar \Psi}_{A+B} |(A + B)| \Psi \rangle|^2, $$ where $$ | {\bar \Psi}_{A+B} \rangle $$ is a unit vector orthogonal to $$ |\Psi \rangle $$ . The form of $$ | {\bar \Psi}_{A+B} \rangle $$ implies that the right-hand side of the new uncertainty relation is nonzero unless $$ | \Psi\rangle $$ is an eigenstate of $$ (A + B) $$ . One can prove an improved version of the Heisenberg–Robertson uncertainty relation which reads as $$ \Delta A \Delta B \ge \frac{ \pm \frac{i}{2} \langle \Psi|[A, B]|\Psi \rangle }{1- \frac{1}{2} | \langle \Psi|( \frac{A}{\Delta A} \pm i \frac{B}{\Delta B} )| {\bar \Psi} \rangle|^2 }. $$ The Heisenberg–Robertson uncertainty relation follows from the above uncertainty relation. ## Remarks In quantum theory, one should distinguish between the uncertainty relation and the uncertainty principle. 
The former refers solely to the preparation of the system which induces a spread in the measurement outcomes, and does not refer to the disturbance induced by the measurement. The uncertainty principle captures the measurement disturbance by the apparatus and the impossibility of joint measurements of incompatible observables. The Maccone–Pati uncertainty relations refer to preparation uncertainty relations. These relations set strong limitations for the nonexistence of common eigenstates for incompatible observables. The Maccone–Pati uncertainty relations have been experimentally tested for qutrit systems. The new uncertainty relations not only capture the incompatibility of observables but also of quantities that are physically measurable (as variances can be measured in the experiment).
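The first Maccone–Pati relation can be checked numerically. The sketch below evaluates both sides for the Pauli observables A = σx and B = σy in the qubit state |0⟩, taking |Ψ̄⟩ = |1⟩ as the orthogonal state; the observables, state, and NumPy implementation are illustrative choices rather than anything from the original paper. In this particular case the bound is saturated: both sides equal 2.

```python
import numpy as np

# Pauli observables and a qubit state |0>, with |1> as the orthogonal unit vector.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([1, 0], dtype=complex)       # |Psi>
psi_bar = np.array([0, 1], dtype=complex)   # |Psi_bar>, orthogonal to |Psi>

def expect(op, state):
    return (state.conj() @ op @ state).real

def variance(op, state):
    return expect(op @ op, state) - expect(op, state) ** 2

lhs = variance(X, psi) + variance(Y, psi)

# First Maccone-Pati bound; per the text, choose the sign that makes the
# commutator term +-i<Psi|[A,B]|Psi> non-negative.
comm = X @ Y - Y @ X
for sign in (+1, -1):
    comm_term = (sign * 1j * (psi.conj() @ comm @ psi)).real
    if comm_term >= 0:
        extra = abs(psi.conj() @ (X + sign * 1j * Y) @ psi_bar) ** 2
        rhs = comm_term + extra
        break

print(f"Delta A^2 + Delta B^2 = {lhs:.3f}, Maccone-Pati bound = {rhs:.3f}")
```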
https://en.wikipedia.org/wiki/Stronger_uncertainty_relations
Topological geometry deals with incidence structures consisting of a point set $$ P $$ and a family $$ \mathfrak{L} $$ of subsets of $$ P $$ called lines or circles etc. such that both $$ P $$ and $$ \mathfrak{L} $$ carry a topology and all geometric operations like joining points by a line or intersecting lines are continuous. As in the case of topological groups, many deeper results require the point space to be (locally) compact and connected. This generalizes the observation that the line joining two distinct points in the Euclidean plane depends continuously on the pair of points and the intersection point of two lines is a continuous function of these lines. ## Linear geometries Linear geometries are incidence structures in which any two distinct points $$ x $$ and $$ y $$ are joined by a unique line $$ xy $$ . Such geometries are called topological if $$ xy $$ depends continuously on the pair $$ (x,y) $$ with respect to given topologies on the point set and the line set. The dual of a linear geometry is obtained by interchanging the roles of points and lines. A survey of linear topological geometries is given in Chapter 23 of the Handbook of incidence geometry. The most extensively investigated topological linear geometries are those which are also dual topological linear geometries. Such geometries are known as topological projective planes. ## History A systematic study of these planes began in 1954 with a paper by Skornyakov. Earlier, the topological properties of the real plane had been introduced via ordering relations on the affine lines, see, e.g., Hilbert, Coxeter, and O. Wyler. The completeness of the ordering is equivalent to local compactness and implies that the affine lines are homeomorphic to $$ \R $$ and that the point space is connected. Note that the rational numbers do not suffice to describe our intuitive notions of plane geometry and that some extension of the rational field is necessary. In fact, the equation $$ x^2 + y^2 = 3 $$ for a circle has no rational solution. ## Topological projective planes The approach to the topological properties of projective planes via ordering relations is not possible, however, for the planes coordinatized by the complex numbers, the quaternions or the octonion algebra. The point spaces as well as the line spaces of these classical planes (over the real numbers, the complex numbers, the quaternions, and the octonions) are compact manifolds of dimension $$ 2^m,\, 1 \le m \le 4 $$ . ### Topological dimension The notion of the dimension of a topological space plays a prominent rôle in the study of topological, in particular of compact connected planes. For a normal space $$ X $$ , the dimension $$ \dim X $$ can be characterized as follows: If $$ \mathbb{S}_n $$ denotes the $$ n $$ -sphere, then if, and only if, for every closed subspace each continuous map has a continuous extension . For details and other definitions of a dimension see and the references given there, in particular Engelking or Fedorchuk. ### 2-dimensional planes The lines of a compact topological plane with a 2-dimensional point space form a family of curves homeomorphic to a circle, and this fact characterizes these planes among the topological projective planes. Equivalently, the point space is a surface. Early examples not isomorphic to the classical real plane $$ {\mathcal E} $$ have been given by Hilbert and Moulton. The continuity properties of these examples have not been considered explicitly at that time, they may have been taken for granted. 
Hilbert’s construction can be modified to obtain uncountably many pairwise non-isomorphic $$ 2 $$ -dimensional compact planes. The traditional way to distinguish $$ {\mathcal E} $$ from the other $$ 2 $$ -dimensional planes is by the validity of Desargues’s theorem or the theorem of Pappos (see, e.g., Pickert for a discussion of these two configuration theorems). The latter is known to imply the former (Hessenberg). The theorem of Desargues expresses a kind of homogeneity of the plane. In general, it holds in a projective plane if, and only if, the plane can be coordinatized by a (not necessarily commutative) field, hence it implies that the group of automorphisms is transitive on the set of quadrangles ( $$ 4 $$ points no $$ 3 $$ of which are collinear). In the present setting, a much weaker homogeneity condition characterizes $$ {\mathcal E} $$ : Theorem. If the automorphism group of a -dimensional compact plane is transitive on the point set (or the line set), then has a compact subgroup which is even transitive on the set of flags (=incident point-line pairs), and is classical. The automorphism group $$ \Sigma = \operatorname{Aut}{\mathcal P} $$ of a $$ 2 $$ -dimensional compact plane $$ {\mathcal P} $$ , taken with the topology of uniform convergence on the point space, is a locally compact group of dimension at most $$ 8 $$ , in fact even a Lie group. All $$ 2 $$ -dimensional planes such that $$ \dim\Sigma \ge 3 $$ can be described explicitly; those with $$ \dim\Sigma = 4 $$ are exactly the Moulton planes, the classical plane $$ {\mathcal E} $$ is the only $$ 2 $$ -dimensional plane with $$ \dim \Sigma > 4 $$ ; see also. ### Compact connected planes The results on $$ 2 $$ -dimensional planes have been extended to compact planes of dimension $$ >2 $$ . This is possible due to the following basic theorem: Topology of compact planes. If the dimension of the point space of a compact connected projective plane is finite, then with . Moreover, each line is a homotopy sphere of dimension , see or. Special aspects of 4-dimensional planes are treated in, more recent results can be found in. The lines of a $$ 4 $$ -dimensional compact plane are homeomorphic to the $$ 2 $$ -sphere; in the cases $$ m>2 $$ the lines are not known to be manifolds, but in all examples which have been found so far the lines are spheres. A subplane $$ {\mathcal B} $$ of a projective plane $$ {\mathcal P} $$ is said to be a Baer subplane, if each point of $$ {\mathcal P} $$ is incident with a line of $$ {\mathcal B} $$ and each line of $$ {\mathcal P} $$ contains a point of $$ {\mathcal B} $$ . A closed subplane $$ {\mathcal B} $$ is a Baer subplane of a compact connected plane $$ {\mathcal P} $$ if, and only if, the point space of $$ {\mathcal B} $$ and a line of $$ {\mathcal P} $$ have the same dimension. Hence the lines of an 8-dimensional plane $$ \mathcal P $$ are homeomorphic to a sphere $$ \mathbb{S}_4 $$ if $$ {\mathcal P} $$ has a closed Baer subplane. Homogeneous planes. If is a compact connected projective plane and if is transitive on the point set of , then has a flag-transitive compact subgroup and is classical, see or. In fact, $$ \Phi $$ is an elliptic motion group. Let $$ \mathcal P $$ be a compact plane of dimension $$ 2^m,\; m=2,3,4 $$ , and write $$ \Sigma = \operatorname{Aut}{\mathcal P} $$ . If $$ \dim\Sigma > 8,18,40 $$ , then $$ {\mathcal P} $$ is classical, and $$ \operatorname{Aut}{\mathcal P} $$ is a simple Lie group of dimension $$ 16,35,78 $$ respectively. 
All planes $$ \mathcal P $$ with $$ \dim\Sigma = 8,18,40 $$ are known explicitly. The planes with $$ \dim\Sigma = 40 $$ are exactly the projective closures of the affine planes coordinatized by a so-called mutation $$ (\mathbb{O},+,\circ) $$ of the octonion algebra $$ (\mathbb{O},+, \ \,) $$ , where the new multiplication $$ \circ $$ is defined as follows: choose a real number $$ t $$ with $$ 1/2 < t \ne 1 $$ and put $$ a \circ b = t\cdot a b + (1-t)\cdot b a $$ . Vast families of planes with a group of large dimension have been discovered systematically starting from assumptions about their automorphism groups, see, e.g.,. Many of them are projective closures of translation planes (affine planes admitting a sharply transitive group of automorphisms mapping each line to a parallel), cf.; see also for more recent results in the case $$ m=3 $$ and for $$ m=4 $$ . ## Compact projective spaces Subplanes of projective spaces of geometrical dimension at least 3 are necessarily Desarguesian, see §1 or §16 or. Therefore, all compact connected projective spaces can be coordinatized by the real or complex numbers or the quaternion field. ## Stable planes The classical non-euclidean hyperbolic plane can be represented by the intersections of the straight lines in the real plane with an open circular disk. More generally, open (convex) parts of the classical affine planes are typical stable planes. A survey of these geometries can be found in, for the $$ 2 $$ -dimensional case see also. Precisely, a stable plane $$ {\mathcal S} $$ is a topological linear geometry $$ (P,\mathfrak{L}) $$ such that 1. $$ P $$ is a locally compact space of positive finite dimension, 1. each line $$ L\in\mathfrak{L} $$ is a closed subset of $$ P $$ , and $$ \mathfrak{L} $$ is a Hausdorff space, 1. the set $$ \{(K,L) \mid K \ne L,\;K \cap L \ne \emptyset\} $$ is an open subspace $$ \mathfrak{O} \subset \mathfrak{L}^2 $$ ( stability), 1. the map $$ (K,L) \mapsto K \cap L:\mathfrak{O} \to P $$ is continuous. Note that stability excludes geometries like the $$ 3 $$ -dimensional affine space over $$ \R $$ or $$ \Complex $$ . A stable plane $$ {\mathcal S} $$ is a projective plane if, and only if, $$ P $$ is compact. As in the case of projective planes, line pencils are compact and homotopy equivalent to a sphere of dimension $$ 2^{m-1} $$ , and $$ \dim P = 2^m $$ with $$ m\in\{1,2,3,4\} $$ , see or. Moreover, the point space $$ P $$ is locally contractible. Compact groups of (proper) 'stable planes are rather small. Let $$ \Phi_d $$ denote a maximal compact subgroup of the automorphism group of the classical $$ d $$ -dimensional projective plane $$ {\mathcal P}_d $$ . Then the following theorem holds: If a -dimensional stable plane admits a compact group of automorphisms such that , then , see. Flag-homogeneous stable planes. Let be a stable plane. If the automorphism group is flag-transitive, then is a classical projective or affine plane, or is isomorphic to the interior of the absolute sphere of the hyperbolic polarity of a classical plane; see. In contrast to the projective case, there is an abundance of point-homogeneous stable planes, among them vast classes of translation planes, see and. ## Symmetric planes Affine translation planes have the following property: - There exists a point transitive closed subgroup $$ \Delta $$ of the automorphism group which contains a unique reflection at some and hence at each point. 
More generally, a symmetric plane is a stable plane $$ {\mathcal S} = (P,\mathfrak{L}) $$ satisfying the aforementioned condition; see, cf. for a survey of these geometries. By Corollary 5.5, the group $$ \Delta $$ is a Lie group and the point space $$ P $$ is a manifold. It follows that $$ {\mathcal S} $$ is a symmetric space. By means of the Lie theory of symmetric spaces, all symmetric planes with a point set of dimension $$ 2 $$ or $$ 4 $$ have been classified. They are either translation planes or they are determined by a Hermitian form. An easy example is the real hyperbolic plane. ## Circle geometries Classical models are given by the plane sections of a quadratic surface $$ S $$ in real projective $$ 3 $$ -space; if $$ S $$ is a sphere, the geometry is called a Möbius plane. The plane sections of a ruled surface (one-sheeted hyperboloid) yield the classical Minkowski plane, cf. for generalizations. If $$ S $$ is an elliptic cone without its vertex, the geometry is called a Laguerre plane. Collectively these planes are sometimes referred to as Benz planes. A topological Benz plane is classical, if each point has a neighbourhood which is isomorphic to some open piece of the corresponding classical Benz plane. ### Möbius planes Möbius planes consist of a family $$ \mathfrak{C} $$ of circles, which are topological 1-spheres, on the $$ 2 $$ -sphere $$ S $$ such that for each point $$ p $$ the derived structure $$ (S\setminus\{p\},\{C\setminus\{p\}\mid p\in C\in\mathfrak{C}\}) $$ is a topological affine plane. In particular, any $$ 3 $$ distinct points are joined by a unique circle. The circle space $$ \mathfrak{C} $$ is then homeomorphic to real projective $$ 3 $$ -space with one point deleted. A large class of examples is given by the plane sections of an egg-like surface in real $$ 3 $$ -space. ### Homogeneous Möbius planes If the automorphism group of a Möbius plane is transitive on the point set or on the set of circles, or if , then is classical and , see. In contrast to compact projective planes there are no topological Möbius planes with circles of dimension $$ >1 $$ , in particular no compact Möbius planes with a $$ 4 $$ -dimensional point space. All 2-dimensional Möbius planes such that $$ \dim\Sigma \ge 3 $$ can be described explicitly. ### Laguerre planes The classical model of a Laguerre plane consists of a circular cylindrical surface $$ C $$ in real $$ 3 $$ -space $$ \R^3 $$ as point set and the compact plane sections of $$ C $$ as circles. Pairs of points which are not joined by a circle are called parallel. Let $$ P $$ denote a class of parallel points. Then $$ C \setminus P $$ is a plane $$ \R^2 $$ , the circles can be represented in this plane by parabolas of the form $$ y = ax^2+bx+c $$ . In an analogous way, the classical $$ 4 $$ -dimensional Laguerre plane is related to the geometry of complex quadratic polynomials. In general, the axioms of a locally compact connected Laguerre plane require that the derived planes embed into compact projective planes of finite dimension. A circle not passing through the point of derivation induces an oval in the derived projective plane. By or, circles are homeomorphic to spheres of dimension $$ 1 $$ or $$ 2 $$ . Hence the point space of a locally compact connected Laguerre plane is homeomorphic to the cylinder $$ C $$ or it is a $$ 4 $$ -dimensional manifold, cf. 
A large class of $$ 2 $$ -dimensional examples, called ovoidal Laguerre planes, is given by the plane sections of a cylinder in real 3-space whose base is an oval in $$ \R^2 $$ . The automorphism group of a $$ 2d $$ -dimensional Laguerre plane ( $$ d = 1, 2 $$ ) is a Lie group with respect to the topology of uniform convergence on compact subsets of the point space; furthermore, this group has dimension at most $$ 7d $$ . All automorphisms of a Laguerre plane which fix each parallel class form a normal subgroup, the kernel of the full automorphism group. The $$ 2 $$ -dimensional Laguerre planes with $$ \dim\Sigma=5 $$ are exactly the ovoidal planes over proper skew parabolae. The classical $$ 2d $$ -dimensional Laguerre planes are the only ones such that $$ \dim\Sigma > 5d $$ , see, cf. also. ### Homogeneous Laguerre planes If the automorphism group of a -dimensional Laguerre plane is transitive on the set of parallel classes, and if the kernel is transitive on the set of circles, then is classical, see 2.1,2. However, transitivity of the automorphism group on the set of circles does not suffice to characterize the classical model among the $$ 2d $$ -dimensional Laguerre planes. ### Minkowski planes The classical model of a Minkowski plane has the torus $$ \mathbb{S}_1 \times \mathbb{S}_1 $$ as point space, circles are the graphs of real fractional linear maps on $$ \mathbb{S}_1 = \R \cup\{\infty\} $$ . As with Laguerre planes, the point space of a locally compact connected Minkowski plane is $$ 1 $$ - or $$ 2 $$ -dimensional; the point space is then homeomorphic to a torus or to $$ \mathbb{S}_2 \times \mathbb{S}_2 $$ , see. ### Homogeneous Minkowski planes If the automorphism group of a Minkowski plane of dimension is flag-transitive, then is classical. The automorphism group of a $$ 2d $$ -dimensional Minkowski plane is a Lie group of dimension at most $$ 6d $$ . All $$ 2 $$ -dimensional Minkowski planes such that $$ \dim\Sigma \ge 4 $$ can be described explicitly. The classical $$ 2d $$ -dimensional Minkowski plane is the only one with $$ \dim\Sigma > 4d $$ , see.
https://en.wikipedia.org/wiki/Topological_geometry
The uncertainty principle, also known as Heisenberg's indeterminacy principle, is a fundamental concept in quantum mechanics. It states that there is a limit to the precision with which certain pairs of physical properties, such as position and momentum, can be simultaneously known. In other words, the more accurately one property is measured, the less accurately the other property can be known. More formally, the uncertainty principle is any of a variety of mathematical inequalities asserting a fundamental limit to the product of the accuracy of certain related pairs of measurements on a quantum system, such as position, x, and momentum, p. Such paired variables are known as complementary variables or canonically conjugate variables. First introduced in 1927 by German physicist Werner Heisenberg (Werner Heisenberg (1989), Encounters with Einstein and Other Essays on People, Places and Particles, Princeton University Press, p. 53; Kumar, Manjit, Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality, 1st American ed., 2008, chap. 10, note 37), the formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928: $$ \sigma_x \sigma_p \ge \frac{\hbar}{2} , $$ where $$ \hbar = \frac{h}{2\pi} $$ is the reduced Planck constant. The quintessentially quantum mechanical uncertainty principle comes in many forms other than position–momentum. The energy–time relationship is widely used to relate quantum state lifetime to measured energy widths but its formal derivation is fraught with confusing issues about the nature of time. The basic principle has been extended in numerous directions; it must be considered in many kinds of fundamental physical measurements. ## Position–momentum It is vital to illustrate how the principle applies to relatively intelligible physical situations since it is indiscernible on the macroscopic scales that humans experience. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily. Mathematically, in wave mechanics, the uncertainty relation between position and momentum arises because the expressions of the wavefunction in the two corresponding orthonormal bases in Hilbert space are Fourier transforms of one another (i.e., position and momentum are conjugate variables). A nonzero function and its Fourier transform cannot both be sharply localized at the same time. A similar tradeoff between the variances of Fourier conjugates arises in all systems underlain by Fourier analysis, for example in sound waves: A pure tone is a sharp spike at a single frequency, while its Fourier transform gives the shape of the sound wave in the time domain, which is a completely delocalized sine wave. In quantum mechanics, the two key points are that the position of the particle takes the form of a matter wave, and momentum is its Fourier conjugate, assured by the de Broglie relation $$ p = \hbar k $$ , where $$ k $$ is the wavenumber. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value (the eigenvalue).
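The Fourier tradeoff described above can be checked numerically. The sketch below builds a Gaussian wave packet on a grid, obtains its momentum-space wavefunction with an FFT, and confirms that the product of the two standard deviations lands essentially at the Kennard bound ħ/2, which a Gaussian saturates. Grid size, packet width, and units (ħ = 1) are arbitrary choices for illustration.

```python
import numpy as np

hbar = 1.0                          # work in units where hbar = 1
N, L = 4096, 200.0                  # grid points and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma = 2.0                         # position-space width of the Gaussian packet
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * 1.3 * x)   # packet with a momentum kick
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)                   # normalize on the grid

# Momentum-space wavefunction via FFT (a discrete stand-in for the Fourier transform).
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
phi = np.fft.fft(psi)
dp = p[1] - p[0]
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dp)                   # normalize |phi|^2 as a density

def std(values, density, step):
    """Standard deviation of a discretized probability density."""
    mean = np.sum(values * density) * step
    return np.sqrt(np.sum((values - mean)**2 * density) * step)

sigma_x = std(x, np.abs(psi)**2, dx)
sigma_p = std(p, np.abs(phi)**2, dp)
print(sigma_x * sigma_p, hbar / 2)   # the product is essentially the Kennard bound hbar/2
```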
For example, if a measurement of an observable is performed, then the system is left in a particular eigenstate of that observable. However, that eigenstate need not be an eigenstate of a second observable: in that case the system does not have a unique associated measurement value for the second observable, since it is not in an eigenstate of it. ### Visualization The uncertainty principle can be visualized using the position- and momentum-space wavefunctions for one spinless particle with mass in one dimension. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized, so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread. These wavefunctions are Fourier transforms of each other: mathematically, the uncertainty principle expresses the relationship between conjugate variables in the transform. ### Wave mechanics interpretation According to the de Broglie hypothesis, every object in the universe is associated with a wave. Thus every object, from an elementary particle to atoms, molecules and on up to planets and beyond, is subject to the uncertainty principle. The time-independent wave function of a single-mode plane wave of wavenumber k0 or momentum p0 is $$ \psi(x) \propto e^{ik_0 x} = e^{ip_0 x/\hbar} ~. $$ The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is $$ \operatorname P [a \leq X \leq b] = \int_a^b |\psi(x)|^2 \, \mathrm{d}x ~. $$ In the case of the single-mode plane wave, $$ |\psi(x)|^2 $$ takes the same value for every $$ x $$ . In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. On the other hand, consider a wave function that is a sum of many waves, which we may write as $$ \psi(x) \propto \sum_n A_n e^{i p_n x/\hbar}~, $$ where An represents the relative contribution of the mode pn to the overall total. With the addition of many plane waves, the wave packet can become more localized. We may take this a step further to the continuum limit, where the wave function is an integral over all possible modes $$ \psi(x) = \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^\infty \varphi(p) \cdot e^{i p x/\hbar} \, dp ~, $$ with $$ \varphi(p) $$ representing the amplitude of these modes; it is called the wave function in momentum space. In mathematical terms, we say that $$ \varphi(p) $$ is the Fourier transform of $$ \psi(x) $$ and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta. One way to quantify the precision of the position and momentum is the standard deviation σ. Since $$ |\psi(x)|^2 $$ is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e.
increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below. This is the uncertainty principle, the exact limit of which is the Kennard bound. ### Proof of the Kennard inequality using wave mechanics We are interested in the variances of position and momentum, defined as $$ \sigma_x^2 = \int_{-\infty}^\infty x^2 \cdot |\psi(x)|^2 \, dx - \left( \int_{-\infty}^\infty x \cdot |\psi(x)|^2 \, dx \right)^2 $$ $$ \sigma_p^2 = \int_{-\infty}^\infty p^2 \cdot |\varphi(p)|^2 \, dp - \left( \int_{-\infty}^\infty p \cdot |\varphi(p)|^2 \, dp \right)^2~. $$ Without loss of generality, we will assume that the means vanish, which just amounts to a shift of the origin of our coordinates. (A more general proof that does not make this assumption is given below.) This gives us the simpler form $$ \sigma_x^2 = \int_{-\infty}^\infty x^2 \cdot |\psi(x)|^2 \, dx $$ $$ \sigma_p^2 = \int_{-\infty}^\infty p^2 \cdot |\varphi(p)|^2 \, dp~. $$ The function $$ f(x) = x \cdot \psi(x) $$ can be interpreted as a vector in a function space. We can define an inner product for a pair of functions u(x) and v(x) in this vector space: $$ \langle u \mid v \rangle = \int_{-\infty}^\infty u^*(x) \cdot v(x) \, dx, $$ where the asterisk denotes the complex conjugate. With this inner product defined, we note that the variance for position can be written as $$ \sigma_x^2 = \int_{-\infty}^\infty |f(x)|^2 \, dx = \langle f \mid f \rangle ~. $$ We can repeat this for momentum by interpreting the function $$ \tilde{g}(p)=p \cdot \varphi(p) $$ as a vector, but we can also take advantage of the fact that $$ \psi(x) $$ and $$ \varphi(p) $$ are Fourier transforms of each other. We evaluate the inverse Fourier transform through integration by parts: $$ \begin{align} g(x) &= \frac{1}{\sqrt{2 \pi \hbar}} \cdot \int_{-\infty}^\infty \tilde{g}(p) \cdot e^{ipx/\hbar} \, dp \\ &= \frac{1}{\sqrt{2 \pi \hbar}} \int_{-\infty}^\infty p \cdot \varphi(p) \cdot e^{ipx/\hbar} \, dp \\ &= \frac{1}{2 \pi \hbar} \int_{-\infty}^\infty \left[ p \cdot \int_{-\infty}^\infty \psi(\chi) e^{-ip\chi/\hbar} \, d\chi \right] \cdot e^{ipx/\hbar} \, dp \\ &= \frac{i}{2 \pi} \int_{-\infty}^\infty \left[ \cancel{ \left. \psi(\chi) e^{-ip\chi/\hbar} \right|_{-\infty}^\infty } - \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} e^{-ip\chi/\hbar} \, d\chi \right] \cdot e^{ipx/\hbar} \, dp \\ &= -i \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \frac{1}{2 \pi}\int_{-\infty}^\infty \, e^{ip(x - \chi)/\hbar} \, dp \right]\, d\chi\\ &= -i \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \delta\left(\frac{x - \chi }{\hbar}\right) \right]\, d\chi\\ &= -i \hbar \int_{-\infty}^\infty \frac{d\psi(\chi)}{d\chi} \left[ \delta\left(x - \chi \right) \right]\, d\chi\\ &= -i \hbar \frac{d\psi(x)}{dx} \\ &= \left( -i \hbar \frac{d}{dx} \right) \cdot \psi(x) , \end{align} $$ where $$ v=\frac{\hbar}{-ip}e^{-ip\chi/\hbar} $$ in the integration by parts, the cancelled term vanishes because the wave function vanishes at both infinities and $$ |e^{-ip\chi/\hbar}|=1 $$ , and then use the Dirac delta function which is valid because $$ \dfrac{d\psi(\chi)}{d\chi} $$ does not depend on p . The term $$ -i \hbar \frac{d}{dx} $$ is called the momentum operator in position space. Applying Plancherel's theorem, we see that the variance for momentum can be written as $$ \sigma_p^2 = \int_{-\infty}^\infty |\tilde{g}(p)|^2 \, dp = \int_{-\infty}^\infty |g(x)|^2 \, dx = \langle g \mid g \rangle. 
$$ The Cauchy–Schwarz inequality asserts that $$ \sigma_x^2 \sigma_p^2 = \langle f \mid f \rangle \cdot \langle g \mid g \rangle \ge |\langle f \mid g \rangle|^2 ~. $$ The modulus squared of any complex number z can be expressed as $$ |z|^{2} = \Big(\text{Re}(z)\Big)^{2}+\Big(\text{Im}(z)\Big)^{2} \geq \Big(\text{Im}(z)\Big)^{2} = \left(\frac{z-z^{\ast}}{2i}\right)^{2}. $$ we let $$ z=\langle f|g\rangle $$ and $$ z^{*}=\langle g\mid f\rangle $$ and substitute these into the equation above to get $$ |\langle f\mid g\rangle|^2 \geq \left(\frac{\langle f\mid g\rangle-\langle g \mid f \rangle}{2i}\right)^2 ~. $$ All that remains is to evaluate these inner products. $$ \begin{align} \langle f\mid g\rangle-\langle g\mid f\rangle &= \int_{-\infty}^\infty \psi^*(x) \, x \cdot \left(-i \hbar \frac{d}{dx}\right) \, \psi(x) \, dx - \int_{-\infty}^\infty \psi^*(x) \, \left(-i \hbar \frac{d}{dx}\right) \cdot x \, \psi(x) \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \left[ \left(-x \cdot \frac{d\psi(x)}{dx}\right) + \frac{d(x \psi(x))}{dx} \right] \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \left[ \left(-x \cdot \frac{d\psi(x)}{dx}\right) + \psi(x) + \left(x \cdot \frac{d\psi(x)}{dx}\right)\right] \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty \psi^*(x) \psi(x) \, dx \\ &= i \hbar \cdot \int_{-\infty}^\infty |\psi(x)|^2 \, dx \\ &= i \hbar \end{align} $$ Plugging this into the above inequalities, we get $$ \sigma_x^2 \sigma_p^2 \ge |\langle f \mid g \rangle|^2 \ge \left(\frac{\langle f\mid g\rangle-\langle g\mid f\rangle}{2i}\right)^2 = \left(\frac{i \hbar}{2 i}\right)^2 = \frac{\hbar^2}{4} $$ and taking the square root $$ \sigma_x \sigma_p \ge \frac{\hbar}{2}~. $$ with equality if and only if p and x are linearly dependent. Note that the only physics involved in this proof was that $$ \psi(x) $$ and $$ \varphi(p) $$ are wave functions for position and momentum, which are Fourier transforms of each other. A similar result would hold for any pair of conjugate variables. ### Matrix mechanics interpretation In matrix mechanics, observables such as position and momentum are represented by self-adjoint operators. When considering pairs of observables, an important quantity is the commutator. For a pair of operators and $$ \hat{B} $$ , one defines their commutator as $$ [\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}. $$ In the case of position and momentum, the commutator is the canonical commutation relation $$ [\hat{x},\hat{p}]=i \hbar. $$ The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let $$ |\psi\rangle $$ be a right eigenstate of position with a constant eigenvalue . By definition, this means that $$ \hat{x}|\psi\rangle = x_0 |\psi\rangle. $$ Applying the commutator to $$ |\psi\rangle $$ yields $$ [\hat{x},\hat{p}] | \psi \rangle = (\hat{x}\hat{p}-\hat{p}\hat{x}) | \psi \rangle = (\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = i \hbar | \psi \rangle, $$ where is the identity operator. Suppose, for the sake of proof by contradiction, that $$ |\psi\rangle $$ is also a right eigenstate of momentum, with constant eigenvalue . If this were true, then one could write $$ (\hat{x} - x_0 \hat{I}) \hat{p} \, | \psi \rangle = (\hat{x} - x_0 \hat{I}) p_0 \, | \psi \rangle = (x_0 \hat{I} - x_0 \hat{I}) p_0 \, | \psi \rangle=0. $$ On the other hand, the above canonical commutation relation requires that $$ [\hat{x},\hat{p}] | \psi \rangle=i \hbar | \psi \rangle \ne 0. 
$$ This implies that no quantum state can simultaneously be both a position and a momentum eigenstate. When a state is measured, it is projected onto an eigenstate in the basis of the relevant observable. For example, if a particle's position is measured, then the state amounts to a position eigenstate. This means that the state is not a momentum eigenstate, however, but rather it can be represented as a sum of multiple momentum basis eigenstates. In other words, the momentum must be less precise. This precision may be quantified by the standard deviations, $$ \sigma_x=\sqrt{\langle \hat{x}^2 \rangle-\langle \hat{x}\rangle^2} $$ $$ \sigma_p=\sqrt{\langle \hat{p}^2 \rangle-\langle \hat{p}\rangle^2}. $$ As in the wave mechanics interpretation above, one sees a tradeoff between the respective precisions of the two, quantified by the uncertainty principle. ### Quantum harmonic oscillator stationary states Consider a one-dimensional quantum harmonic oscillator. It is possible to express the position and momentum operators in terms of the creation and annihilation operators: $$ \hat x = \sqrt{\frac{\hbar}{2m\omega}}(a+a^\dagger) $$ $$ \hat p = i\sqrt{\frac{m \omega\hbar}{2}}(a^\dagger-a). $$ Using the standard rules for creation and annihilation operators on the energy eigenstates, $$ a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle $$ $$ a|n\rangle=\sqrt{n}|n-1\rangle, $$ the variances may be computed directly, $$ \sigma_x^2 = \frac{\hbar}{m\omega} \left( n+\frac{1}{2}\right) $$ $$ \sigma_p^2 = \hbar m\omega \left( n+\frac{1}{2}\right)\, . $$ The product of these standard deviations is then $$ \sigma_x \sigma_p = \hbar \left(n+\frac{1}{2}\right) \ge \frac{\hbar}{2}.~ $$ In particular, the above Kennard bound is saturated for the ground state , for which the probability density is just the normal distribution. ### Quantum harmonic oscillators with Gaussian initial condition In a quantum harmonic oscillator of characteristic angular frequency ω, place a state that is offset from the bottom of the potential by some displacement x0 as $$ \psi(x)=\left(\frac{m \Omega}{\pi \hbar}\right)^{1/4} \exp{\left( -\frac{m \Omega (x-x_0)^2}{2\hbar}\right)}, $$ where Ω describes the width of the initial state but need not be the same as ω. Through integration over the propagator, we can solve for the -dependent solution. After many cancelations, the probability densities reduce to $$ |\Psi(x,t)|^2 \sim \mathcal{N}\left( x_0 \cos{(\omega t)} , \frac{\hbar}{2 m \Omega} \left( \cos^2(\omega t) + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)} \right)\right) $$ $$ |\Phi(p,t)|^2 \sim \mathcal{N}\left( -m x_0 \omega \sin(\omega t), \frac{\hbar m \Omega}{2} \left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)} \right)\right), $$ where we have used the notation $$ \mathcal{N}(\mu, \sigma^2) $$ to denote a normal distribution of mean μ and variance σ2. 
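The product of these time-dependent standard deviations is evaluated analytically just below; as a quick numerical cross-check (not part of the original derivation), the following sketch evaluates the two variance formulas above in units where ħ = m = 1, with assumed sample values for ω and Ω, and confirms that the product never falls below ħ/2.

```python
# Numerical cross-check of the quoted variances, in units where hbar = m = 1
# (an illustrative choice; omega and Omega below are assumed example values).
import numpy as np

hbar, m = 1.0, 1.0
omega, Omega = 1.0, 2.0                     # trap frequency and initial-width frequency
t = np.linspace(0.0, 2 * np.pi / omega, 2001)

sigma_x2 = (hbar / (2 * m * Omega)) * (np.cos(omega * t)**2 + (Omega / omega)**2 * np.sin(omega * t)**2)
sigma_p2 = (hbar * m * Omega / 2) * (np.cos(omega * t)**2 + (omega / Omega)**2 * np.sin(omega * t)**2)

product = np.sqrt(sigma_x2 * sigma_p2)
print(f"min sigma_x*sigma_p = {product.min():.4f}  (Kennard bound hbar/2 = {hbar/2})")
print(f"max sigma_x*sigma_p = {product.max():.4f}")
```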
Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as $$ \begin{align} \sigma_x \sigma_p&=\frac{\hbar}{2}\sqrt{\left( \cos^2{(\omega t)} + \frac{\Omega^2}{\omega^2} \sin^2{(\omega t)} \right)\left( \cos^2{(\omega t)} + \frac{\omega^2}{\Omega^2} \sin^2{(\omega t)} \right)} \\ &= \frac{\hbar}{4}\sqrt{3+\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2}\left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right) \cos{(4 \omega t)}} \end{align} $$ From the relations $$ \frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2} \ge 2, \quad |\cos(4 \omega t)| \le 1, $$ we can conclude the following (the right most equality holds only when ): $$ \sigma_x \sigma_p \ge \frac{\hbar}{4}\sqrt{3+\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-\left(\frac{1}{2} \left(\frac{\Omega^2}{\omega^2}+\frac{\omega^2}{\Omega^2}\right)-1\right)} = \frac{\hbar}{2}. $$ ### Coherent states A coherent state is a right eigenstate of the annihilation operator, $$ \hat{a}|\alpha\rangle=\alpha|\alpha\rangle, $$ which may be represented in terms of Fock states as $$ |\alpha\rangle =e^{-{|\alpha|^2\over2}} \sum_{n=0}^\infty {\alpha^n \over \sqrt{n!}}|n\rangle $$ In the picture where the coherent state is a massive particle in a quantum harmonic oscillator, the position and momentum operators may be expressed in terms of the annihilation operators in the same formulas above and used to calculate the variances, $$ \sigma_x^2 = \frac{\hbar}{2 m \omega}, $$ $$ \sigma_p^2 = \frac{\hbar m \omega}{2}. $$ Therefore, every coherent state saturates the Kennard bound $$ \sigma_x \sigma_p = \sqrt{\frac{\hbar}{2 m \omega}} \, \sqrt{\frac{\hbar m \omega}{2}} = \frac{\hbar}{2}. $$ with position and momentum each contributing an amount $$ \sqrt{\hbar/2} $$ in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general. ### Particle in a box Consider a particle in a one-dimensional box of length $$ L $$ . The eigenfunctions in position and momentum space are $$ \psi_n(x,t) =\begin{cases} A \sin(k_n x)\mathrm{e}^{-\mathrm{i}\omega_n t}, & 0 < x < L,\\ 0, & \text{otherwise,} \end{cases} $$ and $$ \varphi_n(p,t)=\sqrt{\frac{\pi L}{\hbar}}\,\,\frac{n\left(1-(-1)^ne^{-ikL} \right) e^{-i \omega_n t}}{\pi ^2 n^2-k^2 L^2}, $$ where $$ \omega_n=\frac{\pi^2 \hbar n^2}{8 L^2 m} $$ and we have used the de Broglie relation $$ p=\hbar k $$ . The variances of $$ x $$ and $$ p $$ can be calculated explicitly: $$ \sigma_x^2=\frac{L^2}{12}\left(1-\frac{6}{n^2\pi^2}\right) $$ $$ \sigma_p^2=\left(\frac{\hbar n\pi}{L}\right)^2. $$ The product of the standard deviations is therefore $$ \sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{n^2\pi^2}{3}-2}. $$ For all $$ n=1, \, 2, \, 3,\, \ldots $$ , the quantity $$ \sqrt{\frac{n^2\pi^2}{3}-2} $$ is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when $$ n = 1 $$ , in which case $$ \sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{\pi^2}{3}-2} \approx 0.568 \hbar > \frac{\hbar}{2}. 
$$ ### Constant momentum Assume a particle initially has a momentum space wave function described by a normal distribution around some constant momentum p0 according to $$ \varphi(p) = \left(\frac{x_0}{\hbar \sqrt{\pi}} \right)^{1/2} \exp\left(\frac{-x_0^2 (p-p_0)^2}{2\hbar^2}\right), $$ where we have introduced a reference scale $$ x_0=\sqrt{\hbar/m\omega_0} $$ , with $$ \omega_0>0 $$ describing the width of the distribution—cf. nondimensionalization. If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are $$ \Phi(p,t) = \left(\frac{x_0}{\hbar \sqrt{\pi}} \right)^{1/2} \exp\left(\frac{-x_0^2 (p-p_0)^2}{2\hbar^2}-\frac{ip^2 t}{2m\hbar}\right), $$ $$ \Psi(x,t) = \left(\frac{1}{x_0 \sqrt{\pi}} \right)^{1/2} \frac{e^{-x_0^2 p_0^2 /2\hbar^2}}{\sqrt{1+i\omega_0 t}} \, \exp\left(-\frac{(x-ix_0^2 p_0/\hbar)^2}{2x_0^2 (1+i\omega_0 t)}\right). $$ Since $$ \langle p(t) \rangle = p_0 $$ and $$ \sigma_p(t) = \hbar /(\sqrt{2}x_0) $$ , this can be interpreted as a particle moving along with constant momentum at arbitrarily high precision. On the other hand, the standard deviation of the position is $$ \sigma_x = \frac{x_0}{\sqrt{2}} \sqrt{1+\omega_0^2 t^2} $$ such that the uncertainty product can only increase with time as $$ \sigma_x(t) \sigma_p(t) = \frac{\hbar}{2} \sqrt{1+\omega_0^2 t^2}. $$ ## Mathematical formalism Starting with Kennard's derivation of position–momentum uncertainty, Howard Percy Robertson developed a formulation for arbitrary Hermitian operators $$ \hat{\mathcal{O}} $$ expressed in terms of their standard deviation $$ \sigma_{\mathcal{O}} = \sqrt{\langle \hat{\mathcal{O}}^2 \rangle-\langle \hat{\mathcal{O}}\rangle^2}, $$ where the brackets $$ \langle\hat{\mathcal{O}}\rangle $$ indicate an expectation value of the observable represented by operator $$ \hat{\mathcal{O}} $$ . For a pair of operators $$ \hat{A} $$ and $$ \hat{B} $$ , define their commutator as $$ [\hat{A},\hat{B}]=\hat{A}\hat{B}-\hat{B}\hat{A}, $$ and the Robertson uncertainty relation is given by $$ \sigma_A \sigma_B \geq \left| \frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right| = \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle \right|. $$ Erwin Schrödinger showed how to allow for correlation between the operators, giving a stronger inequality, known as the Robertson–Schrödinger uncertainty relation, $$ \sigma_A^2 \sigma_B^2 \geq \left| \frac{1}{2}\langle\{\hat{A},\hat{B}\}\rangle - \langle \hat{A}\rangle\langle \hat{B}\rangle \right|^2 + \left| \frac{1}{2i}\langle[\hat{A},\hat{B}]\rangle \right|^2, $$ where the anticommutator, $$ \{\hat{A},\hat{B}\}=\hat{A}\hat{B}+\hat{B}\hat{A} $$ is used. ### Phase space In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function $$ W(x,p) $$ with star product ★ and a function f, the following is generally true: $$ \langle f^* \star f \rangle =\int (f^* \star f) \, W(x,p) \, dx \, dp \ge 0 ~. $$ Choosing $$ f = a + bx + cp $$ , we arrive at $$ \langle f^* \star f \rangle =\begin{bmatrix}a^* & b^* & c^* \end{bmatrix}\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix}\begin{bmatrix}a \\ b \\ c\end{bmatrix} \ge 0 ~. $$ Since this positivity condition is true for all a, b, and c, it follows that all the eigenvalues of the matrix are non-negative.
The non-negative eigenvalues then imply a corresponding non-negativity condition on the determinant, $$ \det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x \star x \rangle & \langle x \star p \rangle \\ \langle p \rangle & \langle p \star x \rangle & \langle p \star p \rangle \end{bmatrix} = \det\begin{bmatrix}1 & \langle x \rangle & \langle p \rangle \\ \langle x \rangle & \langle x^2 \rangle & \left\langle xp + \frac{i\hbar}{2} \right\rangle \\ \langle p \rangle & \left\langle xp - \frac{i\hbar}{2} \right\rangle & \langle p^2 \rangle \end{bmatrix} \ge 0~, $$ or, explicitly, after algebraic manipulation, $$ \sigma_x^2 \sigma_p^2 = \left( \langle x^2 \rangle - \langle x \rangle^2 \right)\left( \langle p^2 \rangle - \langle p \rangle^2 \right)\ge \left( \langle xp \rangle - \langle x \rangle \langle p \rangle \right)^2 + \frac{\hbar^2}{4} ~. $$ ### Examples Since the Robertson and Schrödinger relations are for general operators, the relations can be applied to any two observables to obtain specific uncertainty relations. A few of the most common relations found in the literature are given below. - Position–linear momentum uncertainty relation: for the position and linear momentum operators, the canonical commutation relation $$ [\hat{x}, \hat{p}] = i\hbar $$ implies the Kennard inequality from above: $$ \sigma_x \sigma_p \geq \frac{\hbar}{2}. $$ - Angular momentum uncertainty relation: For two orthogonal components of the total angular momentum operator of an object: $$ \sigma_{J_i} \sigma_{J_j} \geq \frac{\hbar}{2} \big|\langle J_k\rangle\big|, $$ where i, j, k are distinct, and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for $$ [J_x, J_y] = i \hbar \varepsilon_{xyz} J_z $$ , a choice $$ \hat{A} = J_x $$ , $$ \hat{B} = J_y $$ , in angular momentum multiplets, ψ = |j, m⟩, bounds the Casimir invariant (angular momentum squared, $$ \langle J_x^2+ J_y^2 + J_z^2 \rangle $$ ) from below and thus yields useful constraints such as , and hence j ≥ m, among others. - For the number of electrons in a superconductor and the phase of its Ginzburg–Landau order parameter $$ \Delta N \, \Delta \varphi \geq 1. $$ ### Limitations The derivation of the Robertson inequality for operators $$ \hat{A} $$ and $$ \hat{B} $$ requires $$ \hat{A}\hat{B}\psi $$ and $$ \hat{B}\hat{A}\psi $$ to be defined. There are quantum systems where these conditions are not valid. One example is a quantum particle on a ring, where the wave function depends on an angular variable $$ \theta $$ in the interval $$ [0,2\pi] $$ . Define "position" and "momentum" operators $$ \hat{A} $$ and $$ \hat{B} $$ by $$ \hat{A}\psi(\theta)=\theta\psi(\theta),\quad \theta\in [0,2\pi], $$ and $$ \hat{B}\psi=-i\hbar\frac{d\psi}{d\theta}, $$ with periodic boundary conditions on $$ \hat{B} $$ . The definition of $$ \hat{A} $$ depends the $$ \theta $$ range from 0 to $$ 2\pi $$ . These operators satisfy the usual commutation relations for position and momentum operators, $$ [\hat{A},\hat{B}]=i\hbar $$ . More precisely, $$ \hat{A}\hat{B}\psi-\hat{B}\hat{A}\psi=i\hbar\psi $$ whenever both $$ \hat{A}\hat{B}\psi $$ and $$ \hat{B}\hat{A}\psi $$ are defined, and the space of such $$ \psi $$ is a dense subspace of the quantum Hilbert space. 
Now let $$ \psi $$ be any of the eigenstates of $$ \hat{B} $$ , which are given by $$ \psi(\theta)=e^{2\pi in\theta} $$ . These states are normalizable, unlike the eigenstates of the momentum operator on the line. Also the operator $$ \hat{A} $$ is bounded, since $$ \theta $$ ranges over a bounded interval. Thus, in the state $$ \psi $$ , the uncertainty of $$ B $$ is zero and the uncertainty of $$ A $$ is finite, so that $$ \sigma_A\sigma_B=0. $$ The Robertson uncertainty principle does not apply in this case: $$ \psi $$ is not in the domain of the operator $$ \hat{B}\hat{A} $$ , since multiplication by $$ \theta $$ disrupts the periodic boundary conditions imposed on $$ \hat{B} $$ . For the usual position and momentum operators $$ \hat{X} $$ and $$ \hat{P} $$ on the real line, no such counterexamples can occur. As long as $$ \sigma_x $$ and $$ \sigma_p $$ are defined in the state $$ \psi $$ , the Heisenberg uncertainty principle holds, even if $$ \psi $$ fails to be in the domain of $$ \hat{X}\hat{P} $$ or of $$ \hat{P}\hat{X} $$ . ### Mixed states The Robertson–Schrödinger uncertainty can be improved noting that it must hold for all components $$ \varrho_k $$ in any decomposition of the density matrix given as $$ \varrho=\sum_k p_k \varrho_k. $$ Here, for the probabilities $$ p_k\ge0 $$ and $$ \sum_k p_k=1 $$ hold. Then, using the relation $$ \sum_k a_k \sum_k b_k \ge \left(\sum_k \sqrt{a_k b_k}\right)^2 $$ for $$ a_k,b_k\ge 0 $$ , it follows that $$ \sigma_A^2 \sigma_B^2 \geq \left[\sum_k p_k L(\varrho_k)\right]^2, $$ where the function in the bound is defined $$ L(\varrho) = \sqrt{\left | \frac{1}{2}\operatorname{tr}(\rho\{A,B\}) - \operatorname{tr}(\rho A)\operatorname{tr}(\rho B)\right |^2 +\left | \frac{1}{2i} \operatorname{tr}(\rho[A,B])\right | ^2}. $$ The above relation very often has a bound larger than that of the original Robertson–Schrödinger uncertainty relation. Thus, we need to calculate the bound of the Robertson–Schrödinger uncertainty for the mixed components of the quantum state rather than for the quantum state, and compute an average of their square roots. The following expression is stronger than the Robertson–Schrödinger uncertainty relation $$ \sigma_A^2 \sigma_B^2 \geq \left[\max_{p_k,\varrho_k} \sum_k p_k L(\varrho_k)\right]^2, $$ where on the right-hand side there is a concave roof over the decompositions of the density matrix. The improved relation above is saturated by all single-qubit quantum states. With similar arguments, one can derive a relation with a convex roof on the right-hand side $$ \sigma_A^2 F_Q[\varrho,B] \geq 4 \left[\min_{p_k,\Psi_k} \sum_k p_k L(\vert \Psi_k\rangle\langle \Psi_k\vert)\right]^2 $$ where $$ F_Q[\varrho,B] $$ denotes the quantum Fisher information and the density matrix is decomposed to pure states as $$ \varrho=\sum_k p_k \vert \Psi_k\rangle \langle \Psi_k\vert. $$ The derivation takes advantage of the fact that the quantum Fisher information is the convex roof of the variance times four. A simpler inequality follows without a convex roof $$ \sigma_A^2 F_Q[\varrho,B] \geq \vert \langle i[A,B]\rangle\vert^2, $$ which is stronger than the Heisenberg uncertainty relation, since for the quantum Fisher information we have $$ F_Q[\varrho,B]\le 4 \sigma_B, $$ while for pure states the equality holds. ### The Maccone–Pati uncertainty relations The Robertson–Schrödinger uncertainty relation can be trivial if the state of the system is chosen to be eigenstate of one of the observable. 
The stronger uncertainty relations proved by Lorenzo Maccone and Arun K. Pati give non-trivial bounds on the sum of the variances for two incompatible observables. (Earlier works on uncertainty relations formulated as the sum of variances include, e.g., Ref. due to Yichen Huang.) For two non-commuting observables $$ A $$ and $$ B $$ the first stronger uncertainty relation is given by $$ \sigma_{A}^2 + \sigma_{ B}^2 \ge \pm i \langle \Psi\mid [A, B]|\Psi \rangle + \mid \langle \Psi\mid(A \pm i B)\mid{\bar \Psi} \rangle|^2, $$ where $$ \sigma_{A}^2 = \langle \Psi |A^2 |\Psi \rangle - \langle \Psi \mid A \mid \Psi \rangle^2 $$ , $$ \sigma_{B}^2 = \langle \Psi |B^2 |\Psi \rangle - \langle \Psi \mid B \mid\Psi \rangle^2 $$ , $$ |{\bar \Psi} \rangle $$ is a normalized vector that is orthogonal to the state of the system $$ |\Psi \rangle $$ and one should choose the sign of $$ \pm i \langle \Psi\mid[A, B]\mid\Psi \rangle $$ to make this real quantity a positive number. The second stronger uncertainty relation is given by $$ \sigma_A^2 + \sigma_B^2 \ge \frac{1}{2}| \langle {\bar \Psi}_{A+B} \mid(A + B)\mid \Psi \rangle|^2 $$ where $$ | {\bar \Psi}_{A+B} \rangle $$ is a state orthogonal to $$ |\Psi \rangle $$ . The form of $$ | {\bar \Psi}_{A+B} \rangle $$ implies that the right-hand side of the new uncertainty relation is nonzero unless $$ | \Psi\rangle $$ is an eigenstate of $$ (A + B) $$ . One may note that $$ |\Psi \rangle $$ can be an eigenstate of $$ ( A+ B) $$ without being an eigenstate of either $$ A $$ or $$ B $$ . However, when $$ |\Psi \rangle $$ is an eigenstate of one of the two observables the Heisenberg–Schrödinger uncertainty relation becomes trivial. But the lower bound in the new relation is nonzero unless $$ |\Psi \rangle $$ is an eigenstate of both. ## Energy–time An energy–time uncertainty relation like $$ \Delta E \Delta t \gtrsim \hbar/2, $$ has a long, controversial history; the meaning of $$ \Delta t $$ and $$ \Delta E $$ varies and different formulations have different arenas of validity. However, one well-known application is both well established and experimentally verified: the connection between the life-time of a resonance state, $$ \tau_{\sqrt{1/2}} $$ and its energy width $$ \Delta E $$ : $$ \tau_{\sqrt{1/2}} \Delta E = \pi\hbar/4. $$ In particle-physics, widths from experimental fits to the Breit–Wigner energy distribution are used to characterize the lifetime of quasi-stable or decaying states. An informal, heuristic meaning of the principle is the following: A state that only exists for a short time cannot have a definite energy. To have a definite energy, the frequency of the state must be defined accurately, and this requires the state to hang around for many cycles, the reciprocal of the required accuracy. For example, in spectroscopy, excited states have a finite lifetime. By the time–energy uncertainty principle, they do not have a definite energy, and, each time they decay, the energy they release is slightly different. The average energy of the outgoing photon has a peak at the theoretical energy of the state, but the distribution has a finite width called the natural linewidth. Fast-decaying states have a broad linewidth, while slow-decaying states have a narrow linewidth. The same linewidth effect also makes it difficult to specify the rest mass of unstable, fast-decaying particles in particle physics. The faster the particle decays (the shorter its lifetime), the less certain is its mass (the larger the particle's width). 
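As a rough, order-of-magnitude illustration of the lifetime–linewidth connection described above, the following sketch estimates a natural linewidth from a state lifetime using the heuristic ΔE ≈ ħ/(2τ) rather than the exact half-life relation quoted in the text; the 16 ns lifetime is an assumed example value, not taken from the original.

```python
# Illustrative estimate of a natural linewidth from a state lifetime,
# using the heuristic Delta_E ~ hbar / (2 * tau).
import scipy.constants as const

tau = 16e-9                              # lifetime in seconds (assumed example value)
delta_E = const.hbar / (2 * tau)         # energy width in joules
delta_nu = delta_E / const.h             # equivalent frequency width in hertz

print(f"Delta_E  ~ {delta_E / const.e:.3e} eV")
print(f"Delta_nu ~ {delta_nu / 1e6:.2f} MHz")
```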
### Time in quantum mechanics The concept of "time" in quantum mechanics offers many challenges. There is no quantum theory of time measurement; relativity is both fundamental to time and difficult to include in quantum mechanics. While position and momentum are associated with a single particle, time is a system property: it has no associated operator, as would be needed for the Robertson–Schrödinger relation. The mathematical treatment of stable and unstable quantum systems differs. These factors combine to make energy–time uncertainty principles controversial. Three notions of "time" can be distinguished: external, intrinsic, and observable. External or laboratory time is seen by the experimenter; intrinsic time is inferred by changes in dynamic variables, like the hands of a clock or the motion of a free particle; observable time concerns time as an observable, the measurement of time-separated events. An external-time energy–time uncertainty principle might say that measuring the energy of a quantum system to an accuracy $$ \Delta E $$ requires a time interval $$ \Delta t > h/\Delta E $$ . However, Yakir Aharonov and David Bohm have shown that, in some quantum systems, energy can be measured accurately within an arbitrarily short time: external-time uncertainty principles are not universal. Intrinsic time is the basis for several formulations of energy–time uncertainty relations, including the Mandelstam–Tamm relation discussed in the next section. A physical system with an intrinsic time closely matching the external laboratory time is called a "clock". Observable time, measuring time between two events, remains a challenge for quantum theories; some progress has been made using positive operator-valued measure concepts. ### Mandelstam–Tamm In 1945, Leonid Mandelstam and Igor Tamm derived a non-relativistic time–energy uncertainty relation as follows. From Heisenberg mechanics, the generalized Ehrenfest theorem for an observable B without explicit time dependence, represented by a self-adjoint operator $$ \hat B $$ , relates the time dependence of the average value of $$ \hat B $$ to the average of its commutator with the Hamiltonian: $$ \frac{d\langle \hat{B} \rangle}{dt} = \frac{i}{\hbar}\langle [\hat{H},\hat{B}]\rangle. $$ The value of $$ \langle [\hat{H},\hat{B}]\rangle $$ is then substituted in the Robertson uncertainty relation for the energy operator $$ \hat H $$ and $$ \hat B $$ : $$ \sigma_H\sigma_B \geq \left|\frac{1}{2i} \langle[ \hat{H}, \hat{B}] \rangle\right|, $$ giving $$ \sigma_H \frac{\sigma_B}{\left| \frac{d\langle \hat B \rangle}{dt}\right |} \ge \frac{\hbar}{2} $$ (whenever the denominator is nonzero). While this is a universal result, it depends upon the observable chosen and on the fact that the deviations $$ \sigma_H $$ and $$ \sigma_B $$ are computed for a particular state. Identifying $$ \Delta E \equiv \sigma_H $$ and the characteristic time $$ \tau_B \equiv \frac{\sigma_B}{\left| \frac{d\langle \hat B \rangle}{dt}\right |} $$ gives an energy–time relationship $$ \Delta E \tau_B \ge \frac{\hbar}{2}. $$ Although $$ \tau_B $$ has the dimension of time, it is different from the time parameter t that enters the Schrödinger equation. This $$ \tau_B $$ can be interpreted as the time for which the expectation value of the observable, $$ \langle \hat B \rangle, $$ changes by an amount equal to one standard deviation. Examples: - The time a free quantum particle passes a point in space is more uncertain as the energy of the state is more precisely controlled: $$ \Delta T = \frac{\hbar}{2\,\Delta E}.
$$ Since the time spread is related to the particle position spread and the energy spread is related to the momentum spread, this relation is directly related to position–momentum uncertainty. - A Delta particle, a quasistable composite of quarks related to protons and neutrons, has a lifetime of $$ 10^{-23} $$ s, so its measured mass equivalent to energy, 1232 MeV/c², varies by ±120 MeV/c²; this variation is intrinsic and not caused by measurement errors. - Two energy states $$ \psi_{1,2} $$ with energies $$ E_{1,2} $$ can be superimposed to create a composite state $$ \Psi(x,t) = a\psi_1(x) e^{-iE_1t/\hbar} + b\psi_2(x) e^{-iE_2t/\hbar}. $$ The probability amplitude of this state has a time-dependent interference term: $$ |\Psi(x,t)|^2 = a^2|\psi_1(x)|^2 + b^2|\psi_2(x)|^2 + 2ab\cos\left(\frac{E_2 - E_1}{\hbar}t\right). $$ The oscillation period varies inversely with the energy difference: $$ \tau = 2\pi\hbar/(E_2 - E_1) $$ . Each example has a different meaning for the time uncertainty, according to the observable and state used. ### Quantum field theory Some formulations of quantum field theory use temporary electron–positron pairs, called virtual particles, in their calculations. The mass-energy and lifetime of these particles are related by the energy–time uncertainty relation. The energy of a quantum system is not known with enough precision to limit its behavior to a single, simple history. Thus the influence of all histories must be incorporated into quantum calculations, including those with much greater or much less energy than the mean of the measured/calculated energy distribution. The energy–time uncertainty principle does not temporarily violate conservation of energy; it does not imply that energy can be "borrowed" from the universe as long as it is "returned" within a short amount of time. The energy of the universe is not an exactly known parameter at all times. When events transpire at very short time intervals, there is uncertainty in the energy of these events. ## Harmonic analysis In the context of harmonic analysis the uncertainty principle implies that one cannot at the same time localize the value of a function and its Fourier transform. To wit, the following inequality holds, $$ \left(\int_{-\infty}^\infty x^2 |f(x)|^2\,dx\right)\left(\int_{-\infty}^\infty \xi^2 |\hat{f}(\xi)|^2\,d\xi\right)\ge \frac{\|f\|_2^4}{16\pi^2}. $$ Further mathematical uncertainty inequalities, including the entropic uncertainty discussed below, hold between a function $$ f $$ and its Fourier transform $$ \hat{f} $$ : $$ H_x+H_\xi \ge \log(e/2) $$ ### Signal processing In the context of time–frequency analysis uncertainty principles are referred to as the Gabor limit, after Dennis Gabor, or sometimes the Heisenberg–Gabor limit. The basic result, which follows from "Benedicks's theorem" below, is that a function cannot be both time limited and band limited (a function and its Fourier transform cannot both have bounded support)—see bandlimited versus timelimited. More accurately, the time-bandwidth or duration-bandwidth product satisfies $$ \sigma_{t} \sigma_{f} \ge \frac{1}{4\pi} \approx 0.08 \text{ cycles}, $$ where $$ \sigma_{t} $$ and $$ \sigma_{f} $$ are the standard deviations of the time and frequency energy concentrations respectively. The minimum is attained for a Gaussian-shaped pulse (Gabor wavelet). [For the un-squared Gaussian (i.e. signal amplitude) and its un-squared Fourier transform magnitude $$ \sigma_t\sigma_f=1/(2\pi) $$ ; squaring reduces each $$ \sigma $$ by a factor $$ \sqrt 2 $$ .]
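The Gabor bound can be checked numerically for a Gaussian pulse. The sketch below (sampling rate, window length, and pulse width are assumed values, not from the original text) estimates σ_t and σ_f from the discretized energy densities |x(t)|² and |X(f)|² and recovers a product close to 1/(4π).

```python
# Numerical check that a Gaussian pulse attains sigma_t * sigma_f ~ 1/(4*pi)
# for the energy concentrations |x(t)|^2 and |X(f)|^2 (assumed sampling parameters).
import numpy as np

fs, T = 1000.0, 20.0                        # sample rate and window length (assumed)
t = np.arange(-T / 2, T / 2, 1 / fs)
s = 0.5                                     # Gaussian amplitude width (assumed)
x = np.exp(-t**2 / (2 * s**2))

X = np.fft.fftshift(np.fft.fft(x)) / fs     # approximation of the continuous-time spectrum
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=1 / fs))

def std_of_density(u, w):
    """Standard deviation of u under the (uniformly sampled) density w(u)."""
    w = w / w.sum()
    mean = (u * w).sum()
    return np.sqrt(((u - mean)**2 * w).sum())

sigma_t = std_of_density(t, np.abs(x)**2)
sigma_f = std_of_density(f, np.abs(X)**2)
print(sigma_t * sigma_f, 1 / (4 * np.pi))   # both approximately 0.0796
```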
Another common measure is the product of the time and frequency full width at half maximum (of the power/energy), which for the Gaussian equals $$ 2 \ln 2 / \pi \approx 0.44 $$ (see bandwidth-limited pulse). Stated differently, one cannot simultaneously sharply localize a signal in both the time domain and frequency domain. When applied to filters, the result implies that one cannot achieve high temporal resolution and high frequency resolution at the same time; a concrete example is the resolution trade-off of the short-time Fourier transform—if one uses a wide window, one achieves good frequency resolution at the cost of temporal resolution, while a narrow window has the opposite trade-off. Alternate theorems give more precise quantitative results, and, in time–frequency analysis, rather than interpreting the (1-dimensional) time and frequency domains separately, one instead interprets the limit as a lower limit on the support of a function in the (2-dimensional) time–frequency plane. In practice, the Gabor limit limits the simultaneous time–frequency resolution one can achieve without interference; it is possible to achieve higher resolution, but at the cost of different components of the signal interfering with each other. As a result, in order to analyze signals where the transients are important, the wavelet transform is often used instead of the Fourier transform. ### Discrete Fourier transform Let $$ \left \{ \mathbf{ x_n } \right \} := x_0, x_1, \ldots, x_{N-1} $$ be a sequence of N complex numbers and $$ \left \{ \mathbf{X_k} \right \} := X_0, X_1, \ldots, X_{N-1} $$ be its discrete Fourier transform. Denote by $$ \|x\|_0 $$ the number of non-zero elements in the time sequence $$ x_0,x_1,\ldots,x_{N-1} $$ and by $$ \|X\|_0 $$ the number of non-zero elements in the frequency sequence $$ X_0,X_1,\ldots,X_{N-1} $$ . Then, $$ \|x\|_0 \cdot \|X\|_0 \ge N. $$ This inequality is sharp, with equality achieved when x or X is a Dirac mass, or more generally when x is a nonzero multiple of a Dirac comb supported on a subgroup of the integers modulo N (in which case X is also a Dirac comb supported on a complementary subgroup, and vice versa). More generally, if T and W are subsets of the integers modulo N, let $$ L_T,R_W : \ell^2(\mathbb Z/N\mathbb Z)\to\ell^2(\mathbb Z/N\mathbb Z) $$ denote the time-limiting and band-limiting operators, respectively. Then $$ \|L_TR_W\|^2 \le \frac{|T||W|}{|G|}, $$ where $$ G = \mathbb Z/N\mathbb Z $$ and the norm is the operator norm of operators on the Hilbert space $$ \ell^2(\mathbb Z/N\mathbb Z) $$ of functions on the integers modulo N. This inequality has implications for signal reconstruction. When N is a prime number, a stronger inequality holds: $$ \|x\|_0 + \|X\|_0 \ge N + 1. $$ Discovered by Terence Tao, this inequality is also sharp. ### Benedicks's theorem Amrein–Berthier and Benedicks's theorem intuitively says that the set of points where $$ f $$ is non-zero and the set of points where $$ \hat{f} $$ is non-zero cannot both be small. Specifically, it is impossible for a function $$ f $$ in $$ L^2(\mathbf{R}^d) $$ and its Fourier transform $$ \hat{f} $$ to both be supported on sets of finite Lebesgue measure. A more quantitative version is $$ \|f\|_{L^2(\mathbf{R}^d)}\leq Ce^{C|S||\Sigma|} \bigl(\|f\|_{L^2(S^c)} + \| \hat{f} \|_{L^2(\Sigma^c)} \bigr) ~. $$ One expects that the factor $$ Ce^{C|S||\Sigma|} $$ can be improved, but this is currently known only if either $$ S $$ or $$ \Sigma $$ is convex. ### Hardy's uncertainty principle The mathematician G. H. Hardy formulated the following uncertainty principle: it is not possible for $$ f $$ and $$ \hat{f} $$ to both be "very rapidly decreasing".
Specifically, if $$ f $$ in $$ L^2(\mathbb{R}) $$ is such that $$ |f(x)|\leq C(1+|x|)^Ne^{-a\pi x^2} $$ and $$ |\hat{f}(\xi)|\leq C(1+|\xi|)^Ne^{-b\pi \xi^2} $$ ( $$ C>0,N $$ an integer), then, if $$ ab>1 $$ , $$ f=0 $$ , while if $$ ab=1 $$ , there is a polynomial $$ P $$ of degree at most $$ N $$ such that $$ f(x)=P(x)e^{-a\pi x^2}. $$ This was later improved as follows: if $$ f \in L^2(\mathbb{R}^d) $$ is such that $$ \int_{\mathbb{R}^d}\int_{\mathbb{R}^d}|f(x)||\hat{f}(\xi)|\frac{e^{\pi|\langle x,\xi\rangle|}}{(1+|x|+|\xi|)^N} \, dx \, d\xi < +\infty ~, $$ then $$ f(x)=P(x)e^{-\pi\langle Ax,x\rangle} ~, $$ where $$ P $$ is a polynomial and $$ A $$ is a real positive definite matrix. This result was stated in Beurling's complete works without proof and proved in Hörmander (the case $$ d=1,N=0 $$ ) and Bonami, Demange, and Jaming for the general case. Note that Hörmander–Beurling's version implies the case $$ ab>1 $$ in Hardy's Theorem, while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref. A full description of the case $$ ab<1 $$ as well as the following extension to Schwartz class distributions appears in ref. ## Additional uncertainty relations ### Heisenberg limit In quantum metrology, and especially interferometry, the Heisenberg limit is the optimal rate at which the accuracy of a measurement can scale with the energy used in the measurement. Typically, this is the measurement of a phase (applied to one arm of a beam-splitter) and the energy is given by the number of photons used in an interferometer. Although some claim to have broken the Heisenberg limit, this reflects disagreement on the definition of the scaling resource. Suitably defined, the Heisenberg limit is a consequence of the basic principles of quantum mechanics and cannot be beaten, although the weak Heisenberg limit can be beaten. ### Systematic and statistical errors The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation $$ \sigma $$ . Heisenberg's original version, however, dealt with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect. If we let $$ \varepsilon_A $$ represent the error (i.e., inaccuracy) of a measurement of an observable A and $$ \eta_B $$ the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Masanao Ozawa, encompassing both systematic and statistical errors, holds: $$ \varepsilon_A \, \eta_B + \varepsilon_A \, \sigma_B + \sigma_A \, \eta_B \geq \frac{1}{2}\left|\Bigl\langle \bigl[\hat{A},\hat{B}\bigr] \Bigr\rangle \right| . $$ Heisenberg's uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as $$ \varepsilon_A \, \eta_B \geq \frac{1}{2}\left|\Bigl\langle \bigl[\hat{A},\hat{B}\bigr] \Bigr\rangle \right| . $$ The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years. Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors $$ \sigma_A $$ and $$ \sigma_B $$ . There is increasing experimental evidence that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.
Using the same formalism, it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time); the two simultaneous measurements on A and B are necessarily unsharp or weak. It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson and Ozawa relations we obtain $$ \varepsilon_A \eta_B + \varepsilon_A \, \sigma_B + \sigma_A \, \eta_B + \sigma_A \sigma_B \geq \left|\Bigl\langle \bigl[\hat{A},\hat{B}\bigr] \Bigr\rangle \right| . $$ The four terms can be written as: $$ (\varepsilon_A + \sigma_A) \, (\eta_B + \sigma_B) \, \geq \, \left|\Bigl\langle\bigl[\hat{A},\hat{B} \bigr] \Bigr\rangle \right| . $$ Defining: $$ \bar \varepsilon_A \, \equiv \, (\varepsilon_A + \sigma_A) $$ as the inaccuracy in the measured values of the variable A and $$ \bar \eta_B \, \equiv \, (\eta_B + \sigma_B) $$ as the resulting fluctuation in the conjugate variable B, Kazuo Fujikawa established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors: $$ \bar \varepsilon_A \, \bar \eta_B \, \geq \, \left|\Bigl\langle\bigl[\hat{A},\hat{B} \bigr] \Bigr\rangle \right| . $$ ### Quantum entropic uncertainty principle For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period. Other examples include highly bimodal distributions, or unimodal distributions with divergent variance. A solution that overcomes these issues is an uncertainty relation based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic uncertainty. This conjecture, also studied by I. I. Hirschman and proven in 1975 by W. Beckner and by Iwo Bialynicki-Birula and Jerzy Mycielski, is that, for two normalized, dimensionless Fourier transform pairs $$ f(a) $$ and $$ g(b) $$ , where $$ f(a) = \int_{-\infty}^\infty g(b)\ e^{2\pi i a b}\,db $$ and $$ \,\,\,g(b) = \int_{-\infty}^\infty f(a)\ e^{- 2\pi i a b}\,da $$ the Shannon information entropies $$ H_a = -\int_{-\infty}^\infty |f(a)|^2 \log |f(a)|^2\,da, $$ and $$ H_b = -\int_{-\infty}^\infty |g(b)|^2 \log |g(b)|^2\,db $$ are subject to the following constraint, $$ H_a + H_b \ge \log\left(\frac{e}{2}\right), $$ where the logarithms may be in any base. The probability distribution functions associated with the position wave function and the momentum wave function have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by $$ H_x = - \int |\psi(x)|^2 \ln \left(x_0 \, |\psi(x)|^2 \right) dx =-\left\langle \ln \left(x_0 \, \left|\psi(x)\right|^2 \right) \right\rangle $$ $$ H_p = - \int |\varphi(p)|^2 \ln (p_0\,|\varphi(p)|^2) \,dp =-\left\langle \ln (p_0\left|\varphi(p)\right|^2 ) \right\rangle $$ where $$ x_0 $$ and $$ p_0 $$ are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function $$ \psi(x) $$ and the momentum wavefunction $$ \varphi(p) $$ , the above constraint can be written for the corresponding entropies as $$ H_x + H_p \ge \log\left(\frac{e\,h}{2\,x_0\,p_0}\right), $$ where $$ h $$ is the Planck constant.
Depending on one's choice of the $$ x_0\,p_0 $$ product, the expression may be written in many ways. If $$ x_0\,p_0 $$ is chosen to be $$ h $$ , then $$ H_x + H_p \ge \log \left(\frac{e}{2}\right). $$ If, instead, $$ x_0\,p_0 $$ is chosen to be $$ \hbar $$ , then $$ H_x + H_p \ge \log (e\,\pi). $$ If $$ x_0 $$ and $$ p_0 $$ are chosen to be unity in whatever system of units is being used, then $$ H_x + H_p \ge \log \left(\frac{e\,h }{2}\right) $$ where $$ h $$ is interpreted as a dimensionless number equal to the value of the Planck constant in the chosen system of units. Note that these inequalities can be extended to multimode quantum states, or wavefunctions in more than one spatial dimension. The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities $$ H_x \le \frac{1}{2} \log ( 2e\pi \sigma_x^2 / x_0^2 )~, $$ $$ H_p \le \frac{1}{2} \log ( 2e\pi \sigma_p^2 /p_0^2 )~, $$ (equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it readily follows that this entropic uncertainty principle is stronger than the one based on standard deviations, because $$ \sigma_x \sigma_p \ge \frac{\hbar}{2} \exp\left(H_x + H_p - \log \left(\frac{e\,h}{2\,x_0\,p_0}\right)\right) \ge \frac{\hbar}{2}~. $$ In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof). Entropic uncertainty of the normal distribution: We demonstrate this method on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. The length scale can be set to whatever is convenient, so we assign $$ x_0 = \sqrt{\frac{\hbar}{m\omega}}. $$ The probability distribution is the normal distribution with variance $$ \sigma_x^2 = \frac{\hbar}{2m\omega} = \frac{x_0^2}{2} $$ and Shannon entropy $$ H_x = \frac{1}{2}\ln\left(\frac{2e\pi\sigma_x^2}{x_0^2}\right) = \frac{1}{2}\ln(e\,\pi). $$ A completely analogous calculation proceeds for the momentum distribution. Choosing a standard momentum of $$ p_0 = \sqrt{\hbar m \omega} $$ : $$ H_p = \frac{1}{2}\ln(e\,\pi). $$ The entropic uncertainty is therefore the limiting value $$ H_x + H_p = \ln(e\,\pi). $$ A measurement apparatus will have a finite resolution set by the discretization of its possible outputs into bins, with the probability of lying within one of the bins given by the Born rule. We will consider the most common experimental situation, in which the bins are of uniform size. Let δx be a measure of the spatial resolution. We take the zeroth bin to be centered near the origin, with possibly some small constant offset c. The probability of lying within the jth interval of width δx is $$ \operatorname P[x_j]= \int_{(j-1/2)\delta x-c}^{(j+1/2)\delta x-c}| \psi(x)|^2 \, dx $$ To account for this discretization, we can define the Shannon entropy of the wave function for a given measurement apparatus as $$ H_x=-\sum_{j=-\infty}^\infty \operatorname P[x_j] \ln \operatorname P[x_j]. $$ Under the above definition, the entropic uncertainty relation is $$ H_x + H_p > \ln\left(\frac{e}{2}\right)-\ln\left(\frac{\delta x \delta p}{h} \right). $$ Here we note that $$ \delta x \, \delta p $$ is a typical infinitesimal phase space volume used in the calculation of a partition function.
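As a rough numerical illustration of this discretized relation (not part of the original text), the following sketch bins the harmonic-oscillator ground state in units where ħ = m = ω = 1 and compares H_x + H_p against the stated bound; the unit bin widths are assumed example resolutions.

```python
# Binned entropic uncertainty of the QHO ground state, with hbar = m = omega = 1
# (assumed units) and assumed resolutions dx = dp = 1.
import numpy as np
from scipy.stats import norm

sigma = np.sqrt(0.5)          # ground-state standard deviation of both x and p in these units
dx = dp = 1.0                 # assumed measurement resolutions
h = 2 * np.pi                 # Planck constant when hbar = 1

def binned_entropy(sig, width, jmax=60):
    edges = (np.arange(-jmax, jmax + 2) - 0.5) * width   # uniform bins, zero offset
    probs = np.diff(norm.cdf(edges, scale=sig))
    probs = probs[probs > 0]
    return -np.sum(probs * np.log(probs))

Hx = binned_entropy(sigma, dx)
Hp = binned_entropy(sigma, dp)
bound = np.log(np.e / 2) - np.log(dx * dp / h)
print(f"H_x + H_p = {Hx + Hp:.4f}  >  bound = {bound:.4f}")
```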
The inequality is also strict and not saturated. Efforts to improve this bound are an active area of research. Normal distribution example: We demonstrate this method first on the ground state of the QHO, which as discussed above saturates the usual uncertainty based on standard deviations. The probability of lying within one of these bins can be expressed in terms of the error function. The momentum probabilities are completely analogous. For simplicity, we will set the resolutions so that the probabilities reduce to simple error-function expressions. The Shannon entropy can be evaluated numerically. The entropic uncertainty is indeed larger than the limiting value. Note that despite being in the optimal case, the inequality is not saturated. Sinc function example: An example of a unimodal distribution with infinite variance is the sinc function. If the wave function is the correctly normalized uniform distribution, then its Fourier transform is the sinc function, which yields infinite momentum variance despite having a centralized shape. The entropic uncertainty, on the other hand, is finite. Suppose for simplicity that the spatial resolution is just a two-bin measurement, δx = a, and that the momentum resolution is δp = h/a. Partitioning the uniform spatial distribution into two equal bins is straightforward. We set the offset c = 1/2 so that the two bins span the distribution. The bins for momentum must cover the entire real line. As done with the spatial distribution, we could apply an offset. It turns out, however, that the Shannon entropy is minimized when the zeroth bin for momentum is centered at the origin. (The reader is encouraged to try adding an offset.) The probability of lying within an arbitrary momentum bin can be expressed in terms of the sine integral. The Shannon entropy can be evaluated numerically. The entropic uncertainty is indeed larger than the limiting value. ### Uncertainty relation with three angular momentum components For a particle of total angular momentum $$ j $$ the following uncertainty relation holds $$ \sigma_{J_x}^2+\sigma_{J_y}^2+\sigma_{J_z}^2\ge j, $$ where $$ J_l $$ are angular momentum components. The relation can be derived from $$ \langle J_x^2+J_y^2+J_z^2\rangle = j(j+1), $$ and $$ \langle J_x\rangle^2+\langle J_y\rangle^2+\langle J_z\rangle^2\le j^2. $$ The relation can be strengthened as $$ \sigma_{J_x}^2+\sigma_{J_y}^2+F_Q[\varrho,J_z]/4\ge j, $$ where $$ F_Q[\varrho,J_z] $$ is the quantum Fisher information. ## History In 1925 Heisenberg published the Umdeutung (reinterpretation) paper where he showed that a central aspect of quantum theory was non-commutativity: the theory implied that the relative order of position and momentum measurement was significant. Working with Max Born and Pascual Jordan, he continued to develop matrix mechanics, which would become the first modern formulation of quantum mechanics. In March 1926, working in Bohr's institute, Heisenberg realized that the non-commutativity implies the uncertainty principle. Writing to Wolfgang Pauli in February 1927, he worked out the basic concepts. In his celebrated 1927 paper "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established the expression $$ \Delta x \, \Delta p \gtrsim h $$ as the minimum amount of unavoidable momentum disturbance caused by any position measurement, but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately.
His paper gave an analysis in terms of a microscope that Bohr showed was incorrect; Heisenberg included an addendum to the publication. In his 1930 Chicago lecture he refined his principle. Later work broadened the concept: any two variables that do not commute cannot be measured simultaneously—the more precisely one is known, the less precisely the other can be known. Heisenberg wrote: "It can be expressed in its simplest form as follows: One can never know with perfect accuracy both of those two important factors which determine the movement of one of the smallest particles—its position and its velocity. It is impossible to determine accurately both the position and the direction and speed of a particle at the same instant." (Heisenberg, W., Die Physik der Atomkerne, Taylor & Francis, 1952, p. 30.) Kennard in 1927 first proved the modern inequality $$ \sigma_x \sigma_p \ge \frac{\hbar}{2}, $$ where $$ \sigma_x $$ and $$ \sigma_p $$ are the standard deviations of position and momentum. (Heisenberg only proved this relation for the special case of Gaussian states.) In 1929 Robertson generalized the inequality to all observables, and in 1930 Schrödinger extended the form to allow non-zero covariance of the operators; this result is referred to as the Robertson–Schrödinger inequality. ### Terminology and translation Throughout the main body of his original 1927 paper, written in German, Heisenberg used the word "Ungenauigkeit" ("imprecision") to describe the basic theoretical principle. Only in the endnote did he switch to the word "Unsicherheit" ("uncertainty"). Later on, he always used "Unbestimmtheit" ("indeterminacy"). When the English-language version of Heisenberg's textbook, The Physical Principles of the Quantum Theory, was published in 1930, however, only the English word "uncertainty" was used, and it became the term in the English language. ### Heisenberg's microscope The principle is quite counter-intuitive, so the early students of quantum theory had to be reassured that naive measurements intended to violate it were bound always to be unworkable. One way in which Heisenberg originally illustrated the intrinsic impossibility of violating the uncertainty principle is by using the observer effect of an imaginary microscope as a measuring device. He imagines an experimenter trying to measure the position and momentum of an electron by shooting a photon at it. - Problem 1 – If the photon has a short wavelength, and therefore a large momentum, the position can be measured accurately. But the photon scatters in a random direction, transferring a large and uncertain amount of momentum to the electron. If the photon has a long wavelength and low momentum, the collision does not disturb the electron's momentum very much, but the scattering will reveal its position only vaguely. - Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum and hence the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around. The combination of these trade-offs implies that no matter what photon wavelength and aperture size are used, the product of the uncertainty in measured position and measured momentum is greater than or equal to a lower limit, which is (up to a small numerical factor) equal to the Planck constant.
Heisenberg did not care to formulate the uncertainty principle as an exact limit, and preferred to use it instead, as a heuristic quantitative statement, correct up to small numerical factors, which makes the radically new noncommutativity of quantum mechanics inevitable. ### Intrinsic quantum uncertainty Historically, the uncertainty principle has been confused with a related effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the system, that is, without changing something in a system. Heisenberg used such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty. It has since become clearer, however, that the uncertainty principle is inherent in the properties of all wave-like systems, and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems and is not a statement about the observational success of current technology. ## Critical reactions The Copenhagen interpretation of quantum mechanics and Heisenberg's uncertainty principle were, in fact, initially seen as twin targets by detractors. According to the Copenhagen interpretation of quantum mechanics, there is no fundamental reality that the quantum state describes, just a prescription for calculating experimental results. There is no way to say what the state of a system fundamentally is, only what the result of observations might be. Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. ### Ideal detached observer Wolfgang Pauli called Einstein's fundamental objection to the uncertainty principle "the ideal of the detached observer" (phrase translated from the German): ### Einstein's slit The first of Einstein's thought experiments challenging the uncertainty principle went as follows: Bohr's response was that the wall is quantum mechanical as well, and that to measure the recoil to accuracy , the momentum of the wall must be known to this accuracy before the particle passes through. This introduces an uncertainty in the position of the wall and therefore the position of the slit equal to , and if the wall's momentum is known precisely enough to measure the recoil, the slit's position is uncertain enough to disallow a position measurement. A similar analysis with particles diffracting through multiple slits is given by Richard Feynman. ### Einstein's box Bohr was present when Einstein proposed the thought experiment which has become known as Einstein's box. Einstein argued that "Heisenberg's uncertainty equation implied that the uncertainty in time was related to the uncertainty in energy, the product of the two being related to the Planck constant." Consider, he said, an ideal box, lined with mirrors so that it can contain light indefinitely. The box could be weighed before a clockwork mechanism opened an ideal shutter at a chosen instant to allow one single photon to escape. "We now know, explained Einstein, precisely the time at which the photon left the box." "Now, weigh the box again. The change of mass tells the energy of the emitted light. 
In this manner, said Einstein, one could measure the energy emitted and the time it was released with any desired precision, in contradiction to the uncertainty principle." Bohr spent a sleepless night considering this argument, and eventually realized that it was flawed. He pointed out that if the box were to be weighed, say by a spring and a pointer on a scale, "since the box must move vertically with a change in its weight, there will be uncertainty in its vertical velocity and therefore an uncertainty in its height above the table. ... Furthermore, the uncertainty about the elevation above the Earth's surface will result in an uncertainty in the rate of the clock", because of Einstein's own theory of gravity's effect on time. "Through this chain of uncertainties, Bohr showed that Einstein's light box experiment could not simultaneously measure exactly both the energy of the photon and the time of its escape." ### EPR paradox for entangled particles In 1935, Einstein, Boris Podolsky and Nathan Rosen published an analysis of spatially separated entangled particles (EPR paradox). According to EPR, one could measure the position of one of the entangled particles and the momentum of the second particle, and from those measurements deduce the position and momentum of both particles to any precision, violating the uncertainty principle. In order to avoid such possibility, the measurement of one particle must modify the probability distribution of the other particle instantaneously, possibly violating the principle of locality. In 1964, John Stewart Bell showed that this assumption can be falsified, since it would imply a certain inequality between the probabilities of different experiments. Experimental results confirm the predictions of quantum mechanics, ruling out EPR's basic assumption of local hidden variables. ### Popper's criticism Science philosopher Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist. He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations". In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. In 1934, Popper published ("Critique of the Uncertainty Relations") in , and in the same year (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing: Popper proposed an experiment to falsify the uncertainty relations, although he later withdrew his initial version after discussions with Carl Friedrich von Weizsäcker, Heisenberg, and Einstein; Popper sent his paper to Einstein and it may have influenced the formulation of the EPR paradox. ### Free will Some scientists, including Arthur Compton and Martin Heisenberg, have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature. 
Proponents of this theory commonly say that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.

### Thermodynamics

There is reason to believe that violating the uncertainty principle also strongly implies the violation of the second law of thermodynamics. See Gibbs paradox.

### Rejection of the principle

Uncertainty principles relate quantum particles – electrons for example – to classical concepts – position and momentum. This presumes quantum particles have position and momentum. Edwin C. Kemble pointed out in 1937 that such properties cannot be experimentally verified and that assuming they exist gives rise to many contradictions; similarly, Rudolf Haag notes that position in quantum mechanics is an attribute of an interaction, say between an electron and a detector, not an intrinsic property. From this point of view the uncertainty principle is not a fundamental quantum property but a concept "carried over from the language of our ancestors", as Kemble says.

## Applications

Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. All forms of spectroscopy, including particle physics, use the relationship to relate measured energy line-widths to the lifetimes of quantum states. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers.
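As a rough numerical illustration of the line-width–lifetime estimate used in spectroscopy (a sketch added here, not part of the original article), one can assume the energy–time relation ΔE·τ ≈ ħ, so that a measured line width Γ (FWHM, in eV) gives an order-of-magnitude lifetime τ ≈ ħ/Γ for the decaying state. The widths used below are illustrative values, not measured data.

```python
# Minimal sketch: estimate a state's lifetime from its measured energy line width,
# assuming the energy-time uncertainty relation Delta_E * tau ~ hbar.
HBAR_EV_S = 6.582119569e-16  # reduced Planck constant in eV*s

def lifetime_from_linewidth(gamma_ev: float) -> float:
    """Order-of-magnitude lifetime (seconds) for a line width gamma_ev given in eV."""
    return HBAR_EV_S / gamma_ev

if __name__ == "__main__":
    for gamma in (1.0, 1e-3, 1e-6):  # 1 eV, 1 meV, 1 ueV (illustrative widths)
        print(f"Gamma = {gamma:g} eV  ->  tau ~ {lifetime_from_linewidth(gamma):.2e} s")
```

For a resonance 1 eV wide this gives a lifetime of roughly 7×10⁻¹⁶ s, while narrower lines correspond to proportionally longer-lived states.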
https://en.wikipedia.org/wiki/Uncertainty_principle
Mac is a brand of personal computers designed and marketed by Apple since 1984. The name is short for Macintosh (its official name until 1999), a reference to the McIntosh apple. The current product lineup includes the MacBook Air and MacBook Pro laptops, and the iMac, Mac Mini, Mac Studio, and Mac Pro desktops. Macs are currently sold with Apple's UNIX-based macOS operating system, which is not licensed to other manufacturers and exclusively bundled with Mac computers. This operating system replaced Apple's original Macintosh operating system, which has variously been named System, Mac OS, and Classic Mac OS. Jef Raskin conceived the Macintosh project in 1979, which was usurped and redefined by Apple co-founder Steve Jobs in 1981. The original Macintosh was launched in January 1984, after Apple's "1984" advertisement during Super Bowl XVIII. A series of incrementally improved models followed, sharing the same integrated case design. In 1987, the Macintosh II brought color graphics, but priced as a professional workstation and not a personal computer. Beginning in 1994 with the Power Macintosh, the Mac transitioned from Motorola 68000 series processors to PowerPC. Macintosh clones by other manufacturers were also briefly sold afterwards. The line was refreshed in 1998 with the launch of the iMac G3, reinvigorating the line's competitiveness against commodity IBM PC compatibles. Macs transitioned to Intel x86 processors by 2006 along with new sub-product lines MacBook and Mac Pro. Since 2020, Macs have transitioned to Apple silicon chips based on ARM64. ## History ### 1979–1996: "Macintosh" era In the late 1970s, the Apple II became one of the most popular computers, especially in education. After IBM introduced the IBM PC in 1981, its sales surpassed the Apple II. In response, Apple introduced the Lisa in 1983. The Lisa's graphical user interface was inspired by strategically licensed demonstrations of the Xerox Star. Lisa surpassed the Star with intuitive direct manipulation, like the ability to drag and drop files, double-click to launch applications, and move or resize windows by clicking and dragging instead of going through a menu. However, hampered by its high price of and lack of available software, the Lisa was commercially unsuccessful. Parallel to the Lisa's development, a skunkworks team at Apple was working on the Macintosh project. Conceived in 1979 by Jef Raskin, Macintosh was envisioned as an affordable, easy-to-use computer for the masses. Raskin named the computer after his favorite type of apple, the McIntosh. The initial team consisted of Raskin, hardware engineer Burrell Smith, and Apple co-founder Steve Wozniak. In 1981, Steve Jobs was removed from the Lisa team and joined Macintosh, and was able to gradually take control of the project due to Wozniak's temporary absence after an airplane crash. Under Jobs, the Mac grew to resemble the Lisa, with a mouse and a more intuitive graphical interface, at a quarter of the Lisa's price. Upon its January 1984 launch, the first Macintosh was described as "revolutionary" by The New York Times. Sales initially met projections, but dropped due to the machine's low performance, single floppy disk drive requiring frequent disk swapping, and initial lack of applications. Author Douglas Adams said of it, "…what I (and I think everybody else who bought the machine in the early days) fell in love with was not the machine itself, which was ridiculously slow and underpowered, but a romantic idea of the machine. 
And that romantic idea had to sustain me through the realities of actually working on the 128K Mac." Most of the original Macintosh team left Apple, and some followed Jobs to found NeXT after he was forced out by CEO John Sculley. The first Macintosh nevertheless generated enthusiasm among buyers and some developers, who rushed to develop entirely new programs for the platform, including PageMaker, MORE, and Excel. Apple soon released the Macintosh 512K with improved performance and an external floppy drive. The Macintosh is credited with popularizing the graphical user interface, Jobs's fascination with typography gave it an unprecedented variety of fonts and type styles like italics, bold, shadow, and outline. It is the first WYSIWYG computer, and due in large part to PageMaker and Apple's LaserWriter printer, it ignited the desktop publishing market, turning the Macintosh from an early let-down into a notable success. Levy called desktop publishing the Mac's "Trojan horse" in the enterprise market, as colleagues and executives tried these Macs and were seduced into requesting one for themselves. PageMaker creator Paul Brainerd said: "You would see the pattern. A large corporation would buy PageMaker and a couple of Macs to do the company newsletter. The next year you'd come back and there would be thirty Macintoshes. The year after that, three hundred". Ease of use for computer novices was another incentive; Peat Marwick was the first, largest, and for some time the only large corporate customer, but after it merged with the IBM PC-using KMG to form KPMG in 1987, the combined company retained Macs after studying both platforms. In late 1985, Bill Atkinson, one of the few remaining employees to have been on the original Macintosh team, proposed that Apple create a Dynabook, Alan Kay's concept for a tablet computer that stores and organizes knowledge. Sculley rebuffed him, so he adapted the idea into a Mac program, HyperCard, whose cards store any information—text, image, audio, video—with the memex-like ability to semantically link cards together. HyperCard was released in 1987 and bundled with every Macintosh. In the late 1980s, Jean-Louis Gassée, a Sculley protégé who had succeeded Jobs as head of the Macintosh division, made the Mac more expandable and powerful to appeal to tech enthusiasts and enterprise customers. This strategy led to the successful 1987 release of the Macintosh II, which appealed to power users and gave the lineup momentum. However, Gassée's "no-compromise" approach foiled Apple's first laptop, the Macintosh Portable, which has many uncommon power user features, but is almost as heavy as the original Macintosh at twice its price. Soon after its launch, Gassée was fired. Since the Mac's debut, Sculley had opposed lowering the company's profit margins, and Macintoshes were priced far above entry-level MS-DOS compatible computers. Steven Levy said that though Macintoshes were superior, the cheapest Mac cost almost twice as much as the cheapest IBM PC compatible. Sculley also resisted licensing the Mac OS to competing hardware vendors, who could have undercut Apple on pricing and jeopardized its hardware sales, as IBM PC compatibles had done to IBM. These early strategic steps caused the Macintosh to lose its chance at becoming the dominant personal computer platform. 
Though senior management demanded high-margin products, a few employees disobeyed and set out to create a computer that would live up to the original Macintosh's slogan, "[a] computer for the rest of us", which the market clamored for. In a pattern typical of Apple's early era, of skunkworks projects like Macintosh and Macintosh II lacking adoption by upper management who were late to realize the projects' merit, this once-renegade project was actually endorsed by senior management following market pressures. In 1990 came the Macintosh LC and the more affordable Macintosh Classic, the first model under . Between 1984 and 1989, Apple had sold one million Macs, and another 10 million over the following five years. In 1991, the Macintosh Portable was replaced with the smaller and lighter PowerBook 100, the first laptop with a palm rest and trackball in front of the keyboard. The PowerBook brought of revenue within one year, and became a status symbol. By then, the Macintosh represented 10% to 15% of the personal computer market. Fearing a decline in market share, Sculley co-founded the AIM alliance with IBM and Motorola to create a new standardized computing platform, which led to the creation of the PowerPC processor architecture, and the Taligent operating system. In 1992, Apple introduced the Macintosh Performa line, which "grew like ivy" into a disorienting number of barely differentiated models in an attempt to gain market share. This backfired by confusing customers, but the same strategy soon afflicted the PowerBook line. Michael Spindler continued this approach when he succeeded Sculley as CEO in 1993. He oversaw the Mac's transition from Motorola 68000 series to PowerPC and the release of Apple's first PowerPC machine, the well-received Power Macintosh. Many new Macintoshes suffered from inventory and quality control problems. The 1995 PowerBook 5300 was plagued with quality problems, with several recalls as some units even caught fire. Pessimistic about Apple's future, Spindler repeatedly attempted to sell Apple to other companies, including IBM, Kodak, AT&T, Sun, and Philips. In a last-ditch attempt to fend off Windows, Apple yielded and started a Macintosh clone program, which allowed other manufacturers to make System 7 computers. However, this only cannibalized the sales of Apple's higher-margin machines. Meanwhile, Windows 95 was an instant hit with customers. Apple was struggling financially as its attempts to produce a System 7 successor had all failed with Taligent, Star Trek, and Copland, and its hardware was stagnant. The Mac was no longer competitive, and its sales entered a tailspin. Corporations abandoned Macintosh in droves, replacing it with cheaper and more technically sophisticated Windows NT machines for which far more applications and peripherals existed. Even some Apple loyalists saw no future for the Macintosh. Once the world's second largest computer vendor after IBM, Apple's market share declined precipitously from 9.4% in 1993 to 3.1% in 1997. Bill Gates was ready to abandon Microsoft Office for Mac, which would have slashed any remaining business appeal the Mac had. Gil Amelio, Spindler's successor, failed to negotiate a deal with Gates. In 1996, Spindler was succeeded by Amelio, who searched for an established operating system to acquire or license for the foundation of a new Macintosh operating system. He considered BeOS, Solaris, Windows NT, and NeXT's NeXTSTEP, eventually choosing the last. 
Announced on December 20, 1996, Apple acquired NeXT on February 7, 1997, returning its co-founder, Steve Jobs. ### 1997–2011: Steve Jobs era NeXT had developed the mature NeXTSTEP operating system with strong multimedia and Internet capabilities. NeXTSTEP was also popular among programmers, financial firms, and academia for its object-oriented programming tools for rapid application development. In an eagerly anticipated speech at the January 1997 Macworld trade show, Steve Jobs previewed Rhapsody, a merger of NeXTSTEP and Mac OS as the foundation of Apple's new operating system strategy. At the time, Jobs only served as advisor, and Amelio was released in July 1997. Jobs was formally appointed interim CEO in September, and permanent CEO in January 2000. To continue turning the company around, Jobs streamlined Apple's operations and began layoffs. He negotiated a deal with Bill Gates in which Microsoft committed to releasing new versions of Office for Mac for five years, investing $150 million in Apple, and settling an ongoing lawsuit in which Apple alleged that Windows had copied the Mac's interface. In exchange, Apple made Internet Explorer the default Mac browser. The deal was closed hours before Jobs announced it at the August 1997 Macworld. Jobs returned focus to Apple. The Mac lineup had been incomprehensible, with dozens of hard-to-distinguish models. He streamlined it into four quadrants, a laptop and a desktop each for consumers and professionals. Apple also discontinued several Mac accessories, including the StyleWriter printer and the Newton PDA. These changes were meant to refocus Apple's engineering, marketing, and manufacturing efforts so that more care could be dedicated to each product. Jobs also stopped licensing Mac OS to clone manufacturers, which had cost Apple ten times more in lost sales than it received in licensing fees. Jobs made a deal with the largest computer reseller, CompUSA, to carry a store-within-a-store that would better showcase Macs and their software and peripherals. According to Apple, the Mac's share of computer sales in those stores went from 3% to 14%. In November, the online Apple Store launched with built-to-order Mac configurations without a middleman. When Tim Cook was hired as chief operations officer in March 1998, he closed Apple's inefficient factories and outsourced Mac production to Taiwan. Within months, he rolled out a new ERP system and implemented just-in-time manufacturing principles. This practically eliminated Apple's costly unsold inventory, and within one year, Apple had the industry's most efficient inventory turnover. Jobs's top priority was "to ship a great new product". The first is the iMac G3, an all-in-one computer that was meant to make the Internet intuitive and easy to access. While PCs came in functional beige boxes, Jony Ive gave the iMac a radical and futuristic design, meant to make the product less intimidating. Its oblong case is made of translucent plastic in Bondi blue, later revised with many colors. Ive added a handle on the back to make the computer more approachable. Jobs declared the iMac would be "legacy-free", succeeding ADB and SCSI with an infrared port and cutting-edge USB ports. Though USB had industry backing, it was still absent from most PCs and USB 1.1 was only standardized one month after the iMac's release. He also controversially removed the floppy disk drive and replaced it with a CD drive. The iMac was unveiled in May 1998, and released in August. 
It was an immediate commercial success and became the fastest-selling computer in Apple's history, with 800,000 units sold before the year ended. Vindicating Jobs on the Internet's appeal to consumers, 32% of iMac buyers had never used a computer before, and 12% were switching from PCs. The iMac reestablished the Mac's reputation as a trendsetter: for the next few years, translucent plastic became the dominant design trend in numerous consumer products. Apple knew it had lost its chance to compete in the Windows-dominated enterprise market, so it prioritized design and ease of use to make the Mac more appealing to average consumers, and even teens. The "Apple New Product Process" was launched as a more collaborative product development process for the Mac, with concurrent engineering principles. From then, product development was no longer driven primarily by engineering and with design as an afterthought. Instead, Ive and Jobs first defined a new product's "soul", before it was jointly developed by the marketing, engineering, and operations teams. The engineering team was led by the product design group, and Ive's design studio was the dominant voice throughout the development process. The next two Mac products in 1999, the Power Mac G3 (nicknamed "Blue and White") and the iBook, introduced industrial designs influenced by the iMac, incorporating colorful translucent plastic and carrying handles. The iBook introduced several innovations: a strengthened hinge instead of a mechanical latch to keep it closed, ports on the sides rather than on the back, and the first laptop with built-in Wi-Fi. It became the best selling laptop in the U.S. during the fourth quarter of 1999. The professional-oriented Titanium PowerBook G4 was released in 2001, becoming the lightest and thinnest laptop in its class, and the first laptop with a wide-screen display; it also debuted a magnetic latch that secures the lid elegantly. The design language of consumer Macs shifted again from colored plastics to white polycarbonate with the introduction of the 2001 Dual USB "Ice" iBook. To increase the iBook's durability, it eliminated doors and handles, and gained a more minimalistic exterior. Ive attempted to go beyond the quadrant with Power Mac G4 Cube, an innovation beyond the computer tower in a professional desktop far smaller than the Power Mac. The Cube failed in the market and was withdrawn from sale after one year. However, Ive considered it beneficial, because it helped Apple gain experience in complex machining and miniaturization. The development of a successor to the old Mac OS was well underway. Rhapsody had been previewed at WWDC 1997, featuring a Mach kernel and BSD foundations, a virtualization layer for old Mac OS apps (codenamed Blue Box), and an implementation of NeXTSTEP APIs called OpenStep (codenamed Yellow Box). Apple open-sourced the core of Rhapsody as the Darwin operating system. After several developer previews, Apple also introduced the Carbon API, which provided a way for developers to more easily make their apps native to Mac OS X without rewriting them in Yellow Box. Mac OS X was publicly unveiled in January 2000, introducing the modern Aqua graphical user interface, and a far more stable Unix foundation, with memory protection and preemptive multitasking. Blue Box became the Classic environment, and Yellow Box was renamed Cocoa. Following a public beta, the first version of Mac OS X, version 10.0 Cheetah, was released in March 2001. 
In 1999, Apple launched its new "digital lifestyle" strategy, in which the Mac became a "digital hub" and centerpiece, supported by several new applications. In October 1999, the iMac DV gained FireWire ports, allowing users to connect camcorders and easily create movies with iMovie; the iMac gained a CD burner and iTunes, allowing users to rip CDs, make playlists, and burn them to blank discs. Other applications include iPhoto for organizing and editing photos, and GarageBand for creating and mixing music and other audio. The digital lifestyle strategy entered other markets, with the iTunes Store, iPod, iPhone, iPad, and the 2007 renaming from Apple Computer Inc. to Apple Inc. By January 2007, the iPod accounted for half of Apple's revenue. New Macs included the white "Sunflower" iMac G4; Ive designed its display to swivel with one finger, so that it "appear[ed] to defy gravity". In 2003, Apple released the aluminum 12-inch and 17-inch PowerBook G4, proclaiming the "Year of the Notebook". With the Microsoft deal expiring, Apple also replaced Internet Explorer with its new browser, Safari. The first Mac Mini was intended to be assembled in the U.S., but domestic manufacturers were slow and had insufficient quality processes, leading Apple to Taiwanese manufacturer Foxconn. The affordably priced Mac Mini desktop was introduced at Macworld 2005, alongside the introduction of the iWork office suite. In 2001, at Steve Jobs's request, Bertrand Serlet and Avie Tevanian initiated a secret project to propose to Sony executives that Mac OS X be sold on Vaio laptops. They gave a demonstration at a golf party in Hawaii, using the most expensive Vaio laptop they could acquire, but the timing was poor: Sony refused, arguing that Vaio sales had only just begun to grow after years of difficulties.

#### Intel transition and "back to the Mac"

With PowerPC chips falling behind in performance, price, and efficiency, Steve Jobs announced in 2005 that the Mac would transition to Intel processors, noting that the operating system had been developed for both architectures from the beginning. PowerPC apps run using transparent Rosetta emulation, and Windows boots natively using Boot Camp. This transition helped contribute to a few years of growth in Mac sales. After the iPhone's 2007 release, Apple began a multi-year effort to bring many iPhone innovations "back to the Mac", including multi-touch gesture support, instant wake from sleep, and fast flash storage. At Macworld 2008, Jobs introduced the first MacBook Air by taking it out of a manila envelope, touting it as the "world's thinnest notebook". The MacBook Air favors wireless technologies over physical ports, and lacks FireWire, an optical drive, and a replaceable battery. The Remote Disc feature accesses discs in other networked computers. A decade after its launch, journalist Tom Warren wrote that the MacBook Air had "immediately changed the future of laptops", starting the ultrabook trend. OS X Lion added new software features first introduced with the iPad, such as FaceTime, full-screen apps, document autosaving and versioning, and a bundled Mac App Store to replace software install discs with online downloads. OS X later gained support for Retina displays, which had been introduced earlier with the iPhone 4. iPhone-like multi-touch technology was progressively added to all MacBook trackpads, and to desktop Macs through the Magic Mouse and Magic Trackpad. The 2010 MacBook Air added an iPad-inspired standby mode, "instant-on" wake from sleep, and flash memory storage.
After criticism by Greenpeace, Apple improved the ecological performance of its products. The 2008 MacBook Air was free of toxic chemicals like mercury, bromide, and PVC, and shipped in smaller packaging. The enclosures of the iMac and unibody MacBook Pro were redesigned with more recyclable aluminum and glass. On February 24, 2011, the MacBook Pro became the first computer to support Intel's new Thunderbolt connector, with two-way transfer speeds of 10 Gbit/s, and backward compatibility with Mini DisplayPort.

### 2012–present: Tim Cook era

Due to deteriorating health, Steve Jobs resigned as CEO on August 24, 2011, and died that October; Tim Cook was named as his successor. Cook's first keynote address launched iCloud, moving the digital hub from the Mac to the cloud. In 2012, the MacBook Pro was refreshed with a Retina display, and the iMac was slimmed and lost its SuperDrive. During Cook's first few years as CEO, Apple fought media criticisms that it could no longer innovate without Jobs. In 2013, Apple introduced a new cylindrical Mac Pro, with marketing chief Phil Schiller exclaiming "Can't innovate anymore, my ass!". The new model had a miniaturized design with a glossy dark gray cylindrical body and internal components organized around a central cooling system. Tech reviewers praised the 2013 Mac Pro for its power and futuristic design; however, it was poorly received by professional users, who criticized its lack of upgradability and the removal of expansion slots. The iMac was refreshed with a 5K Retina display in 2014, making it the highest-resolution all-in-one desktop computer. The MacBook was reintroduced in 2015, with a completely redesigned aluminum unibody chassis, a 12-inch Retina display, a fanless low-power Intel Core M processor, a much smaller logic board, a new Butterfly keyboard, a single USB-C port, and a solid-state Force Touch trackpad with pressure sensitivity. It was praised for its portability, but criticized for its lack of performance, the need to use adapters to use most USB peripherals, and a high starting price. In 2015, Apple started a service program to address a widespread GPU defect in the 15-inch 2011 MacBook Pro, which could cause graphical artifacts or prevent the machine from functioning entirely.

#### Neglect of professional users

The Touch Bar MacBook Pro was released in October 2016. It was the thinnest MacBook Pro ever made, replaced all ports with four Thunderbolt 3 (USB-C) ports, gained a thinner "Butterfly" keyboard, and replaced function keys with the Touch Bar. The Touch Bar was criticized for making it harder to use the function keys by feel, as it offered no tactile feedback. Many users were also frustrated by the need to buy dongles, particularly professional users who relied on traditional USB-A devices, SD cards, and HDMI for video output. A few months after its release, users reported a problem with stuck keys and letters being skipped or repeated. iFixit attributed this to the ingress of dust or food crumbs under the keys, jamming them. Since the Butterfly keyboard was riveted into the laptop's case, it could only be serviced at an Apple Store or authorized service center. Apple settled a $50M class-action lawsuit over these keyboards in 2022. These same models were afflicted by "flexgate": when users closed and opened the machine, they would risk progressively damaging the cable responsible for the display backlight, which was too short. The $6 cable was soldered to the screen, requiring a $700 repair.
Senior Vice President of Industrial Design Jony Ive continued to guide product designs towards simplicity and minimalism. Critics argued that he had begun to prioritize form over function, and was excessively focused on product thinness. His role in the decisions to switch to fragile Butterfly keyboards, to make the Mac Pro non-expandable, and to remove USB-A, HDMI and the SD card slot from the MacBook Pro was criticized. The long-standing keyboard issue on MacBook Pros, Apple's abandonment of the Aperture professional photography app, and the lack of Mac Pro upgrades led to declining sales and a widespread belief that Apple was no longer committed to professional users. After several years without any significant updates to the Mac Pro, Apple executives admitted in 2017 that the 2013 Mac Pro had not met expectations, and said that the company had designed itself into a "thermal corner", preventing it from releasing a planned dual-GPU successor. Apple also unveiled its future product roadmap for professional products, including plans for an iMac Pro as a stopgap and an expandable Mac Pro to be released later. The iMac Pro was revealed at WWDC 2017, featuring updated Intel Xeon W processors and Radeon Pro Vega graphics. In 2018, Apple released a redesigned MacBook Air with a Retina display, Butterfly keyboard, Force Touch trackpad, and Thunderbolt 3 USB-C ports. The Butterfly keyboard went through three revisions, incorporating silicone gaskets in the key mechanism to prevent keys from being jammed by dust or other particles. However, many users continued to experience reliability issues with these keyboards, leading Apple to launch a program to repair affected keyboards free of charge. Higher-end models of the 15-inch 2018 MacBook Pro faced another issue where the Core i9 processor reached unusually high temperatures, resulting in reduced CPU performance from thermal throttling. Apple issued a patch to address this issue via a macOS supplemental update, blaming a "missing digital key" in the thermal management firmware. The 2019 16-inch MacBook Pro and 2020 MacBook Air replaced the unreliable Butterfly keyboard with a redesigned scissor-switch Magic Keyboard. On the MacBook Pros, the Touch Bar and Touch ID were made standard, and the Esc key was detached from the Touch Bar and returned to being a physical key. At WWDC 2019, Apple unveiled a new Mac Pro with a larger case design that allows for hardware expandability, and introduced a new expansion module system (MPX) for modules such as the Afterburner card for faster video encoding. Almost every part of the new Mac Pro is user-replaceable, with iFixit praising its high user-repairability. It received positive reviews, with reviewers praising its power, modularity, quiet cooling, and Apple's increased focus on professional workflows.

#### Apple silicon transition

In April 2018, Bloomberg reported Apple's plan to replace Intel chips with ARM processors similar to those in its phones, causing Intel's shares to drop by 9.2%. The Verge commented that such a decision made sense, as Intel was failing to make significant improvements to its processors and could not compete with ARM chips on battery life. At WWDC 2020, Tim Cook announced that the Mac would be transitioning to Apple silicon chips, built upon an ARM architecture, over a two-year timeline. The Rosetta 2 translation layer was also introduced, enabling Apple silicon Macs to run Intel apps.
On November 10, 2020, Apple announced their first system-on-a-chip designed for the Mac, the Apple M1, and a series of Macs that would ship with the M1: the MacBook Air, Mac Mini, and the 13-inch MacBook Pro. These new Macs received highly positive reviews, with reviewers highlighting significant improvements in battery life, performance, and heat management compared to previous generations. The iMac Pro was discontinued on March 6, 2021. On April 20, 2021, a new 24-inch iMac was revealed, featuring the M1 chip, seven new colors, thinner white bezels, a higher-resolution 1080p webcam, and an enclosure made entirely from recycled aluminum. On October 18, 2021, Apple announced new 14-inch and 16-inch MacBook Pros, featuring the more powerful M1 Pro and M1 Max chips, a bezel-less mini-LED 120 Hz ProMotion display, and the return of MagSafe and HDMI ports, and the SD card slot. On March 8, 2022, the Mac Studio was unveiled, also featuring the M1 Max chip and the new M1 Ultra chip in a similar form factor to the Mac Mini. It drew highly positive reviews for its flexibility and wide range of available ports. Its performance was deemed "impressive", beating the highest-end Mac Pro with a 28-core Intel Xeon chip, while being significantly more power efficient and compact. It was introduced alongside the Studio Display, meant to replace the 27-inch iMac, which was discontinued on the same day. #### Post-Apple silicon transition At WWDC 2022, Apple announced an updated MacBook Air based on a new M2 chip. It incorporates several changes from the 14-inch MacBook Pro, such as a flat, slab-shaped design, full-sized function keys, MagSafe charging, and a Liquid Retina display, with rounded corners and a display cutout incorporating a 1080p webcam. The Mac Studio with M2 Max and M2 Ultra chips and the Mac Pro with M2 Ultra chip was unveiled at WWDC 2023, and the Intel-based Mac Pro was discontinued on the same day, completing the Mac transition to Apple silicon chips. The Mac Studio was received positively as a modest upgrade over the previous generation, albeit similarly priced PCs could be equipped with faster GPUs. However, the Apple silicon-based Mac Pro was criticized for several regressions, including memory capacity and a complete lack of CPU or GPU expansion options. A 15-inch MacBook Air was also introduced, and is the largest display included on a consumer-level Apple laptop. The MacBook Pro was updated on October 30, 2023, with updated M3 Pro and M3 Max chips using a 3 nm process node, as well as the standard M3 chip in a refreshed iMac and a new base model MacBook Pro. Reviewers lamented the base memory configuration of 8 GB on the standard M3 MacBook Pro. In March 2024, the MacBook Air was also updated to include the M3 chip. In October 2024, several Macs were announced with the M4 series of chips, including the iMac, a redesigned Mac Mini, and the MacBook Pro; all of which included 16 GB of memory as standard. The MacBook Air was also upgraded with 16 GB for the same price. 
## Current Mac models

Mac models currently in production:

| Release date | Model | Processor |
|---|---|---|
| June 13, 2023 | Mac Pro (2023) | Apple M2 Ultra |
| November 8, 2024 | iMac (24-inch, 2024) | Apple M4 |
| November 8, 2024 | Mac Mini (2024) | Apple M4 or M4 Pro |
| November 8, 2024 | MacBook Pro (14-inch, 2024) | Apple M4, M4 Pro or M4 Max |
| November 8, 2024 | MacBook Pro (16-inch, 2024) | Apple M4 Pro or M4 Max |
| March 12, 2025 | MacBook Air (13-inch, M4, 2025) | Apple M4 |
| March 12, 2025 | MacBook Air (15-inch, M4, 2025) | Apple M4 |
| March 12, 2025 | Mac Studio (2025) | Apple M4 Max or M3 Ultra |

## Marketing

The original Macintosh was marketed at Super Bowl XVIII with the highly acclaimed "1984" ad, directed by Ridley Scott. The ad alluded to George Orwell's novel Nineteen Eighty-Four, and symbolized Apple's desire to "rescue" humanity from the conformity of computer industry giant IBM. The ad is now considered a "watershed event" and a "masterpiece." Before the Macintosh, high-tech marketing catered to industry insiders rather than consumers, so journalists covered technology like the "steel or automobiles" industries, with articles written for a highly technical audience. The Macintosh launch event pioneered event marketing techniques that have since become "widely emulated" in Silicon Valley, by creating a mystique about the product and giving an inside look into its creation. Apple took a new "multiple exclusives" approach regarding the press, giving "over one hundred interviews to journalists that lasted over six hours apiece", and introduced a new "Test Drive a Macintosh" campaign. Apple's brand, which established a "heartfelt connection with consumers", is cited as one of the keys to the Mac's success. After Steve Jobs's return to the company, he launched the Think different ad campaign, positioning the Mac as the best computer for "creative people who believe that one person can change the world". The campaign featured black-and-white photographs of luminaries like Albert Einstein, Gandhi, and Martin Luther King Jr., with Jobs saying: "if they ever used a computer, it would have been a Mac". The ad campaign was critically acclaimed and won several awards, including a Primetime Emmy. In the 2000s, Apple continued to use successful marketing campaigns to promote the Mac line, including the Switch and Get a Mac campaigns. Apple's focus on design and build quality has helped establish the Mac as a high-end, premium brand. The company's emphasis on creating iconic and visually appealing designs for its computers has given them a "human face" and made them stand out in a crowded market. Apple has long made product placements in high-profile movies and television shows to showcase Mac computers, like Mission: Impossible, Legally Blonde, and Sex and the City. Apple is known for not allowing producers to show villains using Apple products. Its own shows produced for the Apple TV+ streaming service feature prominent use of MacBooks. The Mac is known for its highly loyal customer base. In 2022, the American Customer Satisfaction Index gave the Mac the highest customer satisfaction score of any personal computer, at 82 out of 100. In that year, Apple was the fourth largest vendor of personal computers, with a market share of 8.9%.

## Hardware

Apple outsources the production of its hardware to Asian manufacturers like Foxconn and Pegatron. As a highly vertically integrated company developing its own operating system and chips, it has tight control over all aspects of its products and deep integration between hardware and software.
All Macs in production use ARM-based Apple silicon processors and have been praised for their performance and power efficiency. They can run Intel apps through the Rosetta 2 translation layer, and iOS and iPadOS apps distributed via the App Store. These Mac models come equipped with high-speed Thunderbolt 4 or USB 4 connectivity, with speeds up to 40 Gbit/s. Apple silicon Macs have custom integrated graphics rather than graphics cards. MacBooks are recharged with either USB-C or MagSafe connectors, depending on the model. Apple sells accessories for the Mac, including the Studio Display and Pro Display XDR external monitors, the AirPods line of wireless headphones, and keyboards and mice such as the Magic Keyboard, Magic Trackpad, and Magic Mouse.

## Software

Macs run the macOS operating system, which is the second most widely used desktop OS according to StatCounter. Macs can also run Windows, Linux, or other operating systems through virtualization, emulation, or multi-booting. macOS is the successor of the classic Mac OS, which had nine releases between 1984 and 1999. The last version of classic Mac OS, Mac OS 9, was introduced in 1999. Mac OS 9 was succeeded by Mac OS X in 2001. Over the years, Mac OS X was rebranded first to OS X and later to macOS. macOS is a derivative of NeXTSTEP and FreeBSD. It uses the XNU kernel, and the core of macOS has been open-sourced as the Darwin operating system. macOS features the Aqua user interface, the Cocoa set of frameworks, and the Objective-C and Swift programming languages. Macs are deeply integrated with other Apple devices, including the iPhone and iPad, through Continuity features like Handoff, Sidecar, Universal Control, and Universal Clipboard. The first version of Mac OS X, version 10.0, was released in March 2001. Subsequent releases introduced major changes and features to the operating system. 10.4 Tiger added Spotlight search; 10.6 Snow Leopard brought refinements, stability, and full 64-bit support; 10.7 Lion introduced many iPad-inspired features; 10.10 Yosemite introduced a complete user interface revamp, replacing skeuomorphic designs with iOS 7-esque flat designs; 10.12 Sierra added the Siri voice assistant and Apple File System (APFS) support; 10.14 Mojave added a dark user interface mode; 10.15 Catalina dropped support for 32-bit apps; 11 Big Sur introduced an iOS-inspired redesign of the user interface; 12 Monterey added the Shortcuts app, Low Power Mode, and AirPlay to Mac; and 13 Ventura added Stage Manager, Continuity Camera, and passkeys. The Mac has a variety of apps available, including cross-platform apps like Google Chrome, Microsoft Office, Adobe Creative Cloud, Mathematica, Visual Studio Code, Ableton Live, and Cinema 4D. Apple has also developed several apps for the Mac, including Final Cut Pro, Logic Pro, iWork, GarageBand, and iMovie. A large number of open-source applications, such as LibreOffice, VLC, and GIMP, run natively on macOS, as do command-line programs, which can be installed through MacPorts and Homebrew. Many applications for Linux or BSD also run on macOS, often using X11. Apple's official integrated development environment (IDE) is Xcode, allowing developers to create apps for the Mac and other Apple platforms. The latest release of macOS is macOS 15 Sequoia, released on September 16, 2024.
## References

## External links

- Macintosh computer

Category:Computer-related introductions in 1984 Category:Apple computers Category:Steve Jobs
https://en.wikipedia.org/wiki/Mac_%28computer%29
A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input, to produce a structured prediction over variable-size input structures, or a scalar prediction on it, by traversing a given structure in topological order. These networks were first introduced to learn distributed representations of structure (such as logical terms), but have been successful in multiple applications, for instance in learning sequence and tree structures in natural language processing (mainly continuous representations of phrases and sentences based on word embeddings).

## Architectures

### Basic

In the simplest architecture, nodes are combined into parents using a weight matrix (which is shared across the whole network) and a non-linearity such as the hyperbolic tangent $$ \tanh $$ . If $$ c_1 $$ and $$ c_2 $$ are $$ n $$ -dimensional vector representations of nodes, their parent will also be an $$ n $$ -dimensional vector, defined as: $$ p_{1,2} = \tanh(W[c_1;c_2]) $$ where $$ W $$ is a learned $$ n \times 2n $$ weight matrix (a minimal numerical sketch of this composition is given at the end of this article). This architecture, with a few improvements, has been used for successfully parsing natural scenes, syntactic parsing of natural language sentences, and recursive autoencoding and generative modeling of 3D shape structures in the form of cuboid abstractions.

### Recursive cascade correlation (RecCC)

RecCC is a constructive neural network approach to dealing with tree domains, with pioneering applications to chemistry and extensions to directed acyclic graphs.

### Unsupervised RNN

A framework for unsupervised RNN was introduced in 2004.

### Tensor

Recursive neural tensor networks use a single tensor-based composition function for all nodes in the tree.

## Training

### Stochastic gradient descent

Typically, stochastic gradient descent (SGD) is used to train the network. The gradient is computed using backpropagation through structure (BPTS), a variant of backpropagation through time used for recurrent neural networks.

## Properties

The universal approximation capability of RNNs over trees has been proved in the literature.

## Related models

### Recurrent neural networks

Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

### Tree Echo State Networks

An efficient approach to implementing recursive neural networks is given by the Tree Echo State Network within the reservoir computing paradigm.

### Extension to graphs

Extensions to graphs include the graph neural network (GNN), Neural Network for Graphs (NN4G), and more recently convolutional neural networks for graphs.

## References

Category:Neural network architectures Category:Artificial neural networks
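To make the basic composition rule above concrete, here is a minimal NumPy sketch (an illustration added here, not a reference implementation from the literature): child vectors are combined pairwise with a shared weight matrix $$ W $$ and a tanh non-linearity, traversing a small binary tree bottom-up. The tree, the dimensionality, and the random initialisation are all made up for the example, and no training is shown.

```python
import numpy as np

# Sketch of the basic recursive composition p = tanh(W [c1; c2]) with a single
# weight matrix shared across the whole tree (illustrative only; untrained weights).
rng = np.random.default_rng(0)
n = 4                                        # dimensionality of node representations
W = 0.1 * rng.standard_normal((n, 2 * n))    # shared n x 2n weight matrix

def compose(c1, c2):
    """Combine two n-dimensional children into their n-dimensional parent."""
    return np.tanh(W @ np.concatenate([c1, c2]))

def encode(tree):
    """Recursively encode a binary tree given as nested (left, right) tuples with
    n-dimensional vectors (e.g. word embeddings) at the leaves."""
    if isinstance(tree, tuple):
        return compose(encode(tree[0]), encode(tree[1]))
    return tree                               # a leaf vector

# Toy parse tree over three leaf embeddings: ((w1, w2), w3)
w1, w2, w3 = (rng.standard_normal(n) for _ in range(3))
root = encode(((w1, w2), w3))
print(root.shape)                             # (4,) -- same dimensionality at every node
```

The same `compose` function is applied at every internal node, which is what "applying the same set of weights recursively over a structured input" means in practice.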
https://en.wikipedia.org/wiki/Recursive_neural_network
In probability theory and ergodic theory, a Markov operator is an operator on a certain function space that conserves the mass (the so-called Markov property). If the underlying measurable space is topologically sufficiently rich, then the Markov operator admits a kernel representation. Markov operators can be linear or non-linear. Closely related to Markov operators is the Markov semigroup. The definition of Markov operators is not entirely consistent in the literature. Markov operators are named after the Russian mathematician Andrey Markov.

## Definitions

### Markov operator

Let $$ (E,\mathcal{F}) $$ be a measurable space and $$ V $$ a set of real, measurable functions $$ f:(E,\mathcal{F})\to (\mathbb{R},\mathcal{B}(\mathbb{R})) $$ . A linear operator $$ P $$ on $$ V $$ is a Markov operator if the following is true

1. $$ P $$ maps bounded, measurable functions to bounded, measurable functions.
1. Let $$ \mathbf{1} $$ be the constant function $$ x\mapsto 1 $$ , then $$ P(\mathbf{1})=\mathbf{1} $$ holds. (conservation of mass / Markov property)
1. If $$ f\geq 0 $$ then $$ Pf\geq 0 $$ . (conservation of positivity)

#### Alternative definitions

Some authors define the operators on the Lp spaces as $$ P:L^p(X)\to L^p(Y) $$ and replace the first condition (bounded, measurable functions on such) with the property $$ \|Pf\|_Y = \|f\|_X,\quad \forall f\in L^p(X) $$

### Markov semigroup

Let $$ \mathcal{P}=\{P_t\}_{t\geq 0} $$ be a family of Markov operators defined on the set of bounded, measurable functions on $$ (E,\mathcal{F}) $$ . Then $$ \mathcal{P} $$ is a Markov semigroup when the following is true

1. $$ P_0=\operatorname{Id} $$ .
1. $$ P_{t+s}=P_t\circ P_s $$ for all $$ t,s\geq 0 $$ .
1. There exists a σ-finite measure $$ \mu $$ on $$ (E,\mathcal{F}) $$ that is invariant under $$ \mathcal{P} $$ , that means for all bounded, positive and measurable functions $$ f:E\to \mathbb{R} $$ and every $$ t\geq 0 $$ the following holds $$ \int_E P_tf\mathrm{d}\mu =\int_E f\mathrm{d}\mu $$ .

### Dual semigroup

Each Markov semigroup $$ \mathcal{P}=\{P_t\}_{t\geq 0} $$ induces a dual semigroup $$ (P^*_t)_{t\geq 0} $$ through $$ \int_EP_tf\mathrm{d\mu} =\int_E f\mathrm{d}\left(P^*_t\mu\right). $$ If $$ \mu $$ is invariant under $$ \mathcal{P} $$ then $$ P^*_t\mu=\mu $$ .

### Infinitesimal generator of the semigroup

Let $$ \{P_t\}_{t\geq 0} $$ be a family of bounded, linear Markov operators on the Hilbert space $$ L^2(\mu) $$ , where $$ \mu $$ is an invariant measure. The infinitesimal generator $$ L $$ of the Markov semigroup $$ \mathcal{P}=\{P_t\}_{t\geq 0} $$ is defined as $$ Lf=\lim\limits_{t\downarrow 0}\frac{P_t f-f}{t}, $$ and the domain $$ D(L) $$ consists of all functions in $$ L^2(\mu) $$ for which this limit exists and is again in $$ L^2(\mu) $$ . $$ D(L)=\left\{f\in L^2(\mu): \lim\limits_{t\downarrow 0}\frac{P_t f-f}{t}\text{ exists and is in } L^2(\mu)\right\}. $$ The carré du champ operator $$ \Gamma $$ measures how far $$ L $$ is from being a derivation.

### Kernel representation of a Markov operator

A Markov operator $$ P_t $$ has a kernel representation $$ (P_tf)(x)=\int_E f(y)p_t(x,\mathrm{d}y),\quad x\in E, $$ with respect to some probability kernel $$ p_t(x,A) $$ , if the underlying measurable space $$ (E,\mathcal{F}) $$ has the following sufficient topological properties:

1. Each probability measure $$ \mu:\mathcal{F}\times \mathcal{F}\to [0,1] $$ can be decomposed as $$ \mu(\mathrm{d}x,\mathrm{d}y)=k(x,\mathrm{d}y)\mu_1(\mathrm{d}x) $$ , where $$ \mu_1 $$ is the projection of $$ \mu $$ onto the first component and $$ k(x,\mathrm{d}y) $$ is a probability kernel.
1. There exists a countable family that generates the σ-algebra $$ \mathcal{F} $$ .

If one now defines a σ-finite measure on $$ (E,\mathcal{F}) $$ , then it is possible to prove that every Markov operator $$ P $$ admits such a kernel representation with respect to $$ k(x,\mathrm{d}y) $$ .

## References

Category:Probability theory Category:Ergodic theory Category:Linear operators
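As an elementary finite-state illustration of these definitions (a sketch added here, not taken from the article): when $$ E $$ is a finite set, a Markov operator with a kernel representation is just a row-stochastic matrix acting on functions, and the defining properties — preservation of the constant function and of positivity — as well as the invariance of a measure under the dual action can be checked directly.

```python
import numpy as np

# Finite-state sketch: a kernel p(x, dy) becomes a row-stochastic matrix K, and the
# Markov operator acts on functions f by (Pf)(x) = sum_y K[x, y] * f(y).
K = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])            # each row sums to 1 (a probability kernel)

def P(f):
    return K @ f                            # action on functions

ones = np.ones(3)
assert np.allclose(P(ones), ones)           # conservation of mass: P(1) = 1
f = np.array([0.0, 2.0, 5.0])
assert np.all(P(f) >= 0)                    # conservation of positivity

# Dual action on measures: P* mu = mu K.  An invariant measure satisfies mu K = mu,
# i.e. mu is a left eigenvector of K for eigenvalue 1, normalised to a probability.
eigvals, eigvecs = np.linalg.eig(K.T)
mu = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
mu = mu / mu.sum()
assert np.allclose(mu @ K, mu)              # invariance under the dual action
print("invariant measure:", mu)
```

In this discrete-time analogy the semigroup property corresponds to matrix powers: applying the operator for time $$ t+s $$ is the same as applying it for time $$ t $$ and then for time $$ s $$, i.e. $$ K^{t+s}=K^tK^s $$ for integer times.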
https://en.wikipedia.org/wiki/Markov_operator
Cross-site request forgery, also known as one-click attack or session riding and abbreviated as CSRF (sometimes pronounced sea-surf) or XSRF, is a type of malicious exploit of a website or web application where unauthorized commands are submitted from a user that the web application trusts. There are many ways in which a malicious website can transmit such commands; specially-crafted image tags, hidden forms, and JavaScript fetch or XMLHttpRequests, for example, can all work without the user's interaction or even knowledge. Unlike cross-site scripting (XSS), which exploits the trust a user has for a particular site, CSRF exploits the trust that a site has in a user's browser. In a CSRF attack, an innocent end user is tricked by an attacker into submitting a web request that they did not intend. This may cause actions to be performed on the website that can include inadvertent client or server data leakage, change of session state, or manipulation of an end user's account. The term "CSRF" is also used as an abbreviation in defences against CSRF attacks, such as techniques that use header data, form data, or cookies, to test for and prevent such attacks. ## Characteristics In a CSRF attack, the attacker's goal is to cause an innocent victim to unknowingly submit a maliciously crafted web request to a website that the victim has privileged access to. This web request can be crafted to include URL parameters, cookies and other data that appear normal to the web server processing the request. At risk are web applications that perform actions based on input from trusted and authenticated users without requiring the user to authorize (e.g. via a popup confirmation) the specific action. A user who is authenticated by a cookie saved in the user's web browser could unknowingly send an HTTP request to a site that trusts the user and thereby cause an unwanted action. A general property of web browsers is that they will automatically and invisibly include any cookies (including session cookies and others) used by a given domain in any web request sent to that domain. This property is exploited by CSRF attacks. In the event that a user is tricked into inadvertently submitting a request through their browser these automatically included cookies will cause the forged request to appear real to the web server and it will perform any appropriately requested actions including returning data, manipulating session state, or making changes to the victim's account. In order for a CSRF attack to work, an attacker must identify a reproducible web request that executes a specific action such as changing an account password on the target page. Once such a request is identified, a link can be created that generates this malicious request and that link can be embedded on a page within the attacker's control. This link may be placed in such a way that it is not even necessary for the victim to click the link. For example, it may be embedded within an html image tag on an email sent to the victim which will automatically be loaded when the victim opens their email. Once the victim has clicked the link, their browser will automatically include any cookies used by that website and submit the request to the web server. The web server will not be able to identify the forgery because the request was made by a user that was logged in, and submitted all the requisite cookies. 
Cross-site request forgery is an example of a confused deputy attack against a web browser because the web browser is tricked into submitting a forged request by a less privileged attacker. CSRF commonly has the following characteristics: - It involves sites that rely on a user's identity. - It exploits the site's trust in that identity. - It tricks the user's browser into sending HTTP requests to a target site where the user is already authenticated. - It involves HTTP requests that have side effects. ## History CSRF Token vulnerabilities have been known and in some cases exploited since 2001. Because it is carried out from the user's IP address, some website logs might not have evidence of CSRF. Exploits are under-reported, at least publicly, and as of 2007 there were few well-documented examples: - The Netflix website in 2006 had numerous vulnerabilities to CSRF, which could have allowed an attacker to perform actions such as adding a DVD to the victim's rental queue, changing the shipping address on the account, or altering the victim's login credentials to fully compromise the account. - The online banking web application of ING Direct was vulnerable to a CSRF attack that allowed illicit money transfers. - Popular video website YouTube was also vulnerable to CSRF in 2008 and this allowed any attacker to perform nearly all actions of any user. - McAfee Secure was also vulnerable to CSRF and it allowed attackers to change their company system. This is fixed in newer versions. ## Example Attackers who can find a reproducible link that executes a specific action on the target page while the victim is logged in can embed such link on a page they control and trick the victim into opening it. The attack carrier link may be placed in a location that the victim is likely to visit while logged into the target site (for example, a discussion forum), or sent in an HTML email body or attachment. A real CSRF vulnerability in uTorrent (CVE-2008-6586) exploited the fact that its web console accessible at localhost:8080 allowed critical actions to be executed using a simple GET request: Force a .torrent file download http://localhost:8080/gui/?action=add-url&s=http://evil.example.com/backdoor.torrent Change uTorrent administrator password http://localhost:8080/gui/?action=setsetting&s=webui.password&v=eviladmin Attacks were launched by placing malicious, automatic-action HTML image elements on forums and email spam, so that browsers visiting these pages would open them automatically, without much user action. People running vulnerable uTorrent version at the same time as opening these pages were susceptible to the attack. CSRF attacks using image tags are often made from Internet forums, where users are allowed to post images but not JavaScript, for example using BBCode: ```bbcode [img]http://localhost:8080/gui/?action=add-url&s=http://evil.example.com/backdoor.torrent[/img] ``` When accessing the attack link to the local uTorrent application at , the browser would also always automatically send any existing cookies for that domain. This general property of web browsers enables CSRF attacks to exploit their targeted vulnerabilities and execute hostile actions as long as the user is logged into the target website (in this example, the local uTorrent web interface) at the time of the attack. 
In the uTorrent example described above, the attack was facilitated by the fact that uTorrent's web interface used GET requests for critical state-changing operations (change credentials, download a file, etc.), which the HTTP specification explicitly discourages, since GET is meant to be a safe method. Because of this assumption, many existing CSRF prevention mechanisms in web frameworks will not cover GET requests, but rather apply the protection only to HTTP methods that are intended to be state-changing.

## Forging login requests

An attacker may forge a request to log the victim into a target website using the attacker's credentials; this is known as login CSRF. Login CSRF makes various novel attacks possible; for instance, an attacker can later log into the site with their legitimate credentials and view private information like activity history that has been saved in the account. This attack has been demonstrated against Google and Yahoo.

## HTTP verbs and CSRF

Depending on the type, the HTTP request methods vary in their susceptibility to CSRF attacks (due to the differences in their handling by web browsers). Therefore, the protective measures against an attack depend on the method of the HTTP request.

- In HTTP GET the CSRF exploitation is trivial, using methods described above, such as a simple hyperlink containing manipulated parameters that is automatically loaded by an IMG tag. By the HTTP specification, however, GET should be used as a safe method, that is, not significantly changing the user's state in the application. Applications using GET for such operations should switch to HTTP POST or use anti-CSRF protection.
- The HTTP POST vulnerability to CSRF depends on the usage scenario:
  - In the simplest form of POST, with data encoded as a query string (`field1=value1&field2=value2`), a CSRF attack is easily implemented using a simple HTML form, so anti-CSRF measures must be applied.
  - If data is sent in any other format (JSON, XML), the standard method is to issue a POST request using XMLHttpRequest, with CSRF attacks prevented by the Same-origin policy (SOP) and Cross-origin resource sharing (CORS); there is a technique to send arbitrary content from a simple HTML form using the `ENCTYPE` attribute; such a fake request can be distinguished from legitimate ones by its `text/plain` content type, but if this is not enforced on the server, CSRF can be executed.
- Other HTTP methods (PUT, DELETE, etc.) can only be issued using XMLHttpRequest, with the Same-origin policy (SOP) and Cross-origin resource sharing (CORS) preventing CSRF; these measures, however, will not be active on websites that explicitly disable them using the `Access-Control-Allow-Origin: *` header.

## Other approaches to CSRF

Additionally, while typically described as a static type of attack, CSRF can also be dynamically constructed as part of a payload for a cross-site scripting attack, as demonstrated by the Samy worm, or constructed on the fly from session information leaked via offsite content and sent to a target as a malicious URL. CSRF tokens could also be sent to a client by an attacker due to session fixation or other vulnerabilities, or guessed via a brute-force attack, rendered on a malicious page that generates thousands of failed requests. The attack class of "Dynamic CSRF", or using a per-client payload for session-specific forgery, was described in 2009 by Nathan Hamiel and Shawn Moyer at the BlackHat Briefings, though the taxonomy has yet to gain wider adoption.
A new vector for composing dynamic CSRF attacks was presented by Oren Ofer at a local OWASP chapter meeting in January 2012 – "AJAX Hammer – Dynamic CSRF" (Downloads, hasc-research, Google Project Hosting; code.google.com, 2013-06-17; retrieved 2014-04-12).

## Effects

Severity metrics have been issued for CSRF token vulnerabilities that result in remote code execution with root privileges, as well as for a vulnerability that can compromise a root certificate, which will completely undermine a public key infrastructure.

## Limitations

Several things have to happen for cross-site request forgery to succeed:

1. The attacker must target either a site that doesn't check the referrer header or a victim with a browser or plugin that allows referer spoofing.
1. The attacker must find a form submission at the target site, or a URL that has side effects, that does something (e.g., transfers money, or changes the victim's e-mail address or password).
1. The attacker must determine the right values for all the forms or URL inputs; if any of them are required to be secret authentication values or IDs that the attacker can't guess, the attack will most likely fail (unless the attacker is extremely lucky in their guess).
1. The attacker must lure the victim to a web page with malicious code while the victim is logged into the target site.

The attack is blind: the attacker cannot see what the target website sends back to the victim in response to the forged requests, unless they exploit a cross-site scripting or other bug at the target website. Similarly, the attacker can only target any links or submit any forms that come up after the initial forged request if those subsequent links or forms are similarly predictable. (Multiple targets can be simulated by including multiple images on a page, or by using JavaScript to introduce a delay between clicks.)

## Prevention

Most CSRF prevention techniques work by embedding additional authentication data into requests, which allows the web application to detect requests from unauthorized locations.

### Synchronizer token pattern

Synchronizer token pattern (STP) is a technique where a token, a secret and unique value for each request, is embedded by the web application in all HTML forms and verified on the server side. The token may be generated by any method that ensures unpredictability and uniqueness (e.g. using a hash chain of a random seed). This is called an anti-forgery token in ASP.NET. The attacker is thus unable to place a correct token in their requests to authenticate them.

Example of STP set by Django in an HTML form:

```xml
<input type="hidden" name="csrfmiddlewaretoken" value="KbyUmhTLMpYj7CD2di7JKP1P3qmLlkPt">
```

STP is the most compatible technique, as it relies only on HTML, but it introduces some complexity on the server side, due to the burden associated with checking the validity of the token on each request. As the token is unique and unpredictable, it also enforces a proper sequence of events (e.g. screen 1, then 2, then 3), which raises usability problems (e.g. when a user opens multiple tabs). This can be relaxed by using a per-session CSRF token instead of a per-request CSRF token.

### Cookie-to-header token

Web applications that use JavaScript for the majority of their operations may use the following anti-CSRF technique:

- On an initial visit without an associated server session, the web application sets a cookie.
The cookie typically contains a random token which may remain the same for up to the life of the web session Set-Cookie: __Host-csrf_token=i8XNjC4b8KVok4uw5RftR38Wgp2BFwql; Expires=Thu, 23-Jul-2015 10:25:33 GMT; Max-Age=31449600; Path=/; SameSite=Lax; Secure - JavaScript operating on the client side reads its value and copies it into a custom HTTP header sent with each transactional request X-Csrf-Token: i8XNjC4b8KVok4uw5RftR38Wgp2BFwql - The server validates presence and integrity of the token Security of this technique is based on the assumption that only JavaScript running on the client side of an HTTPS connection to the server that initially set the cookie will be able to read the cookie's value. JavaScript running from a rogue file or email should not be able to successfully read the cookie value to copy into the custom header. Even though the cookie may be automatically sent with the rogue request, subject to the cookies SameSite policy, the server will still expect a valid header. The CSRF token itself should be unique and unpredictable. It may be generated randomly, or it may be derived from the session token using HMAC: csrf_token = HMAC(session_token, application_secret) The CSRF token cookie must not have httpOnly flag, as it is intended to be read by JavaScript by design. This technique is implemented by many modern frameworks, such as Django and AngularJS. Because the token remains constant over the whole user session, it works well with AJAX applications, but does not enforce sequence of events in the web application. The protection provided by this technique can be thwarted if the target website disables its same-origin policy using one of the following techniques: - file granting unintended access to Silverlight controls - file granting unintended access to Flash movies ### Double Submit Cookie Similarly to the cookie-to-header approach, but without involving JavaScript, a site can set a CSRF token as a cookie, and also insert it as a hidden field in each HTML form. When the form is submitted, the site can check that the cookie token matches the form token. The same-origin policy prevents an attacker from reading or setting cookies on the target domain, so they cannot put a valid token in their crafted form. The advantage of this technique over the Synchronizer pattern is that the token does not need to be stored on the server. However, if the site in question has cookie setting functionality, this protection can be bypassed. ### SameSite cookie attribute An additional "SameSite" attribute can be included when the server sets a cookie, instructing the browser on whether to attach the cookie to cross-site requests. If this attribute is set to "strict", then the cookie will only be sent on same-site requests, making CSRF ineffective. However, this requires the browser to recognise and correctly implement the attribute. ### Client-side safeguards Browser extensions such as RequestPolicy (for Mozilla Firefox) or (for both Firefox and Google Chrome/Chromium) can prevent CSRF by providing a default-deny policy for cross-site requests. However, this can significantly interfere with the normal operation of many websites. The CsFire extension (also for Firefox) can mitigate the impact of CSRF with less impact on normal browsing, by removing authentication information from cross-site requests. 
The NoScript extension for Firefox mitigates CSRF threats by distinguishing trusted from untrusted sites, and removing authentication & payloads from POST requests sent by untrusted sites to trusted ones. The Application Boundary Enforcer module in NoScript also blocks requests sent from internet pages to local sites (e.g. localhost), preventing CSRF attacks on local services (such as uTorrent) or routers. The Self Destructing Cookies extension for Firefox does not directly protect from CSRF, but can reduce the attack window, by deleting cookies as soon as they are no longer associated with an open tab. ### Other techniques Various other techniques have been used or proposed for CSRF prevention historically: - Verifying that the request's headers contain `X-Requested-With` (used by Ruby on Rails before v2.0 and Django before v1.2.5), or checking the HTTP `Referer` header and/or HTTP `Origin` header. - Checking the HTTP `Referer` header to see if the request is coming from an authorized page is commonly used for embedded network devices because it does not increase memory requirements. However, a request that omits the `Referer` header must be treated as unauthorized because an attacker can suppress the `Referer` header by issuing requests from FTP or HTTPS URLs. This strict `Referer` validation may cause issues with browsers or proxies that omit the `Referer` header for privacy reasons. Also, old versions of Flash (before 9.0.18) allow malicious Flash to generate GET or POST requests with arbitrary HTTP request headers using CRLF Injection. Similar CRLF injection vulnerabilities in a client can be used to spoof the referrer of an HTTP request. - POST request method was for a while perceived as immune to trivial CSRF attacks using parameters in URL (using GET method). However, both POST and any other HTTP method can be now easily executed using XMLHttpRequest. Filtering out unexpected GET requests still prevents some particular attacks, such as cross-site attacks using malicious image URLs or link addresses and cross-site information leakage through `<script>` elements (JavaScript hijacking); it also prevents (non-security-related) problems with aggressive web crawlers and link prefetching. Cross-site scripting (XSS) vulnerabilities (even in other applications running on the same domain) allow attackers to bypass essentially all CSRF preventions.
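As an illustration of the token-based defences described above, here is a minimal sketch of the HMAC-derived cookie-to-header token, following csrf_token = HMAC(session_token, application_secret); the function names and the choice of SHA-256 are assumptions made for the example, not any particular framework's API:

```python
import hashlib
import hmac
import secrets

APPLICATION_SECRET = secrets.token_bytes(32)   # illustrative server-side secret

def issue_csrf_token(session_token: str) -> str:
    # Derive the token from the session token, as in
    # csrf_token = HMAC(session_token, application_secret).
    return hmac.new(APPLICATION_SECRET, session_token.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_token: str, presented_token: str) -> bool:
    expected = issue_csrf_token(session_token)
    # Constant-time comparison avoids leaking the token through timing.
    return hmac.compare_digest(expected, presented_token)

session = secrets.token_urlsafe(32)
token = issue_csrf_token(session)            # sent to the client, e.g. in a readable cookie
print(verify_csrf_token(session, token))     # True for a legitimate request
print(verify_csrf_token(session, "forged"))  # False for a forged request
```

Because the token is derived rather than stored, the server does not need to keep per-request state, which is the same advantage the double-submit cookie variant claims over the synchronizer token pattern.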
https://en.wikipedia.org/wiki/Cross-site_request_forgery
In computer science, a generator is a routine that can be used to control the iteration behaviour of a loop. All generators are also iterators. A generator is very similar to a function that returns an array, in that a generator has parameters, can be called, and generates a sequence of values. However, instead of building an array containing all the values and returning them all at once, a generator yields the values one at a time, which requires less memory and allows the caller to get started processing the first few values immediately. In short, a generator looks like a function but behaves like an iterator.

Generators can be implemented in terms of more expressive control flow constructs, such as coroutines or first-class continuations. Generators, also known as semicoroutines, are a special case of (and weaker than) coroutines, in that they always yield control back to the caller (when passing a value back), rather than specifying a coroutine to jump to; see comparison of coroutines with generators.

## Uses

Generators are usually invoked inside loops. The first time that a generator invocation is reached in a loop, an iterator object is created that encapsulates the state of the generator routine at its beginning, with arguments bound to the corresponding parameters. The generator's body is then executed in the context of that iterator until a special yield action is encountered; at that time, the value provided with the yield action is used as the value of the invocation expression. The next time the same generator invocation is reached in a subsequent iteration, the execution of the generator's body is resumed after the yield action, until yet another yield action is encountered. In addition to the yield action, execution of the generator body can also be terminated by a finish action, at which time the innermost loop enclosing the generator invocation is terminated. In more complicated situations, a generator may be used manually outside of a loop to create an iterator, which can then be used in various ways.

Because generators compute their yielded values only on demand, they are useful for representing streams, such as sequences that would be expensive or impossible to compute at once. These include e.g. infinite sequences and live data streams. When eager evaluation is desirable (primarily when the sequence is finite, as otherwise evaluation will never terminate), one can either convert to a list, or use a parallel construction that creates a list instead of a generator. For example, in Python a generator `g` can be evaluated to a list `l` via `l = list(g)`, while in F# the sequence expression `seq { ... }` evaluates lazily (a generator or sequence) but `[ ... ]` evaluates eagerly (a list).

In the presence of generators, loop constructs of a language – such as for and while – can be reduced into a single loop ... end loop construct; all the usual loop constructs can then be comfortably simulated by using suitable generators in the right way. For example, a ranged loop like `for x = 1 to 10` can be implemented as iteration through a generator, as in Python's `for x in range(1, 10)`. Further, `break` can be implemented as sending finish to the generator and then using `continue` in the loop.
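The lazy-versus-eager distinction above can be made concrete with a short sketch in Python (one of the languages surveyed below); the `squares` function is an invented example, not part of any library:

```python
def squares(limit):
    """Yield square numbers below limit, one at a time."""
    n = 0
    while n * n < limit:
        yield n * n
        n += 1

g = squares(50)          # nothing is computed yet (lazy)
print(next(g))           # 0 -- runs the body until the first yield
print(next(g))           # 1 -- resumes just after that yield

l = list(squares(50))    # eager: drain the generator into a list
print(l)                 # [0, 1, 4, 9, 16, 25, 36, 49]
```

In the same spirit, Python's `range(1, 10)` object plays the role of the generator in the ranged-loop reduction mentioned above.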
## Languages providing generators

Generators first appeared in CLU (1975), were a prominent feature in the string manipulation language Icon (1977) and are now available in Python (2001), C#, Ruby, PHP, ECMAScript (as of ES6/ES2015), and other languages. In CLU and C#, generators are called iterators, and in Ruby, enumerators.

### Lisp

The final Common Lisp standard does not natively provide generators, yet various library implementations exist, such as SERIES documented in CLtL2 or pygen.

### CLU

A yield statement is used to implement iterators over user-defined data abstractions.

```text
string_chars = iter (s: string) yields (char);
  index: int := 1;
  limit: int := string$size (s);
  while index <= limit do
    yield (string$fetch(s, index));
    index := index + 1;
    end;
  end string_chars;

for c: char in string_chars(s) do
  ...
  end;
```

### Icon

Every expression (including loops) is a generator. The language has many generators built-in and even implements some of the logic semantics using the generator mechanism (logical disjunction or "OR" is done this way). Printing squares from 0 to 20 can be achieved using a co-routine by writing:

```icon
local squares, j
squares := create (seq(0) ^ 2)
every j := |@squares do
  if j <= 20 then
    write(j)
  else
    break
```

However, most of the time custom generators are implemented with the "suspend" keyword which functions exactly like the "yield" keyword in CLU.

### C

C does not have generator functions as a language construct, but, as they are a subset of coroutines, it is simple to implement them using any framework that implements stackful coroutines, such as libdill. On POSIX platforms, when the cost of context switching per iteration is not a concern, or full parallelism rather than merely concurrency is desired, a very simple generator function framework can be implemented using pthreads and pipes.

### C++

It is possible to introduce generators into C++ using pre-processor macros. The resulting code might have aspects that are very different from native C++, but the generator syntax can be very uncluttered. The set of pre-processor macros defined in this source allow generators to be defined with syntax as in the following example:

```cpp
$generator(descent)
{
   int i;

   // place the constructor of our generator, e.g.
   // descent(int minv, int maxv) {...}

   // from $emit to $stop is a body of our generator:

   $emit(int) // will emit int values. Start of body of the generator.
      for (i = 10; i > 0; --i)
         $yield(i); // similar to yield in Python,
                    // returns next number in [1..10], reversed.
   $stop; // stop, end of sequence. End of body of the generator.
};
```

This can then be iterated using:

```cpp
int main(int argc, char* argv[])
{
    descent gen;
    for (int n; gen(n);) // "get next" generator invocation
        printf("next number is %d\n", n);
    return 0;
}
```

Moreover, C++11 allows foreach loops to be applied to any class that provides the `begin` and `end` functions. It's then possible to write generator-like classes by defining both the iterable methods (`begin` and `end`) and the iterator methods (`operator!=`, `operator++` and `operator*`) in the same class. For example, it is possible to write the following program:

```cpp
#include <iostream>
int main()
{
    for (int i: range(10)) {
        std::cout << i << std::endl;
    }
    return 0;
}
```

A basic range implementation would look like this:

```cpp
class range {
 private:
    int last;
    int iter;

 public:
    range(int end): last(end), iter(0) {}

    // Iterable functions
    const range& begin() const { return *this; }
    const range& end() const { return *this; }

    // Iterator functions
    bool operator!=(const range&) const { return iter < last; }
    void operator++() { ++iter; }
    int operator*() const { return iter; }
};
```

### Perl

Perl does not natively provide generators, but support is provided by the Coro::Generator module which uses the Coro co-routine framework. Example usage:

```perl
use strict;
use warnings;
# Enable generator { BLOCK } and yield
use Coro::Generator;
# Array reference to iterate over
my $chars = ['A' .. 'Z'];
# New generator which can be called like a coderef.
my $letters = generator {
    my $i = 0;
    for my $letter (@$chars) {
        # get next letter from $chars
        yield $letter;
    }
};
# Call the generator 15 times.
print $letters->(), "\n" for (0..15);
```

### Raku

Example parallel to Icon uses Raku (formerly/aka Perl 6) Range class as one of several ways to achieve generators with the language. Printing squares from 0 to 20 can be achieved by writing:

```raku
for (0 .. *).map(* ** 2) -> $i {
    last if $i > 20;
    say $i
}
```

However, most of the time custom generators are implemented with "gather" and "take" keywords in a lazy context.

### Tcl

In Tcl 8.6, the generator mechanism is founded on named coroutines.

```tcl
proc generator {body} {
    coroutine gen[incr ::disambiguator] apply {{script} {
        # Produce the result of [generator], the name of the generator
        yield [info coroutine]
        # Do the generation
        eval $script
        # Finish the loop of the caller using a 'break' exception
        return -code break
    }} $body
}

# Use a simple 'for' loop to do the actual generation
set count [generator {
    for {set i 10} {$i <= 20} {incr i} {
        yield $i
    }
}]

# Pull values from the generator until it is exhausted
while 1 {
    puts [$count]
}
```

### Haskell

In Haskell, with its lazy evaluation model, every datum created with a non-strict data constructor is generated on demand. For example,

```haskell
countFrom :: Integer -> [Integer]
countFrom n = n : countFrom (n + 1)

from10to20 :: [Integer]
from10to20 = takeWhile (<= 20) $ countFrom 10

primes :: [Integer]
primes = 2 : 3 : nextPrime 5
  where
    nextPrime n
      | notDivisible n = n : nextPrime (n + 2)
      | otherwise      = nextPrime (n + 2)
    notDivisible n =
      all ((/= 0) . (rem n)) $ takeWhile ((<= n) . (^ 2)) $ tail primes
```

where `(:)` is a non-strict list constructor, cons, and `$` is just a "called-with" operator, used for parenthesization. This uses the standard adaptor function,

```haskell
takeWhile p []     = []
takeWhile p (x:xs)
  | p x       = x : takeWhile p xs
  | otherwise = []
```

which walks down the list and stops on the first element that doesn't satisfy the predicate. If the list has been walked before until that point, it is just a strict data structure, but if any part hadn't been walked through before, it will be generated on demand. List comprehensions can be freely used:

```haskell
squaresUnder20           = takeWhile (<= 20) [x * x | x <- countFrom 10]
squaresForNumbersUnder20 = [x * x | x <- takeWhile (<= 20) $ countFrom 10]
```

### Racket

Racket provides several related facilities for generators.
First, its for-loop forms work with sequences, which are a kind of a producer:

```racket
(for ([i (in-range 10 20)])
  (printf "i = ~s\n" i))
```

and these sequences are also first-class values:

```racket
(define 10-to-20 (in-range 10 20))
(for ([i 10-to-20])
  (printf "i = ~s\n" i))
```

Some sequences are implemented imperatively (with private state variables) and some are implemented as (possibly infinite) lazy lists. Also, new struct definitions can have a property that specifies how they can be used as sequences. But more directly, Racket comes with a generator library for a more traditional generator specification. For example,

```racket
#lang racket
(require racket/generator)
(define (ints-from from)
  (generator ()
    (for ([i (in-naturals from)]) ; infinite sequence of integers from 0
      (yield i))))
(define g (ints-from 10))
(list (g) (g) (g)) ; -> '(10 11 12)
```

Note that the Racket core implements powerful continuation features, providing general (re-entrant) continuations that are composable, and also delimited continuations. Using this, the generator library is implemented in Racket.

### PHP

The community of PHP implemented generators in PHP 5.5. Details can be found in the original Request for Comments: Generators.

Infinite Fibonacci sequence:

```php
function fibonacci(): Generator
{
    $last = 0;
    $current = 1;
    yield 1;
    while (true) {
        $current = $last + $current;
        $last = $current - $last;
        yield $current;
    }
}

foreach (fibonacci() as $number) {
    echo $number, "\n";
}
```

Fibonacci sequence with limit:

```php
function fibonacci(int $limit): Generator
{
    yield $a = $b = $i = 1;

    while (++$i < $limit) {
        yield $a = ($b = $a + $b) - $a;
    }
}

foreach (fibonacci(10) as $number) {
    echo "$number\n";
}
```

Any function which contains a yield statement is automatically a generator function.

### Ruby

Ruby supports generators (starting from version 1.9) in the form of the built-in Enumerator class.

```ruby
# Generator from an Enumerator object
chars = Enumerator.new(['A', 'B', 'C', 'Z'])

4.times { puts chars.next }

# Generator from a block
count = Enumerator.new do |yielder|
  i = 0
  loop { yielder.yield i += 1 }
end

100.times { puts count.next }
```

### Java

Java has had a standard interface for implementing iterators since its early days, and since Java 5, the "foreach" construction makes it easy to loop over objects that provide the `java.lang.Iterable` interface. (The Java collections framework and other collections frameworks typically provide iterators for all collections.)

```java
record Pair(int a, int b) {};

Iterable<Integer> myIterable = Stream.iterate(new Pair(1, 1), p -> new Pair(p.b, p.a + p.b))
    .limit(10)
    .map(p -> p.a)::iterator;

myIterable.forEach(System.out::println);
```

Or get an Iterator from the Java 8 super-interface BaseStream of Stream interface.

```java
record Pair(int a, int b) {};

// Save the iterator of a stream that generates fib sequence
Iterator<Integer> myGenerator = Stream
    // Generates Fib sequence
    .iterate(new Pair(1, 1), p -> new Pair(p.b, p.a + p.b))
    .map(p -> p.a).iterator();

// Print the first 5 elements
for (int i = 0; i < 5; i++) {
    System.out.println(myGenerator.next());
}

System.out.println("done with first iteration");

// Print the next 5 elements
for (int i = 0; i < 5; i++) {
    System.out.println(myGenerator.next());
}
```

Output:

```console
1
1
2
3
5
done with first iteration
8
13
21
34
55
```

### C#

An example C# 2.0 generator (the `yield` is available since C# version 2.0): Both of these examples utilize generics, but this is not required.
The yield keyword also helps in implementing custom stateful iterations over a collection as discussed in this discussion.

```csharp
// Method that takes an iterable input (possibly an array)
// and returns all even numbers.
public static IEnumerable<int> GetEven(IEnumerable<int> numbers)
{
    foreach (int number in numbers)
    {
        if ((number % 2) == 0)
        {
            yield return number;
        }
    }
}
```

It is possible to use multiple `yield return` statements and they are applied in sequence on each iteration:

```csharp
public class CityCollection : IEnumerable<string>
{
    public IEnumerator<string> GetEnumerator()
    {
        yield return "New York";
        yield return "Paris";
        yield return "London";
    }
}
```

### XL

In XL, iterators are the basis of 'for' loops:

```text
import IO = XL.UI.CONSOLE

iterator IntegerIterator (var out Counter : integer; Low, High : integer) written Counter in Low..High is
    Counter := Low
    while Counter <= High loop
        yield
        Counter += 1

// Note that I need not be declared, because declared 'var out' in the iterator
// An implicit declaration of I as an integer is therefore made here
for I in 1..5 loop
    IO.WriteLn "I=", I
```

### F#

F# provides generators via sequence expressions, since version 1.9.1. These can define a sequence (lazily evaluated, sequential access) via `seq { ... }`, a list (eagerly evaluated, sequential access) via `[ ... ]` or an array (eagerly evaluated, indexed access) via `[| ... |]` that contain code that generates values. For example,

```fsharp
seq { for b in 0 .. 25 do
          if b < 15 then
              yield b * b }
```

forms a sequence of squares of numbers from 0 to 14 by filtering out numbers from the range of numbers from 0 to 25.

### Python

Generators were added to Python in version 2.2 in 2001. An example generator:

```python
from typing import Iterator

def countfrom(n: int) -> Iterator[int]:
    while True:
        yield n
        n += 1

# Example use: printing out the integers from 10 to 20.
# Note that this iteration terminates normally, despite
# countfrom() being written as an infinite loop.
for i in countfrom(10):
    if i <= 20:
        print(i)
    else:
        break

# Another generator, which produces prime numbers indefinitely as needed.
import itertools

def primes() -> Iterator[int]:
    """Generate prime numbers indefinitely as needed."""
    yield 2
    n = 3
    p = [2]
    while True:
        # If dividing n by all the numbers in p, up to and including sqrt(n),
        # produces a non-zero remainder then n is prime.
        if all(n % f > 0 for f in itertools.takewhile(lambda f: f * f <= n, p)):
            yield n
            p.append(n)
        n += 2
```

In Python, a generator can be thought of as an iterator that contains a frozen stack frame. Whenever `next()` is called on the iterator, Python resumes the frozen frame, which executes normally until the next `yield` statement is reached. The generator's frame is then frozen again, and the yielded value is returned to the caller.

PEP 380 (implemented in Python 3.3) adds the `yield from` expression, allowing a generator to delegate part of its operations to another generator or iterable.

#### Generator expressions

Python has a syntax modeled on that of list comprehensions, called a generator expression, that aids in the creation of generators. The following extends the first example above by using a generator expression to compute squares from the `countfrom` generator function:

```python
squares = (n * n for n in countfrom(2))

for j in squares:
    if j <= 20:
        print(j)
    else:
        break
```

### ECMAScript

ECMAScript 6 (a.k.a. Harmony) introduced generator functions.
An infinite Fibonacci sequence can be written using a function generator:

```javascript
function* fibonacci(limit) {
    let [prev, curr] = [0, 1];
    while (!limit || curr <= limit) {
        yield curr;
        [prev, curr] = [curr, prev + curr];
    }
}

// bounded by upper limit 10
for (const n of fibonacci(10)) {
    console.log(n);
}

// generator without an upper bound limit
for (const n of fibonacci()) {
    console.log(n);
    if (n > 10000) break;
}

// manually iterating
let fibGen = fibonacci();
console.log(fibGen.next().value); // 1
console.log(fibGen.next().value); // 1
console.log(fibGen.next().value); // 2
console.log(fibGen.next().value); // 3
console.log(fibGen.next().value); // 5
console.log(fibGen.next().value); // 8

// picks up from where you stopped
for (const n of fibGen) {
    console.log(n);
    if (n > 10000) break;
}
```

### R

The iterators package can be used for this purpose.

```r
library(iterators)

# Example ------------------
abc <- iter(c('a','b','c'))
nextElem(abc)
```

### Smalltalk

Example in Pharo Smalltalk: The Golden ratio generator below returns, on each invocation of 'goldenRatio next', a better approximation to the Golden Ratio.

```Smalltalk
goldenRatio := Generator on: [ :g |
    | x y z r |
    x := 0. y := 1.
    [ z := x + y. r := (z / y) asFloat. x := y. y := z. g yield: r ] repeat ].
goldenRatio next.
```

The expression below returns the next 10 approximations.

```Smalltalk
Character cr join: ((1 to: 10) collect: [ :dummy | ratio next ]).
```

See more in A hidden gem in Pharo: Generator.
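As a brief addendum to the Python subsection above, the `yield from` delegation added by PEP 380 can be sketched as follows (a minimal illustration with invented function names, not tied to any of the libraries mentioned):

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    # PEP 380 delegation: values (and sent exceptions) flow through to inner().
    yield from inner()
    yield 3

print(list(outer()))  # [0, 1, 2, 3]
```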
https://en.wikipedia.org/wiki/Generator_%28computer_programming%29
In mathematics, an autonomous system or autonomous differential equation is a system of ordinary differential equations which does not explicitly depend on the independent variable. When the variable is time, they are also called time-invariant systems.

Many laws in physics, where the independent variable is usually assumed to be time, are expressed as autonomous systems because it is assumed the laws of nature which hold now are identical to those for any point in the past or future.

## Definition

An autonomous system is a system of ordinary differential equations of the form $$ \frac{d}{dt}x(t)=f(x(t)) $$ where $$ x(t) $$ takes values in $$ n $$ -dimensional Euclidean space and $$ t $$ is often interpreted as time. It is distinguished from systems of differential equations of the form $$ \frac{d}{dt}x(t)=g(x(t),t) $$ in which the law governing the evolution of the system does not depend solely on the system's current state but also on the parameter $$ t $$ , again often interpreted as time; such systems are by definition not autonomous.

## Properties

Solutions are invariant under horizontal translations: Let $$ x_1(t) $$ be the unique solution of the initial value problem for an autonomous system $$ \frac{d}{dt}x(t)=f(x(t)) \, , \quad x(0)=x_0. $$ Then $$ x_2(t)=x_1(t-t_0) $$ solves $$ \frac{d}{dt}x(t)=f(x(t)) \, , \quad x(t_0)=x_0. $$ Denoting $$ s=t-t_0 $$ gives $$ x_1(s)=x_2(t) $$ and $$ ds=dt $$ , thus $$ \frac{d}{dt}x_2(t) = \frac{d}{dt}x_1(t-t_0)=\frac{d}{ds}x_1(s) = f(x_1(s)) = f(x_2(t)) . $$ For the initial condition, the verification is trivial, $$ x_2(t_0)=x_1(t_0-t_0)=x_1(0)=x_0 . $$

## Example

The equation $$ y'= \left(2-y\right)y $$ is autonomous, since the independent variable ( $$ x $$ ) does not explicitly appear in the equation. To plot the slope field and isoclines for this equation, one can use the following code in GNU Octave/MATLAB:

```matlab
Ffun = @(X, Y)(2 - Y) .* Y;            % function f(x,y)=(2-y)y
[X, Y] = meshgrid(0:.2:6, -1:.2:3);    % choose the plot sizes
DY = Ffun(X, Y); DX = ones(size(DY));  % generate the plot values
quiver(X, Y, DX, DY, 'k');             % plot the direction field in black
hold on;
contour(X, Y, DY, [0 1 2], 'g');       % add the isoclines(0 1 2) in green
title('Slope field and isoclines for f(x,y)=(2-y)y')
```

One can observe from the plot that the function $$ \left(2-y\right)y $$ is $$ x $$ -invariant, and so is the shape of the solution, i.e. $$ y(x)=y(x-x_0) $$ for any shift $$ x_0 $$ .

Solving the equation symbolically in MATLAB, by running

```matlab
syms y(x);
equation = (diff(y) == (2 - y) * y);
% solve the equation for a general solution symbolically
y_general = dsolve(equation);
```

yields two equilibrium solutions, $$ y=0 $$ and $$ y=2 $$ , and a third solution involving an unknown constant $$ C_3 $$ :

```matlab
-2 / (exp(C3 - 2 * x) - 1)
```
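For readers without MATLAB, the same symbolic computation can be sketched with SymPy in Python (an illustrative alternative, not part of the original example; the printed form of the arbitrary constant may differ from MATLAB's $$ C_3 $$ ):

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# The autonomous equation y' = (2 - y) y from the example above.
equation = sp.Eq(y(x).diff(x), (2 - y(x)) * y(x))

# General solution: a logistic-type expression with one arbitrary constant.
print(sp.dsolve(equation, y(x)))

# The equilibrium (constant) solutions satisfy (2 - y) y = 0.
print(sp.solve((2 - y(x)) * y(x), y(x)))  # [0, 2]
```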
Picking up some specific values for the initial condition, one can add the plot of several solutions:

```matlab
% solve the initial value problem symbolically
% for different initial conditions
y1 = dsolve(equation, y(1) == 1);
y2 = dsolve(equation, y(2) == 1);
y3 = dsolve(equation, y(3) == 1);
y4 = dsolve(equation, y(1) == 3);
y5 = dsolve(equation, y(2) == 3);
y6 = dsolve(equation, y(3) == 3);

% plot the solutions
ezplot(y1, [0 6]); ezplot(y2, [0 6]); ezplot(y3, [0 6]);
ezplot(y4, [0 6]); ezplot(y5, [0 6]); ezplot(y6, [0 6]);
title('Slope field, isoclines and solutions for f(x,y)=(2-y)y')
legend('Slope field', 'Isoclines', 'Solutions y_{1..6}');
text([1 2 3], [1 1 1], strcat('\leftarrow', {'y_1', 'y_2', 'y_3'}));
text([1 2 3], [3 3 3], strcat('\leftarrow', {'y_4', 'y_5', 'y_6'}));
grid on;
```

## Qualitative analysis

Autonomous systems can be analyzed qualitatively using the phase space; in the one-variable case, this is the phase line.

## Solution techniques

The following techniques apply to one-dimensional autonomous differential equations. Any one-dimensional equation of order $$ n $$ is equivalent to an $$ n $$ -dimensional first-order system (as described in reduction to a first-order system), but not necessarily vice versa.

### First order

The first-order autonomous equation $$ \frac{dx}{dt} = f(x) $$ is separable, so it can be solved by rearranging it into the integral form $$ t + C = \int \frac{dx}{f(x)} $$

### Second order

The second-order autonomous equation $$ \frac{d^2x}{dt^2} = f(x, x') $$ is more difficult, but it can be solved by introducing the new variable $$ v = \frac{dx}{dt} $$ and expressing the second derivative of $$ x $$ via the chain rule as $$ \frac{d^2x}{dt^2} = \frac{dv}{dt} = \frac{dx}{dt}\frac{dv}{dx} = v\frac{dv}{dx} $$ so that the original equation becomes $$ v\frac{dv}{dx} = f(x, v) $$ which is a first order equation containing no reference to the independent variable $$ t $$ . Solving provides $$ v $$ as a function of $$ x $$ . Then, recalling the definition of $$ v $$ : $$ \frac{dx}{dt} = v(x) \quad \Rightarrow \quad t + C = \int \frac{d x}{v(x)} $$ which is an implicit solution.

#### Special case: $$ x'' = f(x) $$

The special case where $$ f $$ is independent of $$ x' $$ , $$ \frac{d^2 x}{d t^2} = f(x) $$ , benefits from separate treatment. These types of equations are very common in classical mechanics because they are always Hamiltonian systems. The idea is to make use of the identity $$ \frac{d x}{d t} = \left(\frac{d t}{d x}\right)^{-1} $$ which follows from the chain rule, barring any issues due to division by zero. By inverting both sides of a first order autonomous system, one can immediately integrate with respect to $$ x $$ : $$ \frac{d x}{d t} = f(x) \quad \Rightarrow \quad \frac{d t}{d x} = \frac{1}{f(x)} \quad \Rightarrow \quad t + C = \int \frac{dx}{f(x)} $$ which is another way to view the separation of variables technique.
The second derivative must be expressed as a derivative with respect to $$ x $$ instead of $$ t $$ : $$ \begin{align} \frac{d^2 x}{d t^2} &= \frac{d}{d t}\left(\frac{d x}{d t}\right) = \frac{d}{d x}\left(\frac{d x}{d t}\right) \frac{d x}{d t} \\[4pt] &= \frac{d}{d x}\left(\left(\frac{d t}{d x}\right)^{-1}\right) \left(\frac{d t}{d x}\right)^{-1} \\[4pt] &= - \left(\frac{d t}{d x}\right)^{-2} \frac{d^2 t}{d x^2} \left(\frac{d t}{d x}\right)^{-1} = - \left(\frac{d t}{d x}\right)^{-3} \frac{d^2 t}{d x^2} \\[4pt] &= \frac{d}{d x}\left(\frac{1}{2}\left(\frac{d t}{d x}\right)^{-2}\right) \end{align} $$ To reemphasize: what's been accomplished is that the second derivative with respect to $$ t $$ has been expressed as a derivative of $$ x $$ . The original second order equation can now be integrated: $$ \begin{align} \frac{d^2 x}{d t^2} &= f(x) \\ \frac{d}{d x}\left(\frac{1}{2}\left(\frac{d t}{d x}\right)^{-2}\right) &= f(x) \\ \left(\frac{d t}{d x}\right)^{-2} &= 2 \int f(x) dx + C_1 \\ \frac{d t}{d x} &= \pm \frac{1}{\sqrt{2 \int f(x) dx + C_1}} \\ t + C_2 &= \pm \int \frac{dx}{\sqrt{2 \int f(x) dx + C_1}} \end{align} $$ This is an implicit solution. The greatest potential problem is inability to simplify the integrals, which implies difficulty or impossibility in evaluating the integration constants. Special case: Using the above approach, the technique can extend to the more general equation $$ \frac{d^2 x}{d t^2} = \left(\frac{d x}{d t}\right)^n f(x) $$ where $$ n $$ is some parameter not equal to two. This will work since the second derivative can be written in a form involving a power of $$ x' $$ . Rewriting the second derivative, rearranging, and expressing the left side as a derivative: $$ \begin{align} &- \left(\frac{d t}{d x}\right)^{-3} \frac{d^2 t}{d x^2} = \left(\frac{d t}{d x}\right)^{-n} f(x) \\[4pt] &- \left(\frac{d t}{d x}\right)^{n - 3} \frac{d^2 t}{d x^2} = f(x) \\[4pt] &\frac{d}{d x}\left(\frac{1}{2 - n}\left(\frac{d t}{d x}\right)^{n - 2}\right) = f(x) \\[4pt] &\left(\frac{d t}{d x}\right)^{n - 2} = (2 - n) \int f(x) dx + C_1 \\[2pt] &t + C_2 = \int \left((2 - n) \int f(x) dx + C_1\right)^{\frac{1}{n - 2}} dx \end{align} $$ The right will carry +/− if $$ n $$ is even. The treatment must be different if $$ n = 2 $$ : $$ \begin{align} - \left(\frac{d t}{d x}\right)^{-1} \frac{d^2 t}{d x^2} &= f(x) \\ -\frac{d}{d x}\left(\ln\left(\frac{d t}{d x}\right)\right) &= f(x) \\ \frac{d t}{d x} &= C_1 e^{-\int f(x) dx} \\ t + C_2 &= C_1 \int e^{-\int f(x) dx} dx \end{align} $$ ### Higher orders There is no analogous method for solving third- or higher-order autonomous equations. Such equations can only be solved exactly if they happen to have some other simplifying property, for instance linearity or dependence of the right side of the equation on the dependent variable onlyFourth order autonomous equation at eqworld. (i.e., not its derivatives). This should not be surprising, considering that nonlinear autonomous systems in three dimensions can produce truly chaotic behavior such as the Lorenz attractor and the Rössler attractor. Likewise, general non-autonomous equations of second order are unsolvable explicitly, since these can also be chaotic, as in a periodically forced pendulum. ### Multivariate case In $$ \mathbf x'(t) = A \mathbf x(t) $$ , where $$ \mathbf x(t) $$ is an $$ n $$ -dimensional column vector dependent on $$ t $$ . The solution is $$ \mathbf x(t) = e^{A t} \mathbf c $$ where $$ \mathbf c $$ is an $$ n \times 1 $$ constant vector. 
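For the linear multivariate case just described, the matrix exponential solution can be evaluated numerically; the following sketch uses SciPy's `expm`, with an illustrative matrix $$ A $$ and constant vector $$ \mathbf c $$ chosen for the example rather than taken from the text:

```python
import numpy as np
from scipy.linalg import expm

# Linear autonomous system x'(t) = A x(t); solution x(t) = expm(A t) @ c,
# where c = x(0) is the constant (initial) vector.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # illustrative 2x2 matrix (a harmonic oscillator)
c = np.array([1.0, 0.0])      # initial condition x(0)

def x(t):
    return expm(A * t) @ c

print(x(0.0))          # equals c
print(x(np.pi / 2))    # approximately [0, -1] for this particular A
```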
### Finite durations

For non-linear autonomous ODEs it is possible, under some conditions, to develop solutions of finite duration, meaning that, by its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at the ending time, they do not satisfy the uniqueness property of solutions of Lipschitz differential equations. As an example, the equation

$$ y'= -\text{sgn}(y)\sqrt{|y|},\,\,y(0)=1 $$

admits the finite-duration solution

$$ y(x)=\frac{1}{4}\left(1-\frac{x}{2}+\left|1-\frac{x}{2}\right|\right)^2 $$
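As a quick numerical sanity check of the finite-duration example above (a sketch added for illustration, not part of the original discussion), one can verify that the quoted closed form satisfies the equation and settles at zero:

```python
import numpy as np

def y(x):
    # Closed-form finite-duration solution quoted above.
    return 0.25 * (1 - x / 2 + np.abs(1 - x / 2)) ** 2

def rhs(y_val):
    # Right-hand side -sgn(y) * sqrt(|y|).
    return -np.sign(y_val) * np.sqrt(np.abs(y_val))

xs = np.linspace(0.0, 4.0, 2001)
dy_dx = np.gradient(y(xs), xs)            # numerical derivative of the solution
residual = np.max(np.abs(dy_dx - rhs(y(xs))))

print(y(0.0))      # 1.0, the initial condition
print(y(3.0))      # 0.0: the solution has already reached zero and stays there
print(residual)    # small, up to finite-difference error near the ending time x = 2
```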
https://en.wikipedia.org/wiki/Autonomous_system_%28mathematics%29
In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory". The computer's operating system, using a combination of hardware and software, maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory. Main storage, as seen by a process or task, appears as a contiguous address space or collection of contiguous segments. The operating system manages virtual address spaces and the assignment of real memory to virtual memory. Address translation hardware in the CPU, often referred to as a memory management unit (MMU), automatically translates virtual addresses to physical addresses. Software within the operating system may extend these capabilities, utilizing, e.g., disk storage, to provide a virtual address space that can exceed the capacity of real memory and thus reference more memory than is physically present in the computer. The primary benefits of virtual memory include freeing applications from having to manage a shared memory space, ability to share memory used by libraries between processes, increased security due to memory isolation, and being able to conceptually use more memory than might be physically available, using the technique of paging or segmentation. ## Properties Virtual memory makes application programming easier by hiding fragmentation of physical memory; by delegating to the kernel the burden of managing the memory hierarchy (eliminating the need for the program to handle overlays explicitly); and, when each process is run in its own dedicated address space, by obviating the need to relocate program code or to access memory with relative addressing. Memory virtualization can be considered a generalization of the concept of virtual memory. ## Usage Virtual memory is an integral part of a modern computer architecture; implementations usually require hardware support, typically in the form of a memory management unit built into the CPU. While not necessary, emulators and virtual machines can employ hardware support to increase performance of their virtual memory implementations. Older operating systems, such as those for the mainframes of the 1960s, and those for personal computers of the early to mid-1980s (e.g., DOS), generally have no virtual memory functionality, though notable exceptions for mainframes of the 1960s include: - the Atlas Supervisor for the Atlas - THE multiprogramming system for the Electrologica X8 (software based virtual memory without hardware support) - MCP for the Burroughs B5000 - MTS, TSS/360 and CP/CMS for the IBM System/360 Model 67 - Multics for the GE 645 - The Time Sharing Operating System for the RCA Spectra 70/46 During the 1960s and early '70s, computer memory was very expensive. The introduction of virtual memory provided an ability for software systems with large memory demands to run on computers with less real memory. The savings from this provided a strong incentive to switch to virtual memory for all systems. The additional capability of providing virtual address spaces added another level of security and reliability, thus making virtual memory even more attractive to the marketplace. Most modern operating systems that support virtual memory also run each process in its own dedicated address space. Each program thus appears to have sole access to the virtual memory. 
However, some older operating systems (such as OS/VS1 and OS/VS2 SVS) and even modern ones (such as IBM i) are single address space operating systems that run all processes in a single address space composed of virtualized memory. Embedded systems and other special-purpose computer systems that require very fast and/or very consistent response times may opt not to use virtual memory due to decreased determinism; virtual memory systems trigger unpredictable traps that may produce unwanted and unpredictable delays in response to input, especially if the trap requires that data be read into main memory from secondary memory. The hardware to translate virtual addresses to physical addresses typically requires a significant chip area to implement, and not all chips used in embedded systems include that hardware, which is another reason some of those systems do not use virtual memory. ## History In the 1950s, all larger programs had to contain logic for managing primary and secondary storage, such as overlaying. Virtual memory was therefore introduced not only to extend primary memory, but to make such an extension as easy as possible for programmers to use. To allow for multiprogramming and multitasking, many early systems divided memory between multiple programs without virtual memory, such as early models of the PDP-10 via registers. A claim that the concept of virtual memory was first developed by German physicist Fritz-Rudolf Güntsch at the Technische Universität Berlin in 1956 in his doctoral thesis, Logical Design of a Digital Computer with Multiple Asynchronous Rotating Drums and Automatic High Speed Memory Operation, does not stand up to careful scrutiny. The computer proposed by Güntsch (but never built) had an address space of 105 words which mapped exactly onto the 105 words of the drums, i.e. the addresses were real addresses and there was no form of indirect mapping, a key feature of virtual memory. What Güntsch did invent was a form of cache memory, since his high-speed memory was intended to contain a copy of some blocks of code or data taken from the drums. Indeed, he wrote (as quoted in translation): "The programmer need not respect the existence of the primary memory (he need not even know that it exists), for there is only one sort of addresses by which one can program as if there were only one storage." This is exactly the situation in computers with cache memory, one of the earliest commercial examples of which was the IBM System/360 Model 85. In the Model 85 all addresses were real addresses referring to the main core store. A semiconductor cache store, invisible to the user, held the contents of parts of the main store in use by the currently executing program. This is exactly analogous to Güntsch's system, designed as a means to improve performance, rather than to solve the problems involved in multi-programming. The first true virtual memory system was that implemented at the University of Manchester to create a one-level storage system as part of the Atlas Computer. It used a paging mechanism to map the virtual addresses available to the programmer onto the real memory that consisted of 16,384 words of primary core memory with an additional 98,304 words of secondary drum memory. The addition of virtual memory into the Atlas also eliminated a looming programming problem: planning and scheduling data transfers between main and secondary memory and recompiling programs for each change of size of main memory. 
The first Atlas was commissioned in 1962 but working prototypes of paging had been developed by 1959. As early as 1958, Robert S. Barton, working at Shell Research, suggested that main storage should be allocated automatically rather than have the programmer being concerned with overlays from secondary memory, in effect virtual memory. By 1960 Barton was lead architect on the Burroughs B5000 project. From 1959 to 1961, W. R. Lonergan was manager of the Burroughs Product Planning Group which included Barton, Donald Knuth as consultant, and Paul King. In May 1960, UCLA ran a two-week seminar "Using and Exploiting Giant Computers" to which Paul King and two others were sent. Stan Gill gave a presentation on virtual memory in the Atlas I computer. Paul King took the ideas back to Burroughs and it was determined that virtual memory should be designed into the core of the B5000.. Burroughs Corporation released the B5000 in 1964 as the first commercial computer with virtual memory. IBM developed the concept of hypervisors in their CP-40 and CP-67, and in 1972 provided it for the S/370 as Virtual Machine Facility/370. IBM introduced the Start Interpretive Execution (SIE) instruction as part of 370-XA on the 3081, and VM/XA versions of VM to exploit it. Before virtual memory could be implemented in mainstream operating systems, many problems had to be addressed. Dynamic address translation required expensive and difficult-to-build specialized hardware; initial implementations slowed down access to memory slightly. There were worries that new system-wide algorithms utilizing secondary storage would be less effective than previously used application-specific algorithms. By 1969, the debate over virtual memory for commercial computers was over; an IBM research team led by David Sayre showed that their virtual memory overlay system consistently worked better than the best manually controlled systems. Throughout the 1970s, the IBM 370 series running their virtual-storage based operating systems provided a means for business users to migrate multiple older systems into fewer, more powerful, mainframes that had improved price/performance. The first minicomputer to introduce virtual memory was the Norwegian NORD-1; during the 1970s, other minicomputers implemented virtual memory, notably VAX models running VMS. Virtual memory was introduced to the x86 architecture with the protected mode of the Intel 80286 processor, but its segment swapping technique scaled poorly to larger segment sizes. The Intel 80386 introduced paging support underneath the existing segmentation layer, enabling the page fault exception to chain with other exceptions without double fault. However, loading segment descriptors was an expensive operation, causing operating system designers to rely strictly on paging rather than a combination of paging and segmentation. ## Paged virtual memory Nearly all current implementations of virtual memory divide a virtual address space into pages, blocks of contiguous virtual memory addresses. Pages on contemporary systems are usually at least 4 kilobytes in size; systems with large virtual address ranges or amounts of real memory generally use larger page sizes. ### Page tables Page tables are used to translate the virtual addresses seen by the application into physical addresses used by the hardware to process instructions; such hardware that handles this specific translation is often known as the memory management unit. 
Each entry in the page table holds a flag indicating whether the corresponding page is in real memory or not. If it is in real memory, the page table entry will contain the real memory address at which the page is stored. When a reference is made to a page by the hardware, if the page table entry for the page indicates that it is not currently in real memory, the hardware raises a page fault exception, invoking the paging supervisor component of the operating system. Systems can have, e.g., one page table for the whole system, separate page tables for each address space or process, separate page tables for each segment; similarly, systems can have, e.g., no segment table, one segment table for the whole system, separate segment tables for each address space or process, separate segment tables for each region in a tree of region tables for each address space or process. If there is only one page table, different applications running at the same time use different parts of a single range of virtual addresses. If there are multiple page or segment tables, there are multiple virtual address spaces and concurrent applications with separate page tables redirect to different real addresses. Some earlier systems with smaller real memory sizes, such as the SDS 940, used page registers instead of page tables in memory for address translation. ### Paging supervisor This part of the operating system creates and manages page tables and lists of free page frames. In order to ensure that there will be enough free page frames to quickly resolve page faults, the system may periodically steal allocated page frames, using a page replacement algorithm, e.g., a least recently used (LRU) algorithm. Stolen page frames that have been modified are written back to auxiliary storage before they are added to the free queue. On some systems the paging supervisor is also responsible for managing translation registers that are not automatically loaded from page tables. Typically, a page fault that cannot be resolved results in an abnormal termination of the application. However, some systems allow the application to have exception handlers for such errors. The paging supervisor may handle a page fault exception in several different ways, depending on the details: - If the virtual address is invalid, the paging supervisor treats it as an error. - If the page is valid and the page information is not loaded into the MMU, the page information will be stored into one of the page registers. - If the page is uninitialized, a new page frame may be assigned and cleared. - If there is a stolen page frame containing the desired page, that page frame will be reused. - For a fault due to a write attempt into a read-protected page, if it is a copy-on-write page then a free page frame will be assigned and the contents of the old page copied; otherwise it is treated as an error. - If the virtual address is a valid page in a memory-mapped file or a paging file, a free page frame will be assigned and the page read in. In most cases, there will be an update to the page table, possibly followed by purging the Translation Lookaside Buffer (TLB), and the system restarts the instruction that causes the exception. If the free page frame queue is empty then the paging supervisor must free a page frame using the same page replacement algorithm for page stealing. ### Pinned pages Operating systems have memory areas that are pinned (never swapped to secondary storage). Other terms used are locked, fixed, or wired pages. 
For example, interrupt mechanisms rely on an array of pointers to their handlers, such as I/O completion and page fault. If the pages containing these pointers or the code that they invoke were pageable, interrupt-handling would become far more complex and time-consuming, particularly in the case of page fault interruptions. Hence, some part of the page table structures is not pageable. Some pages may be pinned for short periods of time, others may be pinned for long periods of time, and still others may need to be permanently pinned. For example: - The paging supervisor code and drivers for secondary storage devices on which pages reside must be permanently pinned, as otherwise paging would not even work because the necessary code would not be available. - Timing-dependent components may be pinned to avoid variable paging delays. - Data buffers that are accessed directly by peripheral devices that use direct memory access or I/O channels must reside in pinned pages while the I/O operation is in progress because such devices and the buses to which they are attached expect to find data buffers located at physical memory addresses; regardless of whether the bus has a memory management unit for I/O, transfers cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. For example, the data could come from a measurement sensor, and real-time data lost because of a page fault cannot be recovered. In IBM's operating systems for System/370 and successor systems, the term is "fixed", and such pages may be long-term fixed, short-term fixed, or unfixed (i.e., pageable). System control structures are often long-term fixed (measured in wall-clock time, i.e., time measured in seconds, rather than in fractions of one second) whereas I/O buffers are usually short-term fixed (usually measured in significantly less than wall-clock time, possibly for tens of milliseconds). Indeed, the OS has a special facility for "fast fixing" these short-term fixed data buffers (fixing performed without resorting to a time-consuming Supervisor Call instruction). Multics used the term "wired". OpenVMS and Windows refer to pages temporarily made nonpageable (as for I/O buffers) as "locked", and simply "nonpageable" for those that are never pageable. The Single UNIX Specification also uses the term "locked" in its specification, as do the man pages on many Unix-like systems. #### Virtual-real operation In OS/VS1 and similar OSes, some parts of system memory are managed in "virtual-real" mode, called "V=R". In this mode every virtual address corresponds to the same real address. This mode is used for interrupt mechanisms, for the paging supervisor and page tables in older systems, and for application programs using non-standard I/O management. For example, IBM's z/OS has 3 modes (virtual-virtual, virtual-real and virtual-fixed). ### Thrashing When paging and page stealing are used, a problem called "thrashing" can occur, in which the computer spends an excessive amount of time transferring pages to and from a backing store, hence slowing down useful work. A task's working set is the minimum set of pages that should be in memory in order for it to make useful progress. Thrashing occurs when there is insufficient memory available to store the working sets of all active programs. Adding real memory is the simplest response, but improving application design, scheduling, and memory usage can help.
Another solution is to reduce the number of active tasks on the system. This reduces demand on real memory by swapping out the entire working set of one or more processes. System thrashing is often the result of a sudden spike in page demand from a small number of running programs. Swap-token is a lightweight and dynamic thrashing protection mechanism. The basic idea is to set a token in the system, which is randomly given to a process that has page faults when thrashing happens. The process that holds the token is given the privilege of allocating more physical memory pages to build its working set, which is expected to let it finish its execution quickly and release the memory pages to other processes. A time stamp is used to hand the token over from one process to the next. The first version of swap-token was implemented in Linux 2.6. The second version, called preempt swap-token, is also in Linux 2.6. In this updated swap-token implementation, a priority counter is set for each process to track the number of swapped-out pages. The token is always given to the process with the highest priority, i.e., the one with the largest number of swapped-out pages. The length of the time stamp is not a constant but is determined by the priority: the higher the number of swapped-out pages of a process, the longer its time stamp will be. ## Segmented virtual memory Some systems, such as the Burroughs B5500 and the current Unisys MCP systems, use segmentation instead of paging, dividing virtual address spaces into variable-length segments. Using segmentation matches the allocated memory blocks to the logical needs and requests of the programs, rather than the physical view of a computer, whereas pages are an artificial division of memory. The designers of the B5000 would have regarded the artificial size of pages as Procrustean in nature, a comparison they would later apply to the exact data sizes in the B1000. In the Burroughs and Unisys systems, each memory segment is described by a master descriptor, a single absolute descriptor which may be referenced by other relative (copy) descriptors, effecting sharing either within a process or between processes. Descriptors are central to the working of virtual memory in MCP systems. Descriptors contain not only the address of a segment, but also the segment length and its status in virtual memory, indicated by the 'p-bit' or 'presence bit', which indicates whether the address refers to a segment in main memory or to a secondary-storage block. When a non-resident segment (p-bit is off) is accessed, an interrupt occurs to load the segment from secondary storage at the given address, or, if the address itself is 0, to allocate a new block. In the latter case, the length field in the descriptor is used to allocate a segment of that length. A further problem, in addition to thrashing, when using a segmented scheme is checkerboarding, in which all free segments become too small to satisfy requests for new segments. The solution is to perform memory compaction to pack all used segments together and create a large free block from which further segments may be allocated. Since there is a single master descriptor for each segment, the new block address needs to be updated only in that one descriptor, as all copies refer to the master descriptor. Paging is not free from fragmentation – the fragmentation is internal to pages (internal fragmentation). If a requested block is smaller than a page, then some space in the page will be wasted.
If a block requires slightly more than a page, only a small area of a second page is used and the rest of that page is wasted. The fragmentation thus becomes a problem passed to programmers, who may well distort their programs to match particular page sizes. With segmentation, the fragmentation is external to segments (external fragmentation) and thus a system problem rather than a programmer's problem – relieving programmers of such memory considerations was the aim of virtual memory in the first place. In multi-processing systems, optimal operation of the system depends on the mix of independent processes at any time. Hybrid schemes of segmentation and paging may be used. The Intel 80286 supports a similar segmentation scheme as an option, but it is rarely used. Segmentation and paging can be used together by dividing each segment into pages; systems with this memory structure, such as Multics and IBM System/38, are usually paging-predominant, with segmentation providing memory protection. In the Intel 80386 and later IA-32 processors, the segments reside in a 32-bit linear, paged address space. Segments can be moved in and out of that space; pages there can "page" in and out of main memory, providing two levels of virtual memory; few if any operating systems do so, instead using only paging. Early non-hardware-assisted x86 virtualization solutions combined paging and segmentation because x86 paging offers only two protection domains whereas a VMM, guest OS or guest application stack needs three. The difference between paging and segmentation systems is not only about memory division; segmentation is visible to user processes, as part of the memory model's semantics. Hence, instead of memory that looks like a single large space, it is structured into multiple spaces. This difference has important consequences; a segment is not a page with variable length or a simple way to lengthen the address space. Segmentation can provide a single-level memory model in which there is no differentiation between process memory and the file system: a process's potential address space consists only of a list of segments (files) mapped into it. This is not the same as the mechanisms provided by calls such as mmap and Win32's MapViewOfFile, because inter-file pointers do not work when mapping files into semi-arbitrary places. In Multics, a file (or a segment from a multi-segment file) is mapped into a segment in the address space, so files are always mapped at a segment boundary. A file's linkage section can contain pointers for which an attempt to load the pointer into a register or make an indirect reference through it causes a trap. The unresolved pointer contains an indication of the name of the segment to which the pointer refers and an offset within the segment; the handler for the trap maps the segment into the address space, puts the segment number into the pointer, changes the tag field in the pointer so that it no longer causes a trap, and returns to the code where the trap occurred, re-executing the instruction that caused the trap. This eliminates the need for a linker completely and works when different processes map the same file into different places in their private address spaces. ## Address space swapping Some operating systems provide for swapping entire address spaces, in addition to whatever facilities they have for paging and segmentation. When this occurs, the OS writes those pages and segments currently in real memory to swap files.
In a swap-in, the OS reads back the data from the swap files but does not automatically read back pages that had been paged out at the time of the swap out operation. IBM's MVS, from OS/VS2 Release 2 through z/OS, provides for marking an address space as unswappable; doing so does not pin any pages in the address space. This can be done for the duration of a job by entering the name of an eligible main program in the Program Properties Table with an unswappable flag. In addition, privileged code can temporarily make an address space unswappable using a SYSEVENT Supervisor Call instruction (SVC); certain changes in the address space properties require that the OS swap it out and then swap it back in, using SYSEVENT TRANSWAP. Swapping does not necessarily require memory management hardware, if, for example, multiple jobs are swapped in and out of the same area of storage.
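As a toy illustration of the page replacement policy described in the paging supervisor section above, the following Python sketch simulates LRU page stealing for a fixed number of page frames; the reference string and frame counts are made up for the example, and the point is only that too few frames drives the fault count up, which is the mechanism behind thrashing.

```python
# Toy model of LRU page stealing (see "Paging supervisor" above); the frame
# counts and reference string are invented for illustration.
from collections import OrderedDict

def simulate_lru(references, frame_count):
    """Return the number of page faults for a reference string."""
    frames = OrderedDict()          # page -> None, ordered by recency of use
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)         # mark as most recently used
        else:
            faults += 1
            if len(frames) >= frame_count:
                frames.popitem(last=False)   # steal the least recently used frame
            frames[page] = None
    return faults

refs = [1, 2, 3, 1, 4, 5, 2, 1, 2, 3, 4, 5]
print(simulate_lru(refs, 3))   # more frames -> fewer faults
print(simulate_lru(refs, 4))
```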
https://en.wikipedia.org/wiki/Virtual_memory
A spacetime diagram is a graphical illustration of locations in space at various times, especially in the special theory of relativity. Spacetime diagrams can show the geometry underlying phenomena like time dilation and length contraction without mathematical equations. The history of an object's location through time traces out a line or curve on a spacetime diagram, referred to as the object's world line. Each point in a spacetime diagram represents a unique position in space and time and is referred to as an event. The most well-known class of spacetime diagrams are known as Minkowski diagrams, developed by Hermann Minkowski in 1908. Minkowski diagrams are two-dimensional graphs that depict events as happening in a universe consisting of one space dimension and one time dimension. Unlike a regular distance-time graph, the distance is displayed on the horizontal axis and time on the vertical axis. Additionally, the time and space units of measurement are chosen in such a way that an object moving at the speed of light is depicted as following a 45° angle to the diagram's axes. ## Introduction to kinetic diagrams ### Position versus time graphs In the study of 1-dimensional kinematics, position vs. time graphs (called x-t graphs for short) provide a useful means to describe motion. Kinematic features besides the object's position are visible from the slope and shape of the lines. In Fig 1-1, the plotted object moves away from the origin at a positive constant velocity (1.66 m/s) for 6 seconds, halts for 5 seconds, then returns to the origin over a period of 7 seconds at a non-constant speed (but negative velocity). At its most basic level, a spacetime diagram is merely a time vs position graph, with the directions of the axes in a usual p-t graph exchanged; that is, the vertical axis refers to temporal and the horizontal axis to spatial coordinate values. Especially when used in special relativity (SR), the temporal axes of a spacetime diagram are often scaled with the speed of light c, and thus are often labeled by ct. This changes the dimension of the addressed physical quantity from <Time> to <Length>, in accordance with the dimension associated with the spatial axis, which is frequently labeled x. ### Standard configuration of reference frames To ease insight into how spacetime coordinates, measured by observers in different reference frames, compare with each other, it is useful to standardize and simplify the setup. Two Galilean reference frames (i.e., conventional 3-space frames), S and S′ (pronounced "S prime"), each with observers O and O′ at rest in their respective frames, but measuring the other as moving with speeds ±v, are said to be in standard configuration when: - The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′. - The origins of frames S and S′ coincide at time t = 0 in frame S and also at t′ = 0 in frame S′. - Frame S′ moves in the x-direction of frame S with velocity v as measured in frame S. This spatial setting is displayed in Fig 1-2, in which the temporal coordinates are separately annotated as quantities t and t′. In a further step of simplification it is often sufficient to consider just the direction of the observed motion and ignore the other two spatial components, allowing x and ct to be plotted in 2-dimensional spacetime diagrams, as introduced above.
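With the axes scaled this way, the 45° light lines follow directly from the coordinate choice. As a short check (standard special-relativity bookkeeping, not a calculation taken from this text), a signal moving at speed v satisfies $$ x = vt = \frac{v}{c}\,(ct) = \beta\,(ct), $$ so its world line has slope $$ \Delta(ct)/\Delta x = 1/\beta $$ in an x–ct diagram; for light, β = 1 and the slope is exactly 1, i.e., a 45° line, while any slower object has a steeper world line.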
### Non-relativistic "spacetime diagrams" The black axes labelled and on Fig 1-3 are the coordinate system of an observer, referred to as at rest, and who is positioned at . This observer's world line is identical with the time axis. Each parallel line to this axis would correspond also to an object at rest but at another position. The blue line describes an object moving with constant speed to the right, such as a moving observer. This blue line labelled may be interpreted as the time axis for the second observer. Together with the axis, which is identical for both observers, it represents their coordinate system. Since the reference frames are in standard configuration, both observers agree on the location of the origin of their coordinate systems. The axes for the moving observer are not perpendicular to each other and the scale on their time axis is stretched. To determine the coordinates of a certain event, two lines, each parallel to one of the two axes, must be constructed passing through the event, and their intersections with the axes read off. Determining position and time of the event A as an example in the diagram leads to the same time for both observers, as expected. Only for the position different values result, because the moving observer has approached the position of the event A since . Generally stated, all events on a line parallel to the axis happen simultaneously for both observers. There is only one universal time , modelling the existence of one common position axis. On the other hand, due to two different time axes the observers usually measure different coordinates for the same event. This graphical translation from and to and and vice versa is described mathematically by the so-called Galilean transformation. Minkowski diagrams ### Overview The term Minkowski diagram refers to a specific form of spacetime diagram frequently used in special relativity. A Minkowski diagram is a two-dimensional graphical depiction of a portion of Minkowski space, usually where space has been curtailed to a single dimension. The units of measurement in these diagrams are taken such that the light cone at an event consists of the lines of slope plus or minus one through that event. The horizontal lines correspond to the usual notion of simultaneous events for a stationary observer at the origin. A particular Minkowski diagram illustrates the result of a Lorentz transformation. The Lorentz transformation relates two inertial frames of reference, where an observer stationary at the event makes a change of velocity along the -axis. As shown in Fig 2-1, the new time axis of the observer forms an angle with the previous time axis, with . In the new frame of reference the simultaneous events lie parallel to a line inclined by to the previous lines of simultaneity. This is the new -axis. Both the original set of axes and the primed set of axes have the property that they are orthogonal with respect to the Minkowski inner product or relativistic dot product. The original position on your time line (ct) is perpendicular to position A, the original position on your mutual timeline (x) where (t) is zero. This timeline where timelines come together are positioned then on the same timeline even when there are 2 different positions. The 2 positions are on the 45 degree Event line on the original position of A. Hence position A and position A’ on the Event line and (t)=0, relocate A’ back to position A. Whatever the magnitude of , the line forms the universal bisector, as shown in Fig 2-2. 
One frequently encounters Minkowski diagrams where the time units of measurement are scaled by a factor of c such that one unit of x equals one unit of ct. Such a diagram may have units of - Approximately 30 centimetres of length and nanoseconds - Astronomical units and intervals of about 8 minutes and 19 seconds (499 seconds) - Light years and years - Light-seconds and seconds With that, light paths are represented by lines parallel to the bisector between the axes. ### Mathematical details The angle α between the x and x′ axes will be identical with that between the time axes ct and ct′. This follows from the second postulate of special relativity, which says that the speed of light is the same for all observers, regardless of their relative motion (see below). The angle α is given by $$ \tan\alpha = \frac{v}{c} = \beta. $$ The corresponding boost from x and t to x′ and t′ and vice versa is described mathematically by the Lorentz transformation, which can be written $$ ct' = \gamma\left(ct - \beta x\right), \qquad x' = \gamma\left(x - \beta\, ct\right), $$ where $$ \gamma = \frac{1}{\sqrt{1-\beta^2}} $$ is the Lorentz factor. By applying the Lorentz transformation, the spacetime axes obtained for a boosted frame will always correspond to conjugate diameters of a pair of hyperbolas. As illustrated in Fig 2-3, the boosted and unboosted spacetime axes will in general have unequal unit lengths. If U is the unit length on the axes of ct and x, the unit length on the axes of ct′ and x′ is $$ U' = U \sqrt{\frac{1+\beta^2}{1-\beta^2}}. $$ The ct′-axis represents the worldline of a clock resting in S′, with U′ representing the duration between two events happening on this worldline, also called the proper time between these events. Length U′ upon the x′-axis represents the rest length or proper length of a rod resting in S′. The same interpretation can also be applied to distance U upon the ct- and x-axes for clocks and rods resting in S. ### History Albert Einstein announced his theory of special relativity in 1905, with Hermann Minkowski providing his graphical representation in 1908. In Minkowski's 1908 paper there were three diagrams, first to illustrate the Lorentz transformation, then the partition of the plane by the light-cone, and finally illustration of worldlines. The first diagram used a branch of the unit hyperbola to show the locus of a unit of proper time depending on velocity, thus illustrating time dilation. The second diagram showed the conjugate hyperbola to calibrate space, where a similar stretching leaves the impression of FitzGerald contraction. In 1914 Ludwik Silberstein included a diagram of "Minkowski's representation of the Lorentz transformation". This diagram included the unit hyperbola, its conjugate, and a pair of conjugate diameters. Since the 1960s a version of this more complete configuration has been referred to as The Minkowski Diagram, and used as a standard illustration of the transformation geometry of special relativity. E. T. Whittaker has pointed out that the principle of relativity is tantamount to the arbitrariness of what hyperbola radius is selected for time in the Minkowski diagram. In 1912 Gilbert N. Lewis and Edwin B. Wilson applied the methods of synthetic geometry to develop the properties of the non-Euclidean plane that has Minkowski diagrams. When Taylor and Wheeler composed Spacetime Physics (1966), they did not use the term Minkowski diagram for their spacetime geometry. Instead they included an acknowledgement of Minkowski's contribution to philosophy by the totality of his innovation of 1908.
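Returning to the unit-length relation in the mathematical details above, a worked number makes the stretching concrete (β = 0.6 is an illustrative choice, not a value from the text): $$ \frac{U'}{U} = \sqrt{\frac{1+\beta^2}{1-\beta^2}} = \sqrt{\frac{1.36}{0.64}} \approx 1.46, $$ so one tick on the boosted axes is drawn about 46% longer on the page than one tick on the rest-frame axes.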
## Loedel diagrams While a frame at rest in a Minkowski diagram has orthogonal spacetime axes, a frame moving relative to the rest frame in a Minkowski diagram has spacetime axes which form an acute angle. This asymmetry of Minkowski diagrams can be misleading, since special relativity postulates that any two inertial reference frames must be physically equivalent. The Loedel diagram is an alternative spacetime diagram that makes the symmetry of inertial reference frames much more manifest. ### Formulation via median frame Several authors showed that there is a frame of reference between the resting and moving ones where their symmetry would be apparent ("median frame"). In this frame, the two other frames are moving in opposite directions with equal speed. Using such coordinates makes the units of length and time the same for both axes. If β = v/c is given between S and S′, then it is connected with the speed ±β0 of the two frames in their median frame S0 as follows: $$ \beta = \frac{2\beta_0}{1+\beta_0^2} \quad (1), \qquad \beta_0 = \frac{1-\sqrt{1-\beta^2}}{\beta} \quad (2). $$ For instance, if β = 0.5 between S and S′, then by (2) they are moving in their median frame S0 with approximately ±0.268c each in opposite directions. On the other hand, if β0 = 0.5 in S0, then by (1) the relative velocity between S and S′ in their own rest frames is 0.8c. The construction of the axes of S and S′ is done in accordance with the ordinary method using tan α = β0 with respect to the orthogonal axes of the median frame (Fig. 3–1). However, it turns out that when drawing such a symmetric diagram, it is possible to derive the diagram's relations even without mentioning the median frame and β0 at all. Instead, the relative velocity β between S and S′ can directly be used in the following construction, providing the same result: if φ is the angle between the axes of ct′ and ct (or between x and x′), and θ between the axes of x′ and ct′, it is given by $$ \sin\varphi = \cos\theta = \beta. $$ Two methods of construction are obvious from Fig. 3-2: the x-axis is drawn perpendicular to the ct′-axis and the x′- and ct-axes are added at angle φ; or the x′-axis is drawn at angle θ with respect to the ct′-axis, the x-axis is added perpendicular to the ct′-axis, and the ct-axis perpendicular to the x′-axis. In a Minkowski diagram, lengths on the page cannot be directly compared to each other, due to the warping factor between the axes' unit lengths in a Minkowski diagram. In particular, if U and U′ are the unit lengths of the rest frame axes and moving frame axes, respectively, in a Minkowski diagram, then the two unit lengths are warped relative to each other via the formula $$ U' = U \sqrt{\frac{1+\beta^2}{1-\beta^2}}. $$ By contrast, in a symmetric Loedel diagram, both the S and S′ frame axes are warped by the same factor relative to the median frame and hence have identical unit lengths. This implies that, for a Loedel spacetime diagram, we can directly compare spacetime lengths between different frames as they appear on the page; no unit length scaling/conversion between frames is necessary due to the symmetric nature of the Loedel diagram. ### History - Max Born (1920) drew Minkowski diagrams by placing the ct′-axis almost perpendicular to the x-axis, as well as the x′-axis to the ct-axis, in order to demonstrate length contraction and time dilation in the symmetric case of two rods and two clocks moving in opposite directions. - Dmitry Mirimanoff (1921) showed that there is always a median frame with respect to two relatively moving frames, and derived the relations between them from the Lorentz transformation. However, he did not give a graphical representation in a diagram.
- Symmetric diagrams were systematically developed by Paul Gruner in collaboration with Josef Sauter in two papers in 1921. Relativistic effects such as length contraction and time dilation, and some relations to covariant and contravariant vectors, were demonstrated by them. Gruner extended this method in subsequent papers (1922–1924), and gave credit to Mirimanoff's treatment as well. - The construction of symmetric Minkowski diagrams was later independently rediscovered by several authors. For instance, starting in 1948, Enrique Loedel Palumbo published a series of papers in Spanish presenting the details of such an approach (Fisica relativista, Kapelusz Editorial, Buenos Aires, 1955). In 1955, Henri Amar also published a paper presenting such relations, and gave credit to Loedel in a subsequent paper in 1957. Some authors of textbooks use symmetric Minkowski diagrams, denoting them as Loedel diagrams. ## Relativistic phenomena in diagrams ### Time dilation Relativistic time dilation refers to the fact that a clock (indicating its proper time in its rest frame) that moves relative to an observer is observed to run slower. The situation is depicted in the symmetric Loedel diagrams of Fig 4-1. Note that we can compare spacetime lengths on the page directly with each other, due to the symmetric nature of the Loedel diagram. In Fig 4-2, the observer whose reference frame is given by the black axes is assumed to move from the origin O towards A. The moving clock has the reference frame given by the blue axes and moves from O to B. For the black observer, all events happening simultaneously with the event at A are located on a straight line parallel to its space axis. This line passes through A and B, so A and B are simultaneous from the reference frame of the observer with black axes. However, the clock that is moving relative to the black observer marks off time along the blue time axis. This is represented by the distance from O to B. Therefore, the observer at A with the black axes notices their clock as reading the distance from O to A while they observe the clock moving relative to him or her to read the distance from O to B. Due to the distance from O to B being smaller than the distance from O to A, they conclude that the time passed on the clock moving relative to them is smaller than that passed on their own clock. A second observer, having moved together with the clock from O to B, will argue that the black axis clock has only reached C and therefore runs slower. The reason for these apparently paradoxical statements is the different determination of the events happening synchronously at different locations. Due to the principle of relativity, the question of who is right has no answer and does not make sense. ### Length contraction Relativistic length contraction refers to the fact that a ruler (indicating its proper length in its rest frame) that moves relative to an observer is observed to contract/shorten. The situation is depicted in symmetric Loedel diagrams in Fig 4-3. Note that we can compare spacetime lengths on the page directly with each other, due to the symmetric nature of the Loedel diagram. In Fig 4-4, the observer is assumed again to move along the ct-axis.
The world lines of the endpoints of an object moving relative to him are assumed to move along the -axis and the parallel line passing through A and B. For this observer the endpoints of the object at are O and A. For a second observer moving together with the object, so that for him the object is at rest, it has the proper length OB at . Due to . the object is contracted for the first observer. The second observer will argue that the first observer has evaluated the endpoints of the object at O and A respectively and therefore at different times, leading to a wrong result due to his motion in the meantime. If the second observer investigates the length of another object with endpoints moving along the -axis and a parallel line passing through C and D he concludes the same way this object to be contracted from OD to OC. Each observer estimates objects moving with the other observer to be contracted. This apparently paradoxical situation is again a consequence of the relativity of simultaneity as demonstrated by the analysis via Minkowski diagram. For all these considerations it was assumed, that both observers take into account the speed of light and their distance to all events they see in order to determine the actual times at which these events happen from their point of view. ### Constancy of the speed of light Another postulate of special relativity is the constancy of the speed of light. It says that any observer in an inertial reference frame measuring the vacuum speed of light relative to themself obtains the same value regardless of his own motion and that of the light source. This statement seems to be paradoxical, but it follows immediately from the differential equation yielding this, and the Minkowski diagram agrees. It explains also the result of the Michelson–Morley experiment which was considered to be a mystery before the theory of relativity was discovered, when photons were thought to be waves through an undetectable medium. For world lines of photons passing the origin in different directions and holds. That means any position on such a world line corresponds with steps on - and -axes of equal absolute value. From the rule for reading off coordinates in coordinate system with tilted axes follows that the two world lines are the angle bisectors of the - and -axes. As shown in Fig 4-5, the Minkowski diagram illustrates them as being angle bisectors of the - and -axes as well. That means both observers measure the same speed for both photons. Further coordinate systems corresponding to observers with arbitrary velocities can be added to this Minkowski diagram. For all these systems both photon world lines represent the angle bisectors of the axes. The more the relative speed approaches the speed of light the more the axes approach the corresponding angle bisector. The axis is always more flat and the time axis more steep than the photon world lines. The scales on both axes are always identical, but usually different from those of the other coordinate systems. ### Speed of light and causality Straight lines passing the origin which are steeper than both photon world lines correspond with objects moving more slowly than the speed of light. If this applies to an object, then it applies from the viewpoint of all observers, because the world lines of these photons are the angle bisectors for any inertial reference frame. 
Therefore, any point above the origin and between the world lines of both photons can be reached with a speed smaller than that of the light and can have a cause-and-effect relationship with the origin. This area is the absolute future, because any event there happens later compared to the event represented by the origin regardless of the observer, which is obvious graphically from the Minkowski diagram in Fig 4-6. Following the same argument the range below the origin and between the photon world lines is the absolute past relative to the origin. Any event there belongs definitely to the past and can be the cause of an effect at the origin. The relationship between any such pairs of event is called timelike, because they have a time distance greater than zero for all observers. A straight line connecting these two events is always the time axis of a possible observer for whom they happen at the same place. Two events which can be connected just with the speed of light are called lightlike. In principle a further dimension of space can be added to the Minkowski diagram leading to a three-dimensional representation. In this case the ranges of future and past become cones with apexes touching each other at the origin. They are called light cones. ### The speed of light as a limit Following the same argument, all straight lines passing through the origin and which are more nearly horizontal than the photon world lines, would correspond to objects or signals moving faster than light regardless of the speed of the observer. Therefore, no event outside the light cones can be reached from the origin, even by a light-signal, nor by any object or signal moving with less than the speed of light. Such pairs of events are called spacelike because they have a finite spatial distance different from zero for all observers. On the other hand, a straight line connecting such events is always the space coordinate axis of a possible observer for whom they happen at the same time. By a slight variation of the velocity of this coordinate system in both directions it is always possible to find two inertial reference frames whose observers estimate the chronological order of these events to be different. Given an object moving faster than light, say from O to A in Fig 4-7, then for any observer watching the object moving from O to A, another observer can be found (moving at less than the speed of light with respect to the first) for whom the object moves from A to O. The question of which observer is right has no unique answer, and therefore makes no physical sense. Any such moving object or signal would violate the principle of causality. Also, any general technical means of sending signals faster than light would permit information to be sent into the originator's own past. In the diagram, an observer at O in the system sends a message moving faster than light to A. At A, it is received by another observer, moving so as to be in the system, who sends it back, again faster than light, arriving at B. But B is in the past relative to O. The absurdity of this process becomes obvious when both observers subsequently confirm that they received no message at all, but all messages were directed towards the other observer as can be seen graphically in the Minkowski diagram. Furthermore, if it were possible to accelerate an observer to the speed of light, their space and time axes would coincide with their angle bisector. 
The coordinate system would collapse, in concordance with the fact that due to time dilation, time would effectively stop passing for them. These considerations show that the speed of light as a limit is a consequence of the properties of spacetime, and not of the properties of objects such as technologically imperfect space ships. The prohibition of faster-than-light motion, therefore, has nothing in particular to do with electromagnetic waves or light, but comes as a consequence of the structure of spacetime. ## Accelerating observers It is often, incorrectly, asserted that special relativity cannot handle accelerating particles or accelerating reference frames. In reality, accelerating particles present no difficulty at all in special relativity. On the other hand, accelerating frames do require some special treatment, However, as long as one is dealing with flat, Minkowskian spacetime, special relativity can handle the situation. It is only in the presence of gravitation that general relativity is required. An accelerating particle's 4-vector acceleration is the derivative with respect to proper time of its 4-velocity. This is not a difficult situation to handle. Accelerating frames require that one understand the concept of a momentarily comoving reference frame (MCRF), which is to say, a frame traveling at the same instantaneous velocity of a particle at any given instant. Consider the animation in Fig 5–1. The curved line represents the world line of a particle that undergoes continuous acceleration, including complete changes of direction in the positive and negative x''-directions. The red axes are the axes of the MCRF for each point along the particle's trajectory. The coordinates of events in the unprimed (stationary) frame can be related to their coordinates in any momentarily co-moving primed frame using the Lorentz transformations. Fig 5-2 illustrates the changing views of spacetime along the world line of a rapidly accelerating particle. The $$ ct' $$ axis (not drawn) is vertical, while the $$ x' $$ axis (not drawn) is horizontal. The dashed line is the spacetime trajectory ("world line") of the particle. The balls are placed at regular intervals of proper time along the world line. The solid diagonal lines are the light cones for the observer's current event, and they intersect at that event. The small dots are other arbitrary events in the spacetime. The slope of the world line (deviation from being vertical) is the velocity of the particle on that section of the world line. Bends in the world line represent particle acceleration. As the particle accelerates, its view of spacetime changes. These changes in view are governed by the Lorentz transformations. Also note that: - the balls on the world line before/after future/past accelerations are more spaced out due to time dilation. - events which were simultaneous before an acceleration (horizontally spaced events) are at different times afterwards due to the relativity of simultaneity, - events pass through the light cone lines due to the progression of proper time, but not due to the change of views caused by the accelerations, and - the world line always remains within the future and past light cones of the current event. If one imagines each event to be the flashing of a light, then the events that are within the past light cone of the observer are the events visible to the observer. The slope of the world line (deviation from being vertical) gives the velocity relative to the observer. 
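As a concrete example of an accelerated world line that special relativity handles without difficulty (the standard constant-proper-acceleration case, not a worked example taken from this text), a particle with constant proper acceleration a along x traces the hyperbola $$ ct(\tau) = \frac{c^2}{a}\sinh\frac{a\tau}{c}, \qquad x(\tau) = \frac{c^2}{a}\left(\cosh\frac{a\tau}{c} - 1\right), $$ parametrized by its proper time τ; its instantaneous velocity is $$ v = c\tanh\frac{a\tau}{c}, $$ which approaches but never reaches c, and at each τ the MCRF is the inertial frame moving with exactly this velocity.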
## Case of non-inertial reference frames The photon world lines are determined using the metric with $$ d \tau = 0 $$ . The light cones are deformed according to the position. In an inertial reference frame a free particle has a straight world line. In a non-inertial reference frame the world line of a free particle is curved. Take the example of the fall of an object dropped without initial velocity from a rocket. The rocket has a uniformly accelerated motion with respect to an inertial reference frame. As can be seen in Fig 6-2, which shows a Minkowski diagram in a non-inertial reference frame, once dropped, the object gains speed, reaches a maximum, and then slows down, its speed tending asymptotically to zero at the horizon, where its proper time freezes at $$ t_\text{H} $$ . The velocity is measured by an observer at rest in the accelerated rocket.
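One standard way to make the deformation of the light cones concrete is a sketch in uniformly accelerated (Rindler-type) coordinates attached to the rocket; the coordinate choice and the symbol a for the proper acceleration are assumptions for this illustration, not details taken from the figure. With the rocket at x = 0, the line element can be written $$ ds^2 = \left(1 + \frac{a x}{c^2}\right)^2 c^2\,dt^2 - dx^2, $$ and setting ds = 0 gives local light-cone slopes $$ \frac{dx}{dt} = \pm\, c\left(1 + \frac{a x}{c^2}\right), $$ which close up as x approaches −c²/a, the horizon behind the rocket where the dropped object's coordinate motion appears to freeze.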
https://en.wikipedia.org/wiki/Spacetime_diagram
Maxwell's equations, or Maxwell–Heaviside equations, are a set of coupled partial differential equations that, together with the Lorentz force law, form the foundation of classical electromagnetism, classical optics, and electric and magnetic circuits. The equations provide a mathematical model for electric, optical, and radio technologies, such as power generation, electric motors, wireless communication, lenses, radar, etc. They describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. The equations are named after the physicist and mathematician James Clerk Maxwell, who, in 1861 and 1862, published an early form of the equations that included the Lorentz force law. Maxwell first used the equations to propose that light is an electromagnetic phenomenon. The modern form of the equations in their most common formulation is credited to Oliver Heaviside. Maxwell's equations may be combined to demonstrate how fluctuations in electromagnetic fields (waves) propagate at a constant speed in vacuum, c (299,792,458 m/s). Known as electromagnetic radiation, these waves occur at various wavelengths to produce a spectrum of radiation from radio waves to gamma rays. In partial differential equation form and a coherent system of units, Maxwell's microscopic equations can be written as (top to bottom: Gauss's law, Gauss's law for magnetism, Faraday's law, Ampère–Maxwell law) $$ \begin{align} \nabla \cdot \mathbf{E} \,\,\, &= \frac{\rho}{\varepsilon_0} \\ \nabla \cdot \mathbf{B} \,\,\, &= 0 \\ \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} \\ \nabla \times \mathbf{B} &= \mu_0 \left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t} \right) \end{align} $$ with $$ \mathbf{E} $$ the electric field, $$ \mathbf{B} $$ the magnetic field, $$ \rho $$ the electric charge density and $$ \mathbf{J} $$ the current density. $$ \varepsilon_0 $$ is the vacuum permittivity and $$ \mu_0 $$ the vacuum permeability. The equations have two major variants: - The microscopic equations have universal applicability but are unwieldy for common calculations. They relate the electric and magnetic fields to total charge and total current, including the complicated charges and currents in materials at the atomic scale. - The macroscopic equations define two new auxiliary fields that describe the large-scale behaviour of matter without having to consider atomic-scale charges and quantum phenomena like spins. However, their use requires experimentally determined parameters for a phenomenological description of the electromagnetic response of materials. The term "Maxwell's equations" is often also used for equivalent alternative formulations. Versions of Maxwell's equations based on the electric scalar and magnetic vector potentials are preferred for explicitly solving the equations as a boundary value problem, in analytical mechanics, or for use in quantum mechanics. The covariant formulation (on spacetime rather than space and time separately) makes the compatibility of Maxwell's equations with special relativity manifest. Maxwell's equations in curved spacetime, commonly used in high-energy and gravitational physics, are compatible with general relativity. In fact, Albert Einstein developed special and general relativity to accommodate the invariant speed of light, a consequence of Maxwell's equations, with the principle that only relative movement has physical consequences.
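The claim that the constants appearing in these equations fix the propagation speed of electromagnetic waves can be checked numerically; the snippet below is only an illustration, with the CODATA values of the vacuum permittivity and permeability typed in directly rather than taken from any library.

```python
# Verify c = 1/sqrt(mu_0 * epsilon_0) from the vacuum constants.
import math

epsilon_0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu_0 = 1.25663706212e-6        # vacuum permeability, H/m (approx. 4*pi*1e-7)

c = 1.0 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.0f} m/s")          # about 299792458 m/s
```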
The publication of the equations marked the unification of a theory for previously separately described phenomena: magnetism, electricity, light, and associated radiation. Since the mid-20th century, it has been understood that Maxwell's equations do not give an exact description of electromagnetic phenomena, but are instead a classical limit of the more precise theory of quantum electrodynamics. ## History of the equations ## Conceptual descriptions ### Gauss's law Gauss's law describes the relationship between an electric field and electric charges: an electric field points away from positive charges and towards negative charges, and the net outflow of the electric field through a closed surface is proportional to the enclosed charge, including bound charge due to polarization of material. The coefficient of the proportion is the permittivity of free space. ### Gauss's law for magnetism Gauss's law for magnetism states that electric charges have no magnetic analogues, called magnetic monopoles; no north or south magnetic poles exist in isolation. Instead, the magnetic field of a material is attributed to a dipole, and the net outflow of the magnetic field through a closed surface is zero. Magnetic dipoles may be represented as loops of current or inseparable pairs of equal and opposite "magnetic charges". Precisely, the total magnetic flux through a Gaussian surface is zero, and the magnetic field is a solenoidal vector field. ### Faraday's law The Maxwell–Faraday version of Faraday's law of induction describes how a time-varying magnetic field corresponds to the curl of an electric field. In integral form, it states that the work per unit charge required to move a charge around a closed loop equals the rate of change of the magnetic flux through the enclosed surface. Electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field and generates an electric field in a nearby wire. ### Ampère–Maxwell law The original law of Ampère states that magnetic fields relate to electric current. Maxwell's addition states that magnetic fields also relate to changing electric fields, which Maxwell called displacement current. The integral form states that electric and displacement currents are associated with a proportional magnetic field along any enclosing curve. Maxwell's modification of Ampère's circuital law is important because the laws of Ampère and Gauss must otherwise be adjusted for static fields. As a consequence, it predicts that a rotating magnetic field occurs with a changing electric field (Serway and Jewett, Principles of Physics: A Calculus-Based Text, p. 809). A further consequence is the existence of self-sustaining electromagnetic waves which travel through empty space. The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents, matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics. ## Formulation in terms of electric and magnetic fields (microscopic or in vacuum version) In the electric and magnetic field formulation there are four equations that determine the fields for given charge and current distribution.
A separate law of nature, the Lorentz force law, describes how the electric and magnetic fields act on charged particles and currents. By convention, a version of this law in the original equations by Maxwell is no longer included. The vector calculus formalism below, the work of Oliver Heaviside, has become standard. It is rotationally invariant, and therefore mathematically more transparent than Maxwell's original 20 equations in x, y and z components. The relativistic formulations are more symmetric and Lorentz invariant. For the same equations expressed using tensor calculus or differential forms (see ). The differential and integral formulations are mathematically equivalent; both are useful. The integral formulation relates fields within a region of space to fields on the boundary and can often be used to simplify and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential equations are purely local and are a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis. ### Key to the notation Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated. The equations introduce the electric field, , a vector field, and the magnetic field, , a pseudovector field, each generally having a time and location dependence. The sources are - the total electric charge density (total charge per unit volume), , and - the total electric current density (total current per unit area), . The universal constants appearing in the equations (the first two ones explicitly only in the SI formulation) are: - the permittivity of free space, , and - the permeability of free space, , and - the speed of light, $$ c = ({\varepsilon_0\mu_0})^{-1/2} $$ #### Differential equations In the differential equations, - the nabla symbol, , denotes the three-dimensional gradient operator, del, - the symbol (pronounced "del dot") denotes the divergence operator, - the symbol (pronounced "del cross") denotes the curl operator. #### Integral equations In the integral equations, - is any volume with closed boundary surface , and - is any surface with closed boundary curve , The equations are a little easier to interpret with time-independent surfaces and volumes. Time-independent surfaces and volumes are "fixed" and do not change over a given time interval. For example, since the surface is time-independent, we can bring the differentiation under the integral sign in Faraday's law: $$ \frac{\mathrm{d}}{\mathrm{d}t} \iint_{\Sigma} \mathbf{B} \cdot \mathrm{d}\mathbf{S} = \iint_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot \mathrm{d}\mathbf{S}\,, $$ Maxwell's equations can be formulated with possibly time-dependent surfaces and volumes by using the differential version and using Gauss' and Stokes' theorems as appropriate. - $$ {\vphantom{\int}}_{\scriptstyle\partial \Omega} $$ is a surface integral over the boundary surface , with the loop indicating the surface is closed - $$ \iiint_\Omega $$ is a volume integral over the volume , - $$ \oint_{\partial \Sigma} $$ is a line integral around the boundary curve , with the loop indicating the curve is closed. 
- $$ \iint_\Sigma $$ is a surface integral over the surface , - The total electric charge enclosed in is the volume integral over of the charge density (see the "macroscopic formulation" section below): $$ Q = \iiint_\Omega \rho \ \mathrm{d}V, $$ where is the volume element. - The net magnetic flux is the surface integral of the magnetic field passing through a fixed surface, : $$ \Phi_B = \iint_{\Sigma} \mathbf{B} \cdot \mathrm{d} \mathbf{S}, $$ - The net electric flux is the surface integral of the electric field passing through : $$ \Phi_E = \iint_{\Sigma} \mathbf{E} \cdot \mathrm{d} \mathbf{S}, $$ - The net electric current is the surface integral of the electric current density passing through : $$ I = \iint_{\Sigma} \mathbf{J} \cdot \mathrm{d} \mathbf{S}, $$ where denotes the differential vector element of surface area , normal to surface . (Vector area is sometimes denoted by rather than , but this conflicts with the notation for magnetic vector potential). ### Formulation in the SI Name Integral equations Differential equations Gauss's law Gauss's law for magnetism Maxwell–Faraday equation (Faraday's law of induction) Ampère–Maxwell law ### Formulation in the Gaussian system The definitions of charge, electric field, and magnetic field can be altered to simplify theoretical calculation, by absorbing dimensioned factors of and into the units (and thus redefining these). With a corresponding change in the values of the quantities for the Lorentz force law this yields the same physics, i.e. trajectories of charged particles, or work done by an electric motor. These definitions are often preferred in theoretical and high energy physics where it is natural to take the electric and magnetic field with the same units, to simplify the appearance of the electromagnetic tensor: the Lorentz covariant object unifying electric and magnetic field would then contain components with uniform unit and dimension. Such modified definitions are conventionally used with the Gaussian (CGS) units. Using these definitions, colloquially "in Gaussian units", the Maxwell equations become: Name Integral equations Differential equations Gauss's law Gauss's law for magnetism Maxwell–Faraday equation (Faraday's law of induction) Ampère–Maxwell law The equations simplify slightly when a system of quantities is chosen in the speed of light, c, is used for nondimensionalization, so that, for example, seconds and lightseconds are interchangeable, and c = 1. Further changes are possible by absorbing factors of . This process, called rationalization, affects whether Coulomb's law or Gauss's law includes such a factor (see Heaviside–Lorentz units, used mainly in particle physics). ## Relationship between differential and integral formulations The equivalence of the differential and integral formulations are a consequence of the Gauss divergence theorem and the Kelvin–Stokes theorem. ### Flux and divergence According to the (purely mathematical) Gauss divergence theorem, the electric flux through the boundary surface can be rewritten as $$ \vphantom{\oint}_{\scriptstyle\partial \Omega} \mathbf{E}\cdot\mathrm{d}\mathbf{S}=\iiint_{\Omega} \nabla\cdot\mathbf{E}\, \mathrm{d}V $$ The integral version of Gauss's equation can thus be rewritten as $$ \iiint_{\Omega} \left(\nabla \cdot \mathbf{E} - \frac{\rho}{\varepsilon_0}\right) \, \mathrm{d}V = 0 $$ Since is arbitrary (e.g. an arbitrary small ball with arbitrary center), this is satisfied if and only if the integrand is zero everywhere. 
This is the differential equations formulation of Gauss equation up to a trivial rearrangement. Similarly rewriting the magnetic flux in Gauss's law for magnetism in integral form gives $$ \vphantom{\oint}_{\scriptstyle\partial \Omega} \mathbf{B}\cdot\mathrm{d}\mathbf{S} = \iiint_{\Omega} \nabla \cdot \mathbf{B}\, \mathrm{d}V = 0. $$ which is satisfied for all if and only if $$ \nabla \cdot \mathbf{B} = 0 $$ everywhere. ### Circulation and curl By the Kelvin–Stokes theorem we can rewrite the line integrals of the fields around the closed boundary curve to an integral of the "circulation of the fields" (i.e. their curls) over a surface it bounds, i.e. $$ \oint_{\partial \Sigma} \mathbf{B} \cdot \mathrm{d}\boldsymbol{\ell} = \iint_\Sigma (\nabla \times \mathbf{B}) \cdot \mathrm{d}\mathbf{S}, $$ Hence the Ampère–Maxwell law, the modified version of Ampère's circuital law, in integral form can be rewritten as $$ \iint_\Sigma \left(\nabla \times \mathbf{B} - \mu_0 \left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}\right)\right)\cdot \mathrm{d}\mathbf{S} = 0. $$ Since can be chosen arbitrarily, e.g. as an arbitrary small, arbitrary oriented, and arbitrary centered disk, we conclude that the integrand is zero if and only if the Ampère–Maxwell law in differential equations form is satisfied. The equivalence of Faraday's law in differential and integral form follows likewise. The line integrals and curls are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field. ## Charge conservation The invariance of charge can be derived as a corollary of Maxwell's equations. The left-hand side of the Ampère–Maxwell law has zero divergence by the div–curl identity. Expanding the divergence of the right-hand side, interchanging derivatives, and applying Gauss's law gives: $$ 0 = \nabla\cdot (\nabla\times \mathbf{B}) = \nabla \cdot \left(\mu_0 \left(\mathbf{J} + \varepsilon_0 \frac{\partial \mathbf{E}} {\partial t} \right) \right) = \mu_0\left(\nabla\cdot \mathbf{J} + \varepsilon_0\frac{\partial}{\partial t}\nabla\cdot \mathbf{E}\right) = \mu_0\left(\nabla\cdot \mathbf{J} +\frac{\partial \rho}{\partial t}\right) $$ i.e., $$ \frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{J} = 0. $$ By the Gauss divergence theorem, this means the rate of change of charge in a fixed volume equals the net current flowing through the boundary: $$ \frac{d}{dt}Q_\Omega = \frac{d}{dt} \iiint_{\Omega} \rho \mathrm{d}V = - $$ $$ \vphantom{\oint}_{\scriptstyle \partial \Omega } \mathbf{J} \cdot {\rm d}\mathbf{S} = - I_{\partial \Omega}. $$ In particular, in an isolated system the total charge is conserved. ## Vacuum equations, electromagnetic waves and speed of light In a region with no charges () and no currents (), such as in vacuum, Maxwell's equations reduce to: $$ \begin{align} \nabla \cdot \mathbf{E} &= 0, & \nabla \times \mathbf{E} + \frac{\partial\mathbf B}{\partial t} = 0, \\ \nabla \cdot \mathbf{B} &= 0, & \nabla \times \mathbf{B} - \mu_0\varepsilon_0 \frac{\partial\mathbf E}{\partial t} = 0. \end{align} $$ Taking the curl of the curl equations, and using the curl of the curl identity we obtain $$ \begin{align} \mu_0\varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2} - \nabla^2 \mathbf{E} = 0, \\ \mu_0\varepsilon_0 \frac{\partial^2 \mathbf{B}}{\partial t^2} - \nabla^2 \mathbf{B} = 0. 
\end{align} $$ The quantity $$ \mu_0\varepsilon_0 $$ has the dimension (T/L)2. Defining $$ c = (\mu_0 \varepsilon_0)^{-1/2} $$ , the equations above have the form of the standard wave equations $$ \begin{align} \frac{1}{c^2} \frac{\partial^2 \mathbf{E}}{\partial t^2} - \nabla^2 \mathbf{E} = 0, \\ \frac{1}{c^2} \frac{\partial^2 \mathbf{B}}{\partial t^2} - \nabla^2 \mathbf{B} = 0. \end{align} $$ Already during Maxwell's lifetime, it was found that the known values for $$ \varepsilon_0 $$ and $$ \mu_0 $$ give $$ c \approx 2.998 \times 10^8~\text{m/s} $$ , then already known to be the speed of light in free space. This led him to propose that light and radio waves were propagating electromagnetic waves, since amply confirmed. In the old SI system of units, the values of $$ \mu_0 = 4\pi\times 10^{-7} $$ and $$ c = 299\,792\,458~\text{m/s} $$ are defined constants, (which means that by definition $$ \varepsilon_0 = 8.854\,187\,8... \times 10^{-12}~\text{F/m} $$ ) that define the ampere and the metre. In the new SI system, only c keeps its defined value, and the electron charge gets a defined value. In materials with relative permittivity, , and relative permeability, , the phase velocity of light becomes $$ v_\text{p} = \frac{1}\sqrt{\mu_0\mu_\text{r} \varepsilon_0\varepsilon_\text{r}}, $$ which is usually less than . In addition, and are perpendicular to each other and to the direction of wave propagation, and are in phase with each other. A sinusoidal plane wave is one special solution of these equations. Maxwell's equations explain how these waves can physically propagate through space. The changing magnetic field creates a changing electric field through Faraday's law. In turn, that electric field creates a changing magnetic field through Maxwell's modification of Ampère's circuital law. This perpetual cycle allows these waves, now known as electromagnetic radiation, to move through space at velocity . ## Macroscopic formulation The above equations are the microscopic version of Maxwell's equations, expressing the electric and the magnetic fields in terms of the (possibly atomic-level) charges and currents present. This is sometimes called the "general" form, but the macroscopic version below is equally general, the difference being one of bookkeeping. The microscopic version is sometimes called "Maxwell's equations in vacuum": this refers to the fact that the material medium is not built into the structure of the equations, but appears only in the charge and current terms. The microscopic version was introduced by Lorentz, who tried to use it to derive the macroscopic properties of bulk matter from its microscopic constituents. "Maxwell's macroscopic equations", also known as Maxwell's equations in matter, are more similar to those that Maxwell introduced himself. Name Integral equations (SI) Differential equations (SI) Differential equations (Gaussian system) Gauss's law Ampère–Maxwell law Gauss's law for magnetism Maxwell–Faraday equation (Faraday's law of induction) In the macroscopic equations, the influence of bound charge and bound current is incorporated into the displacement field and the magnetizing field , while the equations depend only on the free charges and free currents . 
This reflects a splitting of the total electric charge Q and current I (and their densities and J) into free and bound parts: $$ \begin{align} Q &= Q_\text{f} + Q_\text{b} = \iiint_\Omega \left(\rho_\text{f} + \rho_\text{b} \right) \, \mathrm{d}V = \iiint_\Omega \rho \,\mathrm{d}V, \\ I &= I_\text{f} + I_\text{b} = \iint_\Sigma \left(\mathbf{J}_\text{f} + \mathbf{J}_\text{b} \right) \cdot \mathrm{d}\mathbf{S} = \iint_\Sigma \mathbf{J} \cdot \mathrm{d}\mathbf{S}. \end{align} $$ The cost of this splitting is that the additional fields and need to be determined through phenomenological constituent equations relating these fields to the electric field and the magnetic field , together with the bound charge and current. See below for a detailed description of the differences between the microscopic equations, dealing with total charge and current including material contributions, useful in air/vacuum; and the macroscopic equations, dealing with free charge and current, practical to use within materials. ### Bound charge and current When an electric field is applied to a dielectric material its molecules respond by forming microscopic electric dipoles – their atomic nuclei move a tiny distance in the direction of the field, while their electrons move a tiny distance in the opposite direction. This produces a macroscopic bound charge in the material even though all of the charges involved are bound to individual molecules. For example, if every molecule responds the same, similar to that shown in the figure, these tiny movements of charge combine to produce a layer of positive bound charge on one side of the material and a layer of negative charge on the other side. The bound charge is most conveniently described in terms of the polarization of the material, its dipole moment per unit volume. If is uniform, a macroscopic separation of charge is produced only at the surfaces where enters and leaves the material. For non-uniform , a charge is also produced in the bulk. Somewhat similarly, in all materials the constituent atoms exhibit magnetic moments that are intrinsically linked to the angular momentum of the components of the atoms, most notably their electrons. The connection to angular momentum suggests the picture of an assembly of microscopic current loops. Outside the material, an assembly of such microscopic current loops is not different from a macroscopic current circulating around the material's surface, despite the fact that no individual charge is traveling a large distance. These bound currents can be described using the magnetization . The very complicated and granular bound charges and bound currents, therefore, can be represented on the macroscopic scale in terms of and , which average these charges and currents on a sufficiently large scale so as not to see the granularity of individual atoms, but also sufficiently small that they vary with location in the material. As such, Maxwell's macroscopic equations ignore many details on a fine scale that can be unimportant to understanding matters on a gross scale by calculating fields that are averaged over some suitable volume. 
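The bulk bound charge produced by a non-uniform polarization can be made concrete numerically. The following C sketch evaluates the standard relation $$ \rho_\text{b} = -\nabla\cdot\mathbf{P} $$ (stated in the next subsection) on a one-dimensional grid; the grid size, the Gaussian polarization profile, and all numerical values are illustrative assumptions rather than anything taken from the text.

```c
/* Sketch: bound charge density from a non-uniform polarization P(x)
   on a 1D grid, using rho_b = -dP/dx (central differences).
   Grid size, the Gaussian profile for P, and all numerical values
   are illustrative assumptions, not taken from the text. */
#include <stdio.h>
#include <math.h>

#define N 101

int main(void) {
    double L = 1.0;              /* length of the sample in metres (assumed) */
    double dx = L / (N - 1);
    double P[N], rho_b[N];

    /* assumed polarization profile: a smooth bump inside the material */
    for (int i = 0; i < N; i++) {
        double x = i * dx;
        P[i] = 1e-6 * exp(-50.0 * (x - 0.5) * (x - 0.5));   /* C/m^2 */
    }

    /* rho_b = -dP/dx; one-sided differences at the ends */
    rho_b[0]     = -(P[1] - P[0]) / dx;
    rho_b[N - 1] = -(P[N - 1] - P[N - 2]) / dx;
    for (int i = 1; i < N - 1; i++)
        rho_b[i] = -(P[i + 1] - P[i - 1]) / (2.0 * dx);

    /* the net bound charge per unit area should come out numerically
       close to zero: bound charges never leave their molecules */
    double total = 0.0;
    for (int i = 0; i < N; i++)
        total += rho_b[i] * dx;
    printf("net bound charge per unit area = %.3e C/m^2\n", total);
    return 0;
}
```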
### Auxiliary fields, polarization and magnetization The definitions of the auxiliary fields are: $$ \begin{align} \mathbf{D}(\mathbf{r}, t) &= \varepsilon_0 \mathbf{E}(\mathbf{r}, t) + \mathbf{P}(\mathbf{r}, t), \\ \mathbf{H}(\mathbf{r}, t) &= \frac{1}{\mu_0} \mathbf{B}(\mathbf{r}, t) - \mathbf{M}(\mathbf{r}, t), \end{align} $$ where is the polarization field and is the magnetization field, which are defined in terms of microscopic bound charges and bound currents respectively. The macroscopic bound charge density and bound current density in terms of polarization and magnetization are then defined as $$ \begin{align} \rho_\text{b} &= -\nabla\cdot\mathbf{P}, \\ \mathbf{J}_\text{b} &= \nabla\times\mathbf{M} + \frac{\partial\mathbf{P}}{\partial t}. \end{align} $$ If we define the total, bound, and free charge and current density by $$ \begin{align} \rho &= \rho_\text{b} + \rho_\text{f}, \\ \mathbf{J} &= \mathbf{J}_\text{b} + \mathbf{J}_\text{f}, \end{align} $$ and use the defining relations above to eliminate , and , the "macroscopic" Maxwell's equations reproduce the "microscopic" equations. ### Constitutive relations In order to apply 'Maxwell's macroscopic equations', it is necessary to specify the relations between displacement field and the electric field , as well as the magnetizing field and the magnetic field . Equivalently, we have to specify the dependence of the polarization (hence the bound charge) and the magnetization (hence the bound current) on the applied electric and magnetic field. The equations specifying this response are called constitutive relations. For real-world materials, the constitutive relations are rarely simple, except approximately, and usually determined by experiment. See the main article on constitutive relations for a fuller description. For materials without polarization and magnetization, the constitutive relations are (by definition) $$ \mathbf{D} = \varepsilon_0\mathbf{E}, \quad \mathbf{H} = \frac{1}{\mu_0}\mathbf{B}, $$ where is the permittivity of free space and the permeability of free space. Since there is no bound charge, the total and the free charge and current are equal. An alternative viewpoint on the microscopic equations is that they are the macroscopic equations together with the statement that vacuum behaves like a perfect linear "material" without additional polarization and magnetization. More generally, for linear materials the constitutive relations are $$ \mathbf{D} = \varepsilon\mathbf{E}, \quad \mathbf{H} = \frac{1}{\mu}\mathbf{B}, $$ where is the permittivity and the permeability of the material. For the displacement field the linear approximation is usually excellent because for all but the most extreme electric fields or temperatures obtainable in the laboratory (high power pulsed lasers) the interatomic electric fields of materials of the order of 1011 V/m are much higher than the external field. For the magnetizing field $$ \mathbf{H} $$ , however, the linear approximation can break down in common materials like iron leading to phenomena like hysteresis. Even the linear case can have various complications, however. - For homogeneous materials, and are constant throughout the material, while for inhomogeneous materials they depend on location within the material (and perhaps time). - For isotropic materials, and are scalars, while for anisotropic materials (e.g. due to crystal structure) they are tensors. - Materials are generally dispersive, so and depend on the frequency of any incident EM waves. 
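As a minimal numerical illustration of the linear, isotropic constitutive relations above, the following C sketch computes $$ \mathbf{D} = \varepsilon\mathbf{E} $$ and the resulting phase velocity $$ v_\text{p} = 1/\sqrt{\mu\varepsilon} $$ for an assumed material; the chosen relative permittivity and permeability (a roughly water-like $$ \varepsilon_\text{r} $$ with $$ \mu_\text{r} \approx 1 $$) and the sample field strength are illustrative assumptions.

```c
/* Sketch: linear, isotropic constitutive relations D = eps*E, H = B/mu,
   and the resulting phase velocity v_p = 1/sqrt(mu*eps).
   The material constants below are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double eps0 = 8.8541878128e-12;  /* permittivity of free space, F/m */
    const double mu0  = 1.25663706212e-6;  /* permeability of free space, H/m */

    double eps_r = 80.0, mu_r = 1.0;       /* assumed material constants */
    double eps = eps_r * eps0, mu = mu_r * mu0;

    double E = 1.0e3;                      /* sample field, V/m (assumed) */
    double D = eps * E;                    /* displacement field, C/m^2 */

    double c  = 1.0 / sqrt(mu0 * eps0);    /* vacuum speed of light */
    double vp = 1.0 / sqrt(mu * eps);      /* phase velocity in the material */

    printf("D   = %.3e C/m^2\n", D);
    printf("c   = %.6e m/s\n", c);
    printf("v_p = %.6e m/s (v_p/c = %.3f)\n", vp, vp / c);
    return 0;
}
```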
Even more generally, in the case of non-linear materials (see for example nonlinear optics), and are not necessarily proportional to , similarly or is not necessarily proportional to . In general and depend on both and , on location and time, and possibly other physical quantities. In applications one also has to describe how the free currents and charge density behave in terms of and possibly coupled to other physical quantities like pressure, and the mass, number density, and velocity of charge-carrying particles. E.g., the original equations given by Maxwell (see History of Maxwell's equations) included Ohm's law in the form $$ \mathbf{J}_\text{f} = \sigma \mathbf{E}. $$ ## Alternative formulations Following are some of the several other mathematical formalisms of Maxwell's equations, with the columns separating the two homogeneous Maxwell equations from the two inhomogeneous ones. Each formulation has versions directly in terms of the electric and magnetic fields, and indirectly in terms of the electrical potential and the vector potential . Potentials were introduced as a convenient way to solve the homogeneous equations, but it was thought that all observable physics was contained in the electric and magnetic fields (or relativistically, the Faraday tensor). The potentials play a central role in quantum mechanics, however, and act quantum mechanically with observable consequences even when the electric and magnetic fields vanish (Aharonov–Bohm effect). Each table describes one formalism. See the main article for details of each formulation. The direct spacetime formulations make manifest that the Maxwell equations are relativistically invariant, where space and time are treated on equal footing. Because of this symmetry, the electric and magnetic fields are treated on equal footing and are recognized as components of the Faraday tensor. This reduces the four Maxwell equations to two, which simplifies the equations, although we can no longer use the familiar vector formulation. Maxwell equations in formulation that do not treat space and time manifestly on the same footing have Lorentz invariance as a hidden symmetry. This was a major source of inspiration for the development of relativity theory. Indeed, even the formulation that treats space and time separately is not a non-relativistic approximation and describes the same physics by simply renaming variables. For this reason the relativistic invariant equations are usually called the Maxwell equations as well. Each table below describes one formalism. + Tensor calculus Formulation Homogeneous equations Inhomogeneous equations Fields Minkowski space Potentials (any gauge) Minkowski space Potentials (Lorenz gauge) Minkowski space Fields any spacetime Potentials (any gauge) any spacetime (with §topological restrictions) Potentials (Lorenz gauge) any spacetime (with topological restrictions) + Differential forms Formulation Homogeneous equations Inhomogeneous equations Fields any spacetime Potentials (any gauge) any spacetime (with topological restrictions) Potentials (Lorenz gauge) any spacetime (with topological restrictions) - In the tensor calculus formulation, the electromagnetic tensor is an antisymmetric covariant order 2 tensor; the four-potential, , is a covariant vector; the current, , is a vector; the square brackets, , denote antisymmetrization of indices; is the partial derivative with respect to the coordinate, . 
In Minkowski space coordinates are chosen with respect to an inertial frame; , so that the metric tensor used to raise and lower indices is . The d'Alembert operator on Minkowski space is as in the vector formulation. In general spacetimes, the coordinate system is arbitrary, the covariant derivative , the Ricci tensor, and raising and lowering of indices are defined by the Lorentzian metric, and the d'Alembert operator is defined as . The topological restriction is that the second real cohomology group of the space vanishes (see the differential form formulation for an explanation). This is violated for Minkowski space with a line removed, which can model a (flat) spacetime with a point-like monopole on the complement of the line. - In the differential form formulation on arbitrary space times, is the electromagnetic tensor considered as a 2-form, is the potential 1-form, $$ J = - J_\alpha {\star}\mathrm{d}x^\alpha $$ is the current 3-form, is the exterior derivative, and $$ {\star} $$ is the Hodge star on forms defined (up to its orientation, i.e. its sign) by the Lorentzian metric of spacetime. In the special case of 2-forms such as F, the Hodge star $$ {\star} $$ depends on the metric tensor only for its local scale. This means that, as formulated, the differential form field equations are conformally invariant, but the Lorenz gauge condition breaks conformal invariance. The operator $$ \Box = (-{\star} \mathrm{d} {\star} \mathrm{d} - \mathrm{d} {\star} \mathrm{d} {\star}) $$ is the d'Alembert–Laplace–Beltrami operator on 1-forms on an arbitrary Lorentzian spacetime. The topological condition is again that the second real cohomology group is 'trivial' (meaning that its form follows from a definition). By the isomorphism with the second de Rham cohomology this condition means that every closed 2-form is exact. Other formalisms include the geometric algebra formulation and a matrix representation of Maxwell's equations. Historically, a quaternionic formulation was used. ## Solutions Maxwell's equations are partial differential equations that relate the electric and magnetic fields to each other and to the electric charges and currents. Often, the charges and currents are themselves dependent on the electric and magnetic fields via the Lorentz force equation and the constitutive relations. These all form a set of coupled partial differential equations which are often very difficult to solve: the solutions encompass all the diverse phenomena of classical electromagnetism. Some general remarks follow. As for any differential equation, boundary conditions and initial conditions are necessary for a unique solution. For example, even with no charges and no currents anywhere in spacetime, there are the obvious solutions for which E and B are zero or constant, but there are also non-trivial solutions corresponding to electromagnetic waves. In some cases, Maxwell's equations are solved over the whole of space, and boundary conditions are given as asymptotic limits at infinity. In other cases, Maxwell's equations are solved in a finite region of space, with appropriate conditions on the boundary of that region, for example an artificial absorbing boundary representing the rest of the universe,S. G. Johnson, Notes on Perfectly Matched Layers, online MIT course notes (Aug. 2007). or periodic boundary conditions, or walls that isolate a small region from the outside world (as with a waveguide or cavity resonator). 
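Such finite-region problems are commonly attacked with the finite-difference time-domain (FDTD) method mentioned in the next paragraph. The following C sketch shows the basic one-dimensional update with periodic boundary conditions, in normalized units where $$ c = 1 $$ and the fields are dimensionless; the grid size, pulse shape, and Courant number are illustrative assumptions, and the code is a toy illustration rather than a production solver.

```c
/* Sketch: one-dimensional FDTD update for the source-free Maxwell curl
   equations on a staggered grid, in normalized units with c = 1.
   Periodic boundary conditions stand in for the boundary of the region
   discussed above.  Grid size, pulse shape and Courant number are
   illustrative assumptions. */
#include <stdio.h>
#include <math.h>

#define N 200          /* number of grid cells (assumed) */

int main(void) {
    double E[N] = {0}, H[N] = {0};
    double S = 1.0;    /* Courant number dt/dx; S <= 1 is required for stability */

    /* initial condition: a Gaussian pulse in E, zero H */
    for (int i = 0; i < N; i++)
        E[i] = exp(-0.01 * (i - N / 2) * (i - N / 2));

    for (int step = 0; step < 400; step++) {
        /* leapfrog update of the two curl equations; indices wrap (periodic) */
        for (int i = 0; i < N; i++)
            H[i] += S * (E[(i + 1) % N] - E[i]);
        for (int i = 0; i < N; i++)
            E[i] += S * (H[i] - H[(i + N - 1) % N]);
    }

    /* the discrete "energy" should stay approximately constant,
       a quick sanity check on the update */
    double energy = 0.0;
    for (int i = 0; i < N; i++)
        energy += E[i] * E[i] + H[i] * H[i];
    printf("field energy after 400 steps: %f\n", energy);
    return 0;
}
```

In one dimension the scheme run at the stability limit propagates the pulse essentially without numerical dispersion, so watching the discrete energy is a convenient way to catch indexing or boundary mistakes.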
Jefimenko's equations (or the closely related Liénard–Wiechert potentials) are the explicit solution to Maxwell's equations for the electric and magnetic fields created by any given distribution of charges and currents. It assumes specific initial conditions to obtain the so-called "retarded solution", where the only fields present are the ones created by the charges. However, Jefimenko's equations are unhelpful in situations when the charges and currents are themselves affected by the fields they create. Numerical methods for differential equations can be used to compute approximate solutions of Maxwell's equations when exact solutions are impossible. These include the finite element method and finite-difference time-domain method. For more details, see Computational electromagnetics. ## Overdetermination of Maxwell's equations Maxwell's equations seem overdetermined, in that they involve six unknowns (the three components of and ) but eight equations (one for each of the two Gauss's laws, three vector components each for Faraday's and Ampère's circuital laws). (The currents and charges are not unknowns, being freely specifiable subject to charge conservation.) This is related to a certain limited kind of redundancy in Maxwell's equations: It can be proven that any system satisfying Faraday's law and Ampère's circuital law automatically also satisfies the two Gauss's laws, as long as the system's initial condition does, and assuming conservation of charge and the nonexistence of magnetic monopoles. This explanation was first introduced by Julius Adams Stratton in 1941. Although it is possible to simply ignore the two Gauss's laws in a numerical algorithm (apart from the initial conditions), the imperfect precision of the calculations can lead to ever-increasing violations of those laws. By introducing dummy variables characterizing these violations, the four equations become not overdetermined after all. The resulting formulation can lead to more accurate algorithms that take all four laws into account. Both identities $$ \nabla\cdot \nabla\times \mathbf{B} \equiv 0, \nabla\cdot \nabla\times \mathbf{E} \equiv 0 $$ , which reduce eight equations to six independent ones, are the true reason of overdetermination. Equivalently, the overdetermination can be viewed as implying conservation of electric and magnetic charge, as they are required in the derivation described above but implied by the two Gauss's laws. For linear algebraic equations, one can make 'nice' rules to rewrite the equations and unknowns. The equations can be linearly dependent. But in differential equations, and especially partial differential equations (PDEs), one needs appropriate boundary conditions, which depend in not so obvious ways on the equations. Even more, if one rewrites them in terms of vector and scalar potential, then the equations are underdetermined because of gauge fixing. ## Maxwell's equations as the classical limit of QED Maxwell's equations and the Lorentz force law (along with the rest of classical electromagnetism) are extraordinarily successful at explaining and predicting a variety of phenomena. However, they do not account for quantum effects, and so their domain of applicability is limited. Maxwell's equations are thought of as the classical limit of quantum electrodynamics (QED). Some observed electromagnetic phenomena cannot be explained with Maxwell's equations if the source of the electromagnetic fields are the classical distributions of charge and current. 
These include photon–photon scattering and many other phenomena related to photons or virtual photons, "nonclassical light" and quantum entanglement of electromagnetic fields (see Quantum optics). E.g. quantum cryptography cannot be described by Maxwell theory, not even approximately. The approximate nature of Maxwell's equations becomes more and more apparent when going into the extremely strong field regime (see Euler–Heisenberg Lagrangian) or to extremely small distances. Finally, Maxwell's equations cannot explain any phenomenon involving individual photons interacting with quantum matter, such as the photoelectric effect, Planck's law, the Duane–Hunt law, and single-photon light detectors. However, many such phenomena may be explained using a halfway theory of quantum matter coupled to a classical electromagnetic field, either as external field or with the expected value of the charge current and density on the right hand side of Maxwell's equations. This is known as semiclassical theory or self-field QED and was initially discovered by de Broglie and Schrödinger and later fully developed by E.T. Jaynes and A.O. Barut. ## Variations Popular variations on the Maxwell equations as a classical theory of electromagnetic fields are relatively scarce because the standard equations have stood the test of time remarkably well. ### Magnetic monopoles Maxwell's equations posit that there is electric charge, but no magnetic charge (also called magnetic monopoles), in the universe. Indeed, magnetic charge has never been observed, despite extensive searches, and may not exist. If they did exist, both Gauss's law for magnetism and Faraday's law would need to be modified, and the resulting four equations would be fully symmetric under the interchange of electric and magnetic fields.
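For reference, the modified equations are often written as follows, with magnetic charge density $$ \rho_\text{m} $$ and magnetic current density $$ \mathbf{J}_\text{m} $$; this is a sketch in one common SI-like convention, and the exact placement of the constants depends on the unit convention chosen: $$ \begin{align} \nabla \cdot \mathbf{E} &= \frac{\rho_\text{e}}{\varepsilon_0}, & \nabla \times \mathbf{E} &= -\frac{\partial \mathbf{B}}{\partial t} - \mu_0 \mathbf{J}_\text{m}, \\ \nabla \cdot \mathbf{B} &= \mu_0 \rho_\text{m}, & \nabla \times \mathbf{B} &= \mu_0 \mathbf{J}_\text{e} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}. \end{align} $$ Setting $$ \rho_\text{m} = 0 $$ and $$ \mathbf{J}_\text{m} = \mathbf{0} $$ recovers the standard equations.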
https://en.wikipedia.org/wiki/Maxwell%27s_equations
In universal algebra, a variety of algebras or equational class is the class of all algebraic structures of a given signature satisfying a given set of identities. For example, the groups form a variety of algebras, as do the abelian groups, the rings, the monoids etc. According to Birkhoff's theorem, a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras, and (direct) products. In the context of category theory, a variety of algebras, together with its homomorphisms, forms a category; these are usually called finitary algebraic categories. A covariety is the class of all coalgebraic structures of a given signature. ## Terminology A variety of algebras should not be confused with an algebraic variety, which means a set of solutions to a system of polynomial equations. They are formally quite distinct and their theories have little in common. The term "variety of algebras" refers to algebras in the general sense of universal algebra; there is also a more specific sense of algebra, namely as algebra over a field, i.e. a vector space equipped with a bilinear multiplication. ## Definition A signature (in this context) is a set, whose elements are called operations, each of which is assigned a natural number (0, 1, 2, ...) called its arity. Given a signature σ and a set V, whose elements are called variables, a word is a finite rooted tree in which each node is labelled by either a variable or an operation, such that every node labelled by a variable has no branches away from the root and every node labelled by an operation o has as many branches away from the root as the arity of o. An equational law is a pair of such words; the axiom consisting of the words v and w is written as . A theory consists of a signature, a set of variables, and a set of equational laws. Any theory gives a variety of algebras as follows. Given a theory T, an algebra of T consists of a set A together with, for each operation o of T with arity n, a function such that for each axiom and each assignment of elements of A to the variables in that axiom, the equation holds that is given by applying the operations to the elements of A as indicated by the trees defining v and w. The class of algebras of a given theory T is called a variety of algebras. Given two algebras of a theory T, say A and B, a homomorphism is a function such that $$ f(o_A(a_1, \dots, a_n)) = o_B(f(a_1), \dots, f(a_n)) $$ for every operation o of arity n. Any theory gives a category where the objects are algebras of that theory and the morphisms are homomorphisms. ## Examples The class of all semigroups forms a variety of algebras of signature (2), meaning that a semigroup has a single binary operation. A sufficient defining equation is the associative law: $$ x(yz) = (xy)z. $$ The class of groups forms a variety of algebras of signature (2,0,1), the three operations being respectively multiplication (binary), identity (nullary, a constant) and inversion (unary). The familiar axioms of associativity, identity and inverse form one suitable set of identities: $$ x(yz) = (xy)z $$ $$ 1 x = x 1 = x $$ $$ x x^{-1} = x^{-1} x = 1. $$ The class of rings also forms a variety of algebras. The signature here is (2,2,0,0,1) (two binary operations, two constants, and one unary operation). If we fix a specific ring R, we can consider the class of left R-modules. To express the scalar multiplication with elements from R, we need one unary operation for each element of R. 
If the ring is infinite, we will thus have infinitely many operations, which is allowed by the definition of an algebraic structure in universal algebra. We will then also need infinitely many identities to express the module axioms, which is allowed by the definition of a variety of algebras. So the left R-modules do form a variety of algebras. The fields do not form a variety of algebras; the requirement that all non-zero elements be invertible cannot be expressed as a universally satisfied identity (see below). The cancellative semigroups also do not form a variety of algebras, since the cancellation property is not an equation, it is an implication that is not equivalent to any set of equations. However, they do form a quasivariety as the implication defining the cancellation property is an example of a quasi-identity. ## Birkhoff's variety theorem Given a class of algebraic structures of the same signature, we can define the notions of homomorphism, subalgebra, and product. Garrett Birkhoff proved that a class of algebraic structures of the same signature is a variety if and only if it is closed under the taking of homomorphic images, subalgebras and arbitrary products. This is a result of fundamental importance to universal algebra and known as Birkhoff's variety theorem or as the HSP theorem. H, S, and P stand, respectively, for the operations of homomorphism, subalgebra, and product. One direction of the equivalence mentioned above, namely that a class of algebras satisfying some set of identities must be closed under the HSP operations, follows immediately from the definitions. Proving the converse—classes of algebras closed under the HSP operations must be equational—is more difficult. Using the easy direction of Birkhoff's theorem, we can for example verify the claim made above, that the field axioms are not expressible by any possible set of identities: the product of fields is not a field, so fields do not form a variety. ## Subvarieties A subvariety of a variety of algebras V is a subclass of V that has the same signature as V and is itself a variety, i.e., is defined by a set of identities. Notice that although every group becomes a semigroup when the identity as a constant is omitted (and/or the inverse operation is omitted), the class of groups does not form a subvariety of the variety of semigroups because the signatures are different. Similarly, the class of semigroups that are groups is not a subvariety of the variety of semigroups. The class of monoids that are groups contains $$ \langle\mathbb Z,+\rangle $$ and does not contain its subalgebra (more precisely, submonoid) $$ \langle\mathbb N,+\rangle $$ . However, the class of abelian groups is a subvariety of the variety of groups because it consists of those groups satisfying , with no change of signature. The finitely generated abelian groups do not form a subvariety, since by Birkhoff's theorem they don't form a variety, as an arbitrary product of finitely generated abelian groups is not finitely generated. Viewing a variety V and its homomorphisms as a category, a subvariety U of V is a full subcategory of V, meaning that for any objects a, b in U, the homomorphisms from a to b in U are exactly those from a to b in V. ## Free objects Suppose V is a non-trivial variety of algebras, i.e. V contains algebras with more than one element. One can show that for every set S, the variety V contains a free algebra FS on S. 
This means that there is an injective set map that satisfies the following universal property: given any algebra A in V and any map , there exists a unique V-homomorphism such that . This generalizes the notions of free group, free abelian group, free algebra, free module etc. It has the consequence that every algebra in a variety is a homomorphic image of a free algebra. ## Category theory Besides varieties, category theorists use two other frameworks that are equivalent in terms of the kinds of algebras they describe: finitary monads and Lawvere theories. We may go from a variety to a finitary monad as follows. A category with some variety of algebras as objects and homomorphisms as morphisms is called a finitary algebraic category. For any finitary algebraic category V, the forgetful functor has a left adjoint , namely the functor that assigns to each set the free algebra on that set. This adjunction is monadic, meaning that the category V is equivalent to the Eilenberg–Moore category SetT for the monad . Moreover the monad T is finitary, meaning it commutes with filtered colimits. The monad is thus enough to recover the finitary algebraic category. Indeed, finitary algebraic categories are precisely those categories equivalent to the Eilenberg-Moore categories of finitary monads. Both these, in turn, are equivalent to categories of algebras of Lawvere theories. Working with monads permits the following generalization. One says a category is an algebraic category if it is monadic over Set. This is a more general notion than "finitary algebraic category" because it admits such categories as CABA (complete atomic Boolean algebras) and CSLat (complete semilattices) whose signatures include infinitary operations. In those two cases the signature is large, meaning that it forms not a set but a proper class, because its operations are of unbounded arity. The algebraic category of sigma algebras also has infinitary operations, but their arity is countable whence its signature is small (forms a set). Every finitary algebraic category is a locally presentable category. ## Pseudovariety of finite algebras Since varieties are closed under arbitrary direct products, all non-trivial varieties contain infinite algebras. Attempts have been made to develop a finitary analogue of the theory of varieties. This led, e.g., to the notion of variety of finite semigroups. This kind of variety uses only finitary products. However, it uses a more general kind of identities. A pseudovariety is usually defined to be a class of algebras of a given signature, closed under the taking of homomorphic images, subalgebras and finitary direct products. Not every author assumes that all algebras of a pseudovariety are finite; if this is the case, one sometimes talks of a variety of finite algebras. For pseudovarieties, there is no general finitary counterpart to Birkhoff's theorem, but in many cases the introduction of a more complex notion of equations allows similar results to be derived. Namely, a class of finite monoids is a variety of finite monoids if and only if it can be defined by a set of profinite identities. Pseudovarieties are of particular importance in the study of finite semigroups and hence in formal language theory. Eilenberg's theorem, often referred to as the variety theorem, describes a natural correspondence between varieties of regular languages and pseudovarieties of finite semigroups.
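Whether a given finite algebra satisfies a particular set of identities, and hence belongs to the corresponding variety or pseudovariety, can be checked by brute force over its operation tables. The following C sketch checks associativity and commutativity for a single binary operation; the example table (addition modulo 4) is an illustrative assumption.

```c
/* Sketch: brute-force check of equational identities on a small finite
   algebra with one binary operation, given by its operation table.
   The example table (addition mod 4) is an illustrative assumption;
   the check itself mirrors how identities such as associativity cut out
   (pseudo)varieties among finite algebras. */
#include <stdio.h>

#define N 4

static int op[N][N];   /* operation table: op[x][y] = x * y */

int main(void) {
    /* assumed example: addition mod 4, a commutative group */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            op[x][y] = (x + y) % N;

    int assoc = 1, comm = 1;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++) {
            if (op[x][y] != op[y][x]) comm = 0;
            for (int z = 0; z < N; z++)
                if (op[op[x][y]][z] != op[x][op[y][z]]) assoc = 0;
        }

    printf("x(yz) = (xy)z holds: %s\n", assoc ? "yes" : "no");
    printf("xy = yx holds:       %s\n", comm  ? "yes" : "no");
    return 0;
}
```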
https://en.wikipedia.org/wiki/Variety_%28universal_algebra%29
Computable functions are the basic objects of study in computability theory. Informally, a function is computable if there is an algorithm that computes the value of the function for every value of its argument. Because of the lack of a precise definition of the concept of algorithm, every formal definition of computability must refer to a specific model of computation. Many such models of computation have been proposed, the major ones being Turing machines, register machines, lambda calculus and general recursive functions. Although these four are of a very different nature, they provide exactly the same class of computable functions, and, for every model of computation that has ever been proposed, the functions computable in that model are also computable in the above four models of computation. The Church–Turing thesis is the unprovable assertion that every notion of computability that can be imagined can compute only functions that are computable in the above sense. Before computable functions were precisely defined, mathematicians often used the informal term effectively calculable. This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be efficiently computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently. The Blum axioms can be used to define an abstract computational complexity theory on the set of computable functions. In computational complexity theory, the problem of computing the value of a function is known as a function problem, by contrast to decision problems whose results are either "yes" or "no". ## Definition Computability of a function is an informal notion. One way to describe it is to say that a function is computable if its value can be obtained by an effective procedure. With more rigor, a function $$ f:\mathbb N^k\rightarrow\mathbb N $$ is computable if and only if there is an effective procedure that, given any k-tuple $$ \mathbf x $$ of natural numbers, will produce the value $$ f(\mathbf x) $$ . In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number. As counterparts to this informal description, there exist multiple formal, mathematical definitions. The class of computable functions can be defined in many equivalent models of computation, including - Turing machines - General recursive functions - Lambda calculus - Post machines (Post–Turing machines and tag machines). - Register machines Although these models use different representations for the functions, their inputs, and their outputs, translations exist between any two models, and so every model describes essentially the same class of functions, giving rise to the opinion that formal computability is both natural and not too narrow.
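The informal definition above can be made concrete as an unbounded search, anticipating the μ operator used in the formalizations discussed below. In the following C sketch, mu(p) returns the least k with p(k) = 0 and simply never terminates if no such k exists, which is the behaviour required of a program computing a partial computable function; the particular predicate, and the idealization that the machine has unlimited time and unbounded integers, are assumptions made for illustration.

```c
/* Sketch: computability as an unbounded, effective search.
   mu(p) returns the least k with p(k) == 0 and does not terminate if
   there is no such k.  This idealizes the machine: a real computer has
   finite memory and a finite integer range. */
#include <stdio.h>

typedef int (*predicate)(unsigned long);

/* least k such that p(k) == 0; loops forever if there is none */
unsigned long mu(predicate p) {
    for (unsigned long k = 0; ; k++)
        if (p(k) == 0)
            return k;
}

/* illustrative predicate (an assumption, not from the text):
   p(k) == 0 exactly when k*k >= 1000 */
static int example(unsigned long k) {
    return k * k >= 1000 ? 0 : 1;
}

int main(void) {
    printf("least k with k*k >= 1000: %lu\n", mu(example));  /* prints 32 */
    return 0;
}
```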
These functions are sometimes referred to as "recursive", to contrast with the informal term "computable", a distinction stemming from a 1934 discussion between Kleene and Gödel.p.6 For example, one can formalize computable functions as μ-recursive functions, which are partial functions that take finite tuples of natural numbers and return a single natural number (just as above). They are the smallest class of partial functions that includes the constant, successor, and projection functions, and is closed under composition, primitive recursion, and the μ operator. Equivalently, computable functions can be formalized as functions which can be calculated by an idealized computing agent such as a Turing machine or a register machine. Formally speaking, a partial function $$ f:\mathbb N^k\rightarrow\mathbb N $$ can be calculated if and only if there exists a computer program with the following properties: 1. If $$ f(\mathbf x) $$ is defined, then the program will terminate on the input $$ \mathbf x $$ with the value $$ f(\mathbf x) $$ stored in the computer memory. 1. If $$ f(\mathbf x) $$ is undefined, then the program never terminates on the input $$ \mathbf x $$ . ## Characteristics of computable functions The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function. The models of computation listed above give different interpretations of what a procedure is and how it is used, but these interpretations share many properties. The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language. Enderton [1977] gives the following characteristics of a procedure for computing a computable function; similar characterizations have been given by Turing [1936], Rogers [1967], and others. - "There must be exact instructions (i.e. a program), finite in length, for the procedure." Thus every computable function must have a finite program that completely describes how the function is to be computed. It is possible to compute the function by just following the instructions; no guessing or special insight is required. - "If the procedure is given a k-tuple x in the domain of f, then after a finite number of discrete steps the procedure must terminate and produce f(x)." Intuitively, the procedure proceeds step by step, with a specific rule to cover what to do at each step of the calculation. Only finitely many steps can be carried out before the value of the function is returned. - "If the procedure is given a k-tuple x which is not in the domain of f, then the procedure might go on forever, never halting. Or it might get stuck at some point (i.e., one of its instructions cannot be executed), but it must not pretend to produce a value for f at x." Thus if a value for f(x) is ever found, it must be the correct value. It is not necessary for the computing agent to distinguish correct outcomes from incorrect ones because the procedure is defined as correct if and only if it produces an outcome. Enderton goes on to list several clarifications of these 3 requirements of the procedure for a computable function: 1. The procedure must theoretically work for arbitrarily large arguments. It is not assumed that the arguments are smaller than the number of atoms in the Earth, for example. 1. 
The procedure is required to halt after finitely many steps in order to produce an output, but it may take arbitrarily many steps before halting. No time limitation is assumed. 1. Although the procedure may use only a finite amount of storage space during a successful computation, there is no bound on the amount of space that is used. It is assumed that additional storage space can be given to the procedure whenever the procedure asks for it. To summarise, based on this view a function is computable if: The field of computational complexity studies functions with prescribed bounds on the time and/or space allowed in a successful computation. ## Computable sets and relations A set of natural numbers is called computable (synonyms: recursive, decidable) if there is a computable, total function such that for any natural number , if is in and if is not in . A set of natural numbers is called computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function such that for each number , is defined if and only if is in the set. Thus a set is computably enumerable if and only if it is the domain of some computable function. The word enumerable is used because the following are equivalent for a nonempty subset of the natural numbers: - is the domain of a computable function. - is the range of a total computable function. If is infinite then the function can be assumed to be injective. If a set is the range of a function then the function can be viewed as an enumeration of , because the list will include every element of . Because each finitary relation on the natural numbers can be identified with a corresponding set of finite sequences of natural numbers, the notions of computable relation and computably enumerable relation can be defined from their analogues for sets. ## Formal languages In computability theory in computer science, it is common to consider formal languages. An alphabet is an arbitrary set. A word on an alphabet is a finite sequence of symbols from the alphabet; the same symbol may be used more than once. For example, binary strings are exactly the words on the alphabet }. A language is a subset of the collection of all words on a fixed alphabet. For example, the collection of all binary strings that contain exactly 3 ones is a language over the binary alphabet. A key property of a formal language is the level of difficulty required to decide whether a given word is in the language. Some coding system must be developed to allow a computable function to take an arbitrary word in the language as input; this is usually considered routine. A language is called computable (synonyms: recursive, decidable) if there is a computable function such that for each word over the alphabet, if the word is in the language and if the word is not in the language. Thus a language is computable just in case there is a procedure that is able to correctly tell whether arbitrary words are in the language. A language is computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function such that is defined if and only if the word is in the language. The term enumerable has the same etymology as in computably enumerable sets of natural numbers. ## Examples The following functions are computable: - Each function with a finite domain; e.g., any finite sequence of natural numbers. - Each constant function f : Nk → N, f(n1,...nk) := n. 
- Addition f : N2 → N, f(n1,n2) := n1 + n2 - The greatest common divisor of two numbers - A Bézout coefficient of two numbers - The smallest prime factor of a number If f and g are computable, then so are: f + g, f * g, if f is unary, max(f,g), min(f,g), and many more combinations. The following examples illustrate that a function may be computable though it is not known which algorithm computes it. - The function f such that f(n) = 1 if there is a sequence of at least n consecutive fives in the decimal expansion of , and f(n) = 0 otherwise, is computable. (The function f is either the constant 1 function, which is computable, or else there is a k such that f(n) = 1 if n < k and f(n) = 0 if n ≥ k. Every such function is computable. It is not known whether there are arbitrarily long runs of fives in the decimal expansion of π, so we don't know which of those functions is f. Nevertheless, we know that the function f must be computable.) - Each finite segment of an uncomputable sequence of natural numbers (such as the Busy Beaver function Σ) is computable. E.g., for each natural number n, there exists an algorithm that computes the finite sequence Σ(0), Σ(1), Σ(2), ..., Σ(n) — in contrast to the fact that there is no algorithm that computes the entire Σ-sequence, i.e. Σ(n) for all n. Thus, "Print 0, 1, 4, 6, 13" is a trivial algorithm to compute Σ(0), Σ(1), Σ(2), Σ(3), Σ(4); similarly, for any given value of n, such a trivial algorithm exists (even though it may never be known or produced by anyone) to compute Σ(0), Σ(1), Σ(2), ..., Σ(n). Church–Turing thesis The Church–Turing thesis states that any function computable from a procedure possessing the three properties listed above is a computable function. Because these three properties are not formally stated, the Church–Turing thesis cannot be proved. The following facts are often taken as evidence for the thesis: - Many equivalent models of computation are known, and they all give the same definition of computable function (or a weaker version, in some instances). - No stronger model of computation which is generally considered to be effectively calculable has been proposed. The Church–Turing thesis is sometimes used in proofs to justify that a particular function is computable by giving a concrete description of a procedure for the computation. This is permitted because it is believed that all such uses of the thesis can be removed by the tedious process of writing a formal procedure for the function in some model of computation. ## Provability Given a function (or, similarly, a set), one may be interested not only if it is computable, but also whether this can be proven in a particular proof system (usually first order Peano arithmetic). A function that can be proven to be computable is called provably total. The set of provably total functions is recursively enumerable: one can enumerate all the provably total functions by enumerating all their corresponding proofs, that prove their computability. This can be done by enumerating all the proofs of the proof system and ignoring irrelevant ones. ### Relation to recursively defined functions In a function defined by a recursive definition, each value is defined by a fixed first-order formula of other, previously defined values of the same function or other functions, which might be simply constants. A subset of these is the primitive recursive functions. Another example is the Ackermann function, which is recursively defined but not primitive recursive. 
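The Ackermann function just mentioned can be written down directly as a short recursive program. The following C sketch follows the standard three-case definition; the values and the recursion depth grow so quickly that only very small arguments are practical to evaluate.

```c
/* Sketch: the Ackermann function as a recursive C program, following the
   standard definition.  It is computable but not primitive recursive, and
   grows so fast that only tiny arguments are feasible in practice. */
#include <stdio.h>

unsigned long ack(unsigned long m, unsigned long n) {
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main(void) {
    printf("ack(2, 3) = %lu\n", ack(2, 3));  /* 9 */
    printf("ack(3, 3) = %lu\n", ack(3, 3));  /* 61 */
    return 0;
}
```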
For definitions of this type to avoid circularity or infinite regress, it is necessary that recursive calls to the same function within a definition be to arguments that are smaller in some well-partial-order on the function's domain. For instance, for the Ackermann function $$ A $$ , whenever the definition of $$ A(x,y) $$ refers to $$ A(p,q) $$ , then $$ (p,q) < (x,y) $$ w.r.t. the lexicographic order on pairs of natural numbers. In this case, and in the case of the primitive recursive functions, well-ordering is obvious, but some "refers-to" relations are nontrivial to prove as being well-orderings. Any function defined recursively in a well-ordered way is computable: each value can be computed by expanding a tree of recursive calls to the function, and this expansion must terminate after a finite number of calls, because otherwise Kőnig's lemma would lead to an infinite descending sequence of calls, violating the assumption of well-ordering. ### Total functions that are not provably total In a sound proof system, every provably total function is indeed total, but the converse is not true: in every first-order proof system that is strong enough and sound (including Peano arithmetic), one can prove (in another proof system) the existence of total functions that cannot be proven total in the proof system. If the total computable functions are enumerated via the Turing machines that produces them, then the above statement can be shown, if the proof system is sound, by a similar diagonalization argument to that used above, using the enumeration of provably total functions given earlier. One uses a Turing machine that enumerates the relevant proofs, and for every input n calls fn(n) (where fn is n-th function by this enumeration) by invoking the Turing machine that computes it according to the n-th proof. Such a Turing machine is guaranteed to halt if the proof system is sound. ## Uncomputable functions and unsolvable problems Every computable function has a finite procedure giving explicit, unambiguous instructions on how to compute it. Furthermore, this procedure has to be encoded in the finite alphabet used by the computational model, so there are only countably many computable functions. For example, functions may be encoded using a string of bits (the alphabet }). The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin's constant. Similarly, most subsets of the natural numbers are not computable. The halting problem was the first such set to be constructed. The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements (coded as natural numbers) are true. Turing and Church independently showed in the 1930s that this set of natural numbers is not computable. According to the Church–Turing thesis, there is no effective procedure (with an algorithm) which can perform these computations. ## Extensions of computability ### Relative computability The notion of computability of a function can be relativized to an arbitrary set of natural numbers A. 
A function f is defined to be computable in A (equivalently A-computable or computable relative to A) when it satisfies the definition of a computable function with modifications allowing access to A as an oracle. As with the concept of a computable function relative computability can be given equivalent definitions in many different models of computation. This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of A. We can also talk about f being computable in g by identifying g with its graph. ### Higher recursion theory Hyperarithmetical theory studies those sets that can be computed from a computable ordinal number of iterates of the Turing jump of the empty set. This is equivalent to sets defined by both a universal and existential formula in the language of second order arithmetic and to some models of Hypercomputation. Even more general recursion theories have been studied, such as E-recursion theory in which any set can be used as an argument to an E-recursive function. ### Hyper-computation Although the Church–Turing thesis states that the computable functions include all functions with algorithms, it is possible to consider broader classes of functions that relax the requirements that algorithms must possess. The field of Hypercomputation studies models of computation that go beyond normal Turing computation.
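Returning to relative computability described above: one way to picture the extra primitive operation is to hand the program a membership test for A as a callback. The following C sketch computes a function with the help of such an oracle; the particular oracle used here (membership in the even numbers) is an illustrative stand-in, since in general A may itself be non-computable, in which case the surrounding function is only "computable in A".

```c
/* Sketch: relative computability modelled by an extra primitive operation,
   here a callback answering "is n in A?".  The even-number oracle below is
   an illustrative stand-in for an arbitrary, possibly non-computable set A. */
#include <stdio.h>

typedef int (*oracle)(unsigned long);   /* returns 1 if n is in A, else 0 */

/* f(n) = number of elements of A strictly below n, using oracle queries */
unsigned long count_below(unsigned long n, oracle in_A) {
    unsigned long count = 0;
    for (unsigned long k = 0; k < n; k++)
        if (in_A(k))
            count++;
    return count;
}

static int even(unsigned long n) { return n % 2 == 0; }   /* stand-in oracle */

int main(void) {
    printf("|{k < 10 : k in A}| = %lu\n", count_below(10, even));  /* 5 */
    return 0;
}
```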
https://en.wikipedia.org/wiki/Computable_function
In category theory, a branch of mathematics, a functor category $$ D^C $$ is a category where the objects are the functors $$ F: C \to D $$ and the morphisms are natural transformations $$ \eta: F \to G $$ between the functors (here, $$ G: C \to D $$ is another object in the category). Functor categories are of interest for two main reasons: - many commonly occurring categories are (disguised) functor categories, so any statement proved for general functor categories is widely applicable; - every category embeds in a functor category (via the Yoneda embedding); the functor category often has nicer properties than the original category, allowing certain operations that were not available in the original setting. ## Definition Suppose $$ C $$ is a small category (i.e. the objects and morphisms form a set rather than a proper class) and $$ D $$ is an arbitrary category. The category of functors from $$ C $$ to $$ D $$ , written as Fun( $$ C $$ , $$ D $$ ), Funct( $$ C $$ , $$ D $$ ), $$ [C,D] $$ , or $$ D ^C $$ , has as objects the covariant functors from $$ C $$ to $$ D $$ , and as morphisms the natural transformations between such functors. Note that natural transformations can be composed: if $$ \mu (X) : F(X) \to G(X) $$ is a natural transformation from the functor $$ F : C \to D $$ to the functor $$ G : C \to D $$ , and $$ \eta(X) : G(X) \to H(X) $$ is a natural transformation from the functor $$ G $$ to the functor $$ H $$ , then the composition $$ \eta(X)\mu(X) : F(X) \to H(X) $$ defines a natural transformation from $$ F $$ to $$ H $$ . With this composition of natural transformations (known as vertical composition, see natural transformation), $$ D^C $$ satisfies the axioms of a category. In a completely analogous way, one can also consider the category of all contravariant functors from $$ C $$ to $$ D $$ ; we write this as Funct( $$ C^\text{op},D $$ ). If $$ C $$ and $$ D $$ are both preadditive categories (i.e. their morphism sets are abelian groups and the composition of morphisms is bilinear), then we can consider the category of all additive functors from $$ C $$ to $$ D $$ , denoted by Add( $$ C $$ , $$ D $$ ). ## Examples - If $$ I $$ is a small discrete category (i.e. its only morphisms are the identity morphisms), then a functor from $$ I $$ to $$ C $$ essentially consists of a family of objects of $$ C $$ , indexed by $$ I $$ ; the functor category $$ C ^I $$ can be identified with the corresponding product category: its elements are families of objects in $$ C $$ and its morphisms are families of morphisms in $$ C $$ . - An arrow category $$ \mathcal{C}^\rightarrow $$ (whose objects are the morphisms of $$ \mathcal{C} $$ , and whose morphisms are commuting squares in $$ \mathcal{C} $$ ) is just $$ \mathcal{C}^\mathbf{2} $$ , where 2 is the category with two objects and their identity morphisms as well as an arrow from one object to the other (but not another arrow back the other way). - A directed graph consists of a set of arrows and a set of vertices, and two functions from the arrow set to the vertex set, specifying each arrow's start and end vertex. The category of all directed graphs is thus nothing but the functor category $$ \textbf{Set}^C $$ , where $$ C $$ is the category with two objects connected by two parallel morphisms (source and target), and Set denotes the category of sets. - Any group $$ G $$ can be considered as a one-object category in which every morphism is invertible. The category of all -sets is the same as the functor category Set. 
Natural transformations are -maps. - Similar to the previous example, the category of K-linear representations of the group $$ G $$ is the same as the functor category VectK (where VectK denotes the category of all vector spaces over the field K). - Any ring $$ R $$ can be considered as a one-object preadditive category; the category of left modules over $$ R $$ is the same as the additive functor category Add( $$ R $$ , $$ \textbf{Ab} $$ ) (where $$ \textbf{Ab} $$ denotes the category of abelian groups), and the category of right $$ R $$ -modules is Add( $$ R^\text{op} $$ , $$ \textbf{Ab} $$ ). Because of this example, for any preadditive category $$ C $$ , the category Add( $$ C $$ , $$ \textbf{Ab} $$ ) is sometimes called the "category of left modules over $$ C $$ " and Add( $$ C^\text{op} $$ , $$ \textbf{Ab} $$ ) is the "category of right modules over $$ C $$ ". - The category of presheaves on a topological space $$ X $$ is a functor category: we turn the topological space into a category $$ C $$ having the open sets in $$ X $$ as objects and a single morphism from $$ U $$ to $$ V $$ if and only if $$ U $$ is contained in $$ V $$ . The category of presheaves of sets (abelian groups, rings) on $$ X $$ is then the same as the category of contravariant functors from $$ C $$ to $$ \textbf{Set} $$ (or $$ \textbf{Ab} $$ or $$ \textbf{Ring} $$ ). Because of this example, the category Funct( $$ C^\text{op} $$ , $$ \textbf{Set} $$ ) is sometimes called the "category of presheaves of sets on $$ C $$ " even for general categories $$ C $$ not arising from a topological space. To define sheaves on a general category $$ C $$ , one needs more structure: a Grothendieck topology on $$ C $$ . (Some authors refer to categories that are equivalent to $$ \textbf{Set}^C $$ as presheaf categories.) ## Facts Most constructions that can be carried out in $$ D $$ can also be carried out in $$ D^C $$ by performing them "componentwise", separately for each object in $$ C $$ . For instance, if any two objects $$ X $$ and $$ Y $$ in $$ D $$ have a product $$ X\times Y $$ , then any two functors $$ F $$ and $$ G $$ in $$ D^C $$ have a product $$ F\times G $$ , defined by $$ (F \times G)(c) = F(c)\times G(c) $$ for every object $$ c $$ in $$ C $$ . Similarly, if $$ \eta_c : F(c) \to G(c) $$ is a natural transformation and each $$ \eta_c $$ has a kernel $$ K_c $$ in the category $$ D $$ , then the kernel of $$ \eta $$ in the functor category $$ D^C $$ is the functor $$ K $$ with $$ K(c) = K_c $$ for every object $$ c $$ in $$ C $$ . As a consequence we have the general rule of thumb that the functor category $$ D^C $$ shares most of the "nice" properties of $$ D $$ : - if $$ D $$ is complete (or cocomplete), then so is $$ D^C $$ ; - if $$ D $$ is an abelian category, then so is $$ D^C $$ ; We also have: - if $$ C $$ is any small category, then the category $$ \textbf{Set}^C $$ of presheaves is a topos. So from the above examples, we can conclude right away that the categories of directed graphs, $$ G $$ -sets and presheaves on a topological space are all complete and cocomplete topoi, and that the categories of representations of $$ G $$ , modules over the ring $$ R $$ , and presheaves of abelian groups on a topological space $$ X $$ are all abelian, complete and cocomplete. The embedding of the category $$ C $$ in a functor category that was mentioned earlier uses the Yoneda lemma as its main tool. 
For every object $$ X $$ of $$ C $$ , let $$ \text{Hom}(-,X) $$ be the contravariant representable functor from $$ C $$ to $$ \textbf{Set} $$ . The Yoneda lemma states that the assignment $$ X \mapsto \operatorname{Hom}(-,X) $$ is a full embedding of the category $$ C $$ into the category Funct( $$ C^\text{op} $$ , $$ \textbf{Set} $$ ). So $$ C $$ naturally sits inside a topos. The same can be carried out for any preadditive category $$ C $$ : Yoneda then yields a full embedding of $$ C $$ into the functor category Add( $$ C^\text{op} $$ , $$ \textbf{Ab} $$ ). So $$ C $$ naturally sits inside an abelian category. The intuition mentioned above (that constructions that can be carried out in $$ D $$ can be "lifted" to $$ D^C $$ ) can be made precise in several ways; the most succinct formulation uses the language of adjoint functors. Every functor $$ F : D \to E $$ induces a functor $$ F^C : D^C \to E^C $$ (by composition with $$ F $$ ). If $$ F $$ and $$ G $$ are a pair of adjoint functors, then $$ F^C $$ and $$ G^C $$ are also a pair of adjoint functors. The functor category $$ D^C $$ has all the formal properties of an exponential object; in particular the functors $$ E \times C \to D $$ stand in a natural one-to-one correspondence with the functors from $$ E $$ to $$ D^C $$ . The category $$ \textbf{Cat} $$ of all small categories with functors as morphisms is therefore a cartesian closed category.
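Concretely, this exponential correspondence can be described by currying: a functor $$ F : E \times C \to D $$ corresponds to a functor $$ \widetilde{F} : E \to D^C $$ (the notation $$ \widetilde{F} $$ is chosen here only for illustration) that sends an object $$ e $$ of $$ E $$ to the functor $$ c \mapsto F(e,c) $$ , and a morphism $$ f : e \to e' $$ to the natural transformation with components $$ \widetilde{F}(f)_c = F(f, \mathrm{id}_c) : F(e,c) \to F(e',c) $$ .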
https://en.wikipedia.org/wiki/Functor_category
Ch is a proprietary cross-platform C and C++ interpreter and scripting language environment. It was designed by Harry Cheng as a scripting language for beginners to learn mathematics, computing, numerical analysis (numerical methods), and programming in C/C++. Ch is now developed and marketed by SoftIntegration, Inc. Free versions include the student edition and the non-commercial Professional Edition for Raspberry Pi. Ch can be embedded in C and C++ application programs. It has numerical computing and graphical plotting features. Ch combines a shell and an IDE. The Ch shell combines the features of a common shell and the C language. ChIDE provides quick code navigation and symbolic debugging. It is based on embedded Ch, SciTE, and Scintilla. Ch is written in C and runs on Windows, Linux, macOS, FreeBSD, AIX, Solaris, QNX, and HP-UX. It supports C90 and major C99 features, but it does not support the full set of C++ features. C99 complex number, IEEE 754 floating-point arithmetic, and variable-length array features were supported in Ch before they became part of the C99 standard. An article published by Computer Reseller News (CRN) named Ch as notable among C-based virtual machines for its functionality and the availability of third-party libraries. Ch has many toolkits that extend its functionality. For example, the Ch Mechanism Toolkit is used for design and analysis of commonly used mechanisms such as four-bar linkage, five-bar linkage, six-bar linkage, crank-slider mechanism, and cam-follower system. The Ch Control System Toolkit is used for the design, analysis, and modelling of continuous-time or discrete-time linear time-invariant (LTI) control systems. Both toolkits include the source code. Ch has been integrated into the free C-STEM Studio, a platform for learning computing, science, technology, engineering, and mathematics (C-STEM) with robotics. C-STEM Studio is developed by the UC Davis Center for Integrated Computing and STEM Education, offering a curriculum for K-12 students. Ch supports LEGO Mindstorms NXT and EV3, Arduino, Linkbot, Finch Robot, RoboTalk, and Raspberry Pi, Pi Zero, and ARM for robot programming and learning. It can also be embedded into the LabVIEW system design platform and development environment. ## Features Ch supports the 1999 ISO C Standard (C99) and C++ classes. It is a superset of C with C++ classes. Several major features of C99 are supported, such as complex numbers, variable-length arrays (VLAs), IEEE 754 floating-point arithmetic, and generic mathematical functions. The specification for wide characters in Addendum 1 for C90 is also supported. C++ features available in Ch include: - Member functions - Mixed code and declaration - The this-> pointer - Reference type and pass-by-reference - Function-style type conversion - Classes - Private/public data and functions in classes. 
Ch is compatible with C++ in that, by default, members of a class definition are assumed to be private until a 'public' declaration is given - Static member of class/struct/union - Const member functions - The new and delete operators - Constructors and destructors - Polymorphic functions - The scope resolution operator - The I/O functions cout, cerr, and cin with endl - Arguments for variadic functions are optional Ch supports classes in C++ with added abilities, including: - Classes inside member functions - Nested functions with classes - Passing a member function as an argument of pointer-to-function type Ch can interact with existing C/C++ libraries and call C/C++ functions from a Ch script. As a C/C++ interpreter, Ch can be used as a scripting engine and extension language for applications. Pointers to arrays or variables can be passed and shared in both C-compiled and Ch scripting contexts. One example of an embedded Ch scripting application is Mobile-C, which has been used for collaborative visualization of distributed mesh models. Ch has a built-in string type (string_t) for automatic memory allocation and de-allocation. It supports shell aliases, history, and piping. Ch has built-in 2D/3D graphical plotting features and computational arrays for numerical computing. A linear equation of the form b = A*x, where A is a two-dimensional computational array, can be written verbatim in Ch.
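As a small illustration of the standard C99 features attributed to Ch above (complex numbers, variable-length arrays, generic math), the following sketch uses only standard C99 and no Ch-specific extensions such as computational arrays or plotting; it should compile with any C99 compiler and, per the description above, run unchanged under the Ch interpreter.

```c
/* Standard C99 sketch of features the article says Ch supports:
   complex arithmetic and a variable-length array.              */
#include <stdio.h>
#include <complex.h>
#include <math.h>

int main(void) {
    double complex z = 3.0 + 4.0 * I;      /* C99 complex number          */
    printf("|3+4i| = %g\n", cabs(z));      /* prints 5                    */

    int n = 4;                             /* size known only at run time */
    double a[n];                           /* C99 variable-length array   */
    for (int i = 0; i < n; i++)
        a[i] = sqrt((double)i);
    for (int i = 0; i < n; i++)
        printf("sqrt(%d) = %g\n", i, a[i]);
    return 0;
}
```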
https://en.wikipedia.org/wiki/Ch_%28computer_programming%29
In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's 1905 paper, "On the Electrodynamics of Moving Bodies", the theory is presented as being based on just two postulates: 1. The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity. 2. The speed of light in vacuum is the same for all observers, regardless of the motion of the light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance. The first postulate was first formulated by Galileo Galilei (see Galilean invariance). ## Background Special relativity builds upon important physics ideas. The non-technical ideas include: - speed or velocity: how the relative distance between an object and a reference point changes with time, - speed of light: the maximum speed of information, independent of the speed of the source and receiver, - event: something that happens at a definite place and time. For example, an explosion or a flash of light from an atom; a generalization of a point in geometrical space, - clocks: relativity is all about time; in relativity, observers read clocks. Two observers in relative motion receive information about two events via light signals traveling at constant speed, independent of either observer's speed. Their motion during the transit time causes them to get the information at different times on their local clock. The more technical background ideas include: - invariance: when physical laws do not change when a specific circumstance changes, such as observations at different uniform velocities; - spacetime: a union of geometrical space and time. - spacetime interval between two events: a measure of separation that generalizes distance: $$ (\text{interval})^2 = \left[ \text{event separation in time} \right]^2 - \left[ \text{event separation in space} \right]^2 $$ - coordinate system or reference frame: a mechanism to specify points in space with respect to common reference axes, - inertial reference frames: two reference frames in uniform relative motion, - coordinate transformation: a procedure to respecify a point against a different coordinate system. The spacetime interval is an invariant between inertial frames, demonstrating the physical unity of spacetime. Coordinate systems are not invariant between inertial frames and require transformations. ## Overview ### Basis Unusual among modern topics in physics, the theory of special relativity needs only mathematics at high school level and yet it fundamentally alters our understanding, especially our understanding of the concept of time. Built on just two postulates or assumptions, many interesting consequences follow. The two postulates both concern observers moving at a constant speed relative to each other. The first postulate, the principle of relativity, says the laws of physics do not depend on objects being at absolute rest: an observer on a moving train sees natural phenomena on that train that look the same whether the train is moving or not. The second postulate, the constant speed of light, says observers on a moving train or in the train station see light travel at the same speed. A light signal from the station to the train has the same speed, no matter how fast the train goes. In the theory of special relativity, the two postulates combine to change the definition of "relative speed". 
Rather than the simple concept of distance traveled divided by time spent, the new theory incorporates the speed of light as the maximum possible speed. In special relativity, covering ten times more distance on the ground in the same amount of time according to a moving watch does not result in a speed up as seen from the ground by a factor of ten. ### Consequences Special relativity has a wide range of consequences that have been experimentally verified.Will, C. M. (2005). Special relativity: a centenary perspective. In Einstein, 1905–2005: Poincaré Seminar 2005 (pp. 33-58). Basel: Birkhäuser Basel. The conceptual effects include: - the relativity of simultaneity: events that appear simultaneous to one observer may not be simultaneous to an observer in motion, - time dilation: the time measured between two events by observers in relative motion differs, - length contraction: the distance measured between two events by observers in relative motion differs, - the relativistic composition of velocities: velocities no longer simply add. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula $$ E = mc^2 $$ , where $$ c $$ is the speed of light in vacuum.The Feynman Lectures on Physics Vol. I Ch. 15-9: Equivalence of mass and energy. Special relativity replaced the conventional notion of an absolute, universal time with the notion of a time that is local to each observer. Information about distant objects can arrive no faster than the speed of light, so visual observations always report events that have happened in the past. This effect makes visual descriptions of the effects of special relativity especially prone to mistakes. Special relativity also has profound technical consequences. A defining feature of special relativity is the replacement of Euclidean geometry with Lorentzian geometry. Distances in Euclidean geometry are calculated with the Pythagorean theorem and involve only spatial coordinates. In Lorentzian geometry, 'distances' become 'intervals' and include a time coordinate with a minus sign. Unlike spatial distances, the interval between two events has the same value for all observers, independent of their relative velocity. When comparing two sets of coordinates in relative motion, Lorentz transformations replace the Galilean transformations of Newtonian mechanics. Other effects include the relativistic corrections to the Doppler effect and the Thomas precession. It also explains how electricity and magnetism are related. ## History The principle of relativity, forming one of the two postulates of special relativity, was described by Galileo Galilei in 1632 using a thought experiment involving observing natural phenomena on a moving ship. His conclusions were summarized as Galilean relativity and used as the basis of Newtonian mechanics. This principle can be expressed as a coordinate transformation between two coordinate systems. Isaac Newton noted that many transformations, such as those involving rotation or acceleration, will not preserve the observation of physical phenomena. Newton considered only those transformations involving motion with respect to an immovable absolute space, now called transformations between inertial frames. In 1864 James Clerk Maxwell presented a theory of electromagnetism which did not obey Galilean relativity. The theory specifically predicted a constant speed of light in vacuum, no matter the motion (velocity, acceleration, etc.) of the light emitter or receiver or its frequency, wavelength, direction, polarization, or phase. 
This as-yet-untested theory was thought at the time to be valid only in inertial frames fixed in an aether. Numerous experiments followed, attempting to measure the speed of light as Earth moved through the proposed fixed aether, culminating in the 1887 Michelson–Morley experiment, which only confirmed the constant speed of light. Several fixes to the aether theory were proposed, with those of George Francis FitzGerald, Hendrik Antoon Lorentz, and Jules Henri Poincaré all pointing in the direction of a result similar to the theory of special relativity. The final important step was taken by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". Einstein applied the Lorentz transformations, known to be compatible with Maxwell's equations for electrodynamics, to the classical laws of mechanics. This changed Newton's mechanics in situations involving all motions, especially velocities close to that of light (known as relativistic velocities). Another way to describe the advance made by the special theory is to say Einstein extended the Galilean principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics. The theory became essentially complete in 1907, with Hermann Minkowski's papers on spacetime. Special relativity has proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model remains accurate at low velocities relative to the speed of light, for example, for everyday motion on Earth. When updating his 1911 book on relativity to include general relativity in 1920, Robert Daniel Carmichael called the earlier work the "restricted theory" as a "special case" of the new general theory; he also used the phrase "special theory of relativity". In comparing it to the general theory in 1923, Einstein specifically called his earlier work "the special theory of relativity", saying he meant a restriction to frames in uniform motion. Just as Galilean relativity is accepted as an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. But general relativity incorporates non-Euclidean geometry to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime. ## Traditional "two postulates" approach to special relativity Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. 
In his initial presentation of special relativity in 1905 he expressed these postulates as: - The principle of relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other. - The principle of invariant light speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source. The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance. The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history. ## Principle of relativity ### Reference frames and relative motion Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a "clock" (any reference device with uniform periodicity). An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired. For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S. In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations. ### Standard configuration To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame (pronounced "S prime" or "S dash") belongs to a second observer . 
- The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S'. - Frame S' moves, for simplicity, in a single direction: the x-direction of frame S with a constant velocity v as measured in frame S. - The origins of frames S and S' are coincident when time $$ t = 0 $$ for frame S and $$ t' = 0 $$ for frame S'. Since there is no absolute reference frame in relativity theory, a concept of "moving" does not strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S' are not comoving. ### Lack of an absolute reference frame The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. But in the late 19th century the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities. ### Relativity without the second postulate From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum.David Morin (2007) Introduction to Classical Mechanics, Cambridge University Press, Cambridge, chapter 11, Appendix I. ## Lorentz invariance as the essential core of special relativity ### Two- vs. one-postulate approaches In Einstein's own view, the two postulates of relativity and the invariance of the speed of light lead to a single postulate, the Lorentz transformation. Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations, but Einstein stuck to his approach throughout his work. 
Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms. While the traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations, other treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime.Schutz, J. (1997) Independent Axioms for Minkowski Spacetime, Addison Wesley Longman Limited. Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler and by Callahan. ### Lorentz transformation and its inverse Define an event to have spacetime coordinates $$ (t, x, y, z) $$ in system S and $$ (t', x', y', z') $$ in a reference frame S' moving at a velocity v along the x-axis with respect to frame S. Then the Lorentz transformation specifies that these coordinates are related in the following way: $$ \begin{align} t' &= \gamma \ (t - vx/c^2) \\ x' &= \gamma \ (x - v t) \\ y' &= y \\ z' &= z , \end{align} $$ where $$ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} $$ is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S', relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity. Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation: $$ \begin{align} t &= \gamma ( t' + v x'/c^2) \\ x &= \gamma ( x' + v t') \\ y &= y' \\ z &= z'. \end{align} $$ This shows that the unprimed frame is moving with the velocity −v, as measured in the primed frame. There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction, by decomposing the motion into components parallel to the relative velocity (which are warped by the γ factor) and components perpendicular to it; see the article Lorentz transformation for details. A quantity that is invariant under Lorentz transformations is known as a Lorentz scalar. Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates $$ (x_1, t_1) $$ and $$ (x'_1, t'_1) $$ , another event has coordinates $$ (x_2, t_2) $$ and $$ (x'_2, t'_2) $$ , and the differences are defined as -    $$ \Delta x' = x'_2-x'_1 \ , \ \Delta t' = t'_2-t'_1 \ . $$ -    $$ \Delta x = x_2-x_1 \ , \ \ \Delta t = t_2-t_1 \ . $$ we get -    $$ \Delta x' = \gamma \ (\Delta x - v \,\Delta t) \ ,\ \ $$ $$ \Delta t' = \gamma \ \left(\Delta t - v \ \Delta x / c^{2} \right) \ . $$ -    $$ \Delta x = \gamma \ (\Delta x' + v \,\Delta t') \ , \ $$ $$ \Delta t = \gamma \ \left(\Delta t' + v \ \Delta x' / c^{2} \right) \ . $$ If we take differentials instead of taking differences, we get -    $$ dx' = \gamma \ (dx - v\,dt) \ ,\ \ $$ $$ dt' = \gamma \ \left( dt - v \ dx / c^{2} \right) \ . $$ -    $$ dx = \gamma \ (dx' + v\,dt') \ , \ $$ $$ dt = \gamma \ \left(dt' + v \ dx' / c^{2} \right) \ . $$ ### Graphical representation of the Lorentz transformation Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario. 
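Before turning to the graphical construction, the transformation and its inverse quoted above are easy to check numerically. The following minimal sketch (added here, not part of the source) boosts one arbitrarily chosen event into a frame moving at v = 0.6c and then applies the inverse transformation, recovering the original coordinates up to rounding; natural units with c = 1 are assumed.

```c
/* Sketch: forward and inverse Lorentz transformation for one event.
   Units: c = 1, so x is in light-seconds and t in seconds.          */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double c = 1.0;            /* speed of light (natural units) */
    const double v = 0.6 * c;        /* relative speed of frame S'     */
    const double gamma = 1.0 / sqrt(1.0 - v * v / (c * c));

    double t = 2.0, x = 1.0;         /* an arbitrary event in frame S  */

    /* forward transformation: S -> S' */
    double tp = gamma * (t - v * x / (c * c));
    double xp = gamma * (x - v * t);

    /* inverse transformation: S' -> S */
    double t_back = gamma * (tp + v * xp / (c * c));
    double x_back = gamma * (xp + v * tp);

    printf("S : (t, x)   = (%.6f, %.6f)\n", t, x);
    printf("S': (t', x') = (%.6f, %.6f)\n", tp, xp);
    printf("back in S    = (%.6f, %.6f)\n", t_back, x_back);
    return 0;
}
```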
To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S′, in standard configuration, as shown in Fig. 2-1. Fig. 3-1a. Draw the $$ x $$ and $$ t $$ axes of frame S. The $$ x $$ axis is horizontal and the $$ t $$ (actually $$ ct $$ ) axis is vertical, which is the opposite of the usual convention in kinematics. The $$ ct $$ axis is scaled by a factor of $$ c $$ so that both axes have common units of length. In the diagram shown, the gridlines are spaced one unit distance apart. The 45° diagonal lines represent the worldlines of two photons passing through the origin at time $$ t = 0. $$ The slope of these worldlines is 1 because the photons advance one unit in space per unit of time. Two events, $$ \text{A} $$ and $$ \text{B}, $$ have been plotted on this graph so that their coordinates may be compared in the S and S' frames. Fig. 3-1b. Draw the $$ x' $$ and $$ ct' $$ axes of frame S'. The $$ ct' $$ axis represents the worldline of the origin of the S' coordinate system as measured in frame S. In this figure, $$ v = c/2. $$ Both the $$ ct' $$ and $$ x' $$ axes are tilted from the unprimed axes by an angle $$ \alpha = \tan^{-1}(\beta), $$ where $$ \beta = v/c. $$ The primed and unprimed axes share a common origin because frames S and S' had been set up in standard configuration, so that $$ t=0 $$ when $$ t'=0. $$ Fig. 3-1c. Units in the primed axes have a different scale from units in the unprimed axes. From the Lorentz transformations, we observe that $$ (x', ct') $$ coordinates of $$ (0, 1) $$ in the primed coordinate system transform to $$ (\beta \gamma, \gamma) $$ in the unprimed coordinate system. Likewise, $$ (x', ct') $$ coordinates of $$ (1, 0) $$ in the primed coordinate system transform to $$ (\gamma, \beta \gamma) $$ in the unprimed system. Draw gridlines parallel with the $$ ct' $$ axis through points $$ (k \gamma, k \beta \gamma) $$ as measured in the unprimed frame, where $$ k $$ is an integer. Likewise, draw gridlines parallel with the $$ x' $$ axis through $$ (k \beta \gamma, k \gamma) $$ as measured in the unprimed frame. Using the Pythagorean theorem, we observe that the spacing between $$ ct' $$ units equals $$ \sqrt{(1 + \beta ^2)/(1 - \beta ^2)} $$ times the spacing between $$ ct $$ units, as measured in frame S. This ratio is always greater than 1, and ultimately it approaches infinity as $$ \beta \to 1. $$ Fig. 3-1d. Since the speed of light is an invariant, the worldlines of two photons passing through the origin at time $$ t' = 0 $$ still plot as 45° diagonal lines. The primed coordinates of $$ \text{A} $$ and $$ \text{B} $$ are related to the unprimed coordinates through the Lorentz transformations and could be approximately measured from the graph (assuming that it has been plotted accurately enough), but the real merit of a Minkowski diagram is its granting us a geometric view of the scenario. For example, in this figure, we observe that the two timelike-separated events that had different x-coordinates in the unprimed frame are now at the same position in space. While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent. ## Consequences derived from the Lorentz transformation The consequences of special relativity can be derived from the Lorentz transformation equations. 
These transformations, and hence special relativity, lead to different physical predictions than those of Newtonian mechanics at all relative velocities, and are most pronounced when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive. ### Invariant interval In Galilean relativity, the spatial separation ( $$ \Delta r $$ ) and the temporal separation ( $$ \Delta t $$ ) between two events are independent invariants, the values of which do not change when observed from different frames of reference. In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as $$ \Delta s^2 $$ : $$ \Delta s^2 \; \overset\text{def}{=} \; c^2 \Delta t^2 - (\Delta x^2 + \Delta y^2 + \Delta z^2) $$ In considering the physical significance of $$ \Delta s^2 $$ , there are three cases to note: - Δs2 > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that $$ |\Delta x / \Delta t| < c $$ , and given the Lorentz transformation $$ \Delta x' = \gamma \ (\Delta x - v \,\Delta t) $$ , it is evident that there exists a $$ v $$ less than $$ c $$ for which $$ \Delta x' = 0 $$ (in particular, $$ v = \Delta x / \Delta t $$ ). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, $$ \Delta t' $$ , is called the proper time. - Δs2 < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that $$ |\Delta x / \Delta t| > c $$ , and given the Lorentz transformation $$ \Delta t' = \gamma \ \left(\Delta t - v \ \Delta x / c^{2} \right) $$ , there exists a $$ v $$ less than $$ c $$ for which $$ \Delta t' = 0 $$ (in particular, $$ v = c^2 \Delta t / \Delta x $$ ). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, $$ \Delta x' $$ , is called the proper distance, or proper length. For values of $$ v $$ greater than $$ c^2 \Delta t / \Delta x $$ and less than $$ c $$ , the sign of $$ \Delta t' $$ changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. But the temporal order of timelike-separated events is absolute, since the only way that $$ v $$ could be greater than $$ c^2 \Delta t / \Delta x $$ would be if $$ v > c $$ . - Δs2 = 0: In this case, the two events are said to be lightlike separated. This implies that $$ |\Delta x / \Delta t| = c $$ , and this relationship is frame independent due to the invariance of $$ \Delta s^2 $$ . From this, we observe that the speed of light is $$ c $$ in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory. The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames. The form of $$ \Delta s^2 $$ , being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame. 
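As a quick numerical illustration (a sketch added here, not part of the source), the invariance of the interval can be checked by boosting two events that lie on the x-axis (so Δy = Δz = 0) with the standard-configuration Lorentz transformation and evaluating $$ c^2 \Delta t^2 - \Delta x^2 $$ in both frames; the event coordinates and boost speed below are arbitrary, and the algebraic proof of the same fact follows in the text.

```c
/* Sketch: the interval c^2*dt^2 - dx^2 agrees in frames S and S'.
   Units: c = 1.                                                   */
#include <stdio.h>
#include <math.h>

/* boost a single event (t, x) into the frame moving at speed v */
static void boost(double v, double t, double x, double *tp, double *xp) {
    double gamma = 1.0 / sqrt(1.0 - v * v);
    *tp = gamma * (t - v * x);
    *xp = gamma * (x - v * t);
}

int main(void) {
    double t1 = 1.0, x1 = 0.5;       /* event 1 in frame S            */
    double t2 = 4.0, x2 = 2.0;       /* event 2 in frame S            */
    double v  = 0.8;                 /* boost speed as a fraction of c */

    double t1p, x1p, t2p, x2p;
    boost(v, t1, x1, &t1p, &x1p);
    boost(v, t2, x2, &t2p, &x2p);

    double dt  = t2  - t1,  dx  = x2  - x1;
    double dtp = t2p - t1p, dxp = x2p - x1p;

    printf("interval in S : %.9f\n", dt * dt - dx * dx);    /* 6.75 */
    printf("interval in S': %.9f\n", dtp * dtp - dxp * dxp); /* 6.75 */
    return 0;
}
```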
In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed: $$ \Delta s^2 \, = \, c^2 \Delta t^2 - \Delta x^2 $$ Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration: $$ \begin{align} c^2 \Delta t^2 - \Delta x^2 &= c^2 \gamma ^2 \left(\Delta t' + \dfrac{v \Delta x'}{c^2} \right)^2 - \gamma ^2 \ (\Delta x' + v \Delta t')^2 \\ &= \gamma ^2 \left( c^2 \Delta t' ^ {\, 2} + 2 v \Delta x' \Delta t' + \dfrac{v^2 \Delta x' ^ {\, 2}}{c^2} \right) - \gamma ^2 \ (\Delta x' ^ {\, 2} + 2 v \Delta x' \Delta t' + v^2 \Delta t' ^ {\, 2}) \\ &= \gamma ^2 c^2 \Delta t' ^ {\, 2} - \gamma ^2 v^2 \Delta t' ^{\, 2} - \gamma ^2 \Delta x' ^ {\, 2} + \gamma ^2 \dfrac{v^2 \Delta x' ^ {\, 2}}{c^2} \\ &= \gamma ^2 c^2 \Delta t' ^ {\, 2} \left( 1 - \dfrac{v^2}{c^2} \right) - \gamma ^2 \Delta x' ^{\, 2} \left( 1 - \dfrac{v^2}{c^2} \right) \\ &= c^2 \Delta t' ^{\, 2} - \Delta x' ^{\, 2} \end{align} $$ The value of $$ \Delta s^2 $$ is hence independent of the frame in which it is measured. ### Relativity of simultaneity Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity). From the forward Lorentz transformation in terms of coordinate differences, $$ \Delta t' = \gamma \left(\Delta t - \frac{v \,\Delta x}{c^{2}} \right) $$ it is clear that the two events that are simultaneous in frame S (satisfying $$ \Delta t = 0 $$ ) are not necessarily simultaneous in another inertial frame S' (satisfying $$ \Delta t' = 0 $$ ). Only if these events are additionally co-local in frame S (satisfying $$ \Delta x = 0 $$ ), will they be simultaneous in another frame S'. The Sagnac effect can be considered a manifestation of the relativity of simultaneity. Since relativity of simultaneity is a first order effect in $$ v $$ , instruments based on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity. ### Time dilation The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames. Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by $$ \Delta x = 0 $$ . To find the relation between the times between these ticks as measured in both systems, the forward transformation for $$ \Delta t' $$ can be used to find: $$ \Delta t' = \gamma\, \Delta t $$ for events satisfying $$ \Delta x = 0 \ . $$ This shows that the time ( $$ \Delta t' $$ ) between the two ticks as seen in the frame in which the clock is moving ( $$ S' $$ ) is longer than the time ( $$ \Delta t $$ ) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory. Whenever one hears a statement to the effect that "moving clocks run slow", one should envision an inertial reference frame thickly populated with identical, synchronized clocks. As a moving clock travels through this array, its reading at any particular point is compared with a stationary clock at the same point. 
The measurements that we would get if we actually looked at a moving clock would, in general, not at all be the same thing, because the time that we would see would be delayed by the finite speed of light, i.e. the times that we see would be distorted by the Doppler effect. Measurements of relativistic effects must always be understood as having been made after finite speed-of-light effects have been factored out. #### Langevin's light-clock Paul Langevin, an early proponent of the theory of relativity, did much to popularize the theory in the face of resistance by many physicists to Einstein's revolutionary concepts. Among his numerous contributions to the foundations of special relativity were independent work on the mass–energy relationship, a thorough examination of the twin paradox, and investigations into rotating coordinate systems. His name is frequently attached to a hypothetical construct called a "light-clock" (originally developed by Lewis and Tolman in 1909), which he used to perform a novel derivation of the Lorentz transformation. A light-clock is imagined to be a box of perfectly reflecting walls wherein a light signal reflects back and forth from opposite faces. The concept of time dilation is frequently taught using a light-clock that is traveling in uniform inertial motion perpendicular to a line connecting the two mirrors. (Langevin himself made use of a light-clock oriented parallel to its line of motion.) Consider the scenario illustrated in Observer A holds a light-clock of length $$ L $$ as well as an electronic timer with which she measures how long it takes a pulse to make a round trip up and down along the light-clock. Although observer A is traveling rapidly along a train, from her point of view the emission and receipt of the pulse occur at the same place, and she measures the interval using a single clock located at the precise position of these two events. For the interval between these two events, observer A finds . A time interval measured using a single clock that is motionless in a particular reference frame is called a proper time interval. Fig. 4-3B illustrates these same two events from the standpoint of observer B, who is parked by the tracks as the train goes by at a speed of . Instead of making straight up-and-down motions, observer B sees the pulses moving along a zig-zag line. However, because of the postulate of the constancy of the speed of light, the speed of the pulses along these diagonal lines is the same $$ c $$ that observer A saw for her up-and-down pulses. B measures the speed of the vertical component of these pulses as $$ \pm \sqrt{c^2 - v^2}, $$ so that the total round-trip time of the pulses is $$ t_\text{B} = 2L \big/ \sqrt{ c^2 - v^2 } = {} $$ . Note that for observer B, the emission and receipt of the light pulse occurred at different places, and he measured the interval using two stationary and synchronized clocks located at two different positions in his reference frame. The interval that B measured was therefore not a proper time interval because he did not measure it with a single resting clock. #### Reciprocal time dilation In the above description of the Langevin light-clock, the labeling of one observer as stationary and the other as in motion was completely arbitrary. One could just as well have observer B carrying the light-clock and moving at a speed of $$ v $$ to the left, in which case observer A would perceive B's clock as running slower than her local clock. 
There is no paradox here, because there is no independent observer C who will agree with both A and B. Observer C necessarily makes his measurements from his own reference frame. If that reference frame coincides with A's reference frame, then C will agree with A's measurement of time. If C's reference frame coincides with B's reference frame, then C will agree with B's measurement of time. If C's reference frame coincides with neither A's frame nor B's frame, then C's measurement of time will disagree with both A's and B's measurement of time. ### Twin paradox The reciprocity of time dilation between two observers in separate inertial frames leads to the so-called twin paradox, articulated in its present form by Langevin in 1911. Langevin imagined an adventurer wishing to explore the future of the Earth. This traveler boards a projectile capable of traveling at 99.995% of the speed of light. After making a round-trip journey to and from a nearby star lasting only two years of his own life, he returns to an Earth that is two hundred years older. This result appears puzzling because both the traveler and an Earthbound observer would see the other as moving, and so, because of the reciprocity of time dilation, one might initially expect that each should have found the other to have aged less. In reality, there is no paradox at all, because in order for the two observers to perform side-by-side comparisons of their elapsed proper times, the symmetry of the situation must be broken: At least one of the two observers must change their state of motion to match that of the other. Knowing the general resolution of the paradox, however, does not immediately yield the ability to calculate correct quantitative results. Many solutions to this puzzle have been provided in the literature and have been reviewed in the Twin paradox article. We will examine in the following one such solution to the paradox. Our basic aim will be to demonstrate that, after the trip, both twins are in perfect agreement about who aged by how much, regardless of their different experiences. illustrates a scenario where the traveling twin flies at to and from a star distant. During the trip, each twin sends yearly time signals (measured in their own proper times) to the other. After the trip, the cumulative counts are compared. On the outward phase of the trip, each twin receives the other's signals at the lowered rate of . Initially, the situation is perfectly symmetric: note that each twin receives the other's one-year signal at two years measured on their own clock. The symmetry is broken when the traveling twin turns around at the four-year mark as measured by her clock. During the remaining four years of her trip, she receives signals at the enhanced rate of . The situation is quite different with the stationary twin. Because of light-speed delay, he does not see his sister turn around until eight years have passed on his own clock. Thus, he receives enhanced-rate signals from his sister for only a relatively brief period. Although the twins disagree in their respective measures of total time, we see in the following table, as well as by simple observation of the Minkowski diagram, that each twin is in total agreement with the other as to the total number of signals sent from one to the other. There is hence no paradox. 
| Item | Measured by the stay-at-home (Fig. 4-4) | Measured by the traveler (Fig. 4-4) |
|---|---|---|
| Total time of trip | 10 yr | 8 yr |
| Total number of pulses sent | 10 | 8 |
| Time when traveler's turnaround is detected | 8 yr | 4 yr |
| Number of pulses received at initial rate | 4 | 2 |
| Time for remainder of trip | 2 yr | 4 yr |
| Number of signals received at final rate | 4 | 8 |
| Total number of received pulses | 8 | 10 |
| Twin's calculation as to how much the other twin should have aged | 8 yr | 10 yr |

### Length contraction The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage). Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S', in which the rod is moving, the distances to the end points of the rod must be measured simultaneously in that system S'. In other words, the measurement is characterized by $$ \Delta t' = 0 $$ , which can be combined with the Lorentz transformation in terms of coordinate differences to find the relation between the lengths Δx and Δx': $$ \Delta x' = \frac{\Delta x}{\gamma} $$ for events satisfying $$ \Delta t' = 0 \ . $$ This shows that the length ( $$ \Delta x' $$ ) of the rod as measured in the frame in which it is moving ( $$ S' $$ ) is shorter than its length ( $$ \Delta x $$ ) in its own rest frame (S). Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system. ### Lorentz transformation of velocities Consider two frames S and S' in standard configuration. A particle in S moves in the x direction with velocity vector $$ \mathbf{u} $$ . What is its velocity $$ \mathbf{u'} $$ in frame S'? We can write $$ u = dx/dt $$ and $$ u' = dx'/dt' $$ . Substituting the expressions for $$ dx' $$ and $$ dt' $$ from the differential form of the Lorentz transformation, followed by straightforward mathematical manipulations and back-substitution of $$ u = dx/dt $$ , yields the Lorentz transformation of the speed $$ u $$ to $$ u' $$ : $$ u' = \frac{dx'}{dt'} = \frac{\gamma\,(dx - v\,dt)}{\gamma\left(dt - v\,dx/c^2\right)} = \frac{u - v}{1 - uv/c^2} \ . $$ The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing $$ v $$ with $$ -v $$ . For $$ \mathbf{u} $$ not aligned along the x-axis, we write $$ \mathbf{u} = (u_x, u_y, u_z) $$ and $$ \mathbf{u'} = (u'_x, u'_y, u'_z) $$ . The forward and inverse transformations for this case are: $$ u'_x = \frac{u_x - v}{1 - u_x v/c^2} \ , \quad u'_y = \frac{u_y}{\gamma\left(1 - u_x v/c^2\right)} \ , \quad u'_z = \frac{u_z}{\gamma\left(1 - u_x v/c^2\right)} $$ and $$ u_x = \frac{u'_x + v}{1 + u'_x v/c^2} \ , \quad u_y = \frac{u'_y}{\gamma\left(1 + u'_x v/c^2\right)} \ , \quad u_z = \frac{u'_z}{\gamma\left(1 + u'_x v/c^2\right)} \ . $$ The inverse transformations can be interpreted as giving the resultant $$ \mathbf{u} $$ of the two velocities $$ \mathbf{v} $$ and $$ \mathbf{u'} $$ , and they replace the formula $$ \mathbf{u} = \mathbf{u'} + \mathbf{v} $$ , which is valid in Galilean relativity. Interpreted in such a fashion, they are commonly referred to as the relativistic velocity addition (or composition) formulas, valid for the three axes of S and S' being aligned with each other (although not necessarily in standard configuration). We note the following points: - If an object (e.g., a photon) were moving at the speed of light in one frame (that is, $$ u = \pm c $$ ), then it would also be moving at the speed of light in any other frame, moving at $$ |v| < c $$ . 
- The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c. - If both and (and then also and ) are small with respect to the speed of light (that is, e.g., , then the intuitive Galilean transformations are recovered from the transformation equations for special relativity - Attaching a frame to a photon (riding a light beam like Einstein considers) requires special treatment of the transformations. There is nothing special about the x direction in the standard configuration. The above formalism applies to any direction; and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors to their components in these directions. See Velocity-addition formula for details. ### Thomas rotation The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation. Thomas rotation results from the relativity of simultaneity. In Fig. 4-5a, a rod of length $$ L $$ in its rest frame (i.e., having a proper length of ) rises vertically along the y-axis in the ground frame. In Fig. 4-5b, the same rod is observed from the frame of a rocket moving at speed $$ v $$ to the right. If we imagine two clocks situated at the left and right ends of the rod that are synchronized in the frame of the rod, relativity of simultaneity causes the observer in the rocket frame to observe (not see) the clock at the right end of the rod as being advanced in time by , and the rod is correspondingly observed as tilted. Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion. Thomas rotation provides the resolution to the well-known "meter stick and hole paradox". ### Causality and prohibition of motion faster than light In Fig. 4-6, the time interval between the events A (the "cause") and B (the "effect") is 'timelike'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect). The interval AC in the diagram is 'spacelike'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. But no frames are accessible by a Lorentz transformation, in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, paradoxes of causality would result. For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). 
A variety of causal paradoxes could then be constructed. Consider the spacetime diagrams in Fig. 4-7. A and B stand alongside a railroad track, when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (ct), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards (), reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground. 1. Fig. 4-7a. The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the $$ -x' $$ axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent. 1. Fig. 4-7b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the $$ +x $$ axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, in the primed frames of C and D, B will receive the message before it was sent out, a violation of causality. It is not necessary for signals to be instantaneous to violate causality. Even if the signal from D to C were slightly shallower than the $$ x' $$ axis (and the signal from A to B slightly steeper than the $$ x $$ axis), it would still be possible for B to receive his message before he had sent it. By increasing the speed of the train to near light speeds, the $$ ct' $$ and $$ x' $$ axes can be squeezed very close to the dashed line representing the speed of light. With this modified setup, it can be demonstrated that even signals only slightly faster than the speed of light will result in causality violation. Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum. This is not to say that all faster than light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly (although this does not violate causality or any other relativistic phenomenon)., Section 3.7 page 107 ## Optical effects ### Dragging effects In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water? In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. 
The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light. According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If $$ u' = c/n $$ is the speed of light in still water, and $$ v $$ is the speed of the water, and $$ u_{\pm} $$ is the water-borne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then $$ u_{\pm} =\frac{c}{n} \pm v\left(1-\frac{1}{n^2}\right) \ . $$ Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index of refraction term meant that, since $$ n $$ depends on wavelength, the aether must be capable of sustaining different motions at the same time. A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient, that were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies. From the point of view of special relativity, Fizeau's result is nothing but an approximation to , the relativistic formula for composition of velocities. $$ u_{\pm} = \frac{u' \pm v}{ 1 \pm u'v/c^2 } = $$ $$ \frac {c/n \pm v}{ 1 \pm v/cn } \approx $$ $$ c \left( \frac{1}{n} \pm \frac{v}{c} \right) \left( 1 \mp \frac{v}{cn} \right) \approx $$ $$ \frac{c}{n} \pm v \left( 1 - \frac{1}{n^2} \right) $$ ### Relativistic aberration of light Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver. The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. 
A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag. Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include $$ \cos \theta ' = \frac{ \cos \theta + v/c}{ 1 + (v/c)\cos \theta} $$   OR   $$ \sin \theta ' = \frac{\sin \theta}{\gamma [ 1 + (v/c) \cos \theta ]} $$   OR   $$ \tan \frac{\theta '}{2} = \left( \frac{c - v}{c + v} \right)^{1/2} \tan \frac {\theta}{2} $$ ### Relativistic Doppler effect #### Relativistic longitudinal Doppler effect The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here. Assume the receiver and the source are moving away from each other with a relative speed $$ v $$ as measured by an observer on the receiver or the source (The sign convention adopted here is that $$ v $$ is negative if the receiver and the source are moving towards each other). Assume that the source is stationary in the medium. Then $$ f_{r} = \left(1 - \frac v {c_s} \right) f_s $$ where $$ c_s $$ is the speed of sound. For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be $$ f_r = \gamma\left(1 - \beta\right) f_s = \sqrt{\frac{1 - \beta}{1 + \beta}}\,f_s. $$ where - $$ \beta = v/c $$   and - $$ \gamma = \frac{1}{\sqrt{1 - \beta^2}} $$ is the Lorentz factor. An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source. #### Transverse Doppler effect The transverse Doppler effect is one of the main novel predictions of the special theory of relativity. Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, that there should be no Doppler shift in the light arriving at the receiver. Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of . In Fig. 5-3b, the light is redshifted by the same factor. ### Measurement versus visual appearance Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer. Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. 
But the visual appearance of an object is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye. For many years, the distinction between the two had not been widely appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains its circular outline for all speeds, for any distance, and for all view angles, although the surface of the sphere and the images on it will appear distorted. Both Fig. 5-4 and Fig. 5-5 illustrate objects moving transversely to the line of sight. In Fig. 5-4, a cube is viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. At high speeds, the sphere in Fig. 5-5 takes on the appearance of a flattened disk tilted up to 45° from the line of sight. If the objects' motions are not strictly transverse but instead include a longitudinal component, exaggerated distortions in perspective may be seen. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect. Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An optical illusion results, giving the appearance of faster-than-light travel. In Fig. 5-6, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out. ## Dynamics The preceding sections dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself. ### Equivalence of mass and energy Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is $$ (E/c, 0, 0, 0) $$ : it has a time component, which is the energy, and three space components, which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy–momentum four-vector becomes approximately $$ (E/c, Ev/c^2, 0, 0) $$ .
The momentum is equal to the energy multiplied by the velocity divided by c². As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c². The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these do not talk about matter or radiation; they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions. Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is one of energy. In a letter to Carl Seelig in 1955, Einstein wrote "I had already previously found that Maxwell's theory did not account for the micro-structure of radiation and could therefore have no general validity." ### Einstein's 1905 demonstration of E = mc² In the fourth of his 1905 Annus mirabilis papers, Einstein presented a heuristic argument for the equivalence of mass and energy. Although, as discussed above, subsequent scholarship has established that his arguments fell short of a broadly definitive proof, the conclusions that he reached in this paper have stood the test of time. Einstein took as starting assumptions his recently discovered formula for relativistic Doppler shift, the laws of conservation of energy and conservation of momentum, and the relationship between the frequency of light and its energy as implied by Maxwell's equations. Fig. 6-1 (top). Consider a system of plane waves of light having frequency $$ f $$ traveling in direction $$ \phi $$ relative to the x-axis of reference frame S. The frequency (and hence energy) of the waves as measured in a frame $$ S' $$ that is moving along the x-axis at velocity $$ v $$ is given by the relativistic Doppler shift formula that Einstein had developed in his 1905 paper on special relativity: $$ \frac{f'}{f} = \frac{1 - (v/c) \cos{\phi}}{\sqrt{1 - v^2/c^2}} $$ Fig. 6-1 (bottom). Consider an arbitrary body that is stationary in reference frame S.
Let this body emit a pair of equal-energy light-pulses in opposite directions at angle $$ \phi $$ with respect to the x-axis. Each pulse has energy $$ \tfrac{1}{2}L $$ . Because of conservation of momentum, the body remains stationary in S after emission of the two pulses. Let $$ E_0 $$ be the energy of the body before emission of the two pulses and $$ E_1 $$ after their emission. Next, consider the same system observed from frame $$ S' $$ that is moving along the x-axis at speed $$ v $$ relative to frame S. In this frame, light from the forwards and reverse pulses will be relativistically Doppler-shifted. Let $$ H_0 $$ be the energy of the body measured in reference frame $$ S' $$ before emission of the two pulses and $$ H_1 $$ after their emission. We obtain the following relationships: $$ \begin{align} E_0 &= E_1 + \tfrac{1}{2}L + \tfrac{1}{2}L = E_1 + L \\[5mu] H_0 &= H_1 + \tfrac12 L \frac{1 - (v/c) \cos{\phi}}{\sqrt{1 - v^2/c^2}} + \tfrac12 L \frac{1 + (v/c) \cos{\phi}}{\sqrt{1 - v^2/c^2}} = H_1 + \frac{L}{{\sqrt{1 - v^2/c^2}}} \end{align} $$ From the above equations, we obtain the following: $$ (H_0 - E_0) - (H_1 - E_1) = L\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) $$ The two differences of form $$ H - E $$ seen in the above equation have a straightforward physical interpretation. Since $$ H $$ and $$ E $$ are the energies of the arbitrary body in the moving and stationary frames, $$ H_0 - E_0 $$ and $$ H_1 - E_1 $$ represent the kinetic energies of the body before and after the emission of light (except for an additive constant that fixes the zero point of energy and is conventionally set to zero). Hence, writing $$ K_0 = H_0 - E_0 $$ and $$ K_1 = H_1 - E_1 $$ for these kinetic energies, $$ K_0 - K_1 = L\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) . $$ Taking a Taylor series expansion and neglecting higher order terms, he obtained $$ K_0 - K_1 \approx \frac{1}{2}\frac{L}{c^2}v^2 . $$ Comparing the above expression with the classical expression for kinetic energy, K.E. = $$ \tfrac{1}{2}mv^2 $$ , Einstein then noted: "If a body gives off the energy L in the form of radiation, its mass diminishes by L/c²." Rindler has observed that Einstein's heuristic argument suggested merely that energy contributes to mass. In 1905, Einstein's cautious expression of the mass–energy relationship allowed for the possibility that "dormant" mass might exist that would remain behind after all the energy of a body was removed. By 1907, however, Einstein was ready to assert that all inertial mass represented a reserve of energy. "To equate all mass with energy required an act of aesthetic faith, very characteristic of Einstein." Einstein's bold hypothesis has been amply confirmed in the years subsequent to his original proposal. For a variety of reasons, Einstein's original derivation is currently seldom taught. Besides the vigorous debate that continues until this day as to the formal correctness of his original derivation, the recognition of special relativity as being what Einstein called a "principle theory" has led to a shift away from reliance on electromagnetic phenomena to purely dynamic methods of proof. ### How far can you travel from the Earth? Since nothing can travel faster than light, one might conclude that a human can never travel farther from Earth than ~100 light years, and that a traveler would never be able to reach more than the few star systems that exist within that limit. However, because of time dilation, a hypothetical spaceship can travel thousands of light years during a passenger's lifetime. If a spaceship could be built that accelerates at a constant 1g, it will, after one year, be travelling at almost the speed of light as seen from Earth.
This is described by: $$ v(t) = \frac{at}{\sqrt{1+ a^2t^2/c^2}} , $$ where v(t) is the velocity at a time t, a is the acceleration of the spaceship and t is the coordinate time as measured by people on Earth. Therefore, after one year of accelerating at 9.81 m/s², the spaceship will be travelling at roughly 70% of the speed of light, and at 94.6% after three years, relative to Earth. After three years of this acceleration, with the spaceship achieving a velocity of 94.6% of the speed of light relative to Earth, time dilation will result in each second experienced on the spaceship corresponding to 3.1 seconds back on Earth. During the journey, people on Earth will experience more time than the travellers do, since Earth clocks (and all physical phenomena) would really be ticking 3.1 times faster than those of the spaceship. A 5-year round trip for the traveller will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for them (5 years accelerating, 5 decelerating, twice each) will land them back on Earth having travelled for 335 Earth years and a distance of 331 light years. A full 40-year trip at 1g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A 40-year trip at will take years and cover about light years. A one-way 28 year (14 years accelerating, 14 decelerating as measured with the astronaut's clock) trip at 1g acceleration could reach 2,000,000 light-years to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest). ### Elastic collisions Examination of the collision products generated by particle accelerators around the world provides scientists with evidence of the structure of the subatomic world and the natural laws governing it. Analysis of the collision products, the sum of whose masses may vastly exceed the masses of the incident particles, requires special relativity. In Newtonian mechanics, analysis of collisions involves use of the conservation laws for mass, momentum and energy. In relativistic mechanics, mass is not independently conserved, because it has been subsumed into the total relativistic energy. We illustrate the differences that arise between the Newtonian and relativistic treatments of particle collisions by examining the simple case of two perfectly elastic colliding particles of equal mass. (Inelastic collisions are discussed in Spacetime#Conservation laws. Radioactive decay may be considered a sort of time-reversed inelastic collision.) Elastic scattering of charged elementary particles deviates from ideality due to the production of Bremsstrahlung radiation. #### Newtonian analysis Fig. 6-2 provides a demonstration of the result, familiar to billiard players, that if a stationary ball is struck elastically by another one of the same mass (assuming no sidespin, or "English"), then after collision, the diverging paths of the two balls will subtend a right angle. (a) In the stationary frame, an incident sphere traveling at 2v strikes a stationary sphere. (b) In the center of momentum frame, the two spheres approach each other symmetrically at ±v. After elastic collision, the two spheres rebound from each other with equal and opposite velocities ±u. Energy conservation requires that $$ |u| = |v| $$ . (c) Reverting to the stationary frame, the rebound velocities are $$ \vec{v} \pm \vec{u} $$ . The dot product $$ (\vec{v} + \vec{u}) \cdot (\vec{v} - \vec{u}) = v^2 - u^2 = 0 $$ , indicating that the vectors are orthogonal. #### Relativistic analysis Consider the elastic collision scenario in Fig.
6-3, in which a moving particle collides with an equal mass stationary particle. Unlike the Newtonian case, the angle between the two particles after collision is less than 90°, is dependent on the angle of scattering, and becomes smaller and smaller as the velocity of the incident particle approaches the speed of light. The relativistic momentum and total relativistic energy of a particle are given by $$ \vec{p} = \gamma m \vec{v} $$ and $$ E = \gamma m c^2 $$ . Conservation of momentum dictates that the sum of the momenta of the incoming particle and the stationary particle (which initially has momentum = 0) equals the sum of the momenta of the emergent particles. Likewise, the sum of the total relativistic energies of the incoming particle and the stationary particle (which initially has total energy mc²) equals the sum of the total energies of the emergent particles. Breaking the momentum equation down into its components, replacing $$ v $$ with the dimensionless $$ \beta $$ , and factoring out common terms from the momentum and energy equations yields relationships between the scattering angles and the particle velocities. For the symmetrical case in which $$ \phi = \theta $$ , these relationships take on a simpler form. ## Beyond the basics ### Rapidity Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas. This nonlinearity is an artifact of our choice of parameters. We have previously noted that in a spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other. The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 7-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. Numerically, the angle and the 2 × area measures for the unit circle are identical. Fig. 7-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 7-2 presents plots of the sinh, cosh, and tanh functions. For the unit circle, the slope of the ray is given by $$ \text{slope} = \tan a = \frac{\sin a }{\cos a }. $$ In the Cartesian plane, rotation of point $$ (x, y) $$ into point $$ (x', y') $$ by angle θ is given by $$ \begin{pmatrix} x' \\ y' \\ \end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \\ \end{pmatrix}\begin{pmatrix} x \\ y \\ \end{pmatrix}. $$ In a spacetime diagram, the velocity parameter $$ \beta $$ is the analog of slope. The rapidity, φ, is defined by $$ \beta \equiv \tanh \phi \equiv \frac{v}{c}, $$ where $$ \tanh \phi = \frac{\sinh \phi}{\cosh \phi} = \frac{e^\phi-e^{-\phi}}{e^\phi+e^{-\phi}}. $$ The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula; $$ \beta = \frac{\beta_1 + \beta_2}{1 + \beta_1 \beta_2} = \frac{\tanh \phi_1 + \tanh \phi_2}{1 + \tanh \phi_1 \tanh \phi_2} = \tanh(\phi_1 + \phi_2), $$ or in other words, $$ \phi = \phi_1 + \phi_2 $$ .
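To make this additivity concrete, here is a minimal numerical sketch (plain Python; the velocities 0.6c and 0.7c are arbitrary sample values) checking that the relativistic velocity-addition formula and the sum of the corresponding rapidities give the same composed velocity.

```python
import math

def add_velocities(beta1, beta2):
    """Relativistic composition of collinear velocities, in units of c."""
    return (beta1 + beta2) / (1 + beta1 * beta2)

beta1, beta2 = 0.6, 0.7

# Direct use of the velocity-addition formula.
beta_direct = add_velocities(beta1, beta2)

# Convert to rapidities, add them, and convert back.
phi1, phi2 = math.atanh(beta1), math.atanh(beta2)
beta_via_rapidity = math.tanh(phi1 + phi2)

print(round(beta_direct, 6))        # 0.915493
print(round(beta_via_rapidity, 6))  # 0.915493 (agrees, as expected)
```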
The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as $$ \gamma = \frac{1}{\sqrt{1 - \beta^2}} = \frac{1}{\sqrt{1 - \tanh^2 \phi}} = \cosh \phi, $$ $$ \gamma \beta = \frac{\beta}{\sqrt{1 - \beta^2}} = \frac{\tanh \phi}{\sqrt{1 - \tanh^2 \phi}} = \sinh \phi. $$ Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts. Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the x-direction may be written as $$ \begin{pmatrix} c t' \\ x' \end{pmatrix} = \begin{pmatrix} \cosh \phi & -\sinh \phi \\ -\sinh \phi & \cosh \phi \end{pmatrix} \begin{pmatrix} ct \\ x \end{pmatrix}, $$ and the inverse Lorentz boost in the x-direction may be written as $$ \begin{pmatrix} c t \\ x \end{pmatrix} = \begin{pmatrix} \cosh \phi & \sinh \phi \\ \sinh \phi & \cosh \phi \end{pmatrix} \begin{pmatrix} c t' \\ x' \end{pmatrix}. $$ In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime. The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage. Rapidity arises naturally as a coordinate on the pure boost generators inside the Lie algebra of the Lorentz group. Likewise, rotation angles arise naturally as coordinates (modulo 2π) on the pure rotation generators in the Lie algebra. (Together they coordinatize the whole Lie algebra.) A notable difference is that the resulting rotations are periodic in the rotation angle, while the resulting boosts are not periodic in rapidity (but rather one-to-one). The similarity between boosts and rotations is a formal resemblance. ### 4‑vectors Four‑vectors have been mentioned above in the context of the energy–momentum 4-vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4-vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation, really no more than an observation, using the field strength tensor formulation. On the other hand, general relativity, from the outset, relies heavily on 4-vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4-vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime. #### Definition of 4-vectors A 4-tuple, $$ A = (A_0, A_1, A_2, A_3) $$ , is a "4-vector" if its components $$ A_i $$ transform between frames according to the Lorentz transformation. If using $$ (ct, x, y, z) $$ coordinates, A is a 4-vector if it transforms (in the x-direction) according to $$ \begin{align} A_0' &= \gamma \left( A_0 - (v/c) A_1 \right) \\ A_1' &= \gamma \left( A_1 - (v/c) A_0 \right)\\ A_2' &= A_2 \\ A_3' &= A_3 \end{align} , $$ which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation. As usual, when we write x, t, etc. we generally mean Δx, Δt etc.
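As a quick sanity check of the boost matrices written above in terms of rapidity, the following sketch (Python with NumPy; the rapidities and the sample event coordinates are arbitrary) verifies that flipping the sign of the rapidity inverts the boost, and that two successive boosts equal a single boost through the summed rapidity, which is what "hyperbolic rotation" means in practice.

```python
import numpy as np

def boost(phi):
    """2x2 Lorentz boost acting on (ct, x), written as a hyperbolic rotation."""
    return np.array([[np.cosh(phi), -np.sinh(phi)],
                     [-np.sinh(phi), np.cosh(phi)]])

phi1 = np.arctanh(0.6)          # rapidity corresponding to v = 0.6c
phi2 = np.arctanh(0.7)          # rapidity corresponding to v = 0.7c
event = np.array([4.0, 3.0])    # (ct, x) of a sample event

# The inverse boost is obtained by reversing the sign of the rapidity.
print(np.allclose(boost(-phi1) @ (boost(phi1) @ event), event))    # True

# Composition of boosts is the boost through the sum of the rapidities.
print(np.allclose(boost(phi1) @ boost(phi2), boost(phi1 + phi2)))  # True
```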
The last three components of a 4-vector must be a standard vector in three-dimensional space. Therefore, a 4-vector must transform like $$ (ct, x, y, z) $$ under Lorentz transformations as well as rotations. #### Properties of 4-vectors - Closure under linear combination: If A and B are 4-vectors, then $$ C \equiv aA + bB $$ is also a 4-vector. - Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of inner product differs from the calculation of the inner product of a 3-vector. In the following, $$ \vec{A} $$ and $$ \vec{B} $$ are 3-vectors: - : $$ A \cdot B \equiv A_0 B_0 - A_1 B_1 - A_2 B_2 - A_3 B_3 \equiv A_0 B_0 - \vec{A} \cdot \vec{B} $$ In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space. Two vectors are said to be orthogonal if $$ A \cdot B = 0 $$ . Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles to each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line, which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal to itself. - Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which $$ A \cdot A = 0 $$ , while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval $$ c^2 t^2 - x^2 $$ and the invariant length of the relativistic momentum vector $$ E^2/c^2 - p^2 $$ . #### Examples of 4-vectors - Displacement 4-vector: Otherwise known as the spacetime separation, this is $$ (\Delta t, \Delta x, \Delta y, \Delta z) $$ or, for infinitesimal separations, $$ (dt, dx, dy, dz) $$ . - : $$ dS \equiv (dt, dx, dy, dz) $$ - Velocity 4-vector: This results when the displacement 4-vector is divided by $$ d \tau $$ , where $$ d \tau $$ is the proper time between the two events that yield dt, dx, dy, and dz. - : $$ V \equiv \frac{dS}{d \tau} = \frac{(dt, dx, dy, dz)}{dt/\gamma} = \gamma \left(1, \frac{dx}{dt}, \frac{dy}{dt}, \frac{dz}{dt} \right) = (\gamma, \gamma \vec{v} ) $$ The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle. An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found that is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles. Since photons move on null lines, $$ d \tau = 0 $$ for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path. - Energy–momentum 4-vector: - : $$ P \equiv (E/c, \vec{p}) = (E/c, p_x, p_y, p_z) $$ As indicated before, there are varying treatments for the energy–momentum 4-vector, so that one may also see it expressed as $$ (E, \vec{p}) $$ or with the factors of c placed differently. The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy–momentum 4-vector is a conserved quantity. - Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to proper time.
- : $$ A \equiv \frac{dV}{d \tau} = \frac{d}{d \tau} (\gamma, \gamma \vec{v}) = \gamma \left( \frac{d \gamma}{dt}, \frac{d(\gamma \vec{v})}{dt} \right) $$ - Force 4-vector: This is the derivative of the momentum 4-vector with respect to $$ \tau . $$ - : $$ F \equiv \frac{dP}{d \tau} = \gamma \left(\frac{dE}{dt}, \frac{d \vec{p}}{dt} \right) = \gamma \left( \frac{dE}{dt},\vec{f} \right) $$ As expected, the final components of the above 4-vectors are all standard 3-vectors corresponding to the spatial momentum, force, etc. #### 4-vectors and physical law The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum. Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors. ### Acceleration Special relativity does accommodate accelerations as well as accelerating frames of reference. It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. It is only when gravitation is significant that general relativity is required. Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime. In this section, we analyze several scenarios involving accelerated reference frames. #### Dewan–Beran–Bell spaceship paradox The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues. In Fig. 7-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string that is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration. Will the string break? When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer. 1. To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is the length-contracted version of the distance $$ L' = \gamma L $$ in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break. 1. Let A and B be the rear and front spaceships.
In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break. The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity. A spacetime diagram (Fig. 7-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant magnitude $$ k $$ acceleration for proper time $$ \sigma $$ (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity $$ A'B'' $$ turns out to be greater than the length along the line of simultaneity . The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 7-5, the acceleration is finished, the ships will remain at a constant offset in some frame . If $$ x_{A} $$ and $$ x_{B}=x_{A}+L $$ are the ships' positions in , the positions in frame $$ S' $$ are: $$ \begin{align} x'_{A}& = \gamma\left(x_{A}-vt\right)\\ x'_{B}& = \gamma\left(x_{A}+L-vt\right)\\ L'& = x'_{B}-x'_{A} =\gamma L \end{align} $$ The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame . As shown in Fig. 7-5, Bell's example asserts the moving lengths $$ AB $$ and $$ A'B' $$ measured in frame $$ S $$ to be fixed, thereby forcing the rest frame length $$ A'B'' $$ in frame $$ S' $$ to increase. #### Accelerated observer with horizon Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Section "Invariant hyperbola" of the article Spacetime, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity just approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases. Fig. 7-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter $$ \beta $$ approaches a limit of one as $$ ct $$ increases. Likewise, $$ \gamma $$ approaches infinity. The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows: 1. We remember that . 1. Since , we conclude that . 1. $$ \gamma = 1/\sqrt{1 - \beta ^2} = $$ $$ \sqrt{c^2 t^2 - s^2}/s $$ 1. From the relativistic force law, $$ F = dp/dt = $$ . 1. Substituting $$ \beta(ct) $$ from step 2 and the expression for $$ \gamma $$ from step 3 yields , which is a constant expression. Fig. 
7-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines). After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon. ## Relativity and unifying electromagnetism Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects found that finite propagation speed of the E and B fields required certain behaviors on charged particles. The general study of moving charges forms the Liénard–Wiechert potential, which is a step towards special relativity. The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame. Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus. ## Theories of relativity and quantum mechanics Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research. The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time. In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only described the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. 
It was the first foundation of relativistic quantum mechanics. On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time. ## Status Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c2 in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity at the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration resulting in quantum gravity. But at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to extremely high degree of accuracy (10−20) and thus accepted by the physics community. Experimental results that appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors. Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields). Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See Classical mechanics for a more detailed discussion. Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory. - The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear velocities. - The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times. - The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame. - The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle. Particle accelerators accelerate and measure the properties of particles moving at near the speed of light, where their behavior is consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. 
Some examples: - Tests of relativistic energy and momentum – testing the limiting speed of particles - Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation - Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life - Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations - Hughes–Drever experiment – testing isotropy of space and mass - Modern searches for Lorentz violation – various modern tests - Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter. - Experiments to test the aether drag hypothesis – no "aether flow obstruction". ## Technical discussion of spacetime ### Geometry of spacetime #### Comparison between flat Euclidean space and Minkowski space Special relativity uses a "flat" 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time. In 3D space, the differential of distance (line element) ds is defined by $$ ds^2 = d\mathbf{x} \cdot d\mathbf{x} = dx_1^2 + dx_2^2 + dx_3^2, $$ where are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X0 derived from time, such that the distance differential fulfills $$ ds^2 = -dX_0^2 + dX_1^2 + dX_2^2 + dX_3^2, $$ where are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime. The actual form of ds above depends on the metric and on the choices for the X0 coordinate. To make the time coordinate look like the space coordinates, it can be treated as imaginary: (this is called a Wick rotation). According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) and to take , rather than a "disguised" Euclidean metric using ict as the time coordinate. Some authors use , with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c or factors of c±2 are included in the metric tensor. These numerous conventions can be superseded by using natural units where . Then space and time have equivalent units, and no factors of c appear anywhere. #### 3D spacetime If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space $$ ds^2 = dx_1^2 + dx_2^2 - c^2 dt^2, $$ we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation; $$ ds^2 = 0 = dx_1^2 + dx_2^2 - c^2 dt^2 $$ or simply $$ dx_1^2 + dx_2^2 = c^2 dt^2, $$ which is the equation of a circle of radius c dt. #### 4D spacetime If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone: $$ ds^2 = 0 = dx_1^2 + dx_2^2 + dx_3^2 - c^2 dt^2 $$ so $$ dx_1^2 + dx_2^2 + dx_3^2 = c^2 dt^2. 
$$ As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt. This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star that I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance $$ d = \sqrt{x_1^2 + x_2^2 + x_3^2} $$ away and a time d/c in the past. For this reason the null dual cone is also known as the "light cone". (The point in the lower left of the Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".) The cone in the −t region is the information that the point is "receiving", while the cone in the +t section is the information that the point is "sending". The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity. ### Physics in spacetime #### Transformations of physical quantities between reference frames Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation. The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows: $$ \begin{pmatrix} ct'\\ x'\\ y'\\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\beta\gamma & 0 & 0\\ -\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct\\ x\\ y\\ z \end{pmatrix} = \begin{pmatrix} \gamma ct- \gamma\beta x\\ \gamma x - \beta \gamma ct \\ y\\ z \end{pmatrix}. $$ In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four-vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used. The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and spacelike component , in a contravariant position four-vector with components: $$ X^\nu = (X^0, X^1, X^2, X^3)= (ct, x, y, z) = (ct, \mathbf{x} ). $$ where we define so that the time coordinate has the same dimension of distance as the other spatial dimensions; so that space and time are treated equally.Charles W. Misner, Kip S. Thorne & John A. 
Wheeler, Gravitation, pg 51, Now the transformation of the contravariant components of the position 4-vector can be compactly written as: $$ X^{\mu'}=\Lambda^{\mu'}{}_\nu X^\nu $$ where there is an implied summation on $$ \nu $$ from 0 to 3, and $$ \Lambda^{\mu'}{}_{\nu} $$ is a matrix. More generally, all contravariant components of a four-vector $$ T^\nu $$ transform from one frame to another frame by a Lorentz transformation: $$ T^{\mu'} = \Lambda^{\mu'}{}_{\nu} T^\nu $$ Examples of other 4-vectors include the four-velocity , defined as the derivative of the position 4-vector with respect to proper time: $$ U^\mu = \frac{dX^\mu}{d\tau} = \gamma(v)( c , v_x , v_y, v_z ) = \gamma(v) (c, \mathbf{v} ). $$ where the Lorentz factor is: $$ \gamma(v)= \frac{1}{\sqrt{1 - v^2/c^2 }} \qquad v^2 = v_x^2 + v_y^2 + v_z^2. $$ The relativistic energy $$ E = \gamma(v)mc^2 $$ and relativistic momentum $$ \mathbf{p} = \gamma(v)m \mathbf{v} $$ of an object are respectively the timelike and spacelike components of a contravariant four-momentum vector: $$ P^\mu = m U^\mu = m\gamma(v)(c,v_x,v_y,v_z)= \left (\frac{E}{c},p_x,p_y,p_z \right ) = \left (\frac{E}{c}, \mathbf{p} \right ). $$ where m is the invariant mass. The four-acceleration is the proper time derivative of 4-velocity: $$ A^\mu = \frac{d U^\mu}{d\tau}. $$ The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix. The four-gradient of a scalar field φ transforms covariantly rather than contravariantly: $$ \begin{pmatrix} \dfrac{1}{c} \dfrac{\partial \phi}{\partial t'} & \dfrac{\partial \phi}{\partial x'} & \dfrac{\partial \phi}{\partial y'} & \dfrac{\partial \phi}{\partial z'} \end{pmatrix} = \begin{pmatrix} \dfrac{1}{c} \dfrac{\partial \phi}{\partial t} & \dfrac{\partial \phi}{\partial x} & \dfrac{\partial \phi}{\partial y} & \dfrac{\partial \phi}{\partial z} \end{pmatrix} \begin{pmatrix} \gamma & +\beta\gamma & 0 & 0\\ +\beta\gamma & \gamma & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} , $$ which is the transpose of: $$ (\partial_{\mu'} \phi) = \Lambda_{\mu'}{}^{\nu} (\partial_\nu \phi)\qquad \partial_{\mu} \equiv \frac{\partial}{\partial x^{\mu}}. $$ only in Cartesian coordinates. It is the covariant derivative that transforms in manifest covariance, in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates. More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation: $$ T_{\mu'} = \Lambda_{\mu'}{}^{\nu} T_\nu, $$ where $$ \Lambda_{\mu'}{}^{\nu} $$ is the reciprocal matrix of . The postulates of special relativity constrain the exact form the Lorentz transformation matrices take. More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law $$ T^{\alpha' \beta' \cdots \zeta'}_{\theta' \iota' \cdots \kappa'} = \Lambda^{\alpha'}{}_{\mu} \Lambda^{\beta'}{}_{\nu} \cdots \Lambda^{\zeta'}{}_{\rho} \Lambda_{\theta'}{}^{\sigma} \Lambda_{\iota'}{}^{\upsilon} \cdots \Lambda_{\kappa'}{}^{\phi} T^{\mu \nu \cdots \rho}_{\sigma \upsilon \cdots \phi} $$ where $$ \Lambda_{\chi'}{}^{\psi} $$ is the reciprocal matrix of . All tensors transform by this rule. 
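To see these transformation rules in action, here is a small numerical sketch (Python with NumPy, natural units with c = 1; the boost speed and the particle's 3-velocity are arbitrary sample values). It builds the four-velocity defined above, transforms its contravariant components with a boost matrix Λ, and checks that the quantity −(U⁰)² + |U⃗|² takes the same value, −c², in both frames.

```python
import numpy as np

c = 1.0                                  # natural units

def boost_x(beta):
    """4x4 Lorentz boost along x acting on contravariant components."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

def invariant(A):
    """Squared Minkowski magnitude with signature (-, +, +, +)."""
    return -A[0]**2 + A[1:] @ A[1:]

v = np.array([0.3, 0.4, 0.0])            # sample 3-velocity in units of c
gamma = 1.0 / np.sqrt(1.0 - v @ v)
U = gamma * np.concatenate(([c], v))     # four-velocity U = gamma * (c, v)

U_prime = boost_x(0.6) @ U               # components in a frame boosted at 0.6c

print(np.isclose(invariant(U), -c**2))        # True
print(np.isclose(invariant(U_prime), -c**2))  # True: the magnitude is frame independent
```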
An example of a four-dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, also second order antisymmetric tensor. The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor. #### Metric The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime the Minkowski metric η has components (valid with suitably chosen coordinates), which can be arranged in a matrix: $$ \eta_{\alpha\beta} = \begin{pmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix} , $$ which is equal to its reciprocal, , in those frames. Throughout we use the signs as above, different authors use different conventions – see Minkowski metric alternative signs. The Poincaré group is the most general group of transformations that preserves the Minkowski metric: $$ \eta_{\alpha\beta} = \eta_{\mu'\nu'} \Lambda^{\mu'}{}_\alpha \Lambda^{\nu'}{}_\beta $$ and this is the physical symmetry underlying special relativity. The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric, the inner product of a 4-vector T with another 4-vector S is: $$ T^{\alpha}S_{\alpha}=T^{\alpha}\eta_{\alpha\beta}S^{\beta} = T_{\alpha}\eta^{\alpha\beta}S_{\beta} = \text{invariant scalar} $$ Invariant means that it takes the same value in all inertial frames, because it is a scalar (0 rank tensor), and so no appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself: $$ |\mathbf{T}| = \sqrt{T^{\alpha}T_{\alpha}} $$ One can extend this idea to tensors of higher order, for a second order tensor we can form the invariants: $$ T^{\alpha}{}_{\alpha},T^{\alpha}{}_{\beta}T^{\beta}{}_{\alpha},T^{\alpha}{}_{\beta}T^{\beta}{}_{\gamma}T^{\gamma}{}_{\alpha} = \text{invariant scalars}, $$ similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one does not need to perform Lorentz transformations to determine the invariants. #### Relativistic kinematics and invariance The coordinate differentials transform also contravariantly: $$ dX^{\mu'}=\Lambda^{\mu'}{}_\nu dX^\nu $$ so the squared length of the differential of the position four-vector dXμ constructed using $$ d\mathbf{X}^2 = dX^\mu \,dX_\mu = \eta_{\mu\nu}\,dX^\mu \,dX^\nu = -(c\,dt)^2+(dx)^2+(dy)^2+(dz)^2 $$ is an invariant. Notice that when the line element dX2 is negative that is the differential of proper time, while when dX2 is positive, is differential of the proper distance. The 4-velocity Uμ has an invariant form: $$ \mathbf U^2 = \eta_{\nu\mu} U^\nu U^\mu = -c^2 \,, $$ which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. 
Differentiating the above equation by τ produces: $$ 2\eta_{\mu\nu}A^\mu U^\nu = 0. $$ So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal. #### Relativistic dynamics and invariance The invariant magnitude of the momentum 4-vector generates the energy–momentum relation: $$ \mathbf{P}^2 = \eta^{\mu\nu}P_\mu P_\nu = -\left (\frac{E}{c} \right )^2 + p^2 . $$ We can work out what this invariant is by first arguing that, since it is a scalar, it does not matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero. $$ \mathbf{P}^2 = - \left (\frac{E_\text{rest}}{c} \right )^2 = - (m c)^2 . $$ We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero. The rest energy is related to the mass according to the celebrated equation discussed above: $$ E_\text{rest} = m c^2. $$ The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames. To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D that contains the components of the 3D force vector among its components. If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy momentum four-vector with respect to proper time. The covariant version of the four-force is: $$ F_\nu = \frac{d P_{\nu}}{d \tau} = m A_\nu $$ In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four-force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four-force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ. In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.
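Returning to the energy–momentum relation derived earlier in this section, the following sketch (plain Python; the roughly-electron mass and the speed 0.8c are arbitrary sample inputs) checks numerically that −(E/c)² + p², computed from the same γ, reproduces the invariant −(mc)².

```python
import math

c = 299_792_458.0        # speed of light, m/s
m = 9.109e-31            # sample mass (roughly an electron), kg
v = 0.8 * c              # arbitrarily chosen speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E = gamma * m * c ** 2   # total relativistic energy
p = gamma * m * v        # relativistic momentum

# Invariant magnitude of the energy-momentum four-vector.
lhs = -(E / c) ** 2 + p ** 2
rhs = -(m * c) ** 2
print(math.isclose(lhs, rhs, rel_tol=1e-12))   # True
```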
https://en.wikipedia.org/wiki/Special_relativity
In chemistry, orbital hybridisation (or hybridization) is the concept of mixing atomic orbitals to form new hybrid orbitals (with different energies, shapes, etc., than the component atomic orbitals) suitable for the pairing of electrons to form chemical bonds in valence bond theory. For example, in a carbon atom which forms four single bonds, the valence-shell s orbital combines with three valence-shell p orbitals to form four equivalent sp3 mixtures in a tetrahedral arrangement around the carbon to bond to four different atoms. Hybrid orbitals are useful in the explanation of molecular geometry and atomic bonding properties and are symmetrically disposed in space. Usually hybrid orbitals are formed by mixing atomic orbitals of comparable energies. ## History and uses Chemist Linus Pauling first developed the hybridisation theory in 1931 to explain the structure of simple molecules such as methane (CH4) using atomic orbitals. Pauling pointed out that a carbon atom forms four bonds by using one s and three p orbitals, so that "it might be inferred" that a carbon atom would form three bonds at right angles (using p orbitals) and a fourth weaker bond using the s orbital in some arbitrary direction. In reality, methane has four C–H bonds of equivalent strength. The angle between any two bonds is the tetrahedral bond angle of 109°28' (around 109.5°). Pauling supposed that in the presence of four hydrogen atoms, the s and p orbitals form four equivalent combinations which he called hybrid orbitals. Each hybrid is denoted sp3 to indicate its composition, and is directed along one of the four C–H bonds. This concept was developed for such simple chemical systems, but the approach was later applied more widely, and today it is considered an effective heuristic for rationalizing the structures of organic compounds. It gives a simple orbital picture equivalent to Lewis structures. Hybridisation theory is an integral part of organic chemistry, one of the most compelling examples being Baldwin's rules. For drawing reaction mechanisms sometimes a classical bonding picture is needed with two atoms sharing two electrons. Hybridisation theory explains bonding in alkenes and methane. The amount of p character or s character, which is decided mainly by orbital hybridisation, can be used to reliably predict molecular properties such as acidity or basicity. ## Overview Orbitals are a model representation of the behavior of electrons within molecules. In the case of simple hybridization, this approximation is based on atomic orbitals, similar to those obtained for the hydrogen atom, the only neutral atom for which the Schrödinger equation can be solved exactly. In heavier atoms, such as carbon, nitrogen, and oxygen, the atomic orbitals used are the 2s and 2p orbitals, similar to excited state orbitals for hydrogen. Hybrid orbitals are assumed to be mixtures of atomic orbitals, superimposed on each other in various proportions. For example, in methane, the C hybrid orbital which forms each carbon–hydrogen bond consists of 25% s character and 75% p character and is thus described as sp3 (read as s-p-three) hybridised. Quantum mechanics describes this hybrid as an sp3 wavefunction of the form $$ N(s + \sqrt 3 p\sigma) $$ , where N is a normalisation constant (here 1/2) and pσ is a p orbital directed along the C-H axis to form a sigma bond. The ratio of coefficients (denoted λ in general) is $$ \sqrt 3 $$ in this example. 
Since the electron density associated with an orbital is proportional to the square of the wavefunction, the ratio of p-character to s-character is λ2 = 3. The p character or the weight of the p component is N2λ2 = 3/4. ## Types of hybridisation ### sp3 Hybridisation describes the bonding of atoms from an atom's point of view. For a tetrahedrally coordinated carbon (e.g., methane CH4), the carbon should have 4 orbitals directed towards the 4 hydrogen atoms. Carbon's ground state configuration is 1s2 2s2 2p2 or more easily read: C ↑↓ ↑↓ ↑ ↑ 1s 2s 2p 2p 2p This diagram suggests that the carbon atom could use its two singly occupied p-type orbitals to form two covalent bonds with two hydrogen atoms in a methylene (CH2) molecule, with a hypothetical bond angle of 90° corresponding to the angle between two p orbitals on the same atom. However, the true H–C–H angle in singlet methylene is about 102°, which implies the presence of some orbital hybridisation. The carbon atom can also bond to four hydrogen atoms in methane by an excitation (or promotion) of an electron from the doubly occupied 2s orbital to the empty 2p orbital, producing four singly occupied orbitals. C* ↑↓ ↑ ↑ ↑ ↑ 1s 2s 2p 2p 2p The energy released by the formation of two additional bonds more than compensates for the excitation energy required, energetically favoring the formation of four C-H bonds. According to quantum mechanics, the lowest energy is obtained if the four bonds are equivalent, which requires that they are formed from equivalent orbitals on the carbon. A set of four equivalent orbitals can be obtained that are linear combinations of the valence-shell (core orbitals are almost never involved in bonding) s and p wave functions, which are the four sp3 hybrids. C* ↑↓ ↑ ↑ ↑ ↑ 1s sp3 sp3 sp3 sp3 In CH4, four sp3 hybrid orbitals are overlapped by the four hydrogens' 1s orbitals, yielding four σ (sigma) bonds (that is, four single covalent bonds) of equal length and strength. ### sp2 Other carbon compounds and other molecules may be explained in a similar way. For example, ethene (C2H4) has a double bond between the carbons. For this molecule, carbon sp2 hybridises, because one π (pi) bond is required for the double bond between the carbons and only three σ bonds are formed per carbon atom. In sp2 hybridisation the 2s orbital is mixed with only two of the three available 2p orbitals, usually denoted 2px and 2py. The third 2p orbital (2pz) remains unhybridised. C* ↑↓ ↑ ↑ ↑ ↑ 1s sp2 sp2 sp2 2p forming a total of three sp2 orbitals with one remaining p orbital. In ethene, the two carbon atoms form a σ bond by overlapping one sp2 orbital from each carbon atom. The π bond between the carbon atoms perpendicular to the molecular plane is formed by 2p–2p overlap. Each carbon atom forms covalent C–H bonds with two hydrogens by s–sp2 overlap, all with 120° bond angles. The hydrogen–carbon bonds are all of equal strength and length, in agreement with experimental data. ### sp The chemical bonding in compounds such as alkynes with triple bonds is explained by sp hybridization. In this model, the 2s orbital is mixed with only one of the three p orbitals, C* ↑↓ ↑ ↑ ↑ ↑ 1s sp sp 2p 2p resulting in two sp orbitals and two remaining p orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp–sp overlap between the two carbon atoms forming a σ bond and two additional π bonds formed by p–p overlap. Each carbon also bonds to hydrogen in a σ s–sp overlap at 180° angles. 
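The arithmetic relating λ to s- and p-character can be made explicit. The short sketch below is an illustrative addition (not from the original article): for an sp^n hybrid the normalised weights are 1/(1 + n) s character and n/(1 + n) p character, which reproduces the 25% / 75% split quoted above for sp3 and the normalisation constant N = 1/2 for λ = √3.

```python
import math

def sp_character(n):
    """s and p weights of an sp^n hybrid, where n = lambda**2."""
    s = 1.0 / (1.0 + n)
    return s, 1.0 - s

lam = math.sqrt(3)              # ratio of coefficients for sp3 (see above)
n = lam**2                      # hybridisation index, here 3
N = 1.0 / math.sqrt(1.0 + n)    # normalisation constant of N(s + lam * p_sigma)

s_frac, p_frac = sp_character(n)
print(N)                        # 0.5  -> matches the N = 1/2 quoted above
print(s_frac, p_frac)           # 0.25, 0.75 -> 25% s and 75% p character
```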
## Hybridisation and molecule shape Hybridisation helps to explain molecule shape, since the angles between bonds are approximately equal to the angles between hybrid orbitals. This is in contrast to valence shell electron-pair repulsion (VSEPR) theory, which can be used to predict molecular geometry based on empirical rules rather than on valence-bond or orbital theories. ### spx hybridisation As the valence orbitals of main group elements are the one s and three p orbitals with the corresponding octet rule, spx hybridization is used to model the shape of these molecules.

| Coordination number | Shape | Hybridisation | Examples |
|---|---|---|---|
| 2 | Linear | sp hybridisation (180°) | CO2 |
| 3 | Trigonal planar | sp2 hybridisation (120°) | BCl3 |
| 4 | Tetrahedral | sp3 hybridisation (109.5°) | CCl4 |

### spxdy hybridisation As the valence orbitals of transition metals are the five d, one s and three p orbitals with the corresponding 18-electron rule, spxdy hybridisation is used to model the shape of these molecules. These molecules tend to have multiple shapes corresponding to the same hybridization due to the different d-orbitals involved. A square planar complex has one unoccupied p-orbital and hence has 16 valence electrons.

| Coordination number | Shape | Hybridisation | Examples |
|---|---|---|---|
| 4 | Square planar | sp2d hybridisation | PtCl42− |
| 5 | Trigonal bipyramidal | sp3d hybridisation | Fe(CO)5 |
| 5 | Square pyramidal | sp3d hybridisation | MnCl52− |
| 6 | Octahedral | sp3d2 hybridisation | Mo(CO)6 |
| 7 | Pentagonal bipyramidal | sp3d3 hybridisation | ZrF73− |
| 7 | Capped octahedral | sp3d3 hybridisation | MoF7− |
| 7 | Capped trigonal prismatic | sp3d3 hybridisation | TaF72− |
| 8 | Square antiprismatic | sp3d4 hybridisation | ReF8− |
| 8 | Dodecahedral | sp3d4 hybridisation | Mo(CN)84− |
| 8 | Bicapped trigonal prismatic | sp3d4 hybridisation | ZrF84− |
| 9 | Tricapped trigonal prismatic | sp3d5 hybridisation | ReH92− |
| 9 | Capped square antiprismatic | sp3d5 hybridisation | |

### sdx hybridisation In certain transition metal complexes with a low d electron count, the p-orbitals are unoccupied and sdx hybridisation is used to model the shape of these molecules.

| Coordination number | Shape | Hybridisation | Examples |
|---|---|---|---|
| 3 | Trigonal pyramidal | sd2 hybridisation (90°) | CrO3 |
| 4 | Tetrahedral | sd3 hybridisation (70.5°, 109.5°) | TiCl4 |
| 5 | Square pyramidal | sd4 hybridisation (65.9°, 114.1°) | Ta(CH3)5 |
| 6 | C3v Trigonal prismatic | sd5 hybridisation (63.4°, 116.6°) | W(CH3)6 |

## Hybridisation of hypervalent molecules ### Octet expansion In some general chemistry textbooks, hybridization is presented for main group coordination number 5 and above using an "expanded octet" scheme with d-orbitals first proposed by Pauling. However, such a scheme is now considered to be incorrect in light of computational chemistry calculations.

| Coordination number | Molecular shape | Hybridisation | Examples |
|---|---|---|---|
| 5 | Trigonal bipyramidal | sp3d hybridisation | |
| 6 | Octahedral | sp3d2 hybridisation | |
| 7 | Pentagonal bipyramidal | sp3d3 hybridisation | |

In 1990, Eric Alfred Magnusson of the University of New South Wales published a paper definitively excluding the role of d-orbital hybridisation in bonding in hypervalent compounds of second-row (period 3) elements, ending a point of contention and confusion. Part of the confusion originates from the fact that d-functions are essential in the basis sets used to describe these compounds (or else unreasonably high energies and distorted geometries result). Also, the contribution of the d-function to the molecular wavefunction is large. These facts were incorrectly interpreted to mean that d-orbitals must be involved in bonding. 
### Resonance In light of computational chemistry, a better treatment would be to invoke sigma bond resonance in addition to hybridisation, which implies that each resonance structure has its own hybridisation scheme. All resonance structures must obey the octet rule. Coordination number Resonance structures 5 Trigonal bipyramidal 6 Octahedral 7 Pentagonal bipyramidal ## Hybridisation in computational VB theory While the simple model of orbital hybridisation is commonly used to explain molecular shape, hybridisation is used differently when computed in modern valence bond programs. Specifically, hybridisation is not determined a priori but is instead variationally optimized to find the lowest energy solution and then reported. This means that all artificial constraints, specifically two constraints, on orbital hybridisation are lifted: - that hybridisation is restricted to integer values (isovalent hybridisation) - that hybrid orbitals are orthogonal to one another (hybridisation defects) This means that in practice, hybrid orbitals do not conform to the simple ideas commonly taught and thus in scientific computational papers are simply referred to as spx, spxdy or sdx hybrids to express their nature instead of more specific integer values. ### Isovalent hybridisation Although ideal hybrid orbitals can be useful, in reality, most bonds require orbitals of intermediate character. This requires an extension to include flexible weightings of atomic orbitals of each type (s, p, d) and allows for a quantitative depiction of the bond formation when the molecular geometry deviates from ideal bond angles. The amount of p-character is not restricted to integer values; i.e., hybridizations like sp2.5 are also readily described. The hybridization of bond orbitals is determined by Bent's rule: "Atomic s character concentrates in orbitals directed towards electropositive substituents". For molecules with lone pairs, the bonding orbitals are isovalent spx hybrids. For example, the two bond-forming hybrid orbitals of oxygen in water can be described as sp4.0 to give the interorbital angle of 104.5°. This means that they have 20% s character and 80% p character and does not imply that a hybrid orbital is formed from one s and four p orbitals on oxygen since the 2p subshell of oxygen only contains three p orbitals. ### Hybridisation defects Hybridisation of s and p orbitals to form effective spx hybrids requires that they have comparable radial extent. While 2p orbitals are on average less than 10% larger than 2s, in part attributable to the lack of a radial node in 2p orbitals, 3p orbitals which have one radial node, exceed the 3s orbitals by 20–33%. The difference in extent of s and p orbitals increases further down a group. The hybridisation of atoms in chemical bonds can be analysed by considering localised molecular orbitals, for example using natural localised molecular orbitals in a natural bond orbital (NBO) scheme. In methane, CH4, the calculated p/s ratio is approximately 3 consistent with "ideal" sp3 hybridisation, whereas for silane, SiH4, the p/s ratio is closer to 2. A similar trend is seen for the other 2p elements. Substitution of fluorine for hydrogen further decreases the p/s ratio. The 2p elements exhibit near ideal hybridisation with orthogonal hybrid orbitals. For heavier p block elements this assumption of orthogonality cannot be justified. These deviations from the ideal hybridisation were termed hybridisation defects by Kutzelnigg. 
However, computational VB groups such as Gerratt, Cooper and Raimondi (SCVB) as well as Shaik and Hiberty (VBSCF) go a step further to argue that even for model molecules such as methane, ethylene and acetylene, the hybrid orbitals are already defective and nonorthogonal, with hybridisations such as sp1.76 instead of sp3 for methane. ## Photoelectron spectra One misconception concerning orbital hybridization is that it incorrectly predicts the ultraviolet photoelectron spectra of many molecules. While this is true if Koopmans' theorem is applied to localized hybrids, quantum mechanics requires that the (in this case ionized) wavefunction obey the symmetry of the molecule which implies resonance in valence bond theory. For example, in methane, the ionised states (CH4+) can be constructed out of four resonance structures attributing the ejected electron to each of the four sp3 orbitals. A linear combination of these four structures, conserving the number of structures, leads to a triply degenerate T2 state and an A1 state. The difference in energy between each ionized state and the ground state would be ionization energy, which yields two values in agreement with experimental results. ## Localized vs canonical molecular orbitals Bonding orbitals formed from hybrid atomic orbitals may be considered as localized molecular orbitals, which can be formed from the delocalized orbitals of molecular orbital theory by an appropriate mathematical transformation. For molecules in the ground state, this transformation of the orbitals leaves the total many-electron wave function unchanged. The hybrid orbital description of the ground state is, therefore equivalent to the delocalized orbital description for ground state total energy and electron density, as well as the molecular geometry that corresponds to the minimum total energy value. ### Two localized representations Molecules with multiple bonds or multiple lone pairs can have orbitals represented in terms of sigma and pi symmetry or equivalent orbitals. Different valence bond methods use either of the two representations, which have mathematically equivalent total many-electron wave functions and are related by a unitary transformation of the set of occupied molecular orbitals. For multiple bonds, the sigma-pi representation is the predominant one compared to the equivalent orbital (bent bond) representation. In contrast, for multiple lone pairs, most textbooks use the equivalent orbital representation. However, the sigma-pi representation is also used, such as by Weinhold and Landis within the context of natural bond orbitals, a localized orbital theory containing modernized analogs of classical (valence bond/Lewis structure) bonding pairs and lone pairs. For the hydrogen fluoride molecule, for example, two F lone pairs are essentially unhybridized p orbitals, while the other is an spx hybrid orbital. An analogous consideration applies to water (one O lone pair is in a pure p orbital, another is in an spx hybrid orbital).
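For two equivalent spⁿ hybrids on the same atom, orthogonality of the hybrids fixes the interorbital angle θ through cos θ = −1/n, a standard relation sometimes attributed to Coulson; it is not derived in the text above, so treat this sketch as an added illustration under that assumption. It checks the relation against the numbers quoted earlier: n = 3 gives the tetrahedral angle, and water's 104.5° bond-forming hybrids come out as roughly sp4.0.

```python
import math

def angle_from_n(n):
    """Interorbital angle (degrees) between two equivalent sp^n hybrids."""
    return math.degrees(math.acos(-1.0 / n))

def n_from_angle(theta_deg):
    """Hybridisation index n implied by an interorbital angle."""
    return -1.0 / math.cos(math.radians(theta_deg))

print(angle_from_n(3))        # ~109.47 -> the tetrahedral angle for sp3
print(angle_from_n(2))        # 120.0   -> trigonal sp2
print(n_from_angle(104.5))    # ~4.0    -> the sp4.0 description of water above
```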
https://en.wikipedia.org/wiki/Orbital_hybridisation
Scrum is an agile team collaboration framework commonly used in software development and other industries. Scrum prescribes for teams to break work into goals to be completed within time-boxed iterations, called sprints. Each sprint is no longer than one month and commonly lasts two weeks. The scrum team assesses progress in time-boxed, stand-up meetings of up to 15 minutes, called daily scrums. At the end of the sprint, the team holds two further meetings: one sprint review to demonstrate the work for stakeholders and solicit feedback, and one internal sprint retrospective. A person in charge of a scrum team is typically called a scrum master. Scrum's approach to product development involves bringing decision-making authority to an operational level. Unlike a sequential approach to product development, scrum is an iterative and incremental framework for product development. Scrum allows for continuous feedback and flexibility, requiring teams to self-organize by encouraging physical co-location or close online collaboration, and mandating frequent communication among all team members. The flexible approach of scrum is based in part on the notion of requirement volatility, that stakeholders will change their requirements as the project evolves. ## History The use of the term scrum in software development came from a 1986 Harvard Business Review paper titled "The New New Product Development Game" by Hirotaka Takeuchi and Ikujiro Nonaka. Based on case studies from manufacturing firms in the automotive, photocopier, and printer industries, the authors outlined a new approach to product development for increased speed and flexibility. They called this the rugby approach, as the process involves a single cross-functional team operating across multiple overlapping phases in which the team "tries to go the distance as a unit, passing the ball back and forth". The authors later developed scrum in their book, The Knowledge Creating Company. In the early 1990s, Ken Schwaber used what would become scrum at his company, Advanced Development Methods. Jeff Sutherland, John Scumniotales, and Jeff McKenna developed a similar approach at Easel Corporation, referring to the approach with the term scrum. Sutherland and Schwaber later worked together to integrate their ideas into a single framework, formally known as scrum. Schwaber and Sutherland tested scrum and continually improved it, leading to the publication of a research paper in 1995, and the Manifesto for Agile Software Development in 2001. Schwaber also collaborated with Babatunde Ogunnaike at DuPont Research Station and the University of Delaware to develop Scrum. Ogunnaike believed that software development projects could often fail when initial conditions change if product management was not rooted in empirical practice. In 2002, Schwaber with others founded the Scrum Alliance and set up the Certified Scrum accreditation series. Schwaber left the Scrum Alliance in late 2009 and subsequently founded Scrum.org, which oversees the parallel Professional Scrum accreditation series. Since 2009, a public document called The Scrum Guide has been published and updated by Schwaber and Sutherland. It has been revised six times, with the most recent version having been published in November 2020. ## Scrum team A scrum team is organized into at least three categories of individuals: the product owner, developers, and the scrum master. 
The product owner liaises with stakeholders, those who have an interest in the project's outcome, to communicate tasks and expectations with developers. Developers in a scrum team organize work by themselves, with the facilitation of a scrum master. ### Product owner Each scrum team has one product owner. The product owner focuses on the business side of product development and spends the majority of time liaising with stakeholders and the team. The role is intended to primarily represent the product's stakeholders, the voice of the customer, or the desires of a committee, and bears responsibility for the delivery of business results. Product owners manage the product backlog and are responsible for maximizing the value that a team delivers. They do not dictate the technical solutions of a team but may instead attempt to seek consensus among team members. As the primary liaison of the scrum team towards stakeholders, product owners are responsible for communicating announcements, project definitions and progress, RIDAs (risks, impediments, dependencies, and assumptions), funding and scheduling changes, the product backlog, and project governance, among other responsibilities. Product owners can also cancel a sprint if necessary, without the input of team members. ### Developers In scrum, the term developer or team member refers to anyone who plays a role in the development and support of the product and can include researchers, architects, designers, programmers, etc. ### Scrum master Scrum is facilitated by a scrum master, whose role is to educate and coach teams about scrum theory and practice. Scrum masters have differing roles and responsibilities from traditional team leads or project managers. Some scrum master responsibilities include coaching, objective setting, problem solving, oversight, planning, backlog management, and communication facilitation. On the other hand, traditional project managers often have people management responsibilities, which a scrum master does not. Scrum teams do not involve project managers, so as to maximize self-organisation among developers. ## Workflow ### Sprint A sprint (also known as a design sprint, iteration, or timebox) is a fixed period of time wherein team members work on a specific goal. Each sprint is normally between one week and one month, with two weeks being the most common. The outcome of the sprint is a functional deliverable, or a product which has received some development in increments. When a sprint is abnormally terminated, the next step is to conduct new sprint planning, where the reason for the termination is reviewed. Each sprint starts with a sprint planning event in which a sprint goal is defined. Priorities for planned sprints are chosen out of the backlog. Each sprint ends with two events: - A sprint review (progress shown to stakeholders to elicit their feedback) - A sprint retrospective (identifying lessons and improvements for the next sprints) The suggested maximum duration of sprint planning is eight hours for a four-week sprint. ### Daily scrum Each day during a sprint, the developers hold a daily scrum (often conducted standing up) with specific guidelines, and which may be facilitated by a scrum master. Daily scrum meetings are intended to be less than 15 minutes in length, taking place at the same time and location daily. The purpose of the meeting is to announce progress made towards the sprint goal and issues that may be hindering the goal, without going into any detailed discussion. 
Once over, individual members can go into a 'breakout session' or an 'after party' for extended discussion and collaboration. Scrum masters are responsible for ensuring that team members use daily scrums effectively or, if team members are unable to use them, providing alternatives to achieve similar outcomes. ### Post-sprint events Conducted at the end of a sprint, a sprint review is a meeting that has a team share the work they've completed with stakeholders and liaise with them on feedback, expectations, and upcoming plans. At a sprint review completed deliverables are demonstrated to stakeholders. The recommended duration for a sprint review is one hour per week of sprint. A sprint retrospective is a separate meeting that allows team members to internally analyze the strengths and weaknesses of the sprint, future areas of improvement, and continuous process improvement actions. ### Backlog refinement Backlog refinement is a process by which team members revise and prioritize a backlog for future sprints. It can be done as a separate stage done before the beginning of a new sprint or as a continuous process that team members work on by themselves. Backlog refinement can include the breaking down of large tasks into smaller and clearer ones, the clarification of success criteria, and the revision of changing priorities and returns. ## Artifacts Artifacts are a means by which scrum teams manage product development by documenting work done towards the project. There are seven scrum artifacts, with three of them being the most common: product backlog, sprint backlog, and increment. ### Product backlog The product backlog is a breakdown of work to be done and contains an ordered list of product requirements (such as features, bug fixes and non-functional requirements) that the team maintains for a product. The order of a product backlog corresponds to the urgency of the task. Common formats for backlog items include user stories and use cases. The product backlog may also contain the product owner's assessment of business value and the team's assessment of the product's effort or complexity, which can be stated in story points using the rounded Fibonacci scale. These estimates try to help the product owner gauge the timeline and may influence the ordering of product backlog items. The product owner maintains and prioritizes product backlog items based on considerations such as risk, business value, dependencies, size, and timing. High-priority items at the top of the backlog are broken down into more detail for developers to work on, while tasks further down the backlog may be more vague. ### Sprint backlog The sprint backlog is the subset of items from the product backlog intended for developers to address in a particular sprint. Developers fill this backlog with tasks they deem appropriate to fill the sprint, using past performance to assess their capacity for each sprint. The scrum approach has tasks on the sprint backlog not assigned to developers by any particular individual or leader. Team members self organize by pulling work as needed according to the backlog priority and their own capabilities and capacity. ### Increment An increment is a potentially releasable output of a sprint, which meets the sprint goal. It is formed from all the completed sprint backlog items, integrated with the work of all previous sprints. ### Other artifacts #### Burndown chart Often used in scrum, a burndown chart is a publicly displayed chart showing remaining work. 
It provides quick visualizations for reference. The horizontal axis of the burndown chart shows the days remaining, while the vertical axis shows the amount of work remaining each day. During sprint planning, the ideal burndown chart is plotted. Then, during the sprint, developers update the chart with the remaining work. #### Release burnup chart Updated at the end of each sprint, the release burn-up chart shows progress towards delivering a forecast scope. The horizontal axis of the release burnup chart shows the sprints in a release, while the vertical axis shows the amount of work completed at the end of each sprint. #### Velocity Some project managers believe that a team's total capability effort for a single sprint can be derived by evaluating work completed in the last sprint. The collection of historical "velocity" data is a guideline for assisting the team in understanding their capacity. ## Limitations Some have argued that scrum events, such as daily scrums and scrum reviews, hurt productivity and waste time that could be better spent on actual productive tasks. Scrum has also been observed to pose difficulties for part-time or geographically distant teams; those that have highly specialized members who would be better off working by themselves or in working cliques; and those that are unsuitable for incremental and development testing. ## Adaptations Scrum is frequently tailored or adapted in different contexts to achieve varying aims. A common approach to adapting scrum is the combination of scrum with other software development methodologies, as scrum does not cover the whole product development lifecycle. Various scrum practitioners have also suggested more detailed techniques for how to apply or adapt scrum to particular problems or organizations. Many refer to these techniques as 'patterns', an analogous use to design patterns in architecture and software. ### Scrumban Scrumban is a software production model based on scrum and kanban. To illustrate each stage of work, teams working in the same space often use post-it notes or a large whiteboard. Kanban models allow a team to visualize work stages and limitations. ### Scrum of scrums Scrum of scrums is a technique to operate scrum at scale for multiple teams coordinating on the same product. Scrum-of-scrums daily scrum meetings involve ambassadors selected from each individual team, who may be either a developer or scrum master. As a tool for coordination, scrum of scrums allows teams to collectively work on team-wide risks, impediments, dependencies, and assumptions (RIDAs), which may be tracked in a backlog of their own. ### Large-scale scrum Large-scale scrum is an organizational system for product development that scales scrum with varied rules and guidelines, developed by Bas Vodde and Craig Larman. There are two levels to the framework: the first level, designed for up to eight teams; and the second level, known as 'LeSS Huge', which can accommodate development involving hundreds of developers. ## Criticism A systematic review found "that Distributed Scrum has no impact, positive or negative on overall project success" in distributed software development. 
Martin Fowler, one of the authors of the Manifesto for Agile Software Development, has criticized what he calls "faux-agile" practices that are "disregarding Agile's values and principles", and "the Agile Industrial Complex imposing methods upon people" contrary to the Agile principle of valuing "individuals and interactions over processes and tools" and allowing the individuals doing the work to decide how the work is done, changing processes to suit their needs. In September 2016, Ron Jeffries, a signatory to the Agile Manifesto, described what he called "Dark Scrum", saying that "Scrum can be very unsafe for programmers."
https://en.wikipedia.org/wiki/Scrum_%28software_development%29
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function 1. Formally, a linear-fractional program is defined as the problem of maximizing (or minimizing) a ratio of affine functions over a polyhedron, $$ \begin{align} \text{maximize} \quad & \frac{\mathbf{c}^T \mathbf{x} + \alpha}{\mathbf{d}^T \mathbf{x} + \beta} \\ \text{subject to} \quad & A\mathbf{x} \leq \mathbf{b}, \end{align} $$ where $$ \mathbf{x} \in \mathbb{R}^n $$ represents the vector of variables to be determined, $$ \mathbf{c}, \mathbf{d} \in \mathbb{R}^n $$ and $$ \mathbf{b} \in \mathbb{R}^m $$ are vectors of (known) coefficients, $$ A \in \mathbb{R}^{m \times n} $$ is a (known) matrix of coefficients and $$ \alpha, \beta \in \mathbb{R} $$ are constants. The constraints have to restrict the feasible region to $$ \{\mathbf{x} | \mathbf{d}^T\mathbf{x} + \beta > 0\} $$ , i.e. the region on which the denominator is positive. Alternatively, the denominator of the objective function has to be strictly negative in the entire feasible region. ## Motivation by comparison to linear programming Both linear programming and linear-fractional programming represent optimization problems using linear equations and linear inequalities, which for each problem-instance define a feasible set. Fractional linear programs have a richer set of objective functions. Informally, linear programming computes a policy delivering the best outcome, such as maximum profit or lowest cost. In contrast, a linear-fractional programming is used to achieve the highest ratio of outcome to cost, the ratio representing the highest efficiency. For example, in the context of LP we maximize the objective function profit = income − cost and might obtain maximum profit of $100 (= $1100 of income − $1000 of cost). Thus, in LP we have an efficiency of $100/$1000 = 0.1. Using LFP we might obtain an efficiency of $10/$50 = 0.2 with a profit of only $10, but only requiring $50 of investment. ## Transformation to a linear program Any linear-fractional program can be transformed into a linear program, assuming that the feasible region is non-empty and bounded, using the Charnes–Cooper transformation. The main idea is to introduce a new non-negative variable $$ t $$ to the program which will be used to rescale the constants involved in the program ( $$ \alpha, \beta, \mathbf{b} $$ ). This allows us to require that the denominator of the objective function ( $$ \mathbf{d}^T \mathbf{x} + \beta $$ ) equals 1. (To understand the transformation, it is instructive to consider the simpler special case with $$ \alpha = \beta = 0 $$ .) Formally, the linear program obtained via the Charnes–Cooper transformation uses the transformed variables $$ \mathbf{y} \in \mathbb{R}^n $$ and $$ t \ge 0 $$ : $$ \begin{align} \text{maximize} \quad & \mathbf{c}^T \mathbf{y} + \alpha t \\ \text{subject to} \quad & A\mathbf{y} \leq \mathbf{b} t \\ & \mathbf{d}^T \mathbf{y} + \beta t = 1 \\ & t \geq 0. 
\end{align} $$ A solution $$ \mathbf{x} $$ to the original linear-fractional program can be translated to a solution of the transformed linear program via the equalities $$ \mathbf{y} = \frac{1}{\mathbf{d}^T \mathbf{x} + \beta} \cdot \mathbf{x}\quad \text{and} \quad t = \frac{1}{\mathbf{d}^T \mathbf{x} + \beta}. $$ Conversely, a solution for $$ \mathbf{y} $$ and $$ t $$ of the transformed linear program can be translated to a solution of the original linear-fractional program via $$ \mathbf{x}=\frac{1}{t}\mathbf{y}. $$ ## Duality Let the dual variables associated with the constraints $$ A\mathbf{y} - \mathbf{b} t \leq \mathbf{0} $$ and $$ \mathbf{d}^T \mathbf{y} + \beta t - 1 = 0 $$ be denoted by $$ \mathbf{u} $$ and $$ \lambda $$ , respectively. Then the dual of the LFP above is $$ \begin{align} \text{minimize} \quad & \lambda \\ \text{subject to} \quad & A^T\mathbf{u} + \lambda \mathbf{d} = \mathbf{c} \\ & -\mathbf{b}^T \mathbf{u} + \lambda \beta \geq \alpha \\ & \mathbf{u} \in \mathbb{R}_+^m, \lambda \in \mathbb{R}, \end{align} $$ which is an LP and which coincides with the dual of the equivalent linear program resulting from the Charnes–Cooper transformation. ## Properties and algorithms The objective function in a linear-fractional problem is both quasiconcave and quasiconvex (hence quasilinear) with a monotone property, pseudoconvexity, which is a stronger property than quasiconvexity. A linear-fractional objective function is both pseudoconvex and pseudoconcave, hence pseudolinear. Since an LFP can be transformed to an LP, it can be solved using any LP solution method, such as the simplex algorithm (of George B. Dantzig), the criss-cross algorithm, or interior-point methods.
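As an added illustration of the Charnes–Cooper transformation described above (not part of the original article; the small problem instance and the use of SciPy's linprog are assumptions for the example), the following sketch solves the transformed linear program in the variables (y, t) and recovers x = y / t.

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (c.x + alpha) / (d.x + beta)  subject to  A x <= b,
# assuming the denominator is positive on the feasible region.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # x1 + x2 <= 10, x >= 0
b = np.array([10.0, 0.0, 0.0])
c = np.array([3.0, 1.0]);  alpha = 0.0
d = np.array([1.0, 2.0]);  beta  = 1.0

n = len(c)
# Charnes-Cooper: variables z = (y, t), maximize c.y + alpha*t
# subject to  A y - b t <= 0,  d.y + beta*t = 1,  t >= 0  (y is free).
obj    = -np.concatenate([c, [alpha]])                 # linprog minimizes
A_ub   = np.hstack([A, -b.reshape(-1, 1)])
b_ub   = np.zeros(A.shape[0])
A_eq   = np.concatenate([d, [beta]]).reshape(1, -1)
b_eq   = np.array([1.0])
bounds = [(None, None)] * n + [(0, None)]              # y free, t >= 0

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
y, t = res.x[:n], res.x[n]
x = y / t                                              # solution of the original LFP
# For this instance the optimum is x = (10, 0) with ratio 30/11.
print(x, (c @ x + alpha) / (d @ x + beta))
```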
https://en.wikipedia.org/wiki/Linear-fractional_programming
In computer science, counting sort is an algorithm for sorting a collection of objects according to keys that are small positive integers; that is, it is an integer sorting algorithm. It operates by counting the number of objects that possess distinct key values, and applying prefix sum on those counts to determine the positions of each key value in the output sequence. Its running time is linear in the number of items and the difference between the maximum key value and the minimum key value, so it is only suitable for direct use in situations where the variation in keys is not significantly greater than the number of items. It is often used as a subroutine in radix sort, another sorting algorithm, which can handle larger keys more efficiently. Counting sort is not a comparison sort; it uses key values as indexes into an array and the lower bound for comparison sorting will not apply. Bucket sort may be used in lieu of counting sort, and entails a similar time analysis. However, compared to counting sort, bucket sort requires linked lists, dynamic arrays, or a large amount of pre-allocated memory to hold the sets of items within each bucket, whereas counting sort stores a single number (the count of items) per bucket. ## Input and output assumptions In the most general case, the input to counting sort consists of a collection of n items, each of which has a non-negative integer key whose maximum value is at most k. In some descriptions of counting sort, the input to be sorted is assumed to be more simply a sequence of integers itself, but this simplification does not accommodate many applications of counting sort. For instance, when used as a subroutine in radix sort, the keys for each call to counting sort are individual digits of larger item keys; it would not suffice to return only a sorted list of the key digits, separated from the items. In applications such as in radix sort, a bound on the maximum key value will be known in advance, and can be assumed to be part of the input to the algorithm. However, if the value of k is not already known then it may be computed, as a first step, by an additional loop over the data to determine the maximum key value. The output is an array of the elements ordered by their keys. Because of its application to radix sorting, counting sort must be a stable sort; that is, if two elements share the same key, their relative order in the output array and their relative order in the input array should match. ## Pseudocode In pseudocode, the algorithm may be expressed as:

function CountingSort(input, k)
    count ← array of k + 1 zeros
    output ← array of same length as input
    for i = 0 to length(input) - 1 do
        j = key(input[i])
        count[j] = count[j] + 1
    for i = 1 to k do
        count[i] = count[i] + count[i - 1]
    for i = length(input) - 1 down to 0 do
        j = key(input[i])
        count[j] = count[j] - 1
        output[count[j]] = input[i]
    return output

Here `input` is the input array to be sorted, `key` returns the numeric key of each item in the input array, `count` is an auxiliary array used first to store the numbers of items with each key, and then (after the second loop) to store the positions where items with each key should be placed, `k` is the maximum value of the non-negative key values and `output` is the sorted output array.
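A direct Python transcription of this pseudocode is shown below (an illustrative sketch added here, not part of the original text; `key` is passed in as a function, defaulting to the identity for plain integer inputs).

```python
def counting_sort(items, k, key=lambda x: x):
    """Stable counting sort of items whose keys are integers in 0..k."""
    count = [0] * (k + 1)
    output = [None] * len(items)

    for item in items:                  # histogram of key occurrences
        count[key(item)] += 1

    for i in range(1, k + 1):           # prefix sums: end position of each key
        count[i] += count[i - 1]

    for item in reversed(items):        # place items, preserving stability
        count[key(item)] -= 1
        output[count[key(item)]] = item
    return output

print(counting_sort([3, 1, 4, 1, 5, 2], k=5))   # [1, 1, 2, 3, 4, 5]
```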
In summary, the algorithm loops over the items in the first loop, computing a histogram of the number of times each key occurs within the `input` collection. After that in the second loop, it performs a prefix sum computation on `count` in order to determine, for each key, the position range where the items having that key should be placed; i.e. items of key $$ i $$ should be placed starting in position `count[ $$ i $$ ]`. Finally, in the third loop, it loops over the items of `input` again, but in reverse order, moving each item into its sorted position in the `output` array. The relative order of items with equal keys is preserved here; i.e., this is a stable sort. ## Complexity analysis Because the algorithm uses only simple `for` loops, without recursion or subroutine calls, it is straightforward to analyze. The initialization of the count array, and the second for loop which performs a prefix sum on the count array, each iterate at most k + 1 times and therefore take O(k) time. The other two for loops, and the initialization of the output array, each take O(n) time. Therefore, the time for the whole algorithm is the sum of the times for these steps, O(n + k). Because it uses arrays of length k + 1 and n, the total space usage of the algorithm is also O(n + k). For problem instances in which the maximum key value is significantly smaller than the number of items, counting sort can be highly space-efficient, as the only storage it uses other than its input and output arrays is the Count array which uses space O(k). ## Variant algorithms If each item to be sorted is itself an integer, and used as key as well, then the second and third loops of counting sort can be combined; in the second loop, instead of computing the position where items with key `i` should be placed in the output, simply append `Count[i]` copies of the number `i` to the output. This algorithm may also be used to eliminate duplicate keys, by replacing the `Count` array with a bit vector that stores a `one` for a key that is present in the input and a `zero` for a key that is not present. If additionally the items are the integer keys themselves, both second and third loops can be omitted entirely and the bit vector will itself serve as output, representing the values as offsets of the non-`zero` entries, added to the range's lowest value. Thus the keys are sorted and the duplicates are eliminated in this variant just by being placed into the bit array. For data in which the maximum key size is significantly smaller than the number of data items, counting sort may be parallelized by splitting the input into subarrays of approximately equal size, processing each subarray in parallel to generate a separate count array for each subarray, and then merging the count arrays. When used as part of a parallel radix sort algorithm, the key size (base of the radix representation) should be chosen to match the size of the split subarrays. The simplicity of the counting sort algorithm and its use of the easily parallelizable prefix sum primitive also make it usable in more fine-grained parallel algorithms. As described, counting sort is not an in-place algorithm; even disregarding the count array, it needs separate input and output arrays. It is possible to modify the algorithm so that it places the items into sorted order within the same array that was given to it as the input, using only the count array as auxiliary storage; however, the modified in-place version of counting sort is not stable. ## History Although radix sorting itself dates back far longer, counting sort, and its application to radix sorting, were both invented by Harold H. Seward in 1954 (Section 5.2, Sorting by counting, pp. 75–80, and historical notes, p. 170).
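The integer-only variant described under "Variant algorithms" above can also be sketched briefly (an added illustration, not part of the original text): when the items are the integer keys themselves, the last two loops collapse into emitting count[i] copies of each key i.

```python
def counting_sort_ints(values, k):
    """Variant for plain integer inputs in 0..k: emit count[i] copies of i."""
    count = [0] * (k + 1)
    for v in values:
        count[v] += 1
    out = []
    for i, c in enumerate(count):
        out.extend([i] * c)
    return out

print(counting_sort_ints([3, 1, 4, 1, 5, 2], k=5))   # [1, 1, 2, 3, 4, 5]
```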
https://en.wikipedia.org/wiki/Counting_sort
Conversation threading is a feature used by many email clients, bulletin boards, newsgroups, and Internet forums in which the software aids the user by visually grouping messages with their replies. These groups are called a conversation, topic thread, or simply a thread. A discussion forum, e-mail client or news client is said to have a "conversation view", "threaded topics" or a "threaded mode" if messages can be grouped in this manner. An email thread is also sometimes called an email chain. Threads can be displayed in a variety of ways. Early messaging systems (and most modern email clients) will automatically include original message text in a reply, making each individual email into its own copy of the entire thread. Software may also arrange threads of messages within lists, such as an email inbox. These arrangements can be hierarchical or nested, arranging messages close to their replies in a tree, or they can be linear or flat, displaying all messages in chronological order regardless of reply relationships. Conversation threading as a form of interactive journalism became popular on Twitter from around 2016 onward, when authors such as Eric Garland and Seth Abramson began to post essays in real time, constructing them as a series of numbered tweets, each limited to 140 or 280 characters. ## Mechanism Internet email clients compliant with the RFC 822 standard (and its successor RFC 5322) add a unique message identifier in the Message-ID: header field of each message, e.g. Message-ID: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> If a user creates message B by replying to message A, the mail client will add the unique message ID of message A in the form of the fields In-Reply-To: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> and References: <xNCx2XP2qgUc9Qd2uR99iHsiAaJfVoqj91ocj3tdWT@wikimedia.org> to the header of reply B. RFC 5322 defines the following algorithm for populating these fields: The "In-Reply-To:" field will contain the contents of the "Message-ID:" field of the message to which this one is a reply (the "parent message"). If there is more than one parent message, then the "In-Reply-To:" field will contain the contents of all of the parents' "Message-ID:" fields. If there is no "Message-ID:" field in any of the parent messages, then the new message will have no "In- Reply-To:" field. The "References:" field will contain the contents of the parent's "References:" field (if any) followed by the contents of the parent's "Message-ID:" field (if any). If the parent message does not contain a "References:" field but does have an "In-Reply-To:" field containing a single message identifier, then the "References:" field will contain the contents of the parent's "In-Reply-To:" field followed by the contents of the parent's "Message-ID:" field (if any). If the parent has none of the "References:", "In-Reply-To:", or "Message-ID:" fields, then the new message will have no "References:" field. Modern email clients then can use the unique message identifiers found in the RFC 822 Message-ID, In-Reply-To: and References: fields of all received email headers to locate the parent and root message in the hierarchy, reconstruct the chain of reply-to actions that created them, and display them as a discussion tree. The purpose of the References: field is to enable reconstruction of the discussion tree even if some replies in it are missing. 
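A minimal sketch of this reconstruction is given below (an added illustration, not part of the original text; messages are represented as plain dictionaries rather than parsed RFC 5322 headers, and the field layout is an assumption of the example). It groups messages into trees using only the Message-ID, In-Reply-To and References identifiers, falling back on References when the direct parent is missing.

```python
def build_threads(messages):
    """messages: dicts with 'Message-ID' and optional 'In-Reply-To'/'References' lists.
    Returns (children, roots): a parent -> replies map and the thread roots."""
    by_id = {m["Message-ID"]: m for m in messages}
    children, roots = {}, []

    for m in messages:
        # Candidate parents: In-Reply-To first, then References (nearest ancestor last).
        refs = m.get("In-Reply-To", []) + list(reversed(m.get("References", [])))
        parent = next((r for r in refs if r in by_id), None)
        if parent is None:
            roots.append(m)                       # orphan or genuine thread root
        else:
            children.setdefault(parent, []).append(m)
    return children, roots

msgs = [
    {"Message-ID": "<a@example.org>"},
    {"Message-ID": "<b@example.org>", "In-Reply-To": ["<a@example.org>"],
     "References": ["<a@example.org>"]},
    {"Message-ID": "<c@example.org>", "References": ["<a@example.org>", "<b@example.org>"]},
]
children, roots = build_threads(msgs)
print([r["Message-ID"] for r in roots])                        # ['<a@example.org>']
print([m["Message-ID"] for m in children["<b@example.org>"]])  # ['<c@example.org>']
```

The fallback to the References list is what allows a partial thread to be reconstructed even when the direct parent message is missing, as noted above.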
## Advantages ### Elimination of turn-taking and time constraints Threaded discussions allow readers to quickly grasp the overall structure of a conversation, isolate specific points of conversations nested within the threads, and as a result, post new messages to extend discussions in any existing thread or sub-thread without time constraints. With linear threads on the other hand, once the topic shifts to a new point of discussion, users are: 1) less inclined to make posts to revisit and expand on earlier points of discussion in order to avoid fragmenting the linear conversation similar to what occurs with turn-taking in face-to-face conversations; and/or 2) obligated to make a motion to stay on topic or move to change the topic of discussion. Given this advantage, threaded discussion is most useful for facilitating extended conversations or debates involving complex multi-step tasks (e.g., identify major premises → challenge veracity → share evidence → question accuracy, validity, or relevance of presented evidence) – as often found in newsgroups and complicated email chains – as opposed to simple single-step tasks (e.g., posting or share answers to a simple question). ### Message targeting Email allows messages to be targeted at particular members of the audience by using the "To" and "CC" lines. However, some message systems do not have this option. As a result, it can be difficult to determine the intended recipient of a particular message. When messages are displayed hierarchically, it is easier to visually identify the author of the previous message. ### Eliminating list clutter It can be difficult to process, analyze, evaluate, synthesize, and integrate important information when viewing large lists of messages. Grouping messages by thread makes the process of reviewing large numbers of messages in context to a given discussion topic more time efficient and with less mental effort, thus making more time and mental resources available to further extend and advance discussions within each individual topic/thread. In group forums, allowing users to reply to threads will reduce the number of new posts shown in the list. Some clients allow operations on entire threads of messages. For example, the text-based newsreader nn has a "kill" function which automatically deletes incoming messages based on the rules set up by the user matching the message's subject or author. This can dramatically reduce the number of messages one has to manually check and delete. ### Real-time feedback When an author, usually a journalist, posts threads via Twitter, users are able to respond to each 140- or 280-character tweet in the thread, often before the author posts the next message. This allows the author the option of including the feedback as part of subsequent messages. ## Disadvantages ### Reliability Accurate threading of messages requires the email software to identify messages that are replies to other messages. Some algorithms used for this purpose can be unreliable. For example, email clients that use the subject line to relate messages can be fooled by two unrelated messages that happen to have the same subject line. Modern email clients use unique identifiers in email headers to locate the parent and root message in the hierarchy. When non-compliant clients participate in discussions, they can confuse message threading as it depends on all clients respecting these optional mail standards when composing replies to messages. 
### Individual message control Messages within a thread do not always provide the user with the same options as individual messages. For example, it may not be possible to move, star, reply to, archive, or delete individual messages that are contained within a thread. The lack of individual message control can prevent messaging systems from being used as to-do lists (a common function of email folders). Individual messages that contain information relevant to a to-do item can easily get lost in a long thread of messages. ### Parallel discussions With conversational threading, it is much easier to reply to individual messages that are not the most recent message in the thread. As a result, multiple threads of discussions often occur in parallel. Following, revisiting, and participating in parallel discussions at the same time can be mentally challenging. Following parallel discussions can be particularly disorienting and can inhibit discussions when discussion threads are not organized in a coherent, conceptual, or logical structure (e.g., threads presenting arguments in support of a given claim under debate intermingled with threads presenting arguments in opposition to the claim). ### Temporal fragmentation Thread fragmentation can be particularly problematic for systems that allow users to choose different display modes (hierarchical vs. linear). Users of the hierarchical display mode will reply to older messages, confusing users of the linear display mode. ## Examples The following email clients, forums, bbs, newsgroups, image/text boards, and social networks can group and display messages by thread. ### Client-based - Apple Mail - Emacs Gnus - FastMail - Forte Agent - Gmail - Mailbird - Microsoft Outlook - Mozilla Thunderbird - Mutt - Pan - Protonmail - slrn ### Web-based - 4chan - Discourse - FastMail - Gmail - Hacker News - MSN Groups - Protonmail - Reddit - Roundcube - Slashdot - Yahoo! Groups - Zulip
https://en.wikipedia.org/wiki/Thread_%28online_communication%29
In machine learning, a deep belief network (DBN) is a generative graphical model, or alternatively a class of deep neural network, composed of multiple layers of latent variables ("hidden units"), with connections between the layers but not between units within each layer. When trained on a set of examples without supervision, a DBN can learn to probabilistically reconstruct its inputs. The layers then act as feature detectors. After this learning step, a DBN can be further trained with supervision to perform classification. DBNs can be viewed as a composition of simple, unsupervised networks such as restricted Boltzmann machines (RBMs) or autoencoders, where each sub-network's hidden layer serves as the visible layer for the next. An RBM is an undirected, generative energy-based model with a "visible" input layer and a hidden layer and connections between but not within layers. This composition leads to a fast, layer-by-layer unsupervised training procedure, where contrastive divergence is applied to each sub-network in turn, starting from the "lowest" pair of layers (the lowest visible layer is a training set). The observation that DBNs can be trained greedily, one layer at a time, led to one of the first effective deep learning algorithms. Overall, there are many attractive implementations and uses of DBNs in real-life applications and scenarios (e.g., electroencephalography, drug discovery). ## Training The training method for RBMs proposed by Geoffrey Hinton for use with training "Product of Experts" models is called contrastive divergence (CD). CD provides an approximation to the maximum likelihood method that would ideally be applied for learning the weights. In training a single RBM, weight updates are performed with gradient descent via the following equation: $$ w_{ij}(t+1) = w_{ij}(t) + \eta\frac{\partial \log(p(v))}{\partial w_{ij}} $$ where, $$ p(v) $$ is the probability of a visible vector, which is given by $$ p(v) = \frac{1}{Z}\sum_he^{-E(v,h)} $$ . $$ Z $$ is the partition function (used for normalizing) and $$ E(v,h) $$ is the energy function assigned to the state of the network. A lower energy indicates the network is in a more "desirable" configuration. The gradient $$ \frac{\partial \log(p(v))}{\partial w_{ij}} $$ has the simple form $$ \langle v_ih_j\rangle_\text{data} - \langle v_ih_j\rangle_\text{model} $$ where $$ \langle\cdots\rangle_p $$ represent averages with respect to distribution $$ p $$ . The issue arises in sampling $$ \langle v_ih_j\rangle_\text{model} $$ because this requires extended alternating Gibbs sampling. CD replaces this step by running alternating Gibbs sampling for $$ n $$ steps (values of $$ n = 1 $$ perform well). After $$ n $$ steps, the data are sampled and that sample is used in place of $$ \langle v_ih_j\rangle_\text{model} $$ . The CD procedure works as follows: 1. Initialize the visible units to a training vector. 1. Update the hidden units in parallel given the visible units: $$ p(h_j = 1 \mid \textbf{V}) = \sigma(b_j + \sum_i v_iw_{ij}) $$ . $$ \sigma $$ is the sigmoid function and $$ b_j $$ is the bias of $$ h_j $$ . 1. Update the visible units in parallel given the hidden units: $$ p(v_i = 1 \mid \textbf{H}) = \sigma(a_i + \sum_j h_jw_{ij}) $$ . $$ a_i $$ is the bias of $$ v_i $$ . This is called the "reconstruction" step. 1. Re-update the hidden units in parallel given the reconstructed visible units using the same equation as in step 2. 1. 
Perform the weight update: $$ \Delta w_{ij} \propto \langle v_ih_j\rangle_\text{data} - \langle v_ih_j\rangle_\text{reconstruction} $$ . Once an RBM is trained, another RBM is "stacked" atop it, taking its input from the final trained layer. The new visible layer is initialized to a training vector, and values for the units in the already-trained layers are assigned using the current weights and biases. The new RBM is then trained with the procedure above. This whole process is repeated until the desired stopping criterion is met. Although the approximation of CD to maximum likelihood is crude (does not follow the gradient of any function), it is empirically effective.
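The CD procedure above can be sketched for a single binary RBM in a few lines of NumPy (an illustrative toy added here, not a reference implementation; the layer sizes, the learning rate, and the use of hidden probabilities rather than binary samples in the final update are assumptions or common practical choices).

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, eta = 6, 3, 0.1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
a = np.zeros(n_visible)          # visible biases
b = np.zeros(n_hidden)           # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """One CD-1 weight update for a single training vector v0 (steps 1-5 above)."""
    p_h0 = sigmoid(b + v0 @ W)                         # step 2: hidden given visible
    h0 = (rng.random(n_hidden) < p_h0).astype(float)   # Bernoulli sample
    p_v1 = sigmoid(a + h0 @ W.T)                       # step 3: reconstruction
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(b + v1 @ W)                         # step 4: re-update hidden
    # step 5: <v h>_data - <v h>_reconstruction (probabilities used for the hidden factors)
    return eta * (np.outer(v0, p_h0) - np.outer(v1, p_h1))

v0 = np.array([1., 0., 1., 1., 0., 0.])                # a toy training vector
W += cd1_update(v0)
```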
https://en.wikipedia.org/wiki/Deep_belief_network
A dark galaxy is a hypothesized galaxy with no (or very few) stars. They received their name because they have no visible stars but may be detectable if they contain significant amounts of gas. Astronomers have long theorized the existence of dark galaxies, but there are no confirmed examples to date. Dark galaxies are distinct from intergalactic gas clouds caused by galactic tidal interactions, since these gas clouds do not contain dark matter, so they do not technically qualify as galaxies. Distinguishing between intergalactic gas clouds and galaxies is difficult; most candidate dark galaxies turn out to be tidal gas clouds. The best candidate dark galaxies as of 2025 include HI1225+01, AGC229385, AC G185.0-11.5 (the most recent and promising candidate thus far), and numerous gas clouds detected in studies of quasars. On 25 August 2016, astronomers reported that Dragonfly 44, an ultra diffuse galaxy (UDG) with the mass of the Milky Way galaxy, but with nearly no discernible stars or galactic structure, is made almost entirely of dark matter.

## Observational evidence

Large surveys with sensitive but low-resolution radio telescopes like Arecibo or the Parkes Telescope look for 21-cm emission from atomic hydrogen in galaxies. These surveys are then matched to optical surveys to identify any objects with no optical counterpart—that is, sources with no stars. Another way astronomers search for dark galaxies is to look for hydrogen absorption lines in the spectra of background quasars. This technique has revealed many intergalactic clouds of hydrogen, but following up on candidate dark galaxies is difficult, since these sources tend to be too far away and are often optically drowned out by the bright light from the quasars.

## Nature of dark galaxies

### Origin

In 2005, astronomers discovered the gas cloud VIRGOHI21 and attempted to determine what it was and why it exerted such a massive gravitational pull on galaxy NGC 4254. After years of ruling out other possible explanations, some have concluded that VIRGOHI21 is a dark galaxy.

### Size

The actual size of dark galaxies is unknown because they cannot be observed with normal telescopes. There have been various estimations, ranging from double the size of the Milky Way to the size of a small quasar.

### Structure

Dark galaxies are theoretically composed of dark matter, hydrogen, and dust. Some scientists support the idea that dark galaxies may contain stars. Yet the exact composition of dark galaxies remains unknown because there is no conclusive way to identify them. Nevertheless, astronomers estimate that the mass of the gas in these galaxies is approximately one billion times that of the Sun.

### Methodology to observe dark bodies

Dark galaxies contain no visible stars and are invisible to optical telescopes. The Arecibo Galaxy Environment Survey (AGES) harnessed the Arecibo radio telescope to search for dark galaxies, which are predicted to contain detectable amounts of neutral hydrogen. The Arecibo radio telescope was useful where others are not because of its ability to detect the emission from this neutral hydrogen, specifically the 21-cm line.

### Alternative theories

Scientists suggest that the galaxies we see today only began to form stars after dark galaxies had already formed. On this view, dark galaxies played a major role in building many of the galaxies astronomers observe today.
Martin Haehnel, from the Kavli Institute for Cosmology at the University of Cambridge, claims that the precursor to the Milky Way galaxy was actually a much smaller bright galaxy that had merged with dark galaxies nearby to form the Milky Way we currently see. Multiple scientists agree that dark galaxies are building blocks of modern galaxies. Sebastian Cantalupo of the University of California, Santa Cruz, agrees with this theory. He goes on to say, "In our current theory of galaxy formation, we believe that big galaxies form from the merger of smaller galaxies. Dark galaxies bring to big galaxies a lot of gas, which then accelerates star formation in the bigger galaxies." Scientists have specific techniques they use to locate these dark galaxies. These techniques can also reveal more about other phenomena in the universe, such as the cosmic web. This "web" is made of invisible filaments of gas and dark matter believed to permeate the universe, as well as "feeding and building galaxies and galaxy clusters where the filaments intersect."

## Potential dark galaxies

### FAST J0139+4328

Located 94 million light years away from Earth, this galaxy is visible in radio waves but emits minimal visible light.

### HE0450-2958

HE0450-2958 is a quasar at redshift z=0.285. Hubble Space Telescope images showed that the quasar is located at the edge of a large cloud of gas, but no host galaxy was detected for the quasar. The authors of the Hubble study suggested that one possible scenario was that the quasar is located in a dark galaxy. However, subsequent analysis by other groups found no evidence that the host galaxy is anomalously dark, and demonstrated that a normal host galaxy is probably present, so the observations do not support the dark galaxy interpretation.

### HVC 127-41-330

HVC 127-41-330 is a cloud rotating at high speed between Andromeda and the Triangulum Galaxy. Astronomer Josh Simon considers this cloud to be a dark galaxy because of the speed of its rotation and its predicted mass.

### J0613+52

J0613+52 is a possible dark galaxy, discovered with the Green Bank Telescope when it was accidentally pointed to the wrong coordinates. Stars could possibly exist within it, but none had been observed as of January 2024.

### Nube

Nube was discovered in 2023 by analyzing deep optical imagery of an area in Stripe 82. Due to its low surface brightness, Nube is classified as an "almost dark galaxy."

### Smith's Cloud

Smith's Cloud is a candidate to be a dark galaxy, due to its projected mass and survival of encounters with the Milky Way.

### VIRGOHI21

Initially discovered in 2000, VIRGOHI21 was announced in February 2005 as a good candidate to be a true dark galaxy. It was detected in 21-cm surveys, and was suspected to be a possible cosmic partner to the galaxy NGC 4254. This unusual-looking galaxy appears to be one partner in a cosmic collision, and appeared to show dynamics consistent with a dark galaxy (and apparently inconsistent with the predictions of the Modified Newtonian Dynamics (MOND) theory). However, further observations revealed that VIRGOHI21 was an intergalactic gas cloud, stripped from NGC 4254 by a high-speed collision. The high-speed interaction was caused by infall into the Virgo cluster.
https://en.wikipedia.org/wiki/Dark_galaxy
In object-oriented programming, "immutable interface" is a pattern for designing an immutable object. The immutable interface pattern involves defining a type which does not provide any methods which mutate state. Objects which are referenced by that type are not seen to have any mutable state, and appear immutable.

## Example

### Java

Consider a Java class which represents a two-dimensional point.

```java
public class Point2D {
    private int x;
    private int y;

    public Point2D(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return this.x; }
    public int getY() { return this.y; }

    public void setX(int newX) { this.x = newX; }
    public void setY(int newY) { this.y = newY; }
}
```

The class Point2D is mutable: its state can be changed after construction, by invoking either of the setter methods (`setX()` or `setY()`). An immutable interface for Point2D could be defined as:

```java
public interface ImmutablePoint2D {
    public int getX();
    public int getY();
}
```

By making Point2D implement ImmutablePoint2D, client code can now reference a type which does not have mutating methods, and thus appears immutable. This is demonstrated in the following example:

```java
// a concrete instance of Point2D (assumed to implement ImmutablePoint2D) is referenced by the immutable interface
ImmutablePoint2D point = new Point2D(0, 0);
int x = point.getX(); // valid method call
point.setX(42);       // compile error: the method setX() does not exist on type ImmutablePoint2D
```

By referencing only the immutable interface, it is not valid to call a method which mutates the state of the concrete object.

## Advantages

- Clearly communicates the immutable intent of the type.
- Unlike types implementing the Immutable Wrapper pattern, does not need to "cancel out" mutating methods by issuing a "No Operation" instruction, or throwing a runtime exception when a mutating method is invoked.

## Disadvantages

- It is possible for instances referenced by the immutable interface type to be cast to their concrete, mutable type, and have their state mutated. For example:

```java
public void mutate(ImmutablePoint2D point) {
    ((Point2D) point).setX(42); // this call is legal, since the type has
                                // been converted to the mutable Point2D class
}
```

- Concrete classes have to explicitly declare they implement the immutable interface. This may not be possible if the concrete class "belongs to" third-party code, for instance, if it is contained within a library.
- The object is not really immutable and hence not suitable for use in data structures relying on immutability like hash maps. And the object could be modified concurrently from the "mutable side".
- Some compiler optimizations available for immutable objects might not be available for mutable objects.

## Alternatives

An alternative to the immutable interface pattern is the immutable wrapper pattern. Persistent data structures are effectively immutable while allowing modified views of themselves.

Category:Object-oriented programming Category:Articles with example Java code
https://en.wikipedia.org/wiki/Immutable_interface
Classification is the activity of assigning objects to some pre-existing classes or categories. This is distinct from the task of establishing the classes themselves (for example through cluster analysis). Examples include diagnostic tests, identifying spam emails and deciding whether to give someone a driving license. As well as 'category', synonyms or near-synonyms for 'class' include 'type', 'species', 'order', 'concept', 'taxon', 'group', 'identification' and 'division'. The meaning of the word 'classification' (and its synonyms) may take on one of several related meanings. It may encompass both classification and the creation of classes, as for example in 'the task of categorizing pages in Wikipedia'; this overall activity is listed under taxonomy. It may refer exclusively to the underlying scheme of classes (which otherwise may be called a taxonomy). Or it may refer to the label given to an object by the classifier. Classification is a part of many different kinds of activities and is studied from many different points of view including medicine, philosophy, law, anthropology, biology, taxonomy, cognition, communications, knowledge organization, psychology, statistics, machine learning, economics and mathematics. ## Binary vs multi-class classification Methodological work aimed at improving the accuracy of a classifier is commonly divided between cases where there are exactly two classes (binary classification) and cases where there are three or more classes (multiclass classification). ## Evaluation of accuracy Unlike in decision theory, it is assumed that a classifier repeats the classification task over and over. And unlike a lottery, it is assumed that each classification can be either right or wrong; in the theory of measurement, classification is understood as measurement against a nominal scale. Thus it is possible to try to measure the accuracy of a classifier. Measuring the accuracy of a classifier allows a choice to be made between two alternative classifiers. This is important both when developing a classifier and in choosing which classifier to deploy. There are however many different methods for evaluating the accuracy of a classifier and no general method for determining which method should be used in which circumstances. Different fields have taken different approaches, even in binary classification. In pattern recognition, error rate is popular. The Gini coefficient and KS statistic are widely used in the credit scoring industry. Sensitivity and specificity are widely used in epidemiology and medicine. Precision and recall are widely used in information retrieval. Classifier accuracy depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems (a phenomenon that may be explained by the no-free-lunch theorem).
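As a concrete illustration of the accuracy measures mentioned above, the following sketch computes error rate, sensitivity, specificity, precision and recall for a binary classifier from raw label sequences. The function name and the toy data are invented for illustration and do not come from any particular library.

```python
def binary_metrics(y_true, y_pred):
    """Compute common binary-classification accuracy measures.

    y_true, y_pred: equal-length sequences of 0/1 labels (1 = positive class).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "error rate":  (fp + fn) / len(y_true),             # popular in pattern recognition
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # used in epidemiology and medicine
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "precision":   tp / (tp + fp) if tp + fp else 0.0,  # used in information retrieval
        "recall":      tp / (tp + fn) if tp + fn else 0.0,  # same quantity as sensitivity
    }

# Toy example: ten classifications, each either right or wrong.
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
print(binary_metrics(y_true, y_pred))
```

Which of these numbers matters most depends on the field and on the costs of the two kinds of error, which is exactly why no single evaluation method dominates.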
https://en.wikipedia.org/wiki/Classification
In computer science, a linked list is a linear collection of data elements whose order is not given by their physical placement in memory. Instead, each element points to the next. It is a data structure consisting of a collection of nodes which together represent a sequence. In its most basic form, each node contains data, and a reference (in other words, a link) to the next node in the sequence. This structure allows for efficient insertion or removal of elements from any position in the sequence during iteration. More complex variants add additional links, allowing more efficient insertion or removal of nodes at arbitrary positions. A drawback of linked lists is that data access time is linear in respect to the number of nodes in the list. Because nodes are serially linked, accessing any node requires that the prior node be accessed beforehand (which introduces difficulties in pipelining). Faster access, such as random access, is not feasible. Arrays have better cache locality compared to linked lists. Linked lists are among the simplest and most common data structures. They can be used to implement several other common abstract data types, including lists, stacks, queues, associative arrays, and S-expressions, though it is not uncommon to implement those data structures directly without using a linked list as the basis. The principal benefit of a linked list over a conventional array is that the list elements can be easily inserted or removed without reallocation or reorganization of the entire structure because the data items do not need to be stored contiguously in memory or on disk, while restructuring an array at run-time is a much more expensive operation. Linked lists allow insertion and removal of nodes at any point in the list, and allow doing so with a constant number of operations by keeping the link previous to the link being added or removed in memory during list traversal. On the other hand, since simple linked lists by themselves do not allow random access to the data or any form of efficient indexing, many basic operations—such as obtaining the last node of the list, finding a node that contains a given datum, or locating the place where a new node should be inserted—may require iterating through most or all of the list elements. ## History Linked lists were developed in 1955–1956, by Allen Newell, Cliff Shaw and Herbert A. Simon at RAND Corporation and Carnegie Mellon University as the primary data structure for their Information Processing Language (IPL). IPL was used by the authors to develop several early artificial intelligence programs, including the Logic Theory Machine, the General Problem Solver, and a computer chess program. Reports on their work appeared in IRE Transactions on Information Theory in 1956, and several conference proceedings from 1957 to 1959, including Proceedings of the Western Joint Computer Conference in 1957 and 1958, and Information Processing (Proceedings of the first UNESCO International Conference on Information Processing) in 1959. The now-classic diagram consisting of blocks representing list nodes with arrows pointing to successive list nodes appears in "Programming the Logic Theory Machine" by Newell and Shaw in Proc. WJCC, February 1957. Newell and Simon were recognized with the ACM Turing Award in 1975 for having "made basic contributions to artificial intelligence, the psychology of human cognition, and list processing". 
The problem of machine translation for natural language processing led Victor Yngve at Massachusetts Institute of Technology (MIT) to use linked lists as data structures in his COMIT programming language for computer research in the field of linguistics. A report on this language entitled "A programming language for mechanical translation" appeared in Mechanical Translation in 1958. Another early appearance of linked lists was by Hans Peter Luhn who wrote an internal IBM memorandum in January 1953 that suggested the use of linked lists in chained hash tables. LISP, standing for list processor, was created by John McCarthy in 1958 while he was at MIT and in 1960 he published its design in a paper in the Communications of the ACM, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I". One of LISP's major data structures is the linked list. By the early 1960s, the utility of both linked lists and languages which use these structures as their primary data representation was well established. Bert Green of the MIT Lincoln Laboratory published a review article entitled "Computer languages for symbol manipulation" in IRE Transactions on Human Factors in Electronics in March 1961 which summarized the advantages of the linked list approach. A later review article, "A Comparison of list-processing computer languages" by Bobrow and Raphael, appeared in Communications of the ACM in April 1964. Several operating systems developed by Technical Systems Consultants (originally of West Lafayette Indiana, and later of Chapel Hill, North Carolina) used singly linked lists as file structures. A directory entry pointed to the first sector of a file, and succeeding portions of the file were located by traversing pointers. Systems using this technique included Flex (for the Motorola 6800 CPU), mini-Flex (same CPU), and Flex9 (for the Motorola 6809 CPU). A variant developed by TSC for and marketed by Smoke Signal Broadcasting in California, used doubly linked lists in the same manner. The TSS/360 operating system, developed by IBM for the System 360/370 machines, used a double linked list for their file system catalog. The directory structure was similar to Unix, where a directory could contain files and other directories and extend to any depth. ## Basic concepts and nomenclature Each record of a linked list is often called an 'element' or 'node'. The field of each node that contains the address of the next node is usually called the 'next link' or 'next pointer'. The remaining fields are known as the 'data', 'information', 'value', 'cargo', or 'payload' fields. The 'head' of a list is its first node. The 'tail' of a list may refer either to the rest of the list after the head, or to the last node in the list. In Lisp and some derived languages, the next node may be called the 'cdr' (pronounced ) of the list, while the payload of the head node may be called the 'car'. ### Singly linked list #### Singly linked lists contain nodes which have a 'value' field as well as 'next' field, which points to the next node in line of nodes. Operations that can be performed on singly linked lists include insertion, deletion and traversal. The following C language code demonstrates how to add a new node with the "value" to the end of a singly linked list: ```c // Each node in a linked list is a structure. The head node is the first node in the list. 
#include <stdlib.h> // for malloc and NULL

// The node structure assumed by the function below: a value and a link to the next node.
typedef struct Node {
    int value;
    struct Node *next;
} Node;

// Add a new node holding "value" to the end of the list and return the (possibly updated) head pointer.
Node *addNodeToTail(Node *head, int value) {
    // Declare a Node pointer and initialize it to point to the new Node being added to the end of the list.
    Node *temp = malloc(sizeof *temp); // 'malloc' is declared in stdlib.h.
    temp->value = value;               // Add data to the value field of the new Node.
    temp->next = NULL;                 // Initialize the (not yet valid) link to NULL.

    if (head == NULL) {
        head = temp; // If the list is empty (the head pointer is null), the new Node becomes the head.
    } else {
        Node *p = head;        // Start from the head of the list.
        while (p->next != NULL) {
            p = p->next;       // Traverse the list until p is the last Node, which always points to NULL.
        }
        p->next = temp;        // Make the previously last Node point to the new Node.
    }
    return head; // Return the head node pointer.
}
```

### Doubly linked list

In a 'doubly linked list', each node contains, besides the next-node link, a second link field pointing to the 'previous' node in the sequence. The two links may be called 'forward' and 'backward', or 'next' and 'prev' ('previous'). A technique known as XOR-linking allows a doubly linked list to be implemented using a single link field in each node. However, this technique requires the ability to do bit operations on addresses, and therefore may not be available in some high-level languages. Many modern operating systems use doubly linked lists to maintain references to active processes, threads, and other dynamic objects. A common strategy for rootkits to evade detection is to unlink themselves from these lists.

### Multiply linked list

In a 'multiply linked list', each node contains two or more link fields, each field being used to connect the same set of data arranged in a different order (e.g., by name, by department, by date of birth, etc.). While a doubly linked list can be seen as a special case of a multiply linked list, the fact that its two orders are opposite to each other leads to simpler and more efficient algorithms, so it is usually treated as a separate case.

### Circular linked list

In the last node of a linked list, the link field often contains a null reference, a special value used to indicate the lack of further nodes. A less common convention is to make it point to the first node of the list; in that case, the list is said to be 'circular' or 'circularly linked'; otherwise, it is said to be 'open' or 'linear'. A circular list is one in which the last node's pointer points to the first node (i.e., the "next link" pointer of the last node has the memory address of the first node). In the case of a circular doubly linked list, the first node also points to the last node of the list.

### Sentinel nodes

In some implementations an extra 'sentinel' or 'dummy' node may be added before the first data record or after the last one. This convention simplifies and accelerates some list-handling algorithms, by ensuring that all links can be safely dereferenced and that every list (even one that contains no data elements) always has a "first" and "last" node.

### Empty lists

An empty list is a list that contains no data records. This is usually the same as saying that it has zero nodes. If sentinel nodes are being used, the list is usually said to be empty when it has only sentinel nodes.

### Hash linking

The link fields need not be physically part of the nodes.
If the data records are stored in an array and referenced by their indices, the link field may be stored in a separate array with the same indices as the data records. ### List handles Since a reference to the first node gives access to the whole list, that reference is often called the 'address', 'pointer', or 'handle' of the list. #### Algorithms that manipulate linked lists usually get such handles to the input lists and return the handles to the resulting lists. In fact, in the context of such algorithms, the word "list" often means "list handle". In some situations, however, it may be convenient to refer to a list by a handle that consists of two links, pointing to its first and last nodes. ### Combining alternatives The alternatives listed above may be arbitrarily combined in almost every way, so one may have circular doubly linked lists without sentinels, circular singly linked lists with sentinels, etc. ## Tradeoffs As with most choices in computer programming and design, no method is well suited to all circumstances. A linked list data structure might work well in one case, but cause problems in another. This is a list of some of the common tradeoffs involving linked list structures. ### Linked lists vs. dynamic arrays A dynamic array is a data structure that allocates all elements contiguously in memory, and keeps a count of the current number of elements. If the space reserved for the dynamic array is exceeded, it is reallocated and (possibly) copied, which is an expensive operation. Linked lists have several advantages over dynamic arrays. Insertion or deletion of an element at a specific point of a list, assuming that a pointer is indexed to the node (before the one to be removed, or before the insertion point) already, is a constant-time operation (otherwise without this reference it is O(n)), whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration. Moreover, arbitrarily many elements may be inserted into a linked list, limited only by the total memory available; while a dynamic array will eventually fill up its underlying array data structure and will have to reallocate—an expensive operation, one that may not even be possible if memory is fragmented, although the cost of reallocation can be averaged over insertions, and the cost of an insertion due to reallocation would still be amortized O(1). This helps with appending elements at the array's end, but inserting into (or removing from) middle positions still carries prohibitive costs due to data moving to maintain contiguity. An array from which many elements are removed may also have to be resized in order to avoid wasting too much space. On the other hand, dynamic arrays (as well as fixed-size array data structures) allow constant-time random access, while linked lists allow only sequential access to elements. Singly linked lists, in fact, can be easily traversed in only one direction. This makes linked lists unsuitable for applications where it's useful to look up an element by its index quickly, such as heapsort. Sequential access on arrays and dynamic arrays is also faster than on linked lists on many machines, because they have optimal locality of reference and thus make good use of data caching. 
Another disadvantage of linked lists is the extra storage needed for references, which often makes them impractical for lists of small data items such as characters or Boolean values, because the storage overhead for the links may exceed by a factor of two or more the size of the data. In contrast, a dynamic array requires only the space for the data itself (and a very small amount of control data). It can also be slow, and with a naïve allocator, wasteful, to allocate memory separately for each new element, a problem generally solved using memory pools. Some hybrid solutions try to combine the advantages of the two representations. Unrolled linked lists store several elements in each list node, increasing cache performance while decreasing memory overhead for references. CDR coding does both these as well, by replacing references with the actual data referenced, which extends off the end of the referencing record. A good example that highlights the pros and cons of using dynamic arrays vs. linked lists is by implementing a program that resolves the Josephus problem. The Josephus problem is an election method that works by having a group of people stand in a circle. Starting at a predetermined person, one may count around the circle n times. Once the nth person is reached, one should remove them from the circle and have the members close the circle. The process is repeated until only one person is left. That person wins the election. This shows the strengths and weaknesses of a linked list vs. a dynamic array, because if the people are viewed as connected nodes in a circular linked list, then it shows how easily the linked list is able to delete nodes (as it only has to rearrange the links to the different nodes). However, the linked list will be poor at finding the next person to remove and will need to search through the list until it finds that person. A dynamic array, on the other hand, will be poor at deleting nodes (or elements) as it cannot remove one node without individually shifting all the elements up the list by one. However, it is exceptionally easy to find the nth person in the circle by directly referencing them by their position in the array. The list ranking problem concerns the efficient conversion of a linked list representation into an array. Although trivial for a conventional computer, solving this problem by a parallel algorithm is complicated and has been the subject of much research. A balanced tree has similar memory access patterns and space overhead to a linked list while permitting much more efficient indexing, taking O(log n) time instead of O(n) for a random access. However, insertion and deletion operations are more expensive due to the overhead of tree manipulations to maintain balance. Schemes exist for trees to automatically maintain themselves in a balanced state: AVL trees or red–black trees. ### Singly linked linear lists vs. other lists While doubly linked and circular lists have advantages over singly linked linear lists, linear lists offer some advantages that make them preferable in some situations. A singly linked linear list is a recursive data structure, because it contains a pointer to a smaller object of the same type. For that reason, many operations on singly linked linear lists (such as merging two lists, or enumerating the elements in reverse order) often have very simple recursive algorithms, much simpler than any solution using iterative commands. 
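As an illustration of how naturally recursion fits singly linked linear lists, the following sketch merges two sorted lists recursively. The Node class and helper functions are invented for illustration, and this version builds fresh nodes rather than relinking the inputs, so the original lists are left untouched.

```python
class Node:
    """A singly linked list node: a value and a reference to the next node (or None)."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

def merge(a, b):
    """Merge two sorted singly linked lists into one sorted list, recursively."""
    if a is None:
        return b
    if b is None:
        return a
    if a.value <= b.value:
        return Node(a.value, merge(a.next, b))
    return Node(b.value, merge(a, b.next))

def from_iterable(values):
    """Build a list by prepending each value, starting from the last one."""
    head = None
    for v in reversed(list(values)):
        head = Node(v, head)
    return head

def to_list(node):
    """Collect the values of a linked list into a Python list for printing."""
    out = []
    while node is not None:
        out.append(node.value)
        node = node.next
    return out

merged = merge(from_iterable([1, 3, 5]), from_iterable([2, 4, 6]))
print(to_list(merged))  # [1, 2, 3, 4, 5, 6]
```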
While those recursive solutions can be adapted for doubly linked and circularly linked lists, the procedures generally need extra arguments and more complicated base cases. Linear singly linked lists also allow tail-sharing, the use of a common final portion of sub-list as the terminal portion of two different lists. In particular, if a new node is added at the beginning of a list, the former list remains available as the tail of the new one—a simple example of a persistent data structure. Again, this is not true with the other variants: a node may never belong to two different circular or doubly linked lists. In particular, end-sentinel nodes can be shared among singly linked non-circular lists. The same end-sentinel node may be used for every such list. In Lisp, for example, every proper list ends with a link to a special node, denoted by `nil` or `()`. The advantages of the fancy variants are often limited to the complexity of the algorithms, not in their efficiency. A circular list, in particular, can usually be emulated by a linear list together with two variables that point to the first and last nodes, at no extra cost. ### Doubly linked vs. singly linked Double-linked lists require more space per node (unless one uses XOR-linking), and their elementary operations are more expensive; but they are often easier to manipulate because they allow fast and easy sequential access to the list in both directions. In a doubly linked list, one can insert or delete a node in a constant number of operations given only that node's address. To do the same in a singly linked list, one must have the address of the pointer to that node, which is either the handle for the whole list (in case of the first node) or the link field in the previous node. Some algorithms require access in both directions. On the other hand, doubly linked lists do not allow tail-sharing and cannot be used as persistent data structures. ### Circularly linked vs. linearly linked A circularly linked list may be a natural option to represent arrays that are naturally circular, e.g. the corners of a polygon, a pool of buffers that are used and released in FIFO ("first in, first out") order, or a set of processes that should be time-shared in round-robin order. In these applications, a pointer to any node serves as a handle to the whole list. With a circular list, a pointer to the last node gives easy access also to the first node, by following one link. Thus, in applications that require access to both ends of the list (e.g., in the implementation of a queue), a circular structure allows one to handle the structure by a single pointer, instead of two. A circular list can be split into two circular lists, in constant time, by giving the addresses of the last node of each piece. The operation consists in swapping the contents of the link fields of those two nodes. Applying the same operation to any two nodes in two distinct lists joins the two list into one. This property greatly simplifies some algorithms and data structures, such as the quad-edge and face-edge. The simplest representation for an empty circular list (when such a thing makes sense) is a null pointer, indicating that the list has no nodes. Without this choice, many algorithms have to test for this special case, and handle it separately. By contrast, the use of null to denote an empty linear list is more natural and often creates fewer special cases. 
For some applications, it can be useful to use singly linked lists that can vary between being circular and being linear, or even circular with a linear initial segment. Algorithms for searching or otherwise operating on these have to take precautions to avoid accidentally entering an endless loop. One well-known method is to have a second pointer walking the list at half or double the speed, and if both pointers meet at the same node, a cycle has been found. ### Using sentinel nodes Sentinel node may simplify certain list operations, by ensuring that the next or previous nodes exist for every element, and that even empty lists have at least one node. One may also use a sentinel node at the end of the list, with an appropriate data field, to eliminate some end-of-list tests. For example, when scanning the list looking for a node with a given value x, setting the sentinel's data field to x makes it unnecessary to test for end-of-list inside the loop. Another example is the merging two sorted lists: if their sentinels have data fields set to +∞, the choice of the next output node does not need special handling for empty lists. However, sentinel nodes use up extra space (especially in applications that use many short lists), and they may complicate other operations (such as the creation of a new empty list). However, if the circular list is used merely to simulate a linear list, one may avoid some of this complexity by adding a single sentinel node to every list, between the last and the first data nodes. With this convention, an empty list consists of the sentinel node alone, pointing to itself via the next-node link. The list handle should then be a pointer to the last data node, before the sentinel, if the list is not empty; or to the sentinel itself, if the list is empty. The same trick can be used to simplify the handling of a doubly linked linear list, by turning it into a circular doubly linked list with a single sentinel node. However, in this case, the handle should be a single pointer to the dummy node itself. ## Linked list operations When manipulating linked lists in-place, care must be taken to not use values that have been invalidated in previous assignments. This makes algorithms for inserting or deleting linked list nodes somewhat subtle. This section gives pseudocode for adding or removing nodes from singly, doubly, and circularly linked lists in-place. Throughout, null is used to refer to an end-of-list marker or sentinel, which may be implemented in a number of ways. ### Linearly linked lists Singly linked lists The node data structure will have two fields. There is also a variable, firstNode which always points to the first node in the list, or is null for an empty list. record Node { data; // The data being stored in the node Node next // A reference to the next node, null for last node } record List { Node firstNode // points to first node of list; null for empty list } Traversal of a singly linked list is simple, beginning at the first node and following each next link until reaching the end: node := list.firstNode while node not null (do something with node.data) node := node.next The following code inserts a node after an existing node in a singly linked list. The diagram shows how it works. Inserting a node before an existing one cannot be done directly; instead, one must keep track of the previous node and insert a node after it. 
function insertAfter(Node node, Node newNode) // insert newNode after node newNode.next := node.next node.next := newNode Inserting at the beginning of the list requires a separate function. This requires updating firstNode. function insertBeginning(List list, Node newNode) // insert node before current first node newNode.next := list.firstNode list.firstNode := newNode Similarly, there are functions for removing the node after a given node, and for removing a node from the beginning of the list. The diagram demonstrates the former. To find and remove a particular node, one must again keep track of the previous element. function removeAfter(Node node) // remove node past this one obsoleteNode := node.next node.next := node.next.next destroy obsoleteNode function removeBeginning(List list) // remove first node obsoleteNode := list.firstNode list.firstNode := list.firstNode.next // point past deleted node destroy obsoleteNode Notice that `removeBeginning()` sets `list.firstNode` to `null` when removing the last node in the list. Since it is not possible to iterate backwards, efficient `insertBefore` or `removeBefore` operations are not possible. Inserting to a list before a specific node requires traversing the list, which would have a worst case running time of O(n). Appending one linked list to another can be inefficient unless a reference to the tail is kept as part of the List structure, because it is needed to traverse the entire first list in order to find the tail, and then append the second list to this. Thus, if two linearly linked lists are each of length $$ n $$ , list appending has asymptotic time complexity of $$ O(n) $$ . In the Lisp family of languages, list appending is provided by the `append` procedure. Many of the special cases of linked list operations can be eliminated by including a dummy element at the front of the list. This ensures that there are no special cases for the beginning of the list and renders both `insertBeginning()` and `removeBeginning()` unnecessary, i.e., every element or node is next to another node (even the first node is next to the dummy node). In this case, the first useful data in the list will be found at `list.firstNode.next`. ### Circularly linked list In a circularly linked list, all nodes are linked in a continuous circle, without using null. For lists with a front and a back (such as a queue), one stores a reference to the last node in the list. The next node after the last node is the first node. Elements can be added to the back of the list and removed from the front in constant time. Circularly linked lists can be either singly or doubly linked. Both types of circularly linked lists benefit from the ability to traverse the full list beginning at any given node. This often allows us to avoid storing firstNode and lastNode, although if the list may be empty, there needs to be a special representation for the empty list, such as a lastNode variable which points to some node in the list or is null if it is empty; it uses such a lastNode here. This representation significantly simplifies adding and removing nodes with a non-empty list, but empty lists are then a special case. Algorithms Assuming that someNode is some node in a non-empty circular singly linked list, this code iterates through that list starting with someNode: function iterate(someNode) if someNode ≠ null node := someNode do do something with node.value node := node.next while node ≠ someNode Notice that the test "while node ≠ someNode" must be at the end of the loop. 
If the test was moved to the beginning of the loop, the procedure would fail whenever the list had only one node. This function inserts a node "newNode" into a circular linked list after a given node "node". If "node" is null, it assumes that the list is empty. function insertAfter(Node node, Node newNode) if node = null // assume list is empty newNode.next := newNode else newNode.next := node.next node.next := newNode update lastNode variable if necessary Suppose that "L" is a variable pointing to the last node of a circular linked list (or null if the list is empty). To append "newNode" to the end of the list, one may do insertAfter(L, newNode) L := newNode To insert "newNode" at the beginning of the list, one may do insertAfter(L, newNode) if L = null L := newNode This function inserts a value "newVal" before a given node "node" in O(1) time. A new node has been created between "node" and the next node, then puts the value of "node" into that new node, and puts "newVal" in "node". Thus, a singly linked circularly linked list with only a firstNode variable can both insert to the front and back in O(1) time. function insertBefore(Node node, newVal) if node = null // assume list is empty newNode := new Node(data:=newVal, next:=newNode) else newNode := new Node(data:=node.data, next:=node.next) node.data := newVal node.next := newNode update firstNode variable if necessary This function removes a non-null node from a list of size greater than 1 in O(1) time. It copies data from the next node into the node, and then sets the node's next pointer to skip over the next node. function remove(Node node) if node ≠ null and size of list > 1 removedData := node.data node.data := node.next.data node.next = node.next.next return removedData ### Linked lists using arrays of nodes Languages that do not support any type of reference can still create links by replacing pointers with array indices. The approach is to keep an array of records, where each record has integer fields indicating the index of the next (and possibly previous) node in the array. Not all nodes in the array need be used. If records are also not supported, parallel arrays can often be used instead. As an example, consider the following linked list record that uses arrays instead of pointers: record Entry { integer next; // index of next entry in array integer prev; // previous entry (if double-linked) string name; real balance; } A linked list can be built by creating an array of these structures, and an integer variable to store the index of the first element. integer listHead Entry Records[1000] Links between elements are formed by placing the array index of the next (or previous) cell into the Next or Prev field within a given element. For example: Index Next Prev Name Balance 0 1 4 Jones, John 123.45 1 −1 0 Smith, Joseph 234.56 2 (listHead) 4 −1 Adams, Adam 0.00 3 Ignore, Ignatius 999.99 4 0 2 Another, Anita 876.54 5 6 7 In the above example, `ListHead` would be set to 2, the location of the first entry in the list. Notice that entry 3 and 5 through 7 are not part of the list. These cells are available for any additions to the list. By creating a `ListFree` integer variable, a free list could be created to keep track of what cells are available. If all entries are in use, the size of the array would have to be increased or some elements would have to be deleted before new entries could be stored in the list. 
The following code would traverse the list and display names and account balance: i := listHead while i ≥ 0 // loop through the list print i, Records[i].name, Records[i].balance // print entry i := Records[i].next When faced with a choice, the advantages of this approach include: - The linked list is relocatable, meaning it can be moved about in memory at will, and it can also be quickly and directly serialized for storage on disk or transfer over a network. - Especially for a small list, array indexes can occupy significantly less space than a full pointer on many architectures. - Locality of reference can be improved by keeping the nodes together in memory and by periodically rearranging them, although this can also be done in a general store. - Naïve dynamic memory allocators can produce an excessive amount of overhead storage for each node allocated; almost no allocation overhead is incurred per node in this approach. - Seizing an entry from a pre-allocated array is faster than using dynamic memory allocation for each node, since dynamic memory allocation typically requires a search for a free memory block of the desired size. This approach has one main disadvantage, however: it creates and manages a private memory space for its nodes. This leads to the following issues: - It increases complexity of the implementation. - Growing a large array when it is full may be difficult or impossible, whereas finding space for a new linked list node in a large, general memory pool may be easier. - Adding elements to a dynamic array will occasionally (when it is full) unexpectedly take linear (O(n)) instead of constant time (although it is still an amortized constant). - Using a general memory pool leaves more memory for other data if the list is smaller than expected or if many nodes are freed. For these reasons, this approach is mainly used for languages that do not support dynamic memory allocation. These disadvantages are also mitigated if the maximum size of the list is known at the time the array is created. ## Language support Many programming languages such as Lisp and Scheme have singly linked lists built in. In many functional languages, these lists are constructed from nodes, each called a cons or cons cell. The cons has two fields: the car, a reference to the data for that node, and the cdr, a reference to the next node. Although cons cells can be used to build other data structures, this is their primary purpose. In languages that support abstract data types or templates, linked list ADTs or templates are available for building linked lists. In other languages, linked lists are typically built using references together with records. ## Internal and external storage When constructing a linked list, one is faced with the choice of whether to store the data of the list directly in the linked list nodes, called internal storage, or merely to store a reference to the data, called external storage. Internal storage has the advantage of making access to the data more efficient, requiring less storage overall, having better locality of reference, and simplifying memory management for the list (its data is allocated and deallocated at the same time as the list nodes). External storage, on the other hand, has the advantage of being more generic, in that the same data structure and machine code can be used for a linked list no matter what the size of the data is. It also makes it easy to place the same data in multiple linked lists. 
Although with internal storage the same data can be placed in multiple lists by including multiple next references in the node data structure, it would then be necessary to create separate routines to add or delete cells based on each field. It is possible to create additional linked lists of elements that use internal storage by using external storage, and having the cells of the additional linked lists store references to the nodes of the linked list containing the data. In general, if a set of data structures needs to be included in linked lists, external storage is the best approach. If a set of data structures need to be included in only one linked list, then internal storage is slightly better, unless a generic linked list package using external storage is available. Likewise, if different sets of data that can be stored in the same data structure are to be included in a single linked list, then internal storage would be fine. Another approach that can be used with some languages involves having different data structures, but all have the initial fields, including the next (and prev if double linked list) references in the same location. After defining separate structures for each type of data, a generic structure can be defined that contains the minimum amount of data shared by all the other structures and contained at the top (beginning) of the structures. Then generic routines can be created that use the minimal structure to perform linked list type operations, but separate routines can then handle the specific data. This approach is often used in message parsing routines, where several types of messages are received, but all start with the same set of fields, usually including a field for message type. The generic routines are used to add new messages to a queue when they are received, and remove them from the queue in order to process the message. The message type field is then used to call the correct routine to process the specific type of message. 
### Example of internal and external storage To create a linked list of families and their members, using internal storage, the structure might look like the following: record member { // member of a family member next; string firstName; integer age; } record family { // the family itself family next; string lastName; string address; member members // head of list of members of this family } To print a complete list of families and their members using internal storage, write: aFamily := Families // start at head of families list while aFamily ≠ null // loop through list of families print information about family aMember := aFamily.members // get head of list of this family's members while aMember ≠ null // loop through list of members print information about member aMember := aMember.next aFamily := aFamily.next Using external storage, the following structures can be created: record node { // generic link structure node next; pointer data // generic pointer for data at node } record member { // structure for family member string firstName; integer age } record family { // structure for family string lastName; string address; node members // head of list of members of this family } To print a complete list of families and their members using external storage, write: famNode := Families // start at head of families list while famNode ≠ null // loop through list of families aFamily := (family) famNode.data // extract family from node print information about family memNode := aFamily.members // get list of family members while memNode ≠ null // loop through list of members aMember := (member)memNode.data // extract member from node print information about member memNode := memNode.next famNode := famNode.next Notice that when using external storage, an extra step is needed to extract the record from the node and cast it into the proper data type. This is because both the list of families and the list of members within the family are stored in two linked lists using the same data structure (node), and this language does not have parametric types. As long as the number of families that a member can belong to is known at compile time, internal storage works fine. If, however, a member needed to be included in an arbitrary number of families, with the specific number known only at run time, external storage would be necessary. ### Speeding up search Finding a specific element in a linked list, even if it is sorted, normally requires O(n) time (linear search). This is one of the primary disadvantages of linked lists over other data structures. In addition to the variants discussed above, below are two simple ways to improve search time. In an unordered list, one simple heuristic for decreasing average search time is the move-to-front heuristic, which simply moves an element to the beginning of the list once it is found. This scheme, handy for creating simple caches, ensures that the most recently used items are also the quickest to find again. Another common approach is to "index" a linked list using a more efficient external data structure. For example, one can build a red–black tree or hash table whose elements are references to the linked list nodes. Multiple such indexes can be built on a single list. The disadvantage is that these indexes may need to be updated each time a node is added or removed (or at least, before that index is used again). ### Random-access lists A random-access list is a list with support for fast random access to read or modify any element in the list. 
One possible implementation is a skew binary random-access list using the skew binary number system, which involves a list of trees with special properties; this allows worst-case constant time head/cons operations, and worst-case logarithmic time random access to an element by index. Random-access lists can be implemented as persistent data structures. Random-access lists can be viewed as immutable linked lists in that they likewise support the same O(1) head and tail operations. A simple extension to random-access lists is the min-list, which provides an additional operation that yields the minimum element in the entire list in constant time (without mutation complexities).

## Related data structures

Both stacks and queues are often implemented using linked lists, and simply restrict the type of operations which are supported. The skip list is a linked list augmented with layers of pointers for quickly jumping over large numbers of elements, and then descending to the next layer. This process continues down to the bottom layer, which is the actual list. A binary tree can be seen as a type of linked list where the elements are themselves linked lists of the same nature. The result is that each node may include a reference to the first node of one or two other linked lists, which, together with their contents, form the subtrees below that node. An unrolled linked list is a linked list in which each node contains an array of data values. This leads to improved cache performance, since more list elements are contiguous in memory, and reduced memory overhead, because less metadata needs to be stored for each element of the list. A hash table may use linked lists to store the chains of items that hash to the same position in the hash table. A heap shares some of the ordering properties of a linked list, but is almost always implemented using an array. Instead of references from node to node, the next and previous data indexes are calculated using the current data's index. A self-organizing list rearranges its nodes based on some heuristic which reduces search times for data retrieval by keeping commonly accessed nodes at the head of the list.

## External links

- Description from the Dictionary of Algorithms and Data Structures
- Introduction to Linked Lists, Stanford University Computer Science Library
- Linked List Problems, Stanford University Computer Science Library
- Open Data Structures - Chapter 3 - Linked Lists, Pat Morin
- Patent for the idea of having nodes which are in several linked lists simultaneously (note that this technique was widely used for many decades before the patent was granted)
- Implementation of a singly linked list in C
- Implementation of a singly linked list in C++
- Implementation of a doubly linked list in C
- Implementation of a doubly linked list in C++

Category:Articles with example C code
https://en.wikipedia.org/wiki/Linked_list
Imitation learning is a paradigm in reinforcement learning, where an agent learns to perform a task by supervised learning from expert demonstrations. It is also called learning from demonstration and apprenticeship learning. It has been applied to underactuated robotics, self-driving cars, quadcopter navigation, helicopter aerobatics, and locomotion.

## Approaches

Expert demonstrations are recordings of an expert performing the desired task, often collected as state-action pairs $$ (o_t^*, a_t^*) $$ .

### Behavior Cloning

Behavior Cloning (BC) is the most basic form of imitation learning. Essentially, it uses supervised learning to train a policy $$ \pi_\theta $$ such that, given an observation $$ o_t $$ , it outputs an action distribution $$ \pi_\theta(\cdot | o_t) $$ that is approximately the same as the expert's action distribution. BC is susceptible to distribution shift: if the trained policy differs from the expert policy, it might stray from the expert trajectory into observations that never occurred in expert trajectories. This was already noted in ALVINN, where a neural network was trained to drive a van using human demonstrations. Because a human driver never strays far from the path, the network was never trained on what action to take if it ever found itself straying far from the path.

### DAgger

DAgger (Dataset Aggregation) improves on behavior cloning by iteratively training on an aggregated dataset of expert-labeled observations. In each iteration, the algorithm first collects data by rolling out the learned policy $$ \pi_\theta $$ . Then, it queries the expert for the optimal action $$ a_t^* $$ on each observation $$ o_t $$ encountered during the rollout. Finally, it aggregates the new data into the dataset $$ D \leftarrow D \cup \{ (o_1, a_1^*), (o_2, a_2^*), ..., (o_T, a_T^*) \} $$ and trains a new policy on the aggregated dataset (a runnable sketch of this loop is given at the end of this article).

### Decision transformer

The Decision Transformer approach models reinforcement learning as a sequence modelling problem. Similar to Behavior Cloning, it trains a sequence model, such as a Transformer, that models rollout sequences $$ (R_1, o_1, a_1), (R_2, o_2, a_2), \dots, (R_t, o_t, a_t), $$ where $$ R_t = r_t + r_{t+1} + \dots + r_T $$ is the sum of future reward in the rollout. During training, the sequence model is trained to predict each action $$ a_t $$ , given the previous rollout as context: $$ (R_1, o_1, a_1), (R_2, o_2, a_2), \dots, (R_t, o_t) $$ At inference time, to use the sequence model as an effective controller, it is simply given a very high reward prediction $$ R $$ , and it generalizes by predicting an action that would result in the high reward. This was shown to scale predictably to a Transformer with 1 billion parameters that is superhuman on 41 Atari games.

### Other approaches

See for more examples.

## Related approaches

Inverse Reinforcement Learning (IRL) learns a reward function that explains the expert's behavior and then uses reinforcement learning to find a policy that maximizes this reward. Generative Adversarial Imitation Learning (GAIL) uses generative adversarial networks (GANs) to match the distribution of agent behavior to the distribution of expert demonstrations. It extends a previous approach using game theory.
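Following up on the forward reference in the DAgger section above, here is a schematic but runnable sketch of the DAgger loop. The corridor environment, scripted expert, and tabular behavior-cloning policy are all invented for illustration; they do not correspond to any particular reinforcement learning library, and a real application would replace them with a simulator, a human or scripted expert, and a learned function approximator.

```python
import random
from collections import Counter, defaultdict

class CorridorEnv:
    """Toy environment: the agent sits at an integer position 0..10 and wants to reach 10."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):  # action is -1 or +1
        self.pos = max(0, min(10, self.pos + action))
        return self.pos

class Expert:
    """The expert always steps toward the goal."""
    def act(self, obs):
        return +1 if obs < 10 else -1

class TabularPolicy:
    """A lookup-table policy trained by behavior cloning (majority vote per observation)."""
    def __init__(self):
        self.table = {}
    def act(self, obs):
        return self.table.get(obs, random.choice([-1, +1]))  # random action for unseen observations
    def fit(self, dataset):
        votes = defaultdict(Counter)
        for obs, action in dataset:
            votes[obs][action] += 1
        self.table = {obs: counter.most_common(1)[0][0] for obs, counter in votes.items()}

def dagger(env, expert, policy, n_iterations=5, horizon=20):
    dataset = []                                    # aggregated (observation, expert action) pairs
    for _ in range(n_iterations):
        obs = env.reset()                           # roll out the *learned* policy...
        for _ in range(horizon):
            dataset.append((obs, expert.act(obs)))  # ...but label every visited observation with the expert's action
            obs = env.step(policy.act(obs))         # the learner's own action drives the rollout
        policy.fit(dataset)                         # supervised (behavior cloning) step on the aggregated dataset
    return policy

random.seed(0)
policy = dagger(CorridorEnv(), Expert(), TabularPolicy())
print([policy.act(o) for o in range(11)])  # mostly +1: the learned policy imitates the expert
```

The key difference from plain behavior cloning is visible in the inner loop: the data is gathered along the learner's own trajectories, so the expert's labels cover exactly the observations where the learner tends to end up.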
https://en.wikipedia.org/wiki/Imitation_learning
Logic in computer science covers the overlap between the field of logic and that of computer science. The topic can essentially be divided into three main areas: - ## Theoretical foundations and analysis - Use of computer technology to aid logicians - Use of concepts from logic for computer applications Theoretical foundations and analysis Logic plays a fundamental role in computer science. Some of the key areas of logic that are particularly significant are computability theory (formerly called recursion theory), modal logic and category theory. The theory of computation is based on concepts defined by logicians and mathematicians such as Alonzo Church and Alan Turing. Church first showed the existence of algorithmically unsolvable problems using his notion of lambda-definability. Turing gave the first compelling analysis of what can be called a mechanical procedure and Kurt Gödel asserted that he found Turing's analysis "perfect.". In addition some other major areas of theoretical overlap between logic and computer science are: - Gödel's incompleteness theorem proves that any logical system powerful enough to characterize arithmetic will contain statements that can neither be proved nor disproved within that system. This has direct application to theoretical issues relating to the feasibility of proving the completeness and correctness of software. - The frame problem is a basic problem that must be overcome when using first-order logic to represent the goals of an artificial intelligence agent and the state of its environment. - The Curry–Howard correspondence is a relation between logical systems and programming languages. This theory established a precise correspondence between proofs and programs. In particular it showed that terms in the simply typed lambda calculus correspond to proofs of intuitionistic propositional logic. - Category theory represents a view of mathematics that emphasizes the relations between structures. It is intimately tied to many aspects of computer science: type systems for programming languages, the theory of transition systems, models of programming languages and the theory of programming language semantics. - Logic programming is a programming, database and knowledge representation paradigm that is based on formal logic. A logic program is a set of sentences about some problem domain. Computation is performed by applying logical reasoning to solve problems in the domain. Major logic programming language families include Prolog, Answer Set Programming (ASP) and Datalog. ## Computers to assist logicians One of the first applications to use the term artificial intelligence was the Logic Theorist system developed by Allen Newell, Cliff Shaw, and Herbert Simon in 1956. One of the things that a logician does is to take a set of statements in logic and deduce the conclusions (additional statements) that must be true by the laws of logic. For example, if given the statements "All humans are mortal" and "Socrates is human" a valid conclusion is "Socrates is mortal". Of course this is a trivial example. In actual logical systems the statements can be numerous and complex. It was realized early on that this kind of analysis could be significantly aided by the use of computers. Logic Theorist validated the theoretical work of Bertrand Russell and Alfred North Whitehead in their influential work on mathematical logic called Principia Mathematica. 
In addition, subsequent systems have been utilized by logicians to validate and discover new mathematical theorems and proofs. ## Logic applications for computers There has always been a strong influence from mathematical logic on the field of artificial intelligence (AI). From the beginning of the field it was realized that technology to automate logical inferences could have great potential to solve problems and draw conclusions from facts. Ron Brachman has described first-order logic (FOL) as the metric by which all AI knowledge representation formalisms should be evaluated. First-order logic is a general and powerful method for describing and analyzing information. The reason FOL itself is simply not used as a computer language is that it is actually too expressive, in the sense that FOL can easily express statements that no computer, no matter how powerful, could ever solve. For this reason every form of knowledge representation is in some sense a trade off between expressivity and computability. A widely held belief maintains that the more expressive the language is, the closer it is to FOL, the more likely it is to be slower and prone to an infinite loop. However, in a recent work by Heng Zhang et al., this belief has been rigorously challenged. Their findings establish that all universal knowledge representation formalisms are recursively isomorphic. Furthermore, their proof demonstrates that FOL can be translated into a pure procedural knowledge representation formalism defined by Turing machines with computationally feasible overhead, specifically within deterministic polynomial time or even at lower complexity. For example, IF–THEN rules used in expert systems approximate to a very limited subset of FOL. Rather than arbitrary formulas with the full range of logical operators the starting point is simply what logicians refer to as modus ponens. As a result, rule-based systems can support high-performance computation, especially if they take advantage of optimization algorithms and compilation. On the other hand, logic programming, which combines the Horn clause subset of first-order logic with a non-monotonic form of negation, has both high expressive power and efficient implementations. In particular, the logic programming language Prolog is a Turing complete programming language. Datalog extends the relational database model with recursive relations, while answer set programming is a form of logic programming oriented towards difficult (primarily NP-hard) search problems. Another major area of research for logical theory is software engineering. Research projects such as the Knowledge Based Software Assistant and Programmer's Apprentice programs have applied logical theory to validate the correctness of software specifications. They have also used logical tools to transform the specifications into efficient code on diverse platforms and to prove the equivalence between the implementation and the specification. This formal transformation-driven approach is often far more effortful than traditional software development. However, in specific domains with appropriate formalisms and reusable templates the approach has proven viable for commercial products. The appropriate domains are usually those such as weapons systems, security systems, and real-time financial systems where failure of the system has excessively high human or financial cost. 
An example of such a domain is Very Large Scale Integrated (VLSI) design—the process for designing the chips used for the CPUs and other critical components of digital devices. An error in a chip can be catastrophic. Unlike software, chips can't be patched or updated. As a result, there is commercial justification for using formal methods to prove that the implementation corresponds to the specification. Another important application of logic to computer technology has been in the area of frame languages and automatic classifiers. Frame languages such as KL-ONE can be directly mapped to set theory and first-order logic. This allows specialized theorem provers called classifiers to analyze the various declarations between sets, subsets, and relations in a given model. In this way the model can be validated and any inconsistent definitions flagged. The classifier can also infer new information, for example define new sets based on existing information and change the definition of existing sets based on new data. The level of flexibility is ideal for handling the ever changing world of the Internet. Classifier technology is built on top of languages such as the Web Ontology Language to allow a logical semantic level on top of the existing Internet. This layer is called the Semantic Web. Temporal logic is used for reasoning in concurrent systems.
https://en.wikipedia.org/wiki/Logic_in_computer_science
A random variable (also called random quantity, aleatory variable, or stochastic variable) is a mathematical formalization of a quantity or object which depends on random events. The term 'random variable' in its mathematical definition refers to neither randomness nor variability but instead is a mathematical function in which - the domain is the set of possible outcomes in a sample space (e.g. the set $$ \{H,T\} $$ which are the possible upper sides of a flipped coin heads $$ H $$ or tails $$ T $$ as the result from tossing a coin); and - the range is a measurable space (e.g. corresponding to the domain above, the range might be the set $$ \{-1, 1\} $$ if say heads $$ H $$ mapped to -1 and $$ T $$ mapped to 1). Typically, the range of a random variable is a subset of the real numbers. Informally, randomness typically represents some fundamental element of chance, such as in the roll of a die; it may also represent uncertainty, such as measurement error. However, the interpretation of probability is philosophically complicated, and even in specific cases is not always straightforward. The purely mathematical analysis of random variables is independent of such interpretational difficulties, and can be based upon a rigorous axiomatic setup. In the formal mathematical language of measure theory, a random variable is defined as a measurable function from a probability measure space (called the sample space) to a measurable space. This allows consideration of the pushforward measure, which is called the distribution of the random variable; the distribution is thus a probability measure on the set of all possible values of the random variable. It is possible for two random variables to have identical distributions but to differ in significant ways; for instance, they may be independent. It is common to consider the special cases of discrete random variables and absolutely continuous random variables, corresponding to whether a random variable is valued in a countable subset or in an interval of real numbers. There are other important possibilities, especially in the theory of stochastic processes, wherein it is natural to consider random sequences or random functions. Sometimes a random variable is taken to be automatically valued in the real numbers, with more general random quantities instead being called random elements. According to George Mackey, Pafnuty Chebyshev was the first person "to think systematically in terms of random variables". ## Definition A random variable $$ X $$ is a measurable function $$ X \colon \Omega \to E $$ from a sample space $$ \Omega $$ as a set of possible outcomes to a measurable space $$ E $$ . The technical axiomatic definition requires the sample space $$ \Omega $$ to belong to a probability triple $$ (\Omega, \mathcal{F}, \operatorname{P}) $$ (see the measure-theoretic definition). A random variable is often denoted by capital Roman letters such as $$ X, Y, Z, T $$ . The probability that $$ X $$ takes on a value in a measurable set $$ S\subseteq E $$ is written as $$ \operatorname{P}(X \in S) = \operatorname{P}(\{ \omega \in \Omega \mid X(\omega) \in S \}) $$ . ### Standard case In many cases, $$ X $$ is real-valued, i.e. $$ E = \mathbb{R} $$ . In some contexts, the term random element (see extensions) is used to denote a random variable not of this form. When the image (or range) of $$ X $$ is finite or countably infinite, the random variable is called a discrete random variable and its distribution is a discrete probability distribution, i.e. 
can be described by a probability mass function that assigns a probability to each value in the image of $$ X $$ . If the image is uncountably infinite (usually an interval) then $$ X $$ is called a continuous random variable. In the special case that it is absolutely continuous, its distribution can be described by a probability density function, which assigns probabilities to intervals; in particular, each individual point must necessarily have probability zero for an absolutely continuous random variable. Not all continuous random variables are absolutely continuous. Any random variable can be described by its cumulative distribution function, which describes the probability that the random variable will be less than or equal to a certain value. ### Extensions The term "random variable" in statistics is traditionally limited to the real-valued case ( $$ E=\mathbb{R} $$ ). In this case, the structure of the real numbers makes it possible to define quantities such as the expected value and variance of a random variable, its cumulative distribution function, and the moments of its distribution. However, the definition above is valid for any measurable space $$ E $$ of values. Thus one can consider random elements of other sets $$ E $$ , such as random Boolean values, categorical values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, and functions. One may then specifically refer to a random variable of type , or an -valued random variable. This more general concept of a random element is particularly useful in disciplines such as graph theory, machine learning, natural language processing, and other fields in discrete mathematics and computer science, where one is often interested in modeling the random variation of non-numerical data structures. In some cases, it is nonetheless convenient to represent each element of $$ E $$ , using one or more real numbers. In this case, a random element may optionally be represented as a vector of real-valued random variables (all defined on the same underlying probability space $$ \Omega $$ , which allows the different random variables to covary). For example: - A random word may be represented as a random integer that serves as an index into the vocabulary of possible words. Alternatively, it can be represented as a random indicator vector, whose length equals the size of the vocabulary, where the only values of positive probability are $$ (1 \ 0 \ 0 \ 0 \ \cdots) $$ , $$ (0 \ 1 \ 0 \ 0 \ \cdots) $$ , $$ (0 \ 0 \ 1 \ 0 \ \cdots) $$ and the position of the 1 indicates the word. - A random sentence of given length $$ N $$ may be represented as a vector of $$ N $$ random words. - A random graph on $$ N $$ given vertices may be represented as a $$ N \times N $$ matrix of random variables, whose values specify the adjacency matrix of the random graph. - A random function $$ F $$ may be represented as a collection of random variables $$ F(x) $$ , giving the function's values at the various points $$ x $$ in the function's domain. The $$ F(x) $$ are ordinary real-valued random variables provided that the function is real-valued. For example, a stochastic process is a random function of time, a random vector is a random function of some index set such as $$ 1,2,\ldots, n $$ , and random field is a random function on any set (typically time, space, or a discrete set). 
## Distribution functions If a random variable $$ X\colon \Omega \to \mathbb{R} $$ defined on the probability space $$ (\Omega, \mathcal{F}, \operatorname{P}) $$ is given, we can ask questions like "How likely is it that the value of $$ X $$ is equal to 2?". This is the same as the probability of the event $$ \{ \omega : X(\omega) = 2 \}\,\! $$ which is often written as $$ P(X = 2)\,\! $$ or $$ p_X(2) $$ for short. Recording all these probabilities of outputs of a random variable $$ X $$ yields the probability distribution of $$ X $$ . The probability distribution "forgets" about the particular probability space used to define $$ X $$ and only records the probabilities of various output values of $$ X $$ . Such a probability distribution, if $$ X $$ is real-valued, can always be captured by its cumulative distribution function $$ F_X(x) = \operatorname{P}(X \le x) $$ and sometimes also using a probability density function, $$ f_X $$ . In measure-theoretic terms, we use the random variable $$ X $$ to "push-forward" the measure $$ P $$ on $$ \Omega $$ to a measure $$ p_X $$ on $$ \mathbb{R} $$ . The measure $$ p_X $$ is called the "(probability) distribution of $$ X $$ " or the "law of $$ X $$ ". The density $$ f_X = dp_X/d\mu $$ , the Radon–Nikodym derivative of $$ p_X $$ with respect to some reference measure $$ \mu $$ on $$ \mathbb{R} $$ (often, this reference measure is the Lebesgue measure in the case of continuous random variables, or the counting measure in the case of discrete random variables). The underlying probability space $$ \Omega $$ is a technical device used to guarantee the existence of random variables, sometimes to construct them, and to define notions such as correlation and dependence or independence based on a joint distribution of two or more random variables on the same probability space. In practice, one often disposes of the space $$ \Omega $$ altogether and just puts a measure on $$ \mathbb{R} $$ that assigns measure 1 to the whole real line, i.e., one works with probability distributions instead of random variables. See the article on quantile functions for fuller development. ## Examples ### Discrete random variable Consider an experiment where a person is chosen at random. An example of a random variable may be the person's height. Mathematically, the random variable is interpreted as a function which maps the person to their height. Associated with the random variable is a probability distribution that allows the computation of the probability that the height is in any subset of possible values, such as the probability that the height is between 180 and 190 cm, or the probability that the height is either less than 150 or more than 200 cm. Another random variable may be the person's number of children; this is a discrete random variable with non-negative integer values. It allows the computation of probabilities for individual integer values – the probability mass function (PMF) – or for sets of values, including infinite sets. For example, the event of interest may be "an even number of children". For both finite and infinite event sets, their probabilities can be found by adding up the PMFs of the elements; that is, the probability of an even number of children is the infinite sum $$ \operatorname{PMF}(0) + \operatorname{PMF}(2) + \operatorname{PMF}(4) + \cdots $$ . In examples such as these, the sample space is often suppressed, since it is mathematically hard to describe, and the possible values of the random variables are then treated as a sample space. 
But when two random variables are measured on the same sample space of outcomes, such as the height and number of children being computed on the same random persons, it is easier to track their relationship if it is acknowledged that both height and number of children come from the same random person, for example so that questions of whether such random variables are correlated or not can be posed. If $$ \{a_n\}, \{b_n\} $$ are countable sets of real numbers, $$ b_n >0 $$ and $$ \sum_n b_n=1 $$ , then $$ F=\sum_n b_n \delta_{a_n}(x) $$ is a discrete distribution function. Here $$ \delta_t(x) = 0 $$ for $$ x < t $$ , $$ \delta_t(x) = 1 $$ for $$ x \ge t $$ . Taking for instance an enumeration of all rational numbers as $$ \{a_n\} $$ , one gets a discrete function that is not necessarily a step function (piecewise constant). #### Coin toss The possible outcomes for one coin toss can be described by the sample space $$ \Omega = \{\text{heads}, \text{tails}\} $$ . We can introduce a real-valued random variable $$ Y $$ that models a $1 payoff for a successful bet on heads as follows: $$ Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\[6pt] 0, & \text{if } \omega = \text{tails}. \end{cases} $$ If the coin is a fair coin, Y has a probability mass function $$ f_Y $$ given by: $$ f_Y(y) = \begin{cases} \tfrac 12,& \text{if }y=1,\\[6pt] \tfrac 12,& \text{if }y=0, \end{cases} $$ #### Dice roll A random variable can also be used to describe the process of rolling dice and the possible outcomes. The most obvious representation for the two-dice case is to take the set of pairs of numbers n1 and n2 from {1, 2, 3, 4, 5, 6} (representing the numbers on the two dice) as the sample space. The total number rolled (the sum of the numbers in each pair) is then a random variable X given by the function that maps the pair to the sum: $$ X((n_1, n_2)) = n_1 + n_2 $$ and (if the dice are fair) has a probability mass function fX given by: $$ f_X(S) = \frac{\min(S-1, 13-S)}{36}, \text{ for } S \in \{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12\} $$ ### Continuous random variable Formally, a continuous random variable is a random variable whose cumulative distribution function is continuous everywhere. There are no "gaps", which would correspond to numbers which have a finite probability of occurring. Instead, continuous random variables almost never take an exact prescribed value c (formally, $$ \forall c \in \mathbb{R}:\; \Pr(X = c) = 0 $$ ) but there is a positive probability that its value will lie in particular intervals which can be arbitrarily small. Continuous random variables usually admit probability density functions (PDF), which characterize their CDF and probability measures; such distributions are also called absolutely continuous; but some continuous distributions are singular, or mixes of an absolutely continuous part and a singular part. An example of a continuous random variable would be one based on a spinner that can choose a horizontal direction. Then the values taken by the random variable are directions. We could represent these directions by North, West, East, South, Southeast, etc. However, it is commonly more convenient to map the sample space to a random variable which takes values which are real numbers. This can be done, for example, by mapping a direction to a bearing in degrees clockwise from North. The random variable then takes values which are real numbers from the interval [0, 360), with all parts of the range being "equally likely". In this case, X = the angle spun. 
Any real number has probability zero of being selected, but a positive probability can be assigned to any range of values. For example, the probability of choosing a number in [0, 180] is . Instead of speaking of a probability mass function, we say that the probability density of X is 1/360. The probability of a subset of [0, 360) can be calculated by multiplying the measure of the set by 1/360. In general, the probability of a set for a given continuous random variable can be calculated by integrating the density over the given set. More formally, given any interval $$ I = [a, b] = \{x \in \mathbb{R} : a \le x \le b \} $$ , a random variable $$ X_I \sim \operatorname{U}(I) = \operatorname{U}[a, b] $$ is called a "continuous uniform random variable" (CURV) if the probability that it takes a value in a subinterval depends only on the length of the subinterval. This implies that the probability of $$ X_I $$ falling in any subinterval $$ [c, d] \sube [a, b] $$ is proportional to the length of the subinterval, that is, if , one has $$ \Pr\left( X_I \in [c,d]\right) = \frac{d - c}{b - a} $$ where the last equality results from the unitarity axiom of probability. The probability density function of a CURV $$ X \sim \operatorname {U}[a, b] $$ is given by the indicator function of its interval of support normalized by the interval's length: $$ f_X(x) = \begin{cases} \displaystyle{1 \over b-a}, & a \le x \le b \\ 0, & \text{otherwise}. \end{cases} $$ Of particular interest is the uniform distribution on the unit interval $$ [0, 1] $$ . Samples of any desired probability distribution $$ \operatorname{D} $$ can be generated by calculating the quantile function of $$ \operatorname{D} $$ on a randomly-generated number distributed uniformly on the unit interval. This exploits properties of cumulative distribution functions, which are a unifying framework for all random variables. ### Mixed type A mixed random variable is a random variable whose cumulative distribution function is neither discrete nor everywhere-continuous. It can be realized as a mixture of a discrete random variable and a continuous random variable; in which case the will be the weighted average of the CDFs of the component variables. An example of a random variable of mixed type would be based on an experiment where a coin is flipped and the spinner is spun only if the result of the coin toss is heads. If the result is tails, X = −1; otherwise X = the value of the spinner as in the preceding example. There is a probability of that this random variable will have the value −1. Other ranges of values would have half the probabilities of the last example. Most generally, every probability distribution on the real line is a mixture of discrete part, singular part, and an absolutely continuous part; see . The discrete part is concentrated on a countable set, but this set may be dense (like the set of all rational numbers). ## Measure-theoretic definition The most formal, axiomatic definition of a random variable involves measure theory. Continuous random variables are defined in terms of sets of numbers, along with functions that map such sets to probabilities. Because of various difficulties (e.g. the Banach–Tarski paradox) that arise if such sets are insufficiently constrained, it is necessary to introduce what is termed a sigma-algebra to constrain the possible sets over which probabilities can be defined. 
Normally, a particular such sigma-algebra is used, the Borel σ-algebra, which allows for probabilities to be defined over any sets that can be derived either directly from continuous intervals of numbers or by a finite or countably infinite number of unions and/or intersections of such intervals. The measure-theoretic definition is as follows. Let $$ (\Omega, \mathcal{F}, P) $$ be a probability space and $$ (E, \mathcal{E}) $$ a measurable space. Then an -valued random variable is a measurable function $$ X\colon \Omega \to E $$ , which means that, for every subset $$ B\in\mathcal{E} $$ , its preimage is $$ \mathcal{F} $$ -measurable; $$ X^{-1}(B)\in \mathcal{F} $$ , where $$ X^{-1}(B) = \{\omega : X(\omega)\in B\} $$ . This definition enables us to measure any subset $$ B\in \mathcal{E} $$ in the target space by looking at its preimage, which by assumption is measurable. In more intuitive terms, a member of $$ \Omega $$ is a possible outcome, a member of $$ \mathcal{F} $$ is a measurable subset of possible outcomes, the function $$ P $$ gives the probability of each such measurable subset, $$ E $$ represents the set of values that the random variable can take (such as the set of real numbers), and a member of $$ \mathcal{E} $$ is a "well-behaved" (measurable) subset of $$ E $$ (those for which the probability may be determined). The random variable is then a function from any outcome to a quantity, such that the outcomes leading to any useful subset of quantities for the random variable have a well-defined probability. When $$ E $$ is a topological space, then the most common choice for the σ-algebra $$ \mathcal{E} $$ is the Borel σ-algebra $$ \mathcal{B}(E) $$ , which is the σ-algebra generated by the collection of all open sets in $$ E $$ . In such case the $$ (E, \mathcal{E}) $$ -valued random variable is called an -valued random variable. Moreover, when the space $$ E $$ is the real line $$ \mathbb{R} $$ , then such a real-valued random variable is called simply a random variable. ### Real-valued random variables In this case the observation space is the set of real numbers. Recall, $$ (\Omega, \mathcal{F}, P) $$ is the probability space. For a real observation space, the function $$ X\colon \Omega \rightarrow \mathbb{R} $$ is a real-valued random variable if $$ \{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall r \in \mathbb{R}. $$ This definition is a special case of the above because the set $$ \{(-\infty, r]: r \in \R\} $$ generates the Borel σ-algebra on the set of real numbers, and it suffices to check measurability on any generating set. Here we can prove measurability on this generating set by using the fact that $$ \{ \omega : X(\omega) \le r \} = X^{-1}((-\infty, r]) $$ . ## Moments The probability distribution of a random variable is often characterised by a small number of parameters, which also have a practical interpretation. For example, it is often enough to know what its "average value" is. This is captured by the mathematical concept of expected value of a random variable, denoted $$ \operatorname{E}[X] $$ , and also called the first moment. In general, $$ \operatorname{E}[f(X)] $$ is not equal to $$ f(\operatorname{E}[X]) $$ . Once the "average value" is known, one could then ask how far from this average value the values of $$ X $$ typically are, a question that is answered by the variance and standard deviation of a random variable. 
$$ \operatorname{E}[X] $$ can be viewed intuitively as an average obtained from an infinite population, the members of which are particular evaluations of $$ X $$ . Mathematically, this is known as the (generalised) problem of moments: for a given class of random variables $$ X $$ , find a collection $$ \{f_i\} $$ of functions such that the expectation values $$ \operatorname{E}[f_i(X)] $$ fully characterise the distribution of the random variable $$ X $$ . Moments can only be defined for real-valued functions of random variables (or complex-valued, etc.). If the random variable is itself real-valued, then moments of the variable itself can be taken, which are equivalent to moments of the identity function $$ f(X)=X $$ of the random variable. However, even for non-real-valued random variables, moments can be taken of real-valued functions of those variables. For example, for a categorical random variable X that can take on the nominal values "red", "blue" or "green", the real-valued function $$ [X = \text{green}] $$ can be constructed; this uses the Iverson bracket, and has the value 1 if $$ X $$ has the value "green", 0 otherwise. Then, the expected value and other moments of this function can be determined. ## Functions of random variables A new random variable Y can be defined by applying a real Borel measurable function $$ g\colon \mathbb{R} \rightarrow \mathbb{R} $$ to the outcomes of a real-valued random variable $$ X $$ . That is, $$ Y=g(X) $$ . The cumulative distribution function of $$ Y $$ is then $$ F_Y(y) = \operatorname{P}(g(X) \le y). $$ If function $$ g $$ is invertible (i.e., $$ h = g^{-1} $$ exists, where $$ h $$ is $$ g $$ 's inverse function) and is either increasing or decreasing, then the previous relation can be extended to obtain $$ F_Y(y) = \operatorname{P}(g(X) \le y) = \begin{cases} \operatorname{P}(X \le h(y)) = F_X(h(y)), & \text{if } h = g^{-1} \text{ increasing} ,\\ \\ \operatorname{P}(X \ge h(y)) = 1 - F_X(h(y)), & \text{if } h = g^{-1} \text{ decreasing} . \end{cases} $$ With the same hypotheses of invertibility of $$ g $$ , assuming also differentiability, the relation between the probability density functions can be found by differentiating both sides of the above expression with respect to $$ y $$ , in order to obtain $$ f_Y(y) = f_X\bigl(h(y)\bigr) \left| \frac{d h(y)}{d y} \right|. $$ If there is no invertibility of $$ g $$ but each $$ y $$ admits at most a countable number of roots (i.e., a finite, or countably infinite, number of $$ x_i $$ such that $$ y = g(x_i) $$ ) then the previous relation between the probability density functions can be generalized with $$ f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right| $$ where $$ x_i = g_i^{-1}(y) $$ , according to the inverse function theorem. The formulas for densities do not demand $$ g $$ to be increasing. In the measure-theoretic, axiomatic approach to probability, if a random variable $$ X $$ on $$ \Omega $$ and a Borel measurable function $$ g\colon \mathbb{R} \rightarrow \mathbb{R} $$ , then $$ Y = g(X) $$ is also a random variable on $$ \Omega $$ , since the composition of measurable functions is also measurable. (However, this is not necessarily true if $$ g $$ is Lebesgue measurable.) The same procedure that allowed one to go from a probability space $$ (\Omega, P) $$ to $$ (\mathbb{R}, dF_{X}) $$ can be used to obtain the distribution of $$ Y $$ . ### Example 1 Let $$ X $$ be a real-valued, continuous random variable and let $$ Y = X^2 $$ . 
$$ F_Y(y) = \operatorname{P}(X^2 \le y). $$ If $$ y < 0 $$ , then $$ P(X^2 \leq y) = 0 $$ , so $$ F_Y(y) = 0\qquad\hbox{if}\quad y < 0. $$ If $$ y \geq 0 $$ , then $$ \operatorname{P}(X^2 \le y) = \operatorname{P}(|X| \le \sqrt{y}) = \operatorname{P}(-\sqrt{y} \le X \le \sqrt{y}), $$ so $$ F_Y(y) = F_X(\sqrt{y}) - F_X(-\sqrt{y})\qquad\hbox{if}\quad y \ge 0. $$ ### Example 2 Suppose $$ X $$ is a random variable with a cumulative distribution $$ F_{X}(x) = P(X \leq x) = \frac{1}{(1 + e^{-x})^{\theta}} $$ where $$ \theta > 0 $$ is a fixed parameter. Consider the random variable $$ Y = \mathrm{log}(1 + e^{-X}). $$ Then, $$ F_{Y}(y) = P(Y \leq y) = P(\mathrm{log}(1 + e^{-X}) \leq y) = P(X \geq -\mathrm{log}(e^{y} - 1)).\, $$ The last expression can be calculated in terms of the cumulative distribution of $$ X, $$ so $$ \begin{align} F_Y(y) & = 1 - F_X(-\log(e^y - 1)) \\[5pt] & = 1 - \frac{1}{(1 + e^{\log(e^y - 1)})^\theta} \\[5pt] & = 1 - \frac{1}{(1 + e^y - 1)^\theta} \\[5pt] & = 1 - e^{-y \theta}. \end{align} $$ which is the cumulative distribution function (CDF) of an exponential distribution. ### Example 3 Suppose $$ X $$ is a random variable with a standard normal distribution, whose density is $$ f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}. $$ Consider the random variable $$ Y = X^2. $$ We can find the density using the above formula for a change of variables: $$ f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right|. $$ In this case the change is not monotonic, because every value of $$ Y $$ has two corresponding values of $$ X $$ (one positive and negative). However, because of symmetry, both halves will transform identically, i.e., $$ f_Y(y) = 2f_X(g^{-1}(y)) \left| \frac{d g^{-1}(y)}{d y} \right|. $$ The inverse transformation is $$ x = g^{-1}(y) = \sqrt{y} $$ and its derivative is $$ \frac{d g^{-1}(y)}{d y} = \frac{1}{2\sqrt{y}} . $$ Then, $$ f_Y(y) = 2\frac{1}{\sqrt{2\pi}}e^{-y/2} \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{2\pi y}}e^{-y/2}. $$ This is a chi-squared distribution with one degree of freedom. ### Example 4 Suppose $$ X $$ is a random variable with a normal distribution, whose density is $$ f_X(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-(x-\mu)^2/(2\sigma^2)}. $$ Consider the random variable $$ Y = X^2. $$ We can find the density using the above formula for a change of variables: $$ f_Y(y) = \sum_{i} f_X(g_{i}^{-1}(y)) \left| \frac{d g_{i}^{-1}(y)}{d y} \right|. $$ In this case the change is not monotonic, because every value of $$ Y $$ has two corresponding values of $$ X $$ (one positive and negative). Differently from the previous example, in this case however, there is no symmetry and we have to compute the two distinct terms: $$ f_Y(y) = f_X(g_1^{-1}(y))\left|\frac{d g_1^{-1}(y)}{d y} \right| +f_X(g_2^{-1}(y))\left| \frac{d g_2^{-1}(y)}{d y} \right|. $$ The inverse transformation is $$ x = g_{1,2}^{-1}(y) = \pm \sqrt{y} $$ and its derivative is $$ \frac{d g_{1,2}^{-1}(y)}{d y} = \pm \frac{1}{2\sqrt{y}} . $$ Then, $$ f_Y(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \frac{1}{2\sqrt{y}} (e^{-(\sqrt{y}-\mu)^2/(2\sigma^2)}+e^{-(-\sqrt{y}-\mu)^2/(2\sigma^2)}) . $$ This is a noncentral chi-squared distribution with one degree of freedom. ## Some properties - The probability distribution of the sum of two independent random variables is the convolution of each of their distributions. 
- Probability distributions are not a vector space—they are not closed under linear combinations, as these do not preserve non-negativity or total integral 1—but they are closed under convex combination, thus forming a convex subset of the space of functions (or measures). ## Equivalence of random variables There are several different senses in which random variables can be considered to be equivalent. Two random variables can be equal, equal almost surely, or equal in distribution. In increasing order of strength, the precise definition of these notions of equivalence is given below. ### ### Equality in distribution If the sample space is a subset of the real line, random variables X and Y are equal in distribution (denoted $$ X \stackrel{d}{=} Y $$ ) if they have the same distribution functions: $$ \operatorname{P}(X \le x) = \operatorname{P}(Y \le x)\quad\text{for all }x. $$ To be equal in distribution, random variables need not be defined on the same probability space. Two random variables having equal moment generating functions have the same distribution. This provides, for example, a useful method of checking equality of certain functions of independent, identically distributed (IID) random variables. However, the moment generating function exists only for distributions that have a defined Laplace transform. ### Almost sure equality Two random variables X and Y are equal almost surely (denoted $$ X \; \stackrel{\text{a.s.}}{=} \; Y $$ ) if, and only if, the probability that they are different is zero: $$ \operatorname{P}(X \neq Y) = 0. $$ For all practical purposes in probability theory, this notion of equivalence is as strong as actual equality. It is associated to the following distance: $$ d_\infty(X,Y)=\operatorname{ess} \sup_\omega|X(\omega)-Y(\omega)|, $$ where "ess sup" represents the essential supremum in the sense of measure theory. Equality Finally, the two random variables X and Y are equal if they are equal as functions on their measurable space: $$ X(\omega)=Y(\omega)\qquad\hbox{for all }\omega. $$ This notion is typically the least useful in probability theory because in practice and in theory, the underlying measure space of the experiment is rarely explicitly characterized or even characterizable. ### Practical difference between notions of equivalence Since we rarely explicitly construct the probability space underlying a random variable, the difference between these notions of equivalence is somewhat subtle. Essentially, two random variables considered in isolation are "practically equivalent" if they are equal in distribution -- but once we relate them to other random variables defined on the same probability space, then they only remain "practically equivalent" if they are equal almost surely. For example, consider the real random variables A, B, C, and D all defined on the same probability space. Suppose that A and B are equal almost surely ( $$ A \; \stackrel{\text{a.s.}}{=} \; B $$ ), but A and C are only equal in distribution ( $$ A \stackrel{d}{=} C $$ ). Then $$ A + D \; \stackrel{\text{a.s.}}{=} \; B + D $$ , but in general $$ A + D \; \neq \; C + D $$ (not even in distribution). Similarly, we have that the expectation values $$ \mathbb{E}(AD) = \mathbb{E}(BD) $$ , but in general $$ \mathbb{E}(AD) \neq \mathbb{E}(CD) $$ . Therefore, two random variables that are equal in distribution (but not equal almost surely) can have different covariances with a third random variable. 
## Convergence A significant theme in mathematical statistics consists of obtaining convergence results for certain sequences of random variables; for instance the law of large numbers and the central limit theorem. There are various senses in which a sequence $$ X_n $$ of random variables can converge to a random variable $$ X $$ . These are explained in the article on convergence of random variables.
https://en.wikipedia.org/wiki/Random_variable
A communication protocol is a system of rules that allows two or more entities of a communications system to transmit information via any variation of a physical quantity. The protocol defines the rules, syntax, semantics, and synchronization of communication and possible error recovery methods. Protocols may be implemented by hardware, software, or a combination of both. ## Communicating systems use well-defined formats for exchanging various messages. Each message has an exact meaning intended to elicit a response from a range of possible responses predetermined for that particular situation. The specified behavior is typically independent of how it is to be implemented. Communication protocols have to be agreed upon by the parties involved. To reach an agreement, a protocol may be developed into a technical standard. A programming language describes the same for computations, so there is a close analogy between protocols and programming languages: protocols are to communication what programming languages are to computations. An alternate formulation states that protocols are to communication what algorithms are to computation. Multiple protocols often describe different aspects of a single communication. A group of protocols designed to work together is known as a protocol suite; when implemented in software they are a protocol stack. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE (Institute of Electrical and Electronics Engineers) handles wired and wireless networking and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). As the PSTN and Internet converge, the standards are also being driven towards convergence. Communicating systems ### History The first use of the term protocol in a modern data-commutation context occurs in April 1967 in a memorandum entitled A Protocol for Use in the NPL Data Communications Network. Under the direction of Donald Davies, who pioneered packet switching at the National Physical Laboratory in the United Kingdom, it was written by Roger Scantlebury and Keith Bartlett for the NPL network. On the ARPANET, the starting point for host-to-host communication in 1969 was the 1822 protocol, written by Bob Kahn, which defined the transmission of messages to an IMP. The Network Control Program (NCP) for the ARPANET, developed by Steve Crocker and other graduate students including Jon Postel and Vint Cerf, was first implemented in 1970. The NCP interface allowed application software to connect across the ARPANET by implementing higher-level communication protocols, an early example of the protocol layering concept. The CYCLADES network, designed by Louis Pouzin in the early 1970s was the first to implement the end-to-end principle, and make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a service of the network itself. His team was the first to tackle the highly complex problem of providing user applications with a reliable virtual circuit service while using a best-effort service, an early contribution to what will be the Transmission Control Protocol (TCP). Bob Metcalfe and others at Xerox PARC outlined the idea of Ethernet and the PARC Universal Packet (PUP) for internetworking. Research in the early 1970s by Bob Kahn and Vint Cerf led to the formulation of the Transmission Control Program (TCP). 
Its specification was written by Cerf with Yogen Dalal and Carl Sunshine in December 1974, still a monolithic design at this time. The International Network Working Group agreed on a connectionless datagram standard which was presented to the CCITT in 1975 but was not adopted by the CCITT nor by the ARPANET. Separate international research, particularly the work of Rémi Després, contributed to the development of the X.25 standard, based on virtual circuits, which was adopted by the CCITT in 1976. Computer manufacturers developed proprietary protocols such as IBM's Systems Network Architecture (SNA), Digital Equipment Corporation's DECnet and Xerox Network Systems. TCP software was redesigned as a modular protocol stack, referred to as TCP/IP. This was installed on SATNET in 1982 and on the ARPANET in January 1983. The development of a complete Internet protocol suite by 1989, as outlined in and , laid the foundation for the growth of TCP/IP as a comprehensive protocol suite as the core component of the emerging Internet. International work on a reference model for communication standards led to the OSI model, published in 1984. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. ### Concept The information exchanged between devices through a network or other media is governed by rules and conventions that can be set out in communication protocol specifications. The nature of communication, the actual data exchanged and any state-dependent behaviors, is defined by these specifications. In digital computing systems, the rules can be expressed by algorithms and data structures. Protocols are to communication what algorithms or programming languages are to computations. Operating systems usually contain a set of cooperating processes that manipulate shared data to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself.Ben-Ari 1982, Section 2.7 - Summary, p. 27, summarizes the concurrent programming abstraction. In contrast, because there is no shared memory, communicating systems have to communicate with each other using a shared transmission medium. Transmission is not necessarily reliable, and individual systems may use different hardware or operating systems. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system. This framework implements the networking functionality of the operating system. When protocol algorithms are expressed in a portable programming language the protocol software may be made operating system independent. The best-known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, abstraction layering had proven to be a successful design approach for both compiler and operating system design and, given the similarities between programming languages and communication protocols, the originally monolithic networking programs were decomposed into cooperating protocols. This gave rise to the concept of layered protocols which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission. Instead they use a set of cooperating protocols, sometimes called a protocol suite. 
Some of the best-known protocol suites are TCP/IP, IPX/SPX, X.25, AX.25 and AppleTalk. The protocols can be arranged based on functionality in groups, for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance: application-, transport-, internet- and network interface-functions. To transmit a message, a protocol has to be selected from each layer. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer. ## Types There are two types of communication protocols, based on their representation of the content being carried: text-based and binary. ### Text-based A text-based protocol or plain text protocol represents its content in human-readable format, often in plain text encoded in a machine-readable encoding such as ASCII or UTF-8, or in structured text-based formats such as Intel hex format, XML or JSON. The immediate human readability stands in contrast to native binary protocols which have inherent benefits for use in a computer environment (such as ease of mechanical parsing and improved bandwidth utilization). Network applications have various methods of encapsulating data. One method very common with Internet protocols is a text oriented representation that transmits requests and responses as lines of ASCII text, terminated by a newline character (and usually a carriage return character). Examples of protocols that use plain, human-readable text for its commands are FTP (File Transfer Protocol), SMTP (Simple Mail Transfer Protocol), early versions of HTTP (Hypertext Transfer Protocol), and the finger protocol. Text-based protocols are typically optimized for human parsing and interpretation and are therefore suitable whenever human inspection of protocol contents is required, such as during debugging and during early protocol development design phases. ### Binary A binary protocol utilizes all values of a byte, as opposed to a text-based protocol which only uses values corresponding to human-readable characters in ASCII encoding. Binary protocols are intended to be read by a machine rather than a human being. Binary protocols have the advantage of terseness, which translates into speed of transmission and interpretation. Binary have been used in the normative documents describing modern standards like EbXML, HTTP/2, HTTP/3 and EDOC. An interface in UML may also be considered a binary protocol. ## Basic requirements Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol must include rules describing the context. These kinds of rules are said to express the syntax of the communication. Other rules determine whether the data is meaningful for the context in which the exchange takes place. These kinds of rules are said to express the semantics of the communication. Messages are sent and received on communicating systems to establish communication. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed: Data formats for data exchange Digital message bitstrings are exchanged. The bitstrings are divided in fields and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts called the header and the payload. The actual message is carried in the payload. 
The header area contains the fields with relevance to the operation of the protocol. Bitstrings longer than the maximum transmission unit (MTU) are divided in pieces of appropriate size. Address formats for data exchange Addresses are used to identify both the sender and the intended receiver(s). The addresses are carried in the header area of the bitstrings, allowing the receivers to determine whether the bitstrings are of interest and should be processed or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually, some address values have special meanings. An all-1s address could be taken to mean an addressing of all stations on the network, so sending to this address would result in a broadcast on the local network. The rules describing the meanings of the address value are collectively called an addressing scheme. Address mapping Sometimes protocols need to map addresses of one scheme on addresses of another scheme. For instance, to translate a logical IP address specified by the application to an Ethernet MAC address. This is referred to as address mapping. Routing When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers. The interconnection of networks through routers is called internetworking. Detection of transmission errors Error detection is necessary on networks where data corruption is possible. In a common approach, a CRC of the data area is added to the end of packets, making it possible for the receiver to detect differences caused by corruption. The receiver rejects the packets on CRC differences and arranges somehow for retransmission. Acknowledgements Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgments are sent from receivers back to their respective senders. Loss of information - timeouts and retries Packets may be lost on the network or be delayed in transit. To cope with this, under some protocols, a sender may expect an acknowledgment of correct reception from the receiver within a certain amount of time. Thus, on timeouts, the sender may need to retransmit the information. In case of a permanently broken link, the retransmission has no effect, so the number of retransmissions is limited. Exceeding the retry limit is considered an error. Direction of information flow Direction needs to be addressed if transmissions can only occur in one direction at a time as on half-duplex links or from one sender at a time as on a shared medium. This is known as media access control. Arrangements have to be made to accommodate the case of collision or contention where two parties respectively simultaneously transmit or wish to transmit. Sequence control If long bitstrings are divided into pieces and then sent on the network individually, the pieces may get lost or delayed or, on some types of networks, take different routes to their destination. As a result, pieces may arrive out of sequence. Retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for necessary retransmissions and reassemble the original message. Flow control Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. 
Flow control can be implemented by messaging from receiver to sender. Queueing Communicating processes or state machines employ queues (or "buffers"), usually FIFO queues, to deal with the messages in the order sent, and may sometimes have multiple queues with different prioritization. ## Protocol design Systems engineering principles have been applied to create a set of common network protocol design principles. The design of complex protocols often involves decomposition into simpler, cooperating protocols. Such a set of cooperating protocols is sometimes called a protocol family or a protocol suite, within a conceptual framework. Communicating systems operate concurrently. An important aspect of concurrent programming is the synchronization of software for receiving and transmitting messages of communication in proper sequencing. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable because concurrent programs are notorious for the hidden and sophisticated bugs they contain. A mathematical approach to the study of concurrency and communication is referred to as communicating sequential processes (CSP). Concurrency can also be modeled using finite-state machines, such as Mealy and Moore machines. Mealy and Moore machines are in use as design tools in digital electronics systems encountered in the form of hardware used in telecommunication or electronic devices in general. The literature presents numerous analogies between computer communication and programming. In analogy, a transfer mechanism of a protocol is comparable to a central processing unit (CPU). The framework introduces rules that allow the programmer to design cooperating protocols independently of one another. ### Layering In modern protocol design, protocols are layered to form a protocol stack. Layering is a design principle that divides the protocol design task into smaller steps, each of which accomplishes a specific part, interacting with the other parts of the protocol only in a small number of well-defined ways. Layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The communication protocols in use on the Internet are designed to function in diverse and complex settings. Internet protocols are designed for simplicity and modularity and fit into a coarse hierarchy of functional layers defined in the Internet Protocol Suite. The first two cooperating protocols, the Transmission Control Protocol (TCP) and the Internet Protocol (IP) resulted from the decomposition of the original Transmission Control Program, a monolithic communication protocol, into this layered communication suite. The OSI model was developed internationally based on experience with networks that predated the internet as a reference model for general communication with much stricter rules of protocol interaction and rigorous layering. Typically, application software is built upon a robust data transport layer. Underlying this transport layer is a datagram delivery and routing mechanism that is typically connectionless in the Internet. Packet relaying across networks happens over another layer that involves only network link technologies, which are often specific to certain physical layer technologies, such as Ethernet. 
Layering provides opportunities to exchange technologies when needed; for example, protocols are often stacked in a tunneling arrangement to accommodate the connection of dissimilar networks. IP, for instance, may be tunneled across an Asynchronous Transfer Mode (ATM) network.

#### Protocol layering

Protocol layering forms the basis of protocol design. It allows the decomposition of single, complex protocols into simpler, cooperating protocols. The protocol layers each solve a distinct class of communication problems. Together, the layers make up a layering scheme or model. Computations deal with algorithms and data; communication involves protocols and messages; so the analog of a data flow diagram is some kind of message flow diagram. To visualize protocol layering and protocol suites, a diagram of the message flows in and between two systems, A and B, is shown in figure 3. The systems, A and B, both make use of the same protocol suite. The vertical flows (and protocols) are in-system and the horizontal message flows (and protocols) are between systems. The message flows are governed by rules and data formats specified by protocols. The blue lines mark the boundaries of the (horizontal) protocol layers.

#### Software layering

The software supporting protocols has a layered organization and its relationship with protocol layering is shown in figure 5. To send a message on system A, the top-layer software module interacts with the module directly below it and hands over the message to be encapsulated. The lower module fills in the header data in accordance with the protocol it implements and interacts with the bottom module, which sends the message over the communications channel to the bottom module of system B. On the receiving system B the reverse happens, so ultimately the message gets delivered in its original form to the top module of system B. Program translation is divided into subproblems. As a result, the translation software is layered as well, allowing the software layers to be designed independently. The same approach can be seen in the TCP/IP layering. The modules below the application layer are generally considered part of the operating system. Passing data between these modules is much less expensive than passing data between an application program and the transport layer. The boundary between the application layer and the transport layer is called the operating system boundary.

#### Strict layering

Strictly adhering to a layered model, a practice known as strict layering, is not always the best approach to networking. Strict layering can have a negative impact on the performance of an implementation. Although the use of protocol layering is today ubiquitous across the field of computer networking, it has historically been criticized by many researchers because abstracting the protocol stack in this way may cause a higher layer to duplicate the functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.

### Design patterns

Commonly recurring problems in the design and implementation of communication protocols can be addressed by software design patterns.

### Formal specification

Popular formal methods of describing communication syntax are Abstract Syntax Notation One (an ISO standard) and augmented Backus–Naur form (an IETF standard). Finite-state machine models and communicating finite-state machines are used to formally describe the possible interactions of the protocol (Comer 2000, Glossary of Internetworking Terms and Abbreviations, p. 704, term protocol).
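The software layering described above can be sketched as a stack of modules that each add a header on the way down and strip it on the way up. The layer names and the textual header format are made up purely for illustration and do not correspond to any real protocol suite.

```python
# Hypothetical three-layer stack, top to bottom; each layer encapsulates the
# message handed down to it, and the peer layer on the receiving system unwraps it.
LAYERS = ["app", "transport", "link"]

def send(message: str) -> str:
    for layer in LAYERS:                       # each module adds its own header
        message = f"{layer}-hdr|{message}"
    return message                             # what travels over the channel

def receive(wire_data: str) -> str:
    for layer in reversed(LAYERS):             # bottom module upwards, each unwraps
        header, _, wire_data = wire_data.partition("|")
        assert header == f"{layer}-hdr", "header did not match the expected layer"
    return wire_data

wire = send("hello B")
print(wire)            # link-hdr|transport-hdr|app-hdr|hello B
print(receive(wire))   # hello B
```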
## Protocol development

For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often the members are in control of large market shares relevant to the protocol, and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol.

### The need for protocol standards

The need for protocol standards can be shown by looking at what happened to the Binary Synchronous Communications (BSC) protocol invented by IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are monopolized (or oligopolized). They can hold a market in a very negative grip, especially when used to scare away competition. From a historical perspective, standardization should be seen as a measure to counteract the ill-effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this negative grip on its market, because the sources are published and maintained in an open way, thus inviting competition.

### Standards organizations

Some of the standards organizations of relevance for communication protocols are the International Organization for Standardization (ISO), the International Telecommunication Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE), and the Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the public switched telephone network (PSTN), as well as many radio communication systems. For marine electronics the NMEA standards are used. The World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other.
Multiple standards bodies may be involved in the development of a protocol. If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that time-to-live values are monotone decreasing to prevent stable routing loops) may not be respected in another. ### The standardization process In the ISO, the standardization process starts off with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) in order to provoke discussion and comments. This will generate a lot of questions, much discussion and usually some disagreement. These comments are taken into account and a draft proposal is produced by the working group. After feedback, modification, and compromise the proposal reaches the status of a draft international standard, and ultimately an international standard. International standards are reissued periodically to handle the deficiencies and reflect changing views on the subject. ### OSI standardization A lesson learned from ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a service access point. Corresponding layers at each system are called peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest layer): - The Application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g. character sets and data structures), determination of cost and acceptable quality of service, selection of the dialogue discipline, including required logon and logoff procedures. 
- The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, formatting and special purpose transformations (e.g., data compression and data encryption).
- The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service which allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, reporting of unrecoverable exceptions to the presentation entity.
- The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may support the multiplexing of several transport connections onto one network connection or split one transport connection into several network connections.
- The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control.
- The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer.
- The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections.

In contrast to the TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is lacking in the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks, and connectionless networks were first mentioned in an addendum to RM/OSI (Marsden 1986, Section 14.11 - Connectionless mode and RM/OSI, p. 195, mentions this) and later incorporated into an update to RM/OSI. At the time, the IETF had to cope with this and the fact that the Internet needed protocols that simply were not there. As a result, the IETF developed its own standardization process based on "rough consensus and running code". The standardization process is described by . Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services and because of this, both TCP and IP could be developed into international standards.
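To make the service relationship between the layers listed above concrete, the following trivial sketch stores the seven layers in order, highest to lowest; each layer provides services to the layer above it and uses the services of the layer immediately below it. The helper names are invented for the example.

```python
# The seven OSI layers from the list above, highest to lowest.
OSI_LAYERS = ["application", "presentation", "session", "transport",
              "network", "data link", "physical"]

def serves(layer: str) -> str:
    """The layer this layer provides services to (or the application process itself)."""
    i = OSI_LAYERS.index(layer)
    return OSI_LAYERS[i - 1] if i > 0 else "application process"

def uses(layer: str) -> str:
    """The layer whose services this layer uses (or the physical medium)."""
    i = OSI_LAYERS.index(layer)
    return OSI_LAYERS[i + 1] if i + 1 < len(OSI_LAYERS) else "physical medium"

print(uses("transport"))       # network
print(serves("presentation"))  # application
```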
## Wire image The wire image of a protocol is the information that a non-participant observer is able to glean from observing the protocol messages, including both information explicitly given meaning by the protocol, but also inferences made by the observer. Unencrypted protocol metadata is one source making up the wire image, and side-channels including packet timing also contribute. Different observers with different vantages may see different wire images. The wire image is relevant to end-user privacy and the extensibility of the protocol. If some portion of the wire image is not cryptographically authenticated, it is subject to modification by intermediate parties (i.e., middleboxes), which can influence protocol operation. Even if authenticated, if a portion is not encrypted, it will form part of the wire image, and intermediate parties may intervene depending on its content (e.g., dropping packets with particular flags). Signals deliberately intended for intermediary consumption may be left authenticated but unencrypted. The wire image can be deliberately engineered, encrypting parts that intermediaries should not be able to observe and providing signals for what they should be able to. If provided signals are decoupled from the protocol's operation, they may become untrustworthy. Benign network management and research are affected by metadata encryption; protocol designers must balance observability for operability and research against ossification resistance and end-user privacy. The IETF announced in 2014 that it had determined that large-scale surveillance of protocol operations is an attack due to the ability to infer information from the wire image about users and their behaviour, and that the IETF would "work to mitigate pervasive monitoring" in its protocol designs; this had not been done systematically previously. The Internet Architecture Board recommended in 2023 that disclosure of information by a protocol to the network should be intentional, performed with the agreement of both recipient and sender, authenticated to the degree possible and necessary, only acted upon to the degree of its trustworthiness, and minimised and provided to a minimum number of entities. Engineering the wire image and controlling what signals are provided to network elements was a "developing field" in 2023, according to the IAB. ## Ossification Protocol ossification is the loss of flexibility, extensibility and evolvability of network protocols. This is largely due to middleboxes that are sensitive to the wire image of the protocol, and which can interrupt or interfere with messages that are valid but which the middlebox does not correctly recognize. This is a violation of the end-to-end principle. Secondary causes include inflexibility in endpoint implementations of protocols. Ossification is a major issue in Internet protocol design and deployment, as it can prevent new protocols or extensions from being deployed on the Internet, or place strictures on the design of new protocols; new protocols may have to be encapsulated in an already-deployed protocol or mimic the wire image of another protocol. Because of ossification, the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are the only practical choices for transport protocols on the Internet, and TCP itself has significantly ossified, making extension or modification of the protocol difficult. 
Recommended methods of preventing ossification include encrypting protocol metadata, and ensuring that extension points are exercised and wire image variability is exhibited as fully as possible; remedying existing ossification requires coordination across protocol participants. QUIC is the first IETF transport protocol to have been designed with deliberate anti-ossification properties. ## Taxonomies Classification schemes for protocols usually focus on the domain of use and function. As an example of domain of use, connection-oriented protocols and connectionless protocols are used on connection-oriented networks and connectionless networks respectively. An example of function is a tunneling protocol, which is used to encapsulate packets in a high-level protocol so that the packets can be passed across a transport system using the high-level protocol. A layering scheme combines both function and domain of use. The dominant layering schemes are the ones developed by the IETF and by ISO. Despite the fact that the underlying assumptions of the layering schemes are different enough to warrant distinguishing the two, it is a common practice to compare the two by relating common protocols to the layers of the two schemes. The layering scheme from the IETF is called Internet layering or TCP/IP layering. The layering scheme from ISO is called the OSI model or ISO layering. In networking equipment configuration, a term-of-art distinction is often drawn: The term protocol strictly refers to the transport layer, and the term service refers to protocols utilizing a protocol for transport. In the common case of TCP and UDP, services are distinguished by port numbers. Conformance to these port numbers is voluntary, so in content inspection systems the term service strictly refers to port numbers, and the term application is often used to refer to protocols identified through inspection signatures.
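As a small illustration of the port-number convention just mentioned, Python's standard socket module can look up the service name conventionally registered for a port on the local system; the ports queried below are arbitrary examples, and the mapping says nothing about which application actually generated the traffic.

```python
import socket

# Well-known ports are conventionally associated with services, but conformance
# is voluntary, so content inspection systems treat "service" as the port number
# and identify the "application" from inspection signatures instead.
for port in (22, 53, 80, 443):
    try:
        print(port, socket.getservbyport(port, "tcp"))
    except OSError:
        print(port, "no service name registered on this system")
```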
https://en.wikipedia.org/wiki/Communication_protocol
In information retrieval, Okapi BM25 (BM is an abbreviation of best matching) is a ranking function used by search engines to estimate the relevance of documents to a given search query. It is based on the probabilistic retrieval framework developed in the 1970s and 1980s by Stephen E. Robertson, Karen Spärck Jones, and others. The name of the actual ranking function is BM25. The fuller name, Okapi BM25, includes the name of the first system to use it, which was the Okapi information retrieval system, implemented at London's City University in the 1980s and 1990s. BM25 and its newer variants, e.g. BM25F (a version of BM25 that can take document structure and anchor text into account), represent TF-IDF-like retrieval functions used in document retrieval. ## The ranking function BM25 is a bag-of-words retrieval function that ranks a set of documents based on the query terms appearing in each document, regardless of their proximity within the document. It is a family of scoring functions with slightly different components and parameters. One of the most prominent instantiations of the function is as follows. Given a query , containing keywords $$ q_1, ..., q_n $$ , the BM25 score of a document is: $$ \text{score}(D,Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)} $$ where $$ f(q_i, D) $$ is the number of times that the keyword $$ q_i $$ occurs in the document , $$ |D| $$ is the length of the document in words, and is the average document length in the text collection from which documents are drawn. $$ k_1 $$ and are free parameters, usually chosen, in absence of an advanced optimization, as $$ k_1 \in [1.2,2.0] $$ and $$ b = 0.75 $$ . $$ \text{IDF}(q_i) $$ is the IDF (inverse document frequency) weight of the query term $$ q_i $$ . It is usually computed as: $$ \text{IDF}(q_i) = \ln \left(\frac{N - n(q_i) + 0.5}{n(q_i) + 0.5}+1\right) $$ where is the total number of documents in the collection, and $$ n(q_i) $$ is the number of documents containing $$ q_i $$ . There are several interpretations for IDF and slight variations on its formula. In the original BM25 derivation, the IDF component is derived from the Binary Independence Model. ## IDF information theoretic interpretation Here is an interpretation from information theory. Suppose a query term $$ q $$ appears in $$ n(q) $$ documents. Then a randomly picked document $$ D $$ will contain the term with probability $$ \frac{n(q)}{N} $$ (where $$ N $$ is again the cardinality of the set of documents in the collection). Therefore, the information content of the message " $$ D $$ contains $$ q $$ " is: $$ -\log \frac{n(q)}{N} = \log \frac{N}{n(q)}. $$ Now suppose we have two query terms $$ q_1 $$ and $$ q_2 $$ . If the two terms occur in documents entirely independently of each other, then the probability of seeing both $$ q_1 $$ and $$ q_2 $$ in a randomly picked document $$ D $$ is: $$ \frac{n(q_1)}{N} \cdot \frac{n(q_2)}{N}, $$ and the information content of such an event is: $$ \sum_{i=1}^{2} \log \frac{N}{n(q_i)}. $$ With a small variation, this is exactly what is expressed by the IDF component of BM25. ## Modifications - At the extreme values of the coefficient BM25 turns into ranking functions known as BM11 (for $$ b=1 $$ ) and BM15 (for $$ b=0 $$ ). 
- BM25F (or the BM25 model with Extension to Multiple Weighted Fields) is a modification of BM25 in which the document is considered to be composed of several fields (such as headlines, main text, anchor text) with possibly different degrees of importance, term relevance saturation and length normalization. BM25F defines each type of field as a stream, applying a per-stream weighting to scale each stream against the calculated score.
- BM25+ is an extension of BM25. BM25+ was developed to address one deficiency of the standard BM25, in which the component of term frequency normalization by document length is not properly lower-bounded; as a result of this deficiency, long documents which do match the query term can often be scored unfairly by BM25 as having a similar relevance to shorter documents that do not contain the query term at all. The scoring formula of BM25+ has only one additional free parameter $$ \delta $$ (a default value can be used in the absence of training data) as compared with BM25: $$ \text{score}(D,Q) = \sum_{i=1}^{n} \text{IDF}(q_i) \cdot \left[ \frac{f(q_i, D) \cdot (k_1 + 1)}{f(q_i, D) + k_1 \cdot \left(1 - b + b \cdot \frac{|D|}{\text{avgdl}}\right)} + \delta \right] $$
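A compact sketch of the scoring functions above, assuming documents are already tokenized into lists of terms. The parameter defaults (k1 = 1.5, b = 0.75) are illustrative choices within the ranges mentioned earlier, and delta is not a prescribed value: delta = 0 reproduces plain BM25, while a positive delta adds the BM25+ lower bound.

```python
import math
from collections import Counter

def bm25_scores(query_terms, documents, k1=1.5, b=0.75, delta=0.0):
    """Score each document for the query; delta=0 gives BM25, delta>0 gives BM25+."""
    N = len(documents)
    avgdl = sum(len(d) for d in documents) / N
    # n(q_i): number of documents containing each query term
    n = {t: sum(1 for d in documents if t in d) for t in set(query_terms)}
    idf = {t: math.log((N - df + 0.5) / (df + 0.5) + 1) for t, df in n.items()}

    scores = []
    for doc in documents:
        tf = Counter(doc)
        score = 0.0
        for t in query_terms:
            f = tf[t]
            saturation = (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
            score += idf[t] * (saturation + delta)
        scores.append(score)
    return scores

docs = [d.split() for d in [
    "the quick brown fox",
    "the lazy dog sleeps in the sun",
    "quick quick fox jumps over the lazy dog",
]]
print(bm25_scores("quick fox".split(), docs))             # BM25
print(bm25_scores("quick fox".split(), docs, delta=1.0))  # BM25+
```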
https://en.wikipedia.org/wiki/Okapi_BM25
Vector calculus or vector analysis is a branch of mathematics concerned with the differentiation and integration of vector fields, primarily in three-dimensional Euclidean space, $$ \mathbb{R}^3. $$ The term vector calculus is sometimes used as a synonym for the broader subject of multivariable calculus, which spans vector calculus as well as partial differentiation and multiple integration. Vector calculus plays an important role in differential geometry and in the study of partial differential equations. It is used extensively in physics and engineering, especially in the description of electromagnetic fields, gravitational fields, and fluid flow. Vector calculus was developed from the theory of quaternions by J. Willard Gibbs and Oliver Heaviside near the end of the 19th century, and most of the notation and terminology was established by Gibbs and Edwin Bidwell Wilson in their 1901 book, Vector Analysis, though earlier mathematicians such as Isaac Newton pioneered the field. In its standard form using the cross product, vector calculus does not generalize to higher dimensions, but the alternative approach of geometric algebra, which uses the exterior product, does (see below for more).

## Basic objects

### Scalar fields

A scalar field associates a scalar value to every point in a space. The scalar is a mathematical number representing a physical quantity. Examples of scalar fields in applications include the temperature distribution throughout space, the pressure distribution in a fluid, and spin-zero quantum fields (known as scalar bosons), such as the Higgs field. These fields are the subject of scalar field theory.

### Vector fields

A vector field is an assignment of a vector to each point in a space. A vector field in the plane, for instance, can be visualized as a collection of arrows with a given magnitude and direction each attached to a point in the plane. Vector fields are often used to model, for example, the speed and direction of a moving fluid throughout space, or the strength and direction of some force, such as the magnetic or gravitational force, as it changes from point to point. This can be used, for example, to calculate work done over a line.

### Vectors and pseudovectors

In more advanced treatments, one further distinguishes pseudovector fields and pseudoscalar fields, which are identical to vector fields and scalar fields, except that they change sign under an orientation-reversing map: for example, the curl of a vector field is a pseudovector field, and if one reflects a vector field, the curl points in the opposite direction. This distinction is clarified and elaborated in geometric algebra, as described below.

## Vector algebra

The algebraic (non-differential) operations in vector calculus are referred to as vector algebra, being defined for a vector space and then applied pointwise to a vector field. The basic algebraic operations consist of:

- Vector addition $$ \mathbf{v} + \mathbf{w} $$ : addition of two vectors, yielding a vector.
- Scalar multiplication $$ a \mathbf{v} $$ : multiplication of a scalar and a vector, yielding a vector.
- Dot product $$ \mathbf{v} \cdot \mathbf{w} $$ : multiplication of two vectors, yielding a scalar.
- Cross product $$ \mathbf{v} \times \mathbf{w} $$ : multiplication of two vectors in $$ \mathbb{R}^3 $$ , yielding a (pseudo)vector.

Also commonly used are the two triple products:

- Scalar triple product $$ \mathbf{u} \cdot (\mathbf{v} \times \mathbf{w}) $$ : the dot product of the cross product of two vectors.
- Vector triple product $$ \mathbf{u} \times (\mathbf{v} \times \mathbf{w}) $$ : the cross product of the cross product of two vectors.
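The algebraic operations and triple products above can be checked numerically. The short sketch below uses NumPy (an assumed dependency) with arbitrary example vectors in $$ \mathbb{R}^3 $$.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = np.array([7.0, 8.0, 10.0])

print(v + w)                        # vector addition          -> vector
print(2.5 * v)                      # scalar multiplication    -> vector
print(np.dot(v, w))                 # dot product              -> scalar
print(np.cross(v, w))               # cross product (R^3 only) -> (pseudo)vector
print(np.dot(u, np.cross(v, w)))    # scalar triple product u . (v x w)
print(np.cross(u, np.cross(v, w)))  # vector triple product u x (v x w)
```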
## Operators and theorems

### Differential operators

Vector calculus studies various differential operators defined on scalar or vector fields, which are typically expressed in terms of the del operator ( $$ \nabla $$ ), also known as "nabla". The three basic vector operators are:

- Gradient $$ \nabla f $$ : measures the rate and direction of change in a scalar field. Notational analogy: scalar multiplication. Maps scalar fields to vector fields.
- Divergence $$ \nabla \cdot \mathbf{F} $$ : measures the scalar of a source or sink at a given point in a vector field. Notational analogy: dot product. Maps vector fields to scalar fields.
- Curl $$ \nabla \times \mathbf{F} $$ : measures the tendency to rotate about a point in a vector field in $$ \mathbb{R}^3 $$ . Notational analogy: cross product. Maps vector fields to (pseudo)vector fields.

Here $$ f $$ denotes a scalar field and $$ \mathbf{F} $$ denotes a vector field. Also commonly used are the two Laplace operators:

- Laplacian $$ \nabla^2 f $$ : measures the difference between the value of the scalar field and its average on infinitesimal balls. Maps between scalar fields.
- Vector Laplacian $$ \nabla^2 \mathbf{F} $$ : measures the difference between the value of the vector field and its average on infinitesimal balls. Maps between vector fields.

A quantity called the Jacobian matrix is useful for studying functions when both the domain and range of the function are multivariable, such as a change of variables during integration.

### Integral theorems

The three basic vector operators have corresponding theorems which generalize the fundamental theorem of calculus to higher dimensions:

- Gradient theorem: the line integral of the gradient of a scalar field over a curve is equal to the change in the scalar field between the endpoints of the curve.
- Divergence theorem: the integral of the divergence of a vector field over an $$ n $$-dimensional solid is equal to the flux of the vector field through the $$ (n-1) $$-dimensional closed boundary surface of the solid.
- Curl (Kelvin–Stokes) theorem: the integral of the curl of a vector field over a surface in $$ \mathbb{R}^3 $$ is equal to the circulation of the vector field around the closed curve bounding the surface.

In two dimensions, the divergence and curl theorems reduce to Green's theorem: the integral of the divergence (or curl) of a vector field over some region in $$ \mathbb{R}^2 $$ equals the flux (or circulation) of the vector field over the closed curve bounding the region.

## Applications

### Linear approximations

Linear approximations are used to replace complicated functions with linear functions that are almost the same. Given a differentiable function $$ f(x, y) $$ with real values, one can approximate $$ f(x, y) $$ for $$ (x, y) $$ close to $$ (a, b) $$ by the formula $$ f(x,y)\ \approx\ f(a,b)+\tfrac{\partial f}{\partial x} (a,b)\,(x-a)+\tfrac{\partial f}{\partial y}(a,b)\,(y-b). $$ The right-hand side is the equation of the plane tangent to the graph of $$ f $$ at $$ (a,b) $$.

### Optimization

For a continuously differentiable function of several real variables, a point (that is, a set of values for the input variables, which is viewed as a point in $$ \mathbb{R}^n $$ ) is critical if all of the partial derivatives of the function are zero at that point or, equivalently, if its gradient is zero there. The critical values are the values of the function at the critical points.
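As a numerical illustration of the differential operators listed above (and of the gradient's role in identifying critical points), the sketch below samples an arbitrary scalar field on a grid and approximates the gradient, divergence, curl and Laplacian with central finite differences using NumPy, an assumed dependency; the identities checked hold only approximately on the grid.

```python
import numpy as np

# Sample the (arbitrary) scalar field f(x, y, z) = x**2 * y + z on a grid.
h = 0.1
axis = np.arange(-1.0, 1.0, h)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
f = x**2 * y + z

fx, fy, fz = np.gradient(f, h)          # gradient: scalar field -> vector field

# Divergence of the gradient field, i.e. the Laplacian of f (analytically 2*y).
laplacian = (np.gradient(fx, h, axis=0)
             + np.gradient(fy, h, axis=1)
             + np.gradient(fz, h, axis=2))

# Curl of the gradient field; analytically this is identically zero.
curl = np.stack([
    np.gradient(fz, h, axis=1) - np.gradient(fy, h, axis=2),
    np.gradient(fx, h, axis=2) - np.gradient(fz, h, axis=0),
    np.gradient(fy, h, axis=0) - np.gradient(fx, h, axis=1),
])

c = (10, 10, 10)                        # an interior grid point near the origin
print("laplacian:", laplacian[c], "expected:", 2 * y[c])
print("max |curl grad f| at c:", np.abs(curl[(slice(None),) + c]).max())
```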
If the function is smooth, or, at least twice continuously differentiable, a critical point may be either a local maximum, a local minimum or a saddle point. The different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives. By Fermat's theorem, all local maxima and minima of a differentiable function occur at critical points. Therefore, to find the local maxima and minima, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros. ## Generalizations Vector calculus can also be generalized to other 3-manifolds and higher-dimensional spaces. ### Different 3-manifolds Vector calculus is initially defined for Euclidean 3-space, $$ \mathbb{R}^3, $$ which has additional structure beyond simply being a 3-dimensional real vector space, namely: a norm (giving a notion of length) defined via an inner product (the dot product), which in turn gives a notion of angle, and an orientation, which gives a notion of left-handed and right-handed. These structures give rise to a volume form, and also the cross product, which is used pervasively in vector calculus. The gradient and divergence require only the inner product, while the curl and the cross product also requires the handedness of the coordinate system to be taken into account (see for more detail). Vector calculus can be defined on other 3-dimensional real vector spaces if they have an inner product (or more generally a symmetric nondegenerate form) and an orientation; this is less data than an isomorphism to Euclidean space, as it does not require a set of coordinates (a frame of reference), which reflects the fact that vector calculus is invariant under rotations (the special orthogonal group ). More generally, vector calculus can be defined on any 3-dimensional oriented Riemannian manifold, or more generally pseudo-Riemannian manifold. This structure simply means that the tangent space at each point has an inner product (more generally, a symmetric nondegenerate form) and an orientation, or more globally that there is a symmetric nondegenerate metric tensor and an orientation, and works because vector calculus is defined in terms of tangent vectors at each point. ### Other dimensions Most of the analytic results are easily understood, in a more general form, using the machinery of differential geometry, of which vector calculus forms a subset. Grad and div generalize immediately to other dimensions, as do the gradient theorem, divergence theorem, and Laplacian (yielding harmonic analysis), while curl and cross product do not generalize as directly. From a general point of view, the various fields in (3-dimensional) vector calculus are uniformly seen as being -vector fields: scalar fields are 0-vector fields, vector fields are 1-vector fields, pseudovector fields are 2-vector fields, and pseudoscalar fields are 3-vector fields. In higher dimensions there are additional types of fields (scalar, vector, pseudovector or pseudoscalar corresponding to , , or dimensions, which is exhaustive in dimension 3), so one cannot only work with (pseudo)scalars and (pseudo)vectors. 
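Returning to the applications discussed before these generalizations, the tangent-plane formula and the Hessian-eigenvalue test can be sketched numerically. The functions and evaluation points below are arbitrary illustrations, with partial derivatives estimated by central differences.

```python
import numpy as np

# Tangent-plane (linear) approximation of f(x, y) = x * exp(y) near (a, b) = (1, 0).
def f(x, y):
    return x * np.exp(y)

a, b, h = 1.0, 0.0, 1e-6
fx = (f(a + h, b) - f(a - h, b)) / (2 * h)      # ~ df/dx at (a, b)
fy = (f(a, b + h) - f(a, b - h)) / (2 * h)      # ~ df/dy at (a, b)

def tangent_plane(x, y):
    return f(a, b) + fx * (x - a) + fy * (y - b)

print(f(1.05, 0.1), tangent_plane(1.05, 0.1))   # close for points near (1, 0)

# Classifying a critical point by the eigenvalues of the Hessian:
# g(x, y) = x**2 - y**2 has zero gradient at the origin, and its (constant)
# Hessian has eigenvalues of both signs, so the origin is a saddle point.
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(np.linalg.eigvalsh(H))                    # [-2.  2.] -> saddle point
```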
In any dimension, assuming a nondegenerate form, grad of a scalar function is a vector field, and div of a vector field is a scalar function, but only in dimension 3 or 7 (and, trivially, in dimension 0 or 1) is the curl of a vector field a vector field, and only in 3 or 7 dimensions can a cross product be defined (generalizations in other dimensionalities either require $$ n-1 $$ vectors to yield 1 vector, or are alternative Lie algebras, which are more general antisymmetric bilinear products). The generalization of grad and div, and how curl may be generalized is elaborated at Curl § Generalizations; in brief, the curl of a vector field is a bivector field, which may be interpreted as the special orthogonal Lie algebra of infinitesimal rotations; however, this cannot be identified with a vector field because the dimensions differ – there are 3 dimensions of rotations in 3 dimensions, but 6 dimensions of rotations in 4 dimensions (and more generally $$ \textstyle{\binom{n}{2}=\frac{1}{2}n(n-1)} $$ dimensions of rotations in dimensions). There are two important alternative generalizations of vector calculus. The first, geometric algebra, uses -vector fields instead of vector fields (in 3 or fewer dimensions, every -vector field can be identified with a scalar function or vector field, but this is not true in higher dimensions). This replaces the cross product, which is specific to 3 dimensions, taking in two vector fields and giving as output a vector field, with the exterior product, which exists in all dimensions and takes in two vector fields, giving as output a bivector (2-vector) field. This product yields Clifford algebras as the algebraic structure on vector spaces (with an orientation and nondegenerate form). Geometric algebra is mostly used in generalizations of physics and other applied fields to higher dimensions. The second generalization uses differential forms (-covector fields) instead of vector fields or -vector fields, and is widely used in mathematics, particularly in differential geometry, geometric topology, and harmonic analysis, in particular yielding Hodge theory on oriented pseudo-Riemannian manifolds. From this point of view, grad, curl, and div correspond to the exterior derivative of 0-forms, 1-forms, and 2-forms, respectively, and the key theorems of vector calculus are all special cases of the general form of Stokes' theorem. From the point of view of both of these generalizations, vector calculus implicitly identifies mathematically distinct objects, which makes the presentation simpler but the underlying mathematical structure and generalizations less clear. From the point of view of geometric algebra, vector calculus implicitly identifies -vector fields with vector fields or scalar functions: 0-vectors and 3-vectors with scalars, 1-vectors and 2-vectors with vectors. From the point of view of differential forms, vector calculus implicitly identifies -forms with scalar fields or vector fields: 0-forms and 3-forms with scalar fields, 1-forms and 2-forms with vector fields. Thus for example the curl naturally takes as input a vector field or 1-form, but naturally has as output a 2-vector field or 2-form (hence pseudovector field), which is then interpreted as a vector field, rather than directly taking a vector field to a vector field; this is reflected in the curl of a vector field in higher dimensions not having as output a vector field.
https://en.wikipedia.org/wiki/Vector_calculus
In computability theory, the Church–Turing thesis (also known as computability thesis, the Turing–Church thesis, the Church–Turing conjecture, Church's thesis, Church's conjecture, and Turing's thesis) is a thesis about the nature of computable functions. It states that a function on the natural numbers can be calculated by an effective method if and only if it is computable by a Turing machine. The thesis is named after American mathematician Alonzo Church and the British mathematician Alan Turing. Before the precise definition of computable function, mathematicians often used the informal term effectively calculable to describe functions that are computable by paper-and-pencil methods. In the 1930s, several independent attempts were made to formalize the notion of computability: - In 1933, Kurt Gödel, with Jacques Herbrand, formalized the definition of the class of general recursive functions: the smallest class of functions (with arbitrarily many arguments) that is closed under composition, recursion, and minimization, and includes zero, successor, and all projections. - In 1936, Alonzo Church created a method for defining functions called the λ-calculus. Within λ-calculus, he defined an encoding of the natural numbers called the Church numerals. A function on the natural numbers is called λ-computable if the corresponding function on the Church numerals can be represented by a term of the λ-calculus. - Also in 1936, before learning of Church's work, Alan Turing created a theoretical model for machines, now called Turing machines, that could carry out calculations from inputs by manipulating symbols on a tape. Given a suitable encoding of the natural numbers as sequences of symbols, a function on the natural numbers is called Turing computable if some Turing machine computes the corresponding function on encoded natural numbers. Church, Kleene, and Turing proved that these three formally defined classes of computable functions coincide: a function is λ-computable if and only if it is Turing computable, and if and only if it is general recursive. This has led mathematicians and computer scientists to believe that the concept of computability is accurately characterized by these three equivalent processes. Other formal attempts to characterize computability have subsequently strengthened this belief (see below). On the other hand, the Church–Turing thesis states that the above three formally-defined classes of computable functions coincide with the informal notion of an effectively calculable function. Although the thesis has near-universal acceptance, it cannot be formally proven, as the concept of effective calculability is only informally defined. Since its inception, variations on the original thesis have arisen, including statements about what can physically be realized by a computer in our universe (physical Church-Turing thesis) and what can be efficiently computed (Church–Turing thesis (complexity theory)). These variations are not due to Church or Turing, but arise from later work in complexity theory and digital physics. The thesis also has implications for the philosophy of mind (see below). ## Statement in Church's and Turing's words addresses the notion of "effective computability" as follows: "Clearly the existence of CC and RC (Church's and Rosser's proofs) presupposes a precise definition of 'effective'. 
'Effective method' is here used in the rather special sense of a method each step of which is precisely predetermined and which is certain to produce the answer in a finite number of steps". Thus the adverb-adjective "effective" is used in a sense of "1a: producing a decided, decisive, or desired effect", and "capable of producing a result". (See also dictionary treatments of "effective", which give the first ["producing a decided, decisive, or desired effect"] as the definition for sense "1a" of the word "effective", and the second ["capable of producing a result"] as part of the "Synonym Discussion of EFFECTIVE" there, in the introductory part, where it summarizes the similarities between the meanings of the words "effective", "effectual", "efficient", and "efficacious".) In the following, the words "effectively calculable" will mean "produced by any intuitively 'effective' means whatsoever" and "effectively computable" will mean "produced by a Turing-machine or equivalent mechanical device". Turing's "definitions" given in a footnote in his 1938 Ph.D. thesis Systems of Logic Based on Ordinals, supervised by Church, are virtually the same: The thesis can be stated as: Every effectively calculable function is a computable function. Church also stated that "No computational procedure will be considered as an algorithm unless it can be represented as a Turing Machine". Turing stated it this way:

## History

One of the important problems for logicians in the 1930s was the Entscheidungsproblem of David Hilbert and Wilhelm Ackermann, which asked whether there was a mechanical procedure for separating mathematical truths from mathematical falsehoods. This quest required that the notion of "algorithm" or "effective calculability" be pinned down, at least well enough for the quest to begin. But from the very outset Alonzo Church's attempts began with a debate that continues to this day: was the notion of "effective calculability" to be (i) an "axiom or axioms" in an axiomatic system, (ii) merely a definition that "identified" two or more propositions, (iii) an empirical hypothesis to be verified by observation of natural events, or (iv) just a proposal for the sake of argument (i.e. a "thesis")?

### Circa 1930–1952

In the course of studying the problem, Church and his student Stephen Kleene introduced the notion of λ-definable functions, and they were able to prove that several large classes of functions frequently encountered in number theory were λ-definable. The debate began when Church proposed to Gödel that one should define the "effectively computable" functions as the λ-definable functions. Gödel, however, was not convinced and called the proposal "thoroughly unsatisfactory". Rather, in correspondence with Church (c. 1934–1935), Gödel proposed axiomatizing the notion of "effective calculability"; indeed, in a 1935 letter to Kleene, Church reported that: But Gödel offered no further guidance. Eventually, he would suggest his recursion, modified by Herbrand's suggestion, that Gödel had detailed in his 1934 lectures in Princeton, New Jersey (Kleene and Rosser transcribed the notes). But he did not think that the two ideas could be satisfactorily identified "except heuristically". Next, it was necessary to identify and prove the equivalence of two notions of effective calculability. Equipped with the λ-calculus and "general" recursion, Kleene, with the help of Church and J. Barkley Rosser, produced proofs (1933, 1935) to show that the two calculi are equivalent.
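As a concrete, if informal, illustration of the λ-definability idea discussed above, the Church-numeral encoding can be written directly with Python lambdas. This is only a sketch of the encoding, not a formal treatment of the λ-calculus.

```python
# Church numerals: the number n is encoded as "apply f to x, n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
times = lambda m: lambda n: lambda f: m(n(f))

def to_int(numeral):
    """Decode a Church numeral back into an ordinary Python integer."""
    return numeral(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
print(to_int(plus(two)(one)))          # 3
print(to_int(times(two)(succ(two))))   # 6
```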
Church subsequently modified his methods to include use of Herbrand–Gödel recursion and then proved (1936) that the Entscheidungsproblem is unsolvable: there is no algorithm that can determine whether a well-formed formula has a beta normal form. Many years later in a letter to Davis (c. 1965), Gödel said that "he was, at the time of these [1934] lectures, not at all convinced that his concept of recursion comprised all possible recursions". By 1963–1964 Gödel would disavow Herbrand–Gödel recursion and the λ-calculus in favor of the Turing machine as the definition of "algorithm" or "mechanical procedure" or "formal system".

A hypothesis leading to a natural law?: In late 1936 Alan Turing's paper (also proving that the Entscheidungsproblem is unsolvable) was delivered orally, but had not yet appeared in print. On the other hand, Emil Post's 1936 paper had appeared and was certified independent of Turing's work. Post strongly disagreed with Church's "identification" of effective computability with the λ-calculus and recursion, stating: Rather, he regarded the notion of "effective calculability" as merely a "working hypothesis" that might lead by inductive reasoning to a "natural law" rather than by "a definition or an axiom". This idea was "sharply" criticized by Church. Thus Post in his 1936 paper was also discounting Gödel's suggestion to Church in 1934–1935 that the thesis might be expressed as an axiom or set of axioms.

Turing adds another definition, Rosser equates all three: Within just a short time, Turing's 1936–1937 paper "On Computable Numbers, with an Application to the Entscheidungsproblem" appeared. In it he stated another notion of "effective computability" with the introduction of his a-machines (now known as the Turing machine abstract computational model). In a proof-sketch added as an appendix to his 1936–1937 paper, Turing showed that the classes of functions defined by λ-calculus and Turing machines coincided. Church was quick to recognise how compelling Turing's analysis was. In his review of Turing's paper he made clear that Turing's notion made "the identification with effectiveness in the ordinary (not explicitly defined) sense evident immediately". In a few years (1939) Turing would propose, like Church and Kleene before him, that his formal definition of a mechanical computing agent was the correct one. Thus, by 1939, both Church (1934) and Turing (1939) had individually proposed that their "formal systems" should be definitions of "effective calculability"; neither framed their statements as theses.

Rosser (1939) formally identified the three notions-as-definitions: Kleene proposes Thesis I: This left the overt expression of a "thesis" to Kleene. In 1943 Kleene proposed his "Thesis I": The Church–Turing Thesis: Stephen Kleene, in Introduction to Metamathematics, finally goes on to formally name "Church's Thesis" and "Turing's Thesis", using his theory of recursive realizability, having switched from presenting his work in the terminology of Church–Kleene lambda definability to that of Gödel–Kleene recursiveness (partial recursive functions). In this transition, Kleene modified Gödel's general recursive functions to allow for proofs of the unsolvability of problems in the intuitionism of E. J. Brouwer. In his graduate textbook on logic, "Church's thesis" is introduced and basic mathematical results are demonstrated to be unrealizable. Next, Kleene proceeds to present "Turing's thesis", where results are shown to be uncomputable, using his simplified derivation of a Turing machine based on the work of Emil Post.
Both theses are proven equivalent by use of "Theorem XXX". Kleene, finally, uses for the first time the term the "Church-Turing thesis" in a section in which he helps to give clarifications to concepts in Alan Turing's paper "The Word Problem in Semi-Groups with Cancellation", as demanded in a critique from William Boone. ### Later developments An attempt to better understand the notion of "effective computability" led Robin Gandy (Turing's student and friend) in 1980 to analyze machine computation (as opposed to human-computation acted out by a Turing machine). Gandy's curiosity about, and analysis of, cellular automata (including Conway's game of life), parallelism, and crystalline automata, led him to propose four "principles (or constraints) ... which it is argued, any machine must satisfy". His most-important fourth, "the principle of causality" is based on the "finite velocity of propagation of effects and signals; contemporary physics rejects the possibility of instantaneous action at a distance". From these principles and some additional constraints—(1a) a lower bound on the linear dimensions of any of the parts, (1b) an upper bound on speed of propagation (the velocity of light), (2) discrete progress of the machine, and (3) deterministic behavior—he produces a theorem that "What can be calculated by a device satisfying principles I–IV is computable." In the late 1990s Wilfried Sieg analyzed Turing's and Gandy's notions of "effective calculability" with the intent of "sharpening the informal notion, formulating its general features axiomatically, and investigating the axiomatic framework". In his 1997 and 2002 work Sieg presents a series of constraints on the behavior of a computor—"a human computing agent who proceeds mechanically". These constraints reduce to: - "(B.1) (Boundedness) There is a fixed bound on the number of symbolic configurations a computor can immediately recognize. - "(B.2) (Boundedness) There is a fixed bound on the number of internal states a computor can be in. - "(L.1) (Locality) A computor can change only elements of an observed symbolic configuration. - "(L.2) (Locality) A computor can shift attention from one symbolic configuration to another one, but the new observed configurations must be within a bounded distance of the immediately previously observed configuration. - "(D) (Determinacy) The immediately recognizable (sub-)configuration determines uniquely the next computation step (and id [instantaneous description])"; stated another way: "A computor's internal state together with the observed configuration fixes uniquely the next computation step and the next internal state." The matter remains in active discussion within the academic community.See also ### The thesis as a definition The thesis can be viewed as nothing but an ordinary mathematical definition. Comments by Gödel on the subject suggest this view, e.g. "the correct definition of mechanical computability was established beyond any doubt by Turing". The case for viewing the thesis as nothing more than a definition is made explicitly by Robert I. Soare, where it is also argued that Turing's definition of computability is no less likely to be correct than the epsilon-delta definition of a continuous function. ## Success of the thesis Other formalisms (besides recursion, the λ-calculus, and the Turing machine) have been proposed for describing effective calculability/computability. 
Kleene (1952) adds to the list the functions "reckonable in the system S1" of Kurt Gödel 1936, and Emil Post's (1943, 1946) "canonical [also called normal] systems". In the 1950s Hao Wang and Martin Davis greatly simplified the one-tape Turing-machine model (see Post–Turing machine). Marvin Minsky expanded the model to two or more tapes and greatly simplified the tapes into "up-down counters", which Melzak and Lambek further evolved into what is now known as the counter machine model. In the late 1960s and early 1970s researchers expanded the counter machine model into the register machine, a close cousin to the modern notion of the computer. Other models include combinatory logic and Markov algorithms. Gurevich adds the pointer machine model of Kolmogorov and Uspensky (1953, 1958): "... they just wanted to ... convince themselves that there is no way to extend the notion of computable function." All these contributions involve proofs that the models are computationally equivalent to the Turing machine; such models are said to be Turing complete. Because all these different attempts at formalizing the concept of "effective calculability/computability" have yielded equivalent results, it is now generally assumed that the Church–Turing thesis is correct. In fact, Gödel (1936) proposed something stronger than this; he observed that there was something "absolute" about the concept of "reckonable in S1": ## Informal usage in proofs Proofs in computability theory often invoke the Church–Turing thesis in an informal way to establish the computability of functions while avoiding the (often very long) details which would be involved in a rigorous, formal proof. To establish that a function is computable by Turing machine, it is usually considered sufficient to give an informal English description of how the function can be effectively computed, and then conclude "by the Church–Turing thesis" that the function is Turing computable (equivalently, partial recursive). Dirk van Dalen gives the following example for the sake of illustrating this informal use of the Church–Turing thesis: In order to make the above example completely rigorous, one would have to carefully construct a Turing machine, or λ-function, or carefully invoke recursion axioms, or at best, cleverly invoke various theorems of computability theory. But because the computability theorist believes that Turing computability correctly captures what can be computed effectively, and because an effective procedure is spelled out in English for deciding the set B, the computability theorist accepts this as proof that the set is indeed recursive. ## Variations The success of the Church–Turing thesis prompted variations of the thesis to be proposed. For example, the physical Church–Turing thesis states: "All physically computable functions are Turing-computable." The Church–Turing thesis says nothing about the efficiency with which one model of computation can simulate another. It has been proved for instance that a (multi-tape) universal Turing machine only suffers a logarithmic slowdown factor in simulating any Turing machine. A variation of the Church–Turing thesis addresses whether an arbitrary but "reasonable" model of computation can be efficiently simulated. This is called the feasibility thesis, also known as the (classical) complexity-theoretic Church–Turing thesis or the extended Church–Turing thesis, which is not due to Church or Turing, but rather was realized gradually in the development of complexity theory. 
It states: "A probabilistic Turing machine can efficiently simulate any realistic model of computation." The word 'efficiently' here means up to polynomial-time reductions. This thesis was originally called computational complexity-theoretic Church–Turing thesis by Ethan Bernstein and Umesh Vazirani (1997). The complexity-theoretic Church–Turing thesis, then, posits that all 'reasonable' models of computation yield the same class of problems that can be computed in polynomial time. Assuming the conjecture that probabilistic polynomial time (BPP) equals deterministic polynomial time (P), the word 'probabilistic' is optional in the complexity-theoretic Church–Turing thesis. A similar thesis, called the invariance thesis, was introduced by Cees F. Slot and Peter van Emde Boas. It states: Reasonable' machines can simulate each other within a polynomially bounded overhead in time and a constant-factor overhead in space." The thesis originally appeared in a paper at STOC'84, which was the first paper to show that polynomial-time overhead and constant-space overhead could be simultaneously achieved for a simulation of a Random Access Machine on a Turing machine. If BQP is shown to be a strict superset of BPP, it would invalidate the complexity-theoretic Church–Turing thesis. In other words, there would be efficient quantum algorithms that perform tasks that do not have efficient probabilistic algorithms. This would not however invalidate the original Church–Turing thesis, since a quantum computer can always be simulated by a Turing machine, but it would invalidate the classical complexity-theoretic Church–Turing thesis for efficiency reasons. Consequently, the quantum complexity-theoretic Church–Turing thesis states: "A quantum Turing machine can efficiently simulate any realistic model of computation." Eugene Eberbach and Peter Wegner claim that the Church–Turing thesis is sometimes interpreted too broadly, stating "Though [...] Turing machines express the behavior of algorithms, the broader assertion that algorithms precisely capture what can be computed is invalid". They claim that forms of computation not captured by the thesis are relevant today, terms which they call super-Turing computation. ## Philosophical implications Philosophers have interpreted the Church–Turing thesis as having implications for the philosophy of mind. B. Jack Copeland states that it is an open empirical question whether there are actual deterministic physical processes that, in the long run, elude simulation by a Turing machine; furthermore, he states that it is an open empirical question whether any such processes are involved in the working of the human brain. There are also some important open questions which cover the relationship between the Church–Turing thesis and physics, and the possibility of hypercomputation. When applied to physics, the thesis has several possible meanings: 1. The universe is equivalent to a Turing machine; thus, computing non-recursive functions is physically impossible. This has been termed the strong Church–Turing thesis, or Church–Turing–Deutsch principle, and is a foundation of digital physics. 1. The universe is not equivalent to a Turing machine (i.e., the laws of physics are not Turing-computable), but incomputable physical events are not "harnessable" for the construction of a hypercomputer. For example, a universe in which physics involves random real numbers, as opposed to computable reals, would fall into this category. 1. 
The universe is a hypercomputer, and it is possible to build physical devices to harness this property and calculate non-recursive functions. For example, it is an open question whether all quantum mechanical events are Turing-computable, although it is known that rigorous models such as quantum Turing machines are equivalent to deterministic Turing machines. (They are not necessarily efficiently equivalent; see above.) John Lucas and Roger Penrose have suggested that the human mind might be the result of some kind of quantum-mechanically enhanced, "non-algorithmic" computation (see also Penrose's description of "the non-algorithmic nature of mathematical insight"). There are many other technical possibilities which fall outside or between these three categories, but these serve to illustrate the range of the concept. Philosophical aspects of the thesis, regarding both physical and biological computers, are also discussed in Odifreddi's 1989 textbook on recursion theory. ## Non-computable functions One can formally define functions that are not computable. A well-known example of such a function is the Busy Beaver function. This function takes an input n and returns the largest number of symbols that a Turing machine with n states can print before halting, when run with no input. Finding an upper bound on the busy beaver function is equivalent to solving the halting problem, a problem known to be unsolvable by Turing machines. Since the busy beaver function cannot be computed by Turing machines, the Church–Turing thesis states that this function cannot be effectively computed by any method. Several computational models allow for the computation of (Church–Turing) non-computable functions. These are known as hypercomputers. Mark Burgin argues that super-recursive algorithms such as inductive Turing machines disprove the Church–Turing thesis. His argument relies on a definition of algorithm broader than the ordinary one, so that non-computable functions obtained from some inductive Turing machines are called computable. This interpretation of the Church–Turing thesis differs from the interpretation commonly accepted in computability theory, discussed above. The argument that super-recursive algorithms are indeed algorithms in the sense of the Church–Turing thesis has not found broad acceptance within the computability research community.
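As noted above, a quantum computer — or any other standard model of computation — can always be simulated by a Turing machine, and an informal description of an effective procedure can in principle be compiled into an explicit machine. The following Python sketch is an illustration added here, not material from the article: it simulates a hypothetical single-tape Turing machine whose transition table computes the successor function on unary numerals. The state names, tape alphabet, and step bound are assumptions chosen only for this example.

```python
# A minimal single-tape Turing machine simulator (illustrative sketch only;
# the "succ" machine below is a hypothetical example, not taken from the article).
BLANK = "_"

def run_tm(transitions, tape, state="A", halt_state="H", max_steps=1_000):
    """transitions maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right)."""
    cells = dict(enumerate(tape))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, BLANK)
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += move
    output = "".join(cells[i] for i in sorted(cells)).strip(BLANK)
    return output, state

# Unary successor: scan right over the 1s, write one more 1 at the first blank, halt.
succ = {
    ("A", "1"): ("1", +1, "A"),
    ("A", BLANK): ("1", +1, "H"),
}

print(run_tm(succ, "111"))  # ('1111', 'H'): unary 3 becomes unary 4
```

Simulating any one fixed machine in this way is straightforward; the Busy Beaver argument above shows, by contrast, that no single program can bound the halting behaviour of all n-state machines.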
https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis
In mathematics, engineering, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete: - An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set. - A problem with continuous variables is known as a continuous optimization, in which an optimal value from a continuous function must be found. They can include constrained problems and multimodal problems. ## Search space In the context of an optimization problem, the search space refers to the set of all possible points or solutions that satisfy the problem's constraints, targets, or goals. These points represent the feasible solutions that can be evaluated to find the optimal solution according to the objective function. The search space is often defined by the domain of the function being optimized, encompassing all valid inputs that meet the problem's requirements. The search space can vary significantly in size and complexity depending on the problem. For example, in a continuous optimization problem, the search space might be a multidimensional real-valued domain defined by bounds or constraints. In a discrete optimization problem, such as combinatorial optimization, the search space could consist of a finite set of permutations, combinations, or configurations. In some contexts, the term search space may also refer to the optimization of the domain itself, such as determining the most appropriate set of variables or parameters to define the problem. Understanding and effectively navigating the search space is crucial for designing efficient algorithms, as it directly influences the computational complexity and the likelihood of finding an optimal solution. ## Continuous optimization problem The standard form of a continuous optimization problem is $$ \begin{align} &\underset{x}{\operatorname{minimize}}& & f(x) \\ &\operatorname{subject\;to} & &g_i(x) \leq 0, \quad i = 1,\dots,m \\ &&&h_j(x) = 0, \quad j = 1, \dots,p \end{align} $$ where - $$ f : \mathbb{R}^n \to \mathbb{R} $$ is the objective function to be minimized over the $$ n $$-variable vector $$ x $$, - $$ g_i(x) \leq 0 $$ are called inequality constraints, - $$ h_j(x) = 0 $$ are called equality constraints, and - $$ m \geq 0 $$ and $$ p \geq 0 $$. If $$ m = p = 0 $$, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function. ## Combinatorial optimization problem Formally, a combinatorial optimization problem is a quadruple $$ (I, f, m, g) $$, where - $$ I $$ is a set of instances; - given an instance $$ x \in I $$, $$ f(x) $$ is the set of feasible solutions; - given an instance $$ x $$ and a feasible solution $$ y $$ of $$ x $$, $$ m(x, y) $$ denotes the measure of $$ y $$, which is usually a positive real. - $$ g $$ is the goal function, and is either $$ \min $$ or $$ \max $$. The goal is then to find for some instance $$ x $$ an optimal solution, that is, a feasible solution $$ y $$ with $$ m(x, y) = g\left\{ m(x, y') : y' \in f(x) \right\}. $$ For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure $$ m_0 $$. For example, if there is a graph $$ G $$ which contains vertices $$ u $$ and $$ v $$, an optimization problem might be "find a path from $$ u $$ to $$ v $$ that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from $$ u $$ to $$ v $$ that uses 10 or fewer edges?"
This problem can be answered with a simple 'yes' or 'no'. In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.
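As an illustration added to this text (the graph below is hypothetical), the following Python sketch contrasts the two formulations from the path example above: breadth-first search answers the optimization version by returning the fewest number of edges on a u–v path, and the decision version simply compares that optimal measure with a threshold k.

```python
from collections import deque

def fewest_edges(graph, u, v):
    """Optimization version: length (in edges) of a shortest u-v path,
    or None if no path exists. `graph` maps a vertex to its neighbours."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nbr in graph.get(node, ()):
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return None

def path_with_at_most(graph, u, v, k):
    """Decision version: is there a u-v path using k or fewer edges?"""
    best = fewest_edges(graph, u, v)
    return best is not None and best <= k

# Hypothetical graph used only for illustration.
g = {"u": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["v"], "v": []}
print(fewest_edges(g, "u", "v"))           # 3   (optimization answer)
print(path_with_at_most(g, "u", "v", 10))  # True (decision answer)
```

In the other direction, an optimization problem can be solved by answering a sequence of such decision questions, for example by searching over the threshold k.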
https://en.wikipedia.org/wiki/Optimization_problem
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are: - Typed λ-calculus of Alonzo Church - Intuitionistic type theory of Per Martin-Löf Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions. ## History Type theory was created to avoid paradoxes in naive set theory and formal logic, such as Russell's paradox which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility, both of which appeared in Whitehead and Russell's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type, thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo–Fraenkel set theory. Type theory is particularly popular in conjunction with Alonzo Church's lambda calculus. One notable early example of type theory is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics and it was referred to as a higher-order logic. In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Rocq (previously known as Coq), Lean, and other computer proof assistants. Type theory is an active area of research, one direction being the development of homotopy type theory. ## Applications ### Mathematical foundations The first computer proof assistant, called Automath, used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all mathematics to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory. Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory. This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS). Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy). ### Proof assistants Much of the current research into type theory is driven by proof checkers, interactive proof assistants, and automated theorem provers.
Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages: - LF is used by Twelf, often to define other type theories; - many type theories which fall under higher-order logic are used by the HOL family of provers and PVS; - computational type theory is used by NuPRL; - calculus of constructions and its derivatives are used by Rocq (previously known as Coq), Matita, and Lean; - UTT (Luo's Unified Theory of dependent Types) is used by Agda, which is both a programming language and a proof assistant. Many type theories are supported by LEGO and Isabelle. Isabelle also supports foundations besides type theories, such as ZFC. Mizar is an example of a proof system that only supports set theory. ### Programming languages Any static program analysis, such as the type checking algorithms in the semantic analysis phase of a compiler, has a connection to type theory. A prime example is Agda, a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system. The programming language ML was developed for manipulating type theories (see LCF) and its own type system was heavily influenced by them. ### Linguistics Type theory is also widely used in formal theories of semantics of natural languages, especially Montague grammar and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types (noun, verb, etc.) of words. The most common construction takes the basic types $$ e $$ and $$ t $$ for individuals and truth-values, respectively, and defines the set of types recursively as follows: - if $$ a $$ and $$ b $$ are types, then so is $$ \langle a,b\rangle $$ ; - nothing except the basic types, and what can be constructed from them by means of the previous clause, are types. A complex type $$ \langle a,b\rangle $$ is the type of functions from entities of type $$ a $$ to entities of type $$ b $$ . Thus one has types like $$ \langle e,t\rangle $$ which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type $$ \langle\langle e,t\rangle,t\rangle $$ is a function from sets of entities to truth-values, i.e. a (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers, like everybody or nobody (Montague 1973, Barwise and Cooper 1981). Type theory with records is a formal semantics representation framework, using records to express type theory types. It has been used in natural language processing, principally computational semantics and dialogue systems (Cooper, Robin (2010), "Type theory and semantics in flux", Handbook of the Philosophy of Science, Volume 14: Philosophy of Linguistics, Elsevier). ### Social sciences Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types. ## Logic A type theory is a mathematical logic, which is to say it is a collection of rules of inference that result in judgments. Most logics have judgments asserting "The proposition $$ \varphi $$ is true", or "The formula $$ \varphi $$ is a well-formed formula". A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as $$ \mathrm{term}:\mathsf{type} $$ .
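As a concrete illustration of the recursive definition of semantic types given above (this sketch is an addition to the text, and the class names are hypothetical), the following Python code builds the basic types $$ e $$ and $$ t $$ and complex types $$ \langle a,b\rangle $$, including the quantifier type $$ \langle\langle e,t\rangle,t\rangle $$.

```python
from dataclasses import dataclass

# Basic semantic types from the Montague-style construction described above.
@dataclass(frozen=True)
class Basic:
    name: str          # "e" (entities) or "t" (truth-values)
    def __str__(self):
        return self.name

@dataclass(frozen=True)
class Func:
    dom: object        # type a in <a,b>
    cod: object        # type b in <a,b>
    def __str__(self):
        return f"<{self.dom},{self.cod}>"

E, T = Basic("e"), Basic("t")

et = Func(E, T)            # <e,t>: indicator functions of sets of entities
quantifier = Func(et, T)   # <<e,t>,t>: the type of "everybody", "nobody"

print(quantifier)          # <<e,t>,t>
```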
### Terms A term in logic is recursively defined as a constant symbol, variable, or a function application, where a term is applied to another term. Constant symbols could include the natural number $$ 0 $$ , the Boolean value $$ \mathrm{true} $$ , and functions such as the successor function $$ \mathrm{S} $$ and conditional operator $$ \mathrm{if} $$ . Thus some terms could be $$ 0 $$ , $$ (\mathrm{S}\,0) $$ , $$ (\mathrm{S}\,(\mathrm{S}\,0)) $$ , and $$ (\mathrm{if}\,\mathrm{true}\,0\,(\mathrm{S}\,0)) $$ . ### Judgments Most type theories have 4 judgments: - " $$ T $$ is a type" - " $$ t $$ is a term of type $$ T $$ " - "Type $$ T_1 $$ is equal to type $$ T_2 $$ " - "Terms $$ t_1 $$ and $$ t_2 $$ both of type $$ T $$ are equal" Judgments may follow from assumptions. For example, one might say "assuming $$ x $$ is a term of type $$ \mathsf{bool} $$ and $$ y $$ is a term of type $$ \mathsf{nat} $$ , it follows that $$ (\mathrm{if}\,x\,y\,y) $$ is a term of type $$ \mathsf{nat} $$ ". Such judgments are formally written with the turnstile symbol $$ \vdash $$ . $$ x:\mathsf{bool},y:\mathsf{nat}\vdash(\textrm{if}\,x\,y\,y): \mathsf{nat} $$ If there are no assumptions, there will be nothing to the left of the turnstile. $$ \vdash \mathrm{S}:\mathsf{nat}\to\mathsf{nat} $$ The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as $$ \Gamma $$ and $$ \Delta $$ , are common choices to represent some or all of the assumptions. The 4 different judgments are thus usually written as follows.

| Formal notation for judgments | Description |
| --- | --- |
| $$ \Gamma\vdash T\ \mathsf{Type} $$ | Type $$ T $$ is a type (under assumptions $$ \Gamma $$). |
| $$ \Gamma\vdash t:T $$ | $$ t $$ is a term of type $$ T $$ (under assumptions $$ \Gamma $$). |
| $$ \Gamma\vdash T_1 = T_2 $$ | Type $$ T_1 $$ is equal to type $$ T_2 $$ (under assumptions $$ \Gamma $$). |
| $$ \Gamma\vdash t_1 = t_2 : T $$ | Terms $$ t_1 $$ and $$ t_2 $$, both of type $$ T $$, are equal (under assumptions $$ \Gamma $$). |

Some textbooks use a triple equal sign $$ \equiv $$ to stress that this is judgmental equality and thus an extrinsic notion of equality. The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term. ### Rules of Inference A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen-style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line. For example, the following inference rule states a substitution rule for judgmental equality. $$ \begin{array}{c} \Gamma\vdash t:T_1 \qquad \Delta\vdash T_1 = T_2 \\ \hline \Gamma,\Delta\vdash t:T_2 \end{array} $$ The rules are syntactic and work by rewriting. The metavariables $$ \Gamma $$ , $$ \Delta $$ , $$ t $$ , $$ T_1 $$ , and $$ T_2 $$ may actually consist of complex terms and types that contain many function applications, not just single symbols. To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree, where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term $$ 0 $$ of type $$ \mathsf{nat} $$ , one would write the following. $$ \begin{array}{c} \hline \vdash 0 : \mathsf{nat} \\ \end{array} $$ #### Type inhabitation Generally, the desired conclusion of a proof in type theory is one of type inhabitation. The decision problem of type inhabitation (abbreviated by $$ \exists t.\Gamma \vdash t : \tau? $$ )
is: Given a context $$ \Gamma $$ and a type $$ \tau $$ , decide whether there exists a term $$ t $$ that can be assigned the type $$ \tau $$ in the type environment $$ \Gamma $$ . Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types. A type theory usually has several rules, including ones to: - create a judgment (known as a context in this case) - add an assumption to the context (context weakening) - rearrange the assumptions - use an assumption to create a variable - define reflexivity, symmetry and transitivity for judgmental equality - define substitution for application of lambda terms - list all the interactions of equality, such as substitution - define a hierarchy of type universes - assert the existence of new types Also, for each "by rule" type, there are 4 different kinds of rules: - "type formation" rules say how to create the type - "term introduction" rules define the canonical terms and constructor functions, like "pair" and "S". - "term elimination" rules define the other functions like "first", "second", and "R". - "computation" rules specify how computation is performed with the type-specific functions. For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book, or read Martin-Löf's Intuitionistic Type Theory. ## Connections to foundations The logical framework of a type theory bears a resemblance to intuitionistic, or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic. Additionally, connections can be made to category theory and computer programs. ### Intuitionistic logic When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic, which is to say it does not have the law of excluded middle nor double negation. Under this intuitionistic interpretation, there are common types that act as the logical operators:

| Logic Name | Logic Notation | Type Notation | Type Name |
| --- | --- | --- | --- |
| True | $$ \top $$ | $$ \top $$ | Unit Type |
| False | $$ \bot $$ | $$ \bot $$ | Empty Type |
| Implication | $$ A \to B $$ | $$ A \to B $$ | Function |
| Not | $$ \neg A $$ | $$ A \to \bot $$ | Function to Empty Type |
| And | $$ A \land B $$ | $$ A \times B $$ | Product Type |
| Or | $$ A \lor B $$ | $$ A + B $$ | Sum Type |
| For All | $$ \forall a.A $$ | $$ \Pi a.A $$ | Dependent Product |
| Exists | $$ \exists a.A $$ | $$ \Sigma a.A $$ | Dependent Sum |

Because the law of excluded middle does not hold, there is no term of type $$ \Pi A.A+ (A\to\bot) $$ . Likewise, double negation does not hold, so there is no term of type $$ \Pi A.((A\to\bot)\to\bot)\to A $$ . It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption. However, terms may not compute down to canonical terms, and this will interfere with the ability to determine whether two terms are judgmentally equal to each other. #### Constructive mathematics Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics. Constructive mathematics requires that when proving "there exists an $$ x $$ with property $$ P(x) $$ ", one must construct a particular $$ x $$ and a proof that it has property $$ P $$ . In type theory, existence is accomplished using the dependent sum type, and its proof requires a term of that type.
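Under the propositions-as-types reading sketched above, writing a program with a given type amounts to exhibiting a proof of the corresponding proposition. The following Python sketch is an illustration added here — Python's tuples and functions only loosely stand in for product and function types, and nothing is mechanically checked — showing terms that witness "A and B implies B and A" and the transitivity of implication.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# A term of type (A x B) -> (B x A): under propositions-as-types, a proof
# that "A and B" implies "B and A".
def swap(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return (b, a)

# A term of type (A -> B) x (B -> C) -> (A -> C): a proof that implication
# is transitive, given by function composition.
def compose(fs: Tuple[Callable[[A], B], Callable[[B], C]]) -> Callable[[A], C]:
    f, g = fs
    return lambda a: g(f(a))

print(swap((True, 42)))                    # (42, True)
print(compose((lambda n: n + 1, str))(41)) # '42'

# By contrast, no term of type A + (A -> Empty) can be written uniformly in A,
# mirroring the absence of the law of excluded middle noted above.
```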
An example of a non-constructive proof is proof by contradiction. The first step is assuming that $$ x $$ does not exist and refuting it by contradiction. The conclusion from that step is "it is not the case that $$ x $$ does not exist". The last step is, by double negation, concluding that $$ x $$ exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that $$ x $$ exists. Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants. It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation. However, these operators tend to break desirable properties such as canonicity and parametricity. ### Curry–Howard correspondence The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A $$ \to $$ B" resembles a function from type "A" to type "B". For a variety of logics, the rules are similar to expressions in a programming language's types. The similarity goes farther, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs". The opposition of terms and types can also be viewed as one of implementation and specification. By program synthesis, (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information. #### Type inference Many programs that work with type theory (e.g., interactive theorem provers) also do type inferencing. It lets them select the rules that the user intends, with fewer actions by the user. ### Research areas #### Category theory Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way: - cartesian closed categories correspond to the typed λ-calculus (Lambek, 1970); - C-monoids (categories with products and exponentials and one non-terminal object) correspond to the untyped λ-calculus (observed independently by Lambek and Dana Scott around 1980); - locally cartesian closed categories correspond to Martin-Löf type theories (Seely, 1984). The interplay, known as categorical logic, has been a subject of active research since then; see the monograph of Jacobs (1999) for instance. #### Homotopy type theory Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization. ## Definitions ### Terms and types #### Atomic terms The most basic types are called atoms, and a term whose type is an atom is known as an atomic term.
Common atomic terms included in type theories are natural numbers, often notated with the type $$ \mathsf{nat} $$ , Boolean logic values ( $$ \mathrm{true} $$ / $$ \mathrm{false} $$ ), notated with the type $$ \mathsf{bool} $$ , and formal variables, whose type may vary. For example, the following may be atomic terms. - $$ 42:\mathsf{nat} $$ - $$ \mathrm{true}:\mathsf{bool} $$ - $$ x:\mathsf{nat} $$ - $$ y:\mathsf{bool} $$ #### Function terms In addition to atomic terms, most modern type theories also allow for functions. Function types introduce an arrow symbol, and are defined inductively: If $$ \sigma $$ and $$ \tau $$ are types, then the notation $$ \sigma\to\tau $$ is the type of a function which takes a parameter of type $$ \sigma $$ and returns a term of type $$ \tau $$ . Types of this form are known as simple types. Some terms may be declared directly as having a simple type, such as the following term, $$ \mathrm{add} $$ , which takes in two natural numbers in sequence and returns one natural number. $$ \mathrm{add}:\mathsf{nat}\to (\mathsf{nat}\to\mathsf{nat}) $$ Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that $$ \mathrm{add} $$ is a function which takes in a natural number and returns a function of the form $$ \mathsf{nat}\to\mathsf{nat} $$ . The parentheses clarify that $$ \mathrm{add} $$ does not have the type $$ (\mathsf{nat}\to \mathsf{nat})\to\mathsf{nat} $$ , which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative, so the parentheses may be dropped from $$ \mathrm{add} $$ 's type. #### Lambda terms New function terms may be constructed using lambda expressions, and are called lambda terms. These terms are also defined inductively: a lambda term has the form $$ (\lambda v .t) $$ , where $$ v $$ is a formal variable and $$ t $$ is a term, and its type is notated $$ \sigma\to\tau $$ , where $$ \sigma $$ is the type of $$ v $$ , and $$ \tau $$ is the type of $$ t $$ . The following lambda term represents a function which doubles an input natural number. $$ (\lambda x.\mathrm{add}\,x\,x): \mathsf{nat}\to\mathsf{nat} $$ The variable is $$ x $$ and (implicit from the lambda term's type) must have type $$ \mathsf{nat} $$ . The term $$ \mathrm{add}\,x\,x $$ has type $$ \mathsf{nat} $$ , which is seen by applying the function application inference rule twice. Thus, the lambda term has type $$ \mathsf{nat}\to\mathsf{nat} $$ , which means it is a function taking a natural number as an argument and returning a natural number. A lambda term is often referred to as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages. ### Inference Rules #### Function application The power of type theories is in specifying how terms may be combined by way of inference rules. Type theories which have functions also have the inference rule of function application: if $$ t $$ is a term of type $$ \sigma\to\tau $$ , and $$ s $$ is a term of type $$ \sigma $$ , then the application of $$ t $$ to $$ s $$ , often written $$ (t\,s) $$ , has type $$ \tau $$ . For example, if one knows the type notations $$ 0:\textsf{nat} $$ , $$ 1:\textsf{nat} $$ , and $$ 2:\textsf{nat} $$ , then the following type notations can be deduced from function application. 
- $$ (\mathrm{add}\,1): \textsf{nat}\to\textsf{nat} $$ - $$ ((\mathrm{add}\,2)\,0): \textsf{nat} $$ - $$ ((\mathrm{add}\,1)((\mathrm{add}\,2)\,0)): \textsf{nat} $$ Parentheses indicate the order of operations; however, by convention, function application is left associative, so parentheses can be dropped where appropriate. In the case of the three examples above, all parentheses could be omitted from the first two, and the third may simplified to $$ \mathrm{add}\,1\, (\mathrm{add}\,2\,0): \textsf{nat} $$ . #### Reductions Type theories that allow for lambda terms also include inference rules known as $$ \beta $$ -reduction and $$ \eta $$ -reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written - $$ (\lambda v. t)\,s\rightarrow t[v \colon= s] $$ ( $$ \beta $$ -reduction). - $$ (\lambda v. t\, v)\rightarrow t $$ , if $$ v $$ is not a free variable in $$ t $$ ( $$ \eta $$ -reduction). The first reduction describes how to evaluate a lambda term: if a lambda expression $$ (\lambda v .t) $$ is applied to a term $$ s $$ , one replaces every occurrence of $$ v $$ in $$ t $$ with $$ s $$ . The second reduction makes explicit the relationship between lambda expressions and function types: if $$ (\lambda v. t\, v) $$ is a lambda term, then it must be that $$ t $$ is a function term because it is being applied to $$ v $$ . Therefore, the lambda expression is equivalent to just $$ t $$ , as both take in one argument and apply $$ t $$ to it. For example, the following term may be $$ \beta $$ -reduced. $$ (\lambda x.\mathrm{add}\,x\,x)\,2\rightarrow \mathrm{add}\,2\,2 $$ In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of $$ \beta $$ -equality and $$ \eta $$ -equality. ### Common terms and types #### Empty type The empty type has no terms. The type is usually written $$ \bot $$ or $$ \mathbb 0 $$ . One use for the empty type is proofs of type inhabitation. If for a type $$ a $$ , it is consistent to derive a function of type $$ a\to\bot $$ , then $$ a $$ is uninhabited, which is to say it has no terms. #### Unit type The unit type has exactly 1 canonical term. The type is written $$ \top $$ or $$ \mathbb 1 $$ and the single canonical term is written $$ \ast $$ . The unit type is also used in proofs of type inhabitation. If for a type $$ a $$ , it is consistent to derive a function of type $$ \top\to a $$ , then $$ a $$ is inhabited, which is to say it must have one or more terms. #### Boolean type The Boolean type has exactly 2 canonical terms. The type is usually written $$ \textsf{bool} $$ or $$ \mathbb B $$ or $$ \mathbb 2 $$ . The canonical terms are usually $$ \mathrm{true} $$ and $$ \mathrm{false} $$ . #### Natural numbers Natural numbers are usually implemented in the style of Peano Arithmetic. There is a canonical term $$ 0:\mathsf{nat} $$ for zero. Canonical values larger than zero use iterated applications of a successor function $$ \mathrm{S}:\mathsf{nat}\to\mathsf{nat} $$ . ### Type constructors Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of its arguments; these are called type constructors. For example, a type theory could have the dependent type $$ \mathsf{list}\,a $$ , which should correspond to lists of terms, where each term must have type $$ a $$ . In this case, $$ \mathsf{list} $$ has the kind $$ U\to U $$ , where $$ U $$ denotes the universe of all types in the theory. 
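The β-reduction rule above can be animated with a small interpreter. The following Python sketch is an illustration added here (not the article's formalism): it represents λ-terms with three dataclasses and performs one reduction step by naive substitution, assuming no variable capture occurs — which holds for the doubling example $$ (\lambda x.\mathrm{add}\,x\,x)\,2 $$.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def substitute(term, name, value):
    """Naive substitution t[name := value]; assumes no variable capture."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.var == name:      # the binder shadows `name`
            return term
        return Lam(term.var, substitute(term.body, name, value))
    return App(substitute(term.fn, name, value), substitute(term.arg, name, value))

def beta_step(term):
    """One beta-reduction step at the root: (lambda v. t) s  ->  t[v := s]."""
    if isinstance(term, App) and isinstance(term.fn, Lam):
        return substitute(term.fn.body, term.fn.var, term.arg)
    return term

# (lambda x. add x x) 2  ->  add 2 2
add, two = Var("add"), Var("2")
doubled = App(Lam("x", App(App(add, Var("x")), Var("x"))), two)
print(beta_step(doubled))
# App(fn=App(fn=Var(name='add'), arg=Var(name='2')), arg=Var(name='2'))
```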
#### Product type The product type, $$ \times $$ , depends on two types, and its terms are commonly written as ordered pairs $$ (s,t) $$ . The pair $$ (s,t) $$ has the product type $$ \sigma\times\tau $$ , where $$ \sigma $$ is the type of $$ s $$ and $$ \tau $$ is the type of $$ t $$ . Each product type is then usually defined with eliminator functions $$ \mathrm{first}:\sigma\times\tau\to\sigma $$ and $$ \mathrm{second}:\sigma\times\tau\to\tau $$ . - $$ \mathrm{first}\,(s,t) $$ returns $$ s $$ , and - $$ \mathrm{second}\,(s,t) $$ returns $$ t $$ . Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection. #### Sum type The sum type is written as either $$ + $$ or $$ \sqcup $$ . In programming languages, sum types may be referred to as tagged unions. Each type $$ \sigma\sqcup\tau $$ is usually defined with constructors $$ \mathrm{left}:\sigma\to(\sigma\sqcup\tau) $$ and $$ \mathrm{right}:\tau\to(\sigma\sqcup\tau) $$ , which are injective, and an eliminator function $$ \mathrm{match}:(\sigma\to\rho)\to(\tau\to\rho)\to(\sigma\sqcup\tau)\to\rho $$ such that - $$ \mathrm{match}\,f\,g\,(\mathrm{left}\,x) $$ returns $$ f\,x $$ , and - $$ \mathrm{match}\,f\,g\,(\mathrm{right}\,y) $$ returns $$ g\,y $$ . The sum type is used for the concepts of logical disjunction and union. ### Polymorphic types Some theories also allow terms to have their definitions depend on types. For instance, an identity function of any type could be written as $$ \lambda x.x:\forall\alpha. \alpha\to\alpha $$ . The function is said to be polymorphic in $$ \alpha $$ , or generic in $$ x $$ . As another example, consider a function $$ \mathrm{append} $$ , which takes in a $$ \mathsf{list}\,a $$ and a term of type $$ a $$ , and returns the list with the element at the end. The type annotation of such a function would be $$ \mathrm{append}:\forall\,a.\mathsf{list}\,a\to a\to\mathsf{list}\,a $$ , which can be read as "for any type $$ a $$ , pass in a $$ \mathsf{list}\,a $$ and an $$ a $$ , and return a $$ \mathsf{list}\,a $$ ". Here $$ \mathrm{append} $$ is polymorphic in $$ a $$ . #### Products and sums With polymorphism, the eliminator functions can be defined generically for all product types as $$ \mathrm{first}:\forall\,\sigma\,\tau.\sigma\times\tau\to\sigma $$ and $$ \mathrm{second}:\forall\,\sigma\,\tau.\sigma\times\tau\to\tau $$ . - $$ \mathrm{first}\,(s,t) $$ returns $$ s $$ , and - $$ \mathrm{second}\,(s,t) $$ returns $$ t $$ . Likewise, the sum type constructors can be defined for all valid types of sum members as $$ \mathrm{left}:\forall\,\sigma\,\tau.\sigma\to(\sigma\sqcup\tau) $$ and $$ \mathrm{right}:\forall\,\sigma\,\tau.\tau\to(\sigma\sqcup\tau) $$ , which are injective, and the eliminator function can be given as $$ \mathrm{match}:\forall\,\sigma\,\tau\,\rho.(\sigma\to\rho)\to(\tau\to\rho)\to(\sigma\sqcup\tau)\to\rho $$ such that - $$ \mathrm{match}\,f\,g\,(\mathrm{left}\,x) $$ returns $$ f\,x $$ , and - $$ \mathrm{match}\,f\,g\,(\mathrm{right}\,y) $$ returns $$ g\,y $$ . ### Dependent typing Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type $$ \mathsf{vector}\,n $$ , where $$ n $$ is a term of type $$ \mathsf{nat} $$ encoding the length of the vector. This allows for greater specificity and type safety: functions with vector length restrictions or length matching requirements, such as the dot product, can encode this requirement as part of the type. 
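The eliminators just described for product and sum types can be sketched directly. In the following illustrative Python code (an addition to the text; plain tuples and tagged pairs merely stand in for the type-theoretic constructions), $$ \mathrm{first} $$ and $$ \mathrm{second} $$ project out of a pair, while $$ \mathrm{match} $$ dispatches on whether a value was built with $$ \mathrm{left} $$ or $$ \mathrm{right} $$.

```python
from typing import Callable, Tuple, TypeVar

S = TypeVar("S")
T = TypeVar("T")
R = TypeVar("R")

# Product type eliminators: a pair (s, t) plays the role of sigma x tau.
def first(pair: Tuple[S, T]) -> S:
    return pair[0]

def second(pair: Tuple[S, T]) -> T:
    return pair[1]

# Sum type: a tagged value ("left", x) or ("right", y) plays the role of S + T.
def left(x: S) -> Tuple[str, S]:
    return ("left", x)

def right(y: T) -> Tuple[str, T]:
    return ("right", y)

def match(f: Callable[[S], R], g: Callable[[T], R], v) -> R:
    tag, payload = v
    return f(payload) if tag == "left" else g(payload)

print(first((1, "a")))                            # 1
print(match(lambda n: n + 1, len, left(41)))      # 42, since the value is a "left"
print(match(lambda n: n + 1, len, right("abc")))  # 3, since the value is a "right"
```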
There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's Paradox. The logician Henk Barendegt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing. #### Dependent products and sums Two common type dependencies, dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification; this is formalized by Curry–Howard Correspondence. As they also connect to products and sums in set theory, they are often written with the symbols $$ \Pi $$ and $$ \Sigma $$ , respectively. Sum types are seen in dependent pairs, where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function $$ \mathrm{if} $$ , which takes three arguments and behaves as follows. - $$ \mathrm{if}\,\mathrm{true}\,x\,y $$ returns $$ x $$ , and - $$ \mathrm{if}\,\mathrm{false}\,x\,y $$ returns $$ y $$ . Ordinary definitions of $$ \mathrm{if} $$ require $$ x $$ and $$ y $$ to have the same type. If the type theory allows for dependent types, then it is possible to define a dependent type $$ x:\mathsf{bool}\,\vdash\,\mathrm{TF}\,x:U\to U\to U $$ such that - $$ \mathrm{TF}\,\mathrm{true}\,\sigma\,\tau $$ returns $$ \sigma $$ , and - $$ \mathrm{TF}\,\mathrm{false}\,\sigma\,\tau $$ returns $$ \tau $$ . The type of $$ \mathrm{if} $$ may then be written as $$ \forall\,\sigma\,\tau.\Pi_{x:\mathsf{bool}}.\sigma\to\tau\to\mathrm{TF}\,x\,\sigma\,\tau $$ . #### Identity type Following the notion of Curry-Howard Correspondence, the identity type is a type introduced to mirror propositional equivalence, as opposed to the judgmental (syntactic) equivalence that type theory already provides. An identity type requires two terms of the same type and is written with the symbol $$ = $$ . For example, if $$ x+1 $$ and $$ 1+x $$ are terms, then $$ x+1=1+x $$ is a possible type. Canonical terms are created with a reflexivity function, $$ \mathrm{refl} $$ . For a term $$ t $$ , the call $$ \mathrm{refl}\,t $$ returns the canonical term inhabiting the type $$ t=t $$ . The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory. #### Inductive types Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction. A method that only uses lambda terms is Scott encoding. Some proof assistants, such as Rocq (previously known as Coq) and Lean, are based on the calculus for inductive constructions, which is a calculus of constructions with inductive types. ## Differences from set theory The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice, abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches. - Set theory has both rules and axioms, while type theories only have rules. 
Type theories, in general, do not have axioms and are defined by their rules of inference. - Classical set theory and logic have the law of excluded middle. When a type theory encodes the concepts of "and" and "or" as types, it leads to intuitionistic logic, and does not necessarily have the law of excluded middle. - In set theory, an element is not restricted to one set. The element can appear in subsets and unions with other sets. In type theory, terms (generally) belong to only one type. Where a subset would be used, type theory can use a predicate function or use a dependently-typed product type, where each element $$ x $$ is paired with a proof that the subset's property holds for $$ x $$ . Where a union would be used, type theory uses the sum type, which contains new canonical terms. - Type theory has a built-in notion of computation. Thus, "1+1" and "2" are different terms in type theory, but they compute to the same value. Moreover, functions are defined computationally as lambda terms. In set theory, "1+1=2" means that "1+1" is just another way to refer to the value "2". Type theory's computation does require a complicated concept of equality. - Set theory encodes numbers as sets. Type theory can encode numbers as functions using Church encoding, or more naturally as inductive types, and the construction closely resembles Peano's axioms. - In type theory, proofs are types whereas in set theory, proofs are part of the underlying first-order logic. Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation, its connection to logic by the Curry–Howard isomorphism, and its connections to category theory. ### Properties of type theories Terms usually belong to a single type. However, there are type theories that define "subtyping". Computation takes place by repeated application of rules. Many type theories are strongly normalizing, which means that any order of applying the rules will always end in the same result. However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule". Some combinations of types are equivalent to other combinations of types. When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities. Thus, $$ {\mathbb 0} + A \cong A $$ , $$ {\mathbb 1} \times A \cong A $$ , $$ {\mathbb 1} + {\mathbb 1} \cong {\mathbb 2} $$ , $$ A^{B+C} \cong A^B \times A^C $$ , $$ A^{B\times C} \cong (A^B)^C $$ . ### Axioms Most type theories do not have axioms. This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic) and axioms about sets. Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. They are often added to ensure properties that cannot be added cleanly through the rules. Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory. Some commonly encountered axioms are: - "Axiom K" ensures "uniqueness of identity proofs". That is, that every term of an identity type is equal to reflexivity.
- "Univalence Axiom" holds that equivalence of types is equality of types. The research into this property led to cubical type theory, where the property holds without needing an axiom. - "Law of Excluded Middle" is often added to satisfy users who want classical logic, instead of intuitionistic logic. The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The Axiom of Choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. (See .) ## List of type theories ### Major - Simply typed lambda calculus which is a higher-order logic - Intuitionistic type theory - System F - LF is often used to define other type theories - Calculus of constructions and its derivatives ### Minor - Automath - ST type theory - UTT (Luo's Unified Theory of dependent Types) - some forms of combinatory logic - others defined in the lambda cube (also known as pure type systems) - others under the name typed lambda calculus ### Active research - Homotopy type theory explores equality of types - Cubical Type Theory is an implementation of homotopy type theory
https://en.wikipedia.org/wiki/Type_theory
Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain, and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena. The advancement of science generally depends on the interplay between experimental studies and theory. In some cases, theoretical physics adheres to standards of mathematical rigour while giving little weight to experiments and observations. For example, while developing special relativity, Albert Einstein was concerned with the Lorentz transformation which left Maxwell's equations invariant, but was apparently uninterested in the Michelson–Morley experiment on Earth's drift through a luminiferous aether. Conversely, Einstein was awarded the Nobel Prize for explaining the photoelectric effect, previously an experimental result lacking a theoretical formulation. ## Overview A physical theory is a model of physical events. It is judged by the extent to which its predictions agree with empirical observations. The quality of a physical theory is also judged on its ability to make new predictions which can be verified by new observations. A physical theory differs from a mathematical theorem in that while both are based on some form of axioms, judgment of mathematical applicability is not based on agreement with any experimental results.Mark C. Chu-Carroll, March 13, 2007:Theories, Theorems, Lemmas, and Corollaries. Good Math, Bad Math blog. A physical theory similarly differs from a mathematical theory, in the sense that the word "theory" has a different meaning in mathematical terms. A physical theory involves one or more relationships between various measurable quantities. Archimedes realized that a ship floats by displacing its mass of water, Pythagoras understood the relation between the length of a vibrating string and the musical tone it produces. Other examples include entropy as a measure of the uncertainty regarding the positions and motions of unseen particles and the quantum mechanical idea that (action and) energy are not continuously variable. Theoretical physics consists of several different approaches. In this regard, theoretical particle physics forms a good example. For instance: "phenomenologists" might employ (semi-) empirical formulas and heuristics to agree with experimental results, often without deep physical understanding. "Modelers" (also called "model-builders") often appear much like phenomenologists, but try to model speculative theories that have certain desirable features (rather than on experimental data), or apply the techniques of mathematical modeling to physics problems. Some attempt to create approximate theories, called effective theories, because fully developed theories may be regarded as unsolvable or too complicated. Other theorists may try to unify, formalise, reinterpret or generalise extant theories, or create completely new ones altogether. Sometimes the vision provided by pure mathematical systems can provide clues to how a physical system might be modeled; e.g., the notion, due to Riemann and others, that space itself might be curved. Theoretical problems that need computational investigation are often the concern of computational physics. 
Theoretical advances may consist in setting aside old, incorrect paradigms (e.g., aether theory of light propagation, caloric theory of heat, burning consisting of evolving phlogiston, or astronomical bodies revolving around the Earth) or may be an alternative model that provides answers that are more accurate or that can be more widely applied. In the latter case, a correspondence principle will be required to recover the previously known result.Enc. Britannica (1994), pg 844. Sometimes though, advances may proceed along different paths. For example, an essentially correct theory may need some conceptual or factual revisions; atomic theory, first postulated millennia ago (by several thinkers in Greece and India) and the two-fluid theory of electricity are two cases in this point. However, an exception to all the above is the wave–particle duality, a theory combining aspects of different, opposing models via the Bohr complementarity principle. Physical theories become accepted if they are able to make correct predictions and no (or few) incorrect ones. The theory should have, at least as a secondary objective, a certain economy and elegance (compare to mathematical beauty), a notion sometimes called "Occam's razor" after the 13th-century English philosopher William of Occam (or Ockham), in which the simpler of two theories that describe the same matter just as adequately is preferred (but conceptual simplicity may mean mathematical complexity). They are also more likely to be accepted if they connect a wide range of phenomena. Testing the consequences of a theory is part of the scientific method. Physical theories can be grouped into three categories: mainstream theories, proposed theories and fringe theories. ## History Theoretical physics began at least 2,300 years ago, under the Pre-socratic philosophy, and continued by Plato and Aristotle, whose views held sway for a millennium. During the rise of medieval universities, the only acknowledged intellectual disciplines were the seven liberal arts of the Trivium like grammar, logic, and rhetoric and of the Quadrivium like arithmetic, geometry, music and astronomy. During the Middle Ages and Renaissance, the concept of experimental science, the counterpoint to theory, began with scholars such as Ibn al-Haytham and Francis Bacon. As the Scientific Revolution gathered pace, the concepts of matter, energy, space, time and causality slowly began to acquire the form we know today, and other sciences spun off from the rubric of natural philosophy. Thus began the modern era of theory with the Copernican paradigm shift in astronomy, soon followed by Johannes Kepler's expressions for planetary orbits, which summarized the meticulous observations of Tycho Brahe; the works of these men (alongside Galileo's) can perhaps be considered to constitute the Scientific Revolution. The great push toward the modern concept of explanation started with Galileo, one of the few physicists who was both a consummate theoretician and a great experimentalist. The analytic geometry and mechanics of Descartes were incorporated into the calculus and mechanics of Isaac Newton, another theoretician/experimentalist of the highest order, writing Principia Mathematica. In it contained a grand synthesis of the work of Copernicus, Galileo and Kepler; as well as Newton's theories of mechanics and gravitation, which held sway as worldviews until the early 20th century. 
Simultaneously, progress was also made in optics (in particular colour theory and the ancient science of geometrical optics), courtesy of Newton, Descartes and the Dutchmen Snell and Huygens. In the 18th and 19th centuries Joseph-Louis Lagrange, Leonhard Euler and William Rowan Hamilton would extend the theory of classical mechanics considerably. They picked up the interactive intertwining of mathematics and physics begun two millennia earlier by Pythagoras. Among the great conceptual achievements of the 19th and 20th centuries were the consolidation of the idea of energy (as well as its global conservation) by the inclusion of heat, electricity and magnetism, and then light. The laws of thermodynamics, and most importantly the introduction of the singular concept of entropy began to provide a macroscopic explanation for the properties of matter. Statistical mechanics (followed by statistical physics and Quantum statistical mechanics) emerged as an offshoot of thermodynamics late in the 19th century. Another important event in the 19th century was the discovery of electromagnetic theory, unifying the previously separate phenomena of electricity, magnetism and light. The pillars of modern physics, and perhaps the most revolutionary theories in the history of physics, have been relativity theory and quantum mechanics. Newtonian mechanics was subsumed under special relativity and Newton's gravity was given a kinematic explanation by general relativity. Quantum mechanics led to an understanding of blackbody radiation (which indeed, was an original motivation for the theory) and of anomalies in the specific heats of solids — and finally to an understanding of the internal structures of atoms and molecules. Quantum mechanics soon gave way to the formulation of quantum field theory (QFT), begun in the late 1920s. In the aftermath of World War 2, more progress brought much renewed interest in QFT, which had since the early efforts, stagnated. The same period also saw fresh attacks on the problems of superconductivity and phase transitions, as well as the first applications of QFT in the area of theoretical condensed matter. The 1960s and 70s saw the formulation of the Standard model of particle physics using QFT and progress in condensed matter physics (theoretical foundations of superconductivity and critical phenomena, among others), in parallel to the applications of relativity to problems in astronomy and cosmology respectively. All of these achievements depended on the theoretical physics as a moving force both to suggest experiments and to consolidate results — often by ingenious application of existing mathematics, or, as in the case of Descartes and Newton (with Leibniz), by inventing new mathematics. Fourier's studies of heat conduction led to a new branch of mathematics: infinite, orthogonal series. Modern theoretical physics attempts to unify theories and explain phenomena in further attempts to understand the Universe, from the cosmological to the elementary particle scale. Where experimentation cannot be done, theoretical physics still tries to advance through the use of mathematical models. ## Mainstream theories Mainstream theories (sometimes referred to as central theories) are the body of knowledge of both factual and scientific views and possess a usual scientific quality of the tests of repeatability, consistency with existing well-established science and experimentation. 
There do exist mainstream theories that are generally accepted theories based solely upon their effects explaining a wide variety of data, although the detection, explanation, and possible composition are subjects of debate. ### Examples - Big Bang - Chaos theory - Classical mechanics - Classical field theory - Dynamo theory - Field theory - Ginzburg–Landau theory - Kinetic theory of gases - Classical electromagnetism - Perturbation theory (quantum mechanics) - Physical cosmology - Quantum chromodynamics - Quantum complexity theory - Quantum electrodynamics - Quantum field theory - Quantum field theory in curved spacetime - Quantum information theory - Quantum mechanics - Quantum thermodynamics - Relativistic quantum mechanics - Scattering theory - Standard Model - Statistical physics - Theory of relativity - Wave–particle duality ## Proposed theories The proposed theories of physics are usually relatively new theories which deal with the study of physics and which include scientific approaches, means for determining the validity of models, and new types of reasoning used to arrive at the theory. However, some proposed theories include theories that have been around for decades and have eluded methods of discovery and testing. Proposed theories can include fringe theories in the process of becoming established (and, sometimes, gaining wider acceptance). Proposed theories usually have not been tested. In addition to the theories like those listed below, there are also different interpretations of quantum mechanics, which may or may not be considered different theories since it is debatable whether they yield different predictions for physical experiments, even in principle. Examples include the AdS/CFT correspondence, Chern–Simons theory, the graviton, the magnetic monopole, string theory, and the theory of everything. ## Fringe theories Fringe theories include any new area of scientific endeavor in the process of becoming established and some proposed theories. This category can include speculative sciences. It includes physics fields and physical theories presented in accordance with known evidence, for which a body of associated predictions has been made according to that theory. Some fringe theories go on to become a widely accepted part of physics. Other fringe theories end up being disproven. Some fringe theories are a form of protoscience and others are a form of pseudoscience. The falsification of the original theory sometimes leads to reformulation of the theory. ### Examples - Aether (classical element) - Luminiferous aether - Digital physics - Electrogravitics - Stochastic electrodynamics - Tesla's dynamic theory of gravity ## Thought experiments vs real experiments "Thought" experiments are situations created in one's mind, asking a question akin to "suppose you are in this situation, assuming such is true, what would follow?". They are usually created to investigate phenomena that are not readily experienced in everyday situations. Famous examples of such thought experiments are Schrödinger's cat, the EPR thought experiment, simple illustrations of time dilation, and so on. These usually lead to real experiments designed to verify that the conclusion (and therefore the assumptions) of the thought experiments are correct. The EPR thought experiment led to the Bell inequalities, which were then tested to various degrees of rigor, leading to the acceptance of the current formulation of quantum mechanics and probabilism as a working hypothesis.
https://en.wikipedia.org/wiki/Theoretical_physics
In mathematics, a power series (in one variable) is an infinite series of the form $$ \sum_{n=0}^\infty a_n \left(x - c\right)^n = a_0 + a_1 (x - c) + a_2 (x - c)^2 + \dots $$ where $$ a_n $$ represents the coefficient of the nth term and c is a constant called the center of the series. Power series are useful in mathematical analysis, where they arise as Taylor series of infinitely differentiable functions. In fact, Borel's theorem implies that every power series is the Taylor series of some smooth function. In many situations, the center c is equal to zero, for instance for Maclaurin series. In such cases, the power series takes the simpler form $$ \sum_{n=0}^\infty a_n x^n = a_0 + a_1 x + a_2 x^2 + \dots. $$ The partial sums of a power series are polynomials, the partial sums of the Taylor series of an analytic function are a sequence of converging polynomial approximations to the function at the center, and a converging power series can be seen as a kind of generalized polynomial with infinitely many terms. Conversely, every polynomial is a power series with only finitely many non-zero terms. Beyond their role in mathematical analysis, power series also occur in combinatorics as generating functions (a kind of formal power series) and in electronic engineering (under the name of the Z-transform). The familiar decimal notation for real numbers can also be viewed as an example of a power series, with integer coefficients, but with the argument x fixed at $$ \tfrac{1}{10} $$. In number theory, the concept of p-adic numbers is also closely related to that of a power series. ## Examples ### Polynomial Every polynomial of degree $$ d $$ can be expressed as a power series around any center $$ c $$, where all terms of degree higher than $$ d $$ have a coefficient of zero. For instance, the polynomial $$ f(x) = x^2 + 2x + 3 $$ can be written as a power series around the center $$ c = 0 $$ as $$ f(x) = 3 + 2 x + 1 x^2 + 0 x^3 + 0 x^4 + \cdots $$ or around the center $$ c = 1 $$ as $$ f(x) = 6 + 4(x - 1) + 1(x - 1)^2 + 0(x - 1)^3 + 0(x - 1)^4 + \cdots. $$ One can view power series as being like "polynomials of infinite degree", although power series are not polynomials in the strict sense. ### Geometric series, exponential function and sine The geometric series formula $$ \frac{1}{1 - x} = \sum_{n=0}^\infty x^n = 1 + x + x^2 + x^3 + \cdots, $$ which is valid for $$ |x| < 1 $$ , is one of the most important examples of a power series, as are the exponential function formula $$ e^x = \sum_{n=0}^\infty \frac{x^n}{n!} = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots $$ and the sine formula $$ \sin(x) = \sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n + 1)!} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots, $$ valid for all real x. These power series are examples of Taylor series (or, more specifically, of Maclaurin series). ### On the set of exponents Negative powers are not permitted in an ordinary power series; for instance, $$ x^{-1} + 1 + x^{1} + x^{2} + \cdots $$ is not considered a power series (although it is a Laurent series). Similarly, fractional powers such as $$ x^\frac{1}{2} $$ are not permitted; fractional powers arise in Puiseux series. The coefficients $$ a_n $$ must not depend on $$ x $$; thus for instance $$ \sin(x) x + \sin(2x) x^2 + \sin(3x) x^3 + \cdots $$ is not a power series. ## Radius of convergence A power series $$ \sum_{n=0}^\infty a_n(x-c)^n $$ is convergent for some values of the variable $$ x $$, which will always include $$ x = c $$ since $$ (x-c)^0 = 1 $$ and the sum of the series is thus $$ a_0 $$ for $$ x = c $$.
The series may diverge for other values of x, possibly all of them. If x = c is not the only point of convergence, then there is always a number r with $$ 0 < r \le \infty $$ such that the series converges whenever $$ |x - c| < r $$ and diverges whenever $$ |x - c| > r $$. The number r is called the radius of convergence of the power series; in general it is given as $$ r = \liminf_{n\to\infty} \left|a_n\right|^{-\frac{1}{n}} $$ or, equivalently, $$ r^{-1} = \limsup_{n\to\infty} \left|a_n\right|^\frac{1}{n}. $$ This is the Cauchy–Hadamard theorem; see limit superior and limit inferior for an explanation of the notation. The relation $$ r^{-1} = \lim_{n\to\infty}\left|{a_{n+1}\over a_n}\right| $$ is also satisfied, if this limit exists.

The set of the complex numbers x such that $$ |x - c| < r $$ is called the disc of convergence of the series. The series converges absolutely inside its disc of convergence and it converges uniformly on every compact subset of the disc of convergence.

For $$ |x - c| = r $$, there is no general statement on the convergence of the series. However, Abel's theorem states that if the series is convergent for some value z such that $$ |z - c| = r $$, then the sum of the series for x = z is the limit of the sum of the series for $$ x = c + t(z - c) $$, where t is a real variable less than 1 that tends to 1.

## Operations on power series

### Addition and subtraction

When two functions f and g are decomposed into power series around the same center c, the power series of the sum or difference of the functions can be obtained by termwise addition and subtraction. That is, if $$ f(x) = \sum_{n=0}^\infty a_n (x - c)^n $$ and $$ g(x) = \sum_{n=0}^\infty b_n (x - c)^n $$ then $$ f(x) \pm g(x) = \sum_{n=0}^\infty (a_n \pm b_n) (x - c)^n. $$

The sum of two power series will have a radius of convergence of at least the smaller of the two radii of convergence of the two series, but possibly larger than either of the two. For instance it is not true that if two power series $$ \sum_{n=0}^\infty a_n x^n $$ and $$ \sum_{n=0}^\infty b_n x^n $$ have the same radius of convergence, then $$ \sum_{n=0}^\infty \left(a_n + b_n\right) x^n $$ also has this radius of convergence: if $$ a_n = (-1)^n $$ and $$ b_n = (-1)^{n+1} \left(1 - \frac{1}{3^n}\right) $$, for instance, then both series have the same radius of convergence of 1, but the series $$ \sum_{n=0}^\infty \left(a_n + b_n\right) x^n = \sum_{n=0}^\infty \frac{(-1)^n}{3^n} x^n $$ has a radius of convergence of 3.

### Multiplication and division

With the same definitions for $$ f(x) $$ and $$ g(x) $$, the power series of the product and quotient of the functions can be obtained as follows: $$ \begin{align} f(x)g(x) &= \biggl(\sum_{n=0}^\infty a_n (x-c)^n\biggr)\biggl(\sum_{n=0}^\infty b_n (x - c)^n\biggr) \\ &= \sum_{i=0}^\infty \sum_{j=0}^\infty a_i b_j (x - c)^{i+j} \\ &= \sum_{n=0}^\infty \biggl(\sum_{i=0}^n a_i b_{n-i}\biggr) (x - c)^n. \end{align} $$ The sequence $$ m_n = \sum_{i=0}^n a_i b_{n-i} $$ is known as the Cauchy product of the sequences $$ a_n $$ and $$ b_n $$.

For division, if one defines the sequence $$ d_n $$ by $$ \frac{f(x)}{g(x)} = \frac{\sum_{n=0}^\infty a_n (x - c)^n}{\sum_{n=0}^\infty b_n (x - c)^n} = \sum_{n=0}^\infty d_n (x - c)^n $$ then $$ f(x) = \biggl(\sum_{n=0}^\infty b_n (x - c)^n\biggr)\biggl(\sum_{n=0}^\infty d_n (x - c)^n\biggr) $$ and one can solve recursively for the terms $$ d_n $$ by comparing coefficients.
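A minimal sketch of this recursion, assuming $$ b_0 \neq 0 $$ and using exact rational arithmetic (the function name `divide_series` is illustrative, not from any particular library): comparing coefficients in $$ f = g \cdot (f/g) $$ gives $$ a_n = \sum_{i=0}^n b_i d_{n-i} $$, which can be solved for $$ d_n $$ term by term.

```python
from fractions import Fraction

def divide_series(a, b, num_terms):
    """First num_terms coefficients d_n of f/g, given coefficient lists a, b with b[0] != 0."""
    def coeff(seq, k):
        # Treat missing coefficients as zero so short lists behave like finite series.
        return Fraction(seq[k]) if k < len(seq) else Fraction(0)

    d = []
    for n in range(num_terms):
        # a_n = sum_{i=0}^{n} b_i * d_{n-i}  =>  d_n = (a_n - sum_{i=1}^{n} b_i * d_{n-i}) / b_0
        s = coeff(a, n) - sum(coeff(b, i) * d[n - i] for i in range(1, n + 1))
        d.append(s / coeff(b, 0))
    return d

# Example: 1 / (1 - x) reproduces the geometric series 1 + x + x^2 + ...
print(divide_series([1], [1, -1], 6))  # six coefficients, all equal to 1
```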
Solving the corresponding equations yields the following formulae, based on determinants of certain matrices of the coefficients of $$ f(x) $$ and $$ g(x) $$: $$ d_0=\frac{a_0}{b_0} $$ and, for $$ n \ge 1 $$, $$ d_n=\frac{1}{b_0^{n+1}} \begin{vmatrix} a_n &b_1 &b_2 &\cdots&b_n \\ a_{n-1}&b_0 &b_1 &\cdots&b_{n-1}\\ a_{n-2}&0 &b_0 &\cdots&b_{n-2}\\ \vdots &\vdots&\vdots&\ddots&\vdots \\ a_0 &0 &0 &\cdots&b_0\end{vmatrix}. $$

### Differentiation and integration

Once a function $$ f(x) $$ is given as a power series as above, it is differentiable on the interior of the domain of convergence. It can be differentiated and integrated by treating every term separately, since both differentiation and integration are linear transformations of functions: $$ \begin{align} f'(x) &= \sum_{n=1}^\infty a_n n (x - c)^{n-1} = \sum_{n=0}^\infty a_{n+1} (n + 1) (x - c)^n, \\ \int f(x)\,dx &= \sum_{n=0}^\infty \frac{a_n (x - c)^{n+1}}{n + 1} + k = \sum_{n=1}^\infty \frac{a_{n-1} (x - c)^n}{n} + k. \end{align} $$ Both of these series have the same radius of convergence as the original series.

## Analytic functions

A function f defined on some open subset U of R or C is called analytic if it is locally given by a convergent power series. This means that every a ∈ U has an open neighborhood V ⊆ U, such that there exists a power series with center a that converges to f(x) for every x ∈ V. Every power series with a positive radius of convergence is analytic on the interior of its region of convergence. All holomorphic functions are complex-analytic. Sums and products of analytic functions are analytic, as are quotients as long as the denominator is non-zero. If a function is analytic, then it is infinitely differentiable, but in the real case the converse is not generally true. For an analytic function, the coefficients $$ a_n $$ can be computed as $$ a_n = \frac{f^{\left( n \right)} \left( c \right)}{n!} $$ where $$ f^{(n)}(c) $$ denotes the nth derivative of f at c, and $$ f^{(0)}(c) = f(c) $$. This means that every analytic function is locally represented by its Taylor series.

The global form of an analytic function is completely determined by its local behavior in the following sense: if f and g are two analytic functions defined on the same connected open set U, and if there exists an element $$ c \in U $$ such that $$ f^{(n)}(c) = g^{(n)}(c) $$ for all $$ n \ge 0 $$, then $$ f(x) = g(x) $$ for all $$ x \in U $$.

If a power series with radius of convergence r is given, one can consider analytic continuations of the series, that is, analytic functions f which are defined on larger sets than $$ \{ x : |x - c| < r \} $$ and agree with the given power series on this set. The number r is maximal in the following sense: there always exists a complex number x with $$ |x - c| = r $$ such that no analytic continuation of the series can be defined at x. The power series expansion of the inverse function of an analytic function can be determined using the Lagrange inversion theorem.

### Behavior near the boundary

The sum of a power series with a positive radius of convergence is an analytic function at every point in the interior of the disc of convergence. However, different behavior can occur at points on the boundary of that disc. For example:

1. Divergence while the sum extends to an analytic function: $$ \sum_{n=0}^{\infty}z^n $$ has radius of convergence equal to $$ 1 $$ and diverges at every point of $$ |z|=1 $$. Nevertheless, the sum in $$ |z|<1 $$ is $$ \frac{1}{1-z} $$, which is analytic at every point of the plane except for $$ z=1 $$.
1. Convergent at some points, divergent at others: $$ \sum_{n=1}^{\infty}\frac{z^n}{n} $$ has radius of convergence $$ 1 $$. It converges for $$ z=-1 $$, while it diverges for $$ z=1 $$.
1. Absolute convergence at every point of the boundary: $$ \sum_{n=1}^{\infty}\frac{z^n}{n^2} $$ has radius of convergence $$ 1 $$ and converges absolutely, and uniformly, at every point of $$ |z|=1 $$, by the Weierstrass M-test applied with the convergent hyperharmonic series $$ \sum_{n=1}^{\infty}\frac{1}{n^2} $$.
1. Convergent on the closure of the disc of convergence but with a discontinuous sum: Sierpiński gave an example of a power series with radius of convergence $$ 1 $$, convergent at all points with $$ |z|=1 $$, whose sum is an unbounded function and, in particular, discontinuous.

A sufficient condition for one-sided continuity at a boundary point is given by Abel's theorem.

## Formal power series

In abstract algebra, one attempts to capture the essence of power series without being restricted to the fields of real and complex numbers, and without the need to talk about convergence. This leads to the concept of formal power series, a concept of great utility in algebraic combinatorics.

## Power series in several variables

An extension of the theory is necessary for the purposes of multivariable calculus. A power series is here defined to be an infinite series of the form $$ f(x_1, \dots, x_n) = \sum_{j_1, \dots, j_n = 0}^\infty a_{j_1, \dots, j_n} \prod_{k=1}^n (x_k - c_k)^{j_k}, $$ where $$ j = (j_1, \dots, j_n) $$ is a vector of natural numbers, the coefficients $$ a_{j_1, \dots, j_n} $$ are usually real or complex numbers, and the center $$ c = (c_1, \dots, c_n) $$ and argument $$ x = (x_1, \dots, x_n) $$ are usually real or complex vectors. The symbol $$ \Pi $$ is the product symbol, denoting multiplication. In the more convenient multi-index notation this can be written $$ f(x) = \sum_{\alpha \in \N^n} a_\alpha (x - c)^\alpha, $$ where $$ \N $$ is the set of natural numbers, and so $$ \N^n $$ is the set of ordered n-tuples of natural numbers.

The theory of such series is trickier than for single-variable series, with more complicated regions of convergence. For instance, the power series $$ \sum_{n=0}^\infty x_1^n x_2^n $$ is absolutely convergent in the set $$ \{ (x_1, x_2): |x_1 x_2| < 1\} $$ between two hyperbolas. (This is an example of a log-convex set, in the sense that the set of points $$ (\log |x_1|, \log |x_2|) $$, where $$ (x_1, x_2) $$ lies in the above region, is a convex set. More generally, one can show that when c = 0, the interior of the region of absolute convergence is always a log-convex set in this sense.) On the other hand, in the interior of this region of convergence one may differentiate and integrate under the series sign, just as one may with ordinary power series.

## Order of a power series

Let $$ \alpha $$ be a multi-index for a power series $$ f(x_1, \dots, x_n) $$. The order of the power series f is defined to be the least value $$ r $$ such that there is a coefficient $$ a_\alpha \neq 0 $$ with $$ r = |\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n $$, or $$ \infty $$ if f ≡ 0. In particular, for a power series f(x) in a single variable x, the order of f is the smallest power of x with a nonzero coefficient. This definition readily extends to Laurent series.

## Notes

## References

## External links

- Powers of Complex Numbers by Michael Schreiber, Wolfram Demonstrations Project.

Category:Real analysis Category:Complex analysis Category:Multivariable calculus Category:Series (mathematics)
https://en.wikipedia.org/wiki/Power_series
Numerical Recipes is the generic title of a series of books on algorithms and numerical analysis by William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery. In various editions, the books have been in print since 1986. The most recent edition was published in 2007.

## Overview

The Numerical Recipes books cover a range of topics that include both classical numerical analysis (interpolation, integration, linear algebra, differential equations, and so on), signal processing (Fourier methods, filtering), statistical treatment of data, and a few topics in machine learning (hidden Markov model, support vector machines). The writing style is accessible and has an informal tone. The emphasis is on understanding the underlying basics of techniques, not on the refinements that may, in practice, be needed to achieve optimal performance and reliability. Few results are proved with any degree of rigor, although the ideas behind proofs are often sketched, and references are given. Importantly, virtually all methods that are discussed are also implemented in a programming language, with the code printed in the book. Each variant of the book is keyed to a specific language.

According to the publisher, Cambridge University Press, the Numerical Recipes books are historically the all-time best-selling books on scientific programming methods. In recent years, Numerical Recipes books have been cited in the scientific literature more than 3000 times per year according to ISI Web of Knowledge (e.g., 3962 times in the year 2008). As of the end of 2017, the book had over 44000 citations on Google Scholar.

## History

The first publication was in 1986, under the title "Numerical Recipes: The Art of Scientific Computing", containing code in both Fortran and Pascal; an accompanying book, "Numerical Recipes Example Book (Pascal)", was first published in 1985. (A preface note in the "Examples" book mentions that the main book was also published in 1985, but the official note in that book says 1986.) Supplemental editions followed with code in Pascal, BASIC, and C.

Numerical Recipes took, from the start, an opinionated editorial position at odds with the conventional wisdom of the numerical analysis community. However, as it turned out, the 1980s were fertile years for the "black box" side, yielding important libraries such as BLAS and LAPACK, and integrated environments like MATLAB and Mathematica. By the early 1990s, when Second Edition versions of Numerical Recipes (with code in C, Fortran-77, and Fortran-90) were published, it was clear that the constituency for Numerical Recipes was by no means the majority of scientists doing computation, but only that slice that lived between the more mathematical numerical analysts and the larger community using integrated environments. The Second Edition versions occupied a stable role in this niche environment.

By the mid-2000s, the practice of scientific computing had been radically altered by the mature Internet and Web. Recognizing that their Numerical Recipes books were increasingly valued more for their explanatory text than for their code examples, the authors significantly expanded the scope of the book and rewrote a large part of the text. They continued to include code, still printed in the book, now in C++, for every method discussed. The Third Edition was also released as an electronic book, eventually made available on the Web for free (with nags) or by paid or institutional subscription (with faster, full access and no nags).
In 2015 Numerical Recipes sold its historic two-letter domain name nr.com and became `numerical.recipes` instead.

## Reception

### Content

Numerical Recipes is a single volume that covers a very broad range of algorithms. Critics have argued that this format skewed the choice of algorithms towards simpler and shorter early algorithms, which were not as accurate, efficient or stable as later, more complex algorithms. The first edition also had some minor bugs, which were fixed in later editions; however, according to the authors, for years they kept encountering rumors on the internet that Numerical Recipes is "full of bugs". They attributed this to people using outdated versions of the code, bugs in other parts of the code, and misuse of routines which require some understanding to use correctly. The rebuttal does not, however, cover criticisms regarding the lack of discussion of code limitations, boundary conditions, and more modern algorithms, another theme in Snyder's comment compilation. According to Pavel Holoborodko, a precision issue in the Bessel functions has persisted into the third edition.

Despite criticism by numerical analysts, engineers and scientists generally find the book conveniently broad in scope. Norman Gray concurs in the following quote: Numerical Recipes [nr] does not claim to be a numerical analysis textbook, and it makes a point of noting that its authors are (astro-)physicists and engineers rather than analysts, and so share the motivations and impatience of the book's intended audience. The declared premise of the NR authors is that you will come to grief one way or the other if you use numerical routines you do not understand. They attempt to give you enough mathematical detail that you understand the routines they present, in enough depth that you can diagnose problems when they occur, and make more sophisticated choices about replacements when the NR routines run out of steam. [...]

### License

The code listings are copyrighted and commercially licensed by the Numerical Recipes authors. A license to use the code is given with the purchase of a book, but the terms of use are highly restrictive. For example, programmers need to make sure NR code cannot be extracted from their finished programs and used, a difficult requirement with dubious enforceability. However, Numerical Recipes does include the following statement regarding copyrights on computer programs: Copyright does not protect ideas, but only the expression of those ideas in a particular form. In the case of a computer program, the ideas consist of the program's methodology and algorithm, including the necessary sequence of steps adopted by the programmer. The expression of those ideas is the program source code... If you analyze the ideas contained in a program, and then express those ideas in your own completely different implementation, then that new program implementation belongs to you.

One early motivation for the GNU Scientific Library was that a free library was needed as a substitute for Numerical Recipes.

### Style

Another line of criticism centers on the coding style of the books, which strikes some modern readers as "Fortran-ish", though the code is written in contemporary, object-oriented C++. The authors have defended their very terse coding style as necessary to the format of the book, because of space limitations and for readability.
## Titles in the series (partial list)

The books differ by edition (1st, 2nd, and 3rd) and by the computer language in which the code is given.

- Numerical Recipes. The Art of Scientific Computing, 1st Edition, 1986. (Fortran and Pascal)
- Numerical Recipes in C. The Art of Scientific Computing, 1st Edition, 1988.
- Numerical Recipes in Pascal. The Art of Scientific Computing, 1st Edition, 1989.
- Numerical Recipes in Fortran. The Art of Scientific Computing, 1st Edition, 1989.
- Numerical Recipes in BASIC. The Art of Scientific Computing, 1st Edition, 1991. (supplemental edition)
- Numerical Recipes in Fortran 77. The Art of Scientific Computing, 2nd Edition, 1992.
- Numerical Recipes in C. The Art of Scientific Computing, 2nd Edition, 1992.
- Numerical Recipes in Fortran 90. The Art of Parallel Scientific Computing, 2nd Edition, 1996.
- Numerical Recipes in C++. The Art of Scientific Computing, 2nd Edition, 2002.
- Numerical Recipes. The Art of Scientific Computing, 3rd Edition, 2007. (C++ code)

The books are published by Cambridge University Press.

## References

## External links

- Current electronic edition of Numerical Recipes (limited free page views).
- Older versions of Numerical Recipes available electronically (links to C, Fortran 77, and Fortran 90 versions in various formats, plus other hosted books)
- W. Van Snyder, Why not use Numerical Recipes?, full four-page mirror by Lek-Heng Lim (includes discussion of alternatives)

Category:Computer science books Category:Engineering textbooks Category:Mathematics books Category:Numerical software
https://en.wikipedia.org/wiki/Numerical_Recipes
In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines); for a directed graph they are also known as arrows or arcs. The vertices may be part of the graph structure, or may be external entities represented by integer indices or references. A graph data structure may also associate to each edge some edge value, such as a symbolic label or a numeric attribute (cost, capacity, length, etc.).

## Operations

The basic operations provided by a graph data structure G usually include:

- adjacent(G, x, y): tests whether there is an edge from the vertex x to the vertex y;
- neighbors(G, x): lists all vertices y such that there is an edge from the vertex x to the vertex y;
- add_vertex(G, x): adds the vertex x, if it is not there;
- remove_vertex(G, x): removes the vertex x, if it is there;
- add_edge(G, x, y, z): adds the edge z from the vertex x to the vertex y, if it is not there;
- remove_edge(G, x, y): removes the edge from the vertex x to the vertex y, if it is there;
- get_vertex_value(G, x): returns the value associated with the vertex x;
- set_vertex_value(G, x, v): sets the value associated with the vertex x to v.

Structures that associate values to the edges usually also provide:

- get_edge_value(G, x, y): returns the value associated with the edge (x, y);
- set_edge_value(G, x, y, v): sets the value associated with the edge (x, y) to v.

## Common data structures for graph representation

Adjacency list: Vertices are stored as records or objects, and every vertex stores a list of adjacent vertices. This data structure allows the storage of additional data on the vertices. Additional data can be stored if edges are also stored as objects, in which case each vertex stores its incident edges and each edge stores its incident vertices.

Adjacency matrix: A two-dimensional matrix, in which the rows represent source vertices and columns represent destination vertices. Data on edges and vertices must be stored externally. Only the cost for one edge can be stored between each pair of vertices.

Incidence matrix: A two-dimensional matrix, in which the rows represent the vertices and columns represent the edges. The entries indicate the incidence relation between the vertex at a row and edge at a column.

The following list gives the time complexity cost of performing various operations on graphs, for each of these representations, with |V| the number of vertices and |E| the number of edges. In the matrix representations, the entries encode the cost of following an edge. The cost of edges that are not present is assumed to be ∞.

- Store graph: adjacency list $$ O(|V| + |E|) $$; adjacency matrix $$ O(|V|^2) $$; incidence matrix $$ O(|V| \cdot |E|) $$.
- Add vertex: adjacency list $$ O(1) $$; adjacency matrix $$ O(|V|^2) $$; incidence matrix $$ O(|V| \cdot |E|) $$.
- Add edge: adjacency list $$ O(1) $$; adjacency matrix $$ O(1) $$; incidence matrix $$ O(|V| \cdot |E|) $$.
- Remove vertex: adjacency list $$ O(|V| + |E|) $$; adjacency matrix $$ O(|V|^2) $$; incidence matrix $$ O(|V| \cdot |E|) $$.
- Remove edge: adjacency list $$ O(|V|) $$; adjacency matrix $$ O(1) $$; incidence matrix $$ O(|V| \cdot |E|) $$.
- Are vertices x and y adjacent (assuming that their storage positions are known)? adjacency list $$ O(|V|) $$; adjacency matrix $$ O(1) $$; incidence matrix $$ O(|E|) $$.
- Remarks: an adjacency list is slow to remove vertices and edges, because it needs to find all vertices or edges to be updated; an adjacency matrix is slow to add or remove vertices, because the matrix must be resized/copied; an incidence matrix is slow to add or remove vertices and edges, because the matrix must be resized/copied.
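A minimal Python sketch of an adjacency-list implementation of these operations (class and method names mirror the operation list above; this is an illustrative implementation, not a reference one). Each vertex maps to a dictionary of its out-neighbors and the associated edge values, so edge values are stored directly in the adjacency structure:

```python
class Graph:
    """Directed graph stored as an adjacency list: vertex -> {neighbor: edge_value}."""

    def __init__(self):
        self._adj = {}            # adjacency structure
        self._vertex_value = {}   # optional per-vertex values

    def add_vertex(self, x):
        self._adj.setdefault(x, {})

    def remove_vertex(self, x):
        self._adj.pop(x, None)
        self._vertex_value.pop(x, None)
        for nbrs in self._adj.values():   # also drop edges pointing to x
            nbrs.pop(x, None)

    def adjacent(self, x, y):
        return y in self._adj.get(x, {})

    def neighbors(self, x):
        return list(self._adj.get(x, {}))

    def add_edge(self, x, y, value=None):
        self.add_vertex(x)
        self.add_vertex(y)
        self._adj[x][y] = value

    def remove_edge(self, x, y):
        self._adj.get(x, {}).pop(y, None)

    def get_edge_value(self, x, y):
        return self._adj[x][y]

    def set_vertex_value(self, x, v):
        self._vertex_value[x] = v

    def get_vertex_value(self, x):
        return self._vertex_value.get(x)


g = Graph()
g.add_edge("a", "b", value=2.5)
print(g.adjacent("a", "b"), g.neighbors("a"), g.get_edge_value("a", "b"))
```

Because the neighbor sets here are hash-based dictionaries rather than plain lists, adjacency tests and edge removals already run in expected constant time, anticipating the discussion of more efficient adjacency-set representations below.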
Adjacency lists are generally preferred for the representation of sparse graphs, while an adjacency matrix is preferred if the graph is dense; that is, the number of edges $$ |E| $$ is close to the number of vertices squared, $$ |V|^2 $$, or if one must be able to quickly look up whether there is an edge connecting two vertices.

### More efficient representation of adjacency sets

The time complexity of operations in the adjacency list representation can be improved by storing the sets of adjacent vertices in more efficient data structures, such as hash tables or balanced binary search trees (the latter representation requires that vertices are identified by elements of a linearly ordered set, such as integers or character strings). A representation of adjacent vertices via hash tables leads to an amortized average time complexity of $$ O(1) $$ to test adjacency of two given vertices and to remove an edge, and an amortized average time complexity of $$ O(\deg(x)) $$ to remove a given vertex x of degree $$ \deg(x) $$. The time complexity of the other operations and the asymptotic space requirement do not change.

## Parallel representations

The parallelization of graph problems faces significant challenges: data-driven computations, unstructured problems, poor locality, and a high ratio of data access to computation. The graph representation used for parallel architectures plays a significant role in facing those challenges. Poorly chosen representations may unnecessarily drive up the communication cost of the algorithm, which will decrease its scalability. In the following, shared and distributed memory architectures are considered.

### Shared memory

In the case of a shared memory model, the graph representations used for parallel processing are the same as in the sequential case, since parallel read-only access to the graph representation (e.g. an adjacency list) is efficient in shared memory.

### Distributed memory

In the distributed memory model, the usual approach is to partition the vertex set $$ V $$ of the graph into $$ p $$ sets $$ V_0, \dots, V_{p-1} $$. Here, $$ p $$ is the number of available processing elements (PE). The vertex set partitions are then distributed to the PEs with matching index, in addition to the corresponding edges. Every PE has its own subgraph representation, where edges with an endpoint in another partition require special attention. For standard communication interfaces like MPI, the ID of the PE owning the other endpoint has to be identifiable. During computation in a distributed graph algorithm, passing information along these edges implies communication.

Partitioning the graph needs to be done carefully: there is a trade-off between low communication and evenly sized partitions. However, partitioning a graph optimally is an NP-hard problem, so it is not feasible to compute optimal partitions. Instead, the following heuristics are used.

1D partitioning: Every processor gets $$ n/p $$ vertices and the corresponding outgoing edges. This can be understood as a row-wise or column-wise decomposition of the adjacency matrix. For algorithms operating on this representation, this requires an All-to-All communication step as well as $$ \mathcal{O}(m) $$ message buffer sizes, as each PE potentially has outgoing edges to every other PE. A minimal sketch of this vertex-to-PE assignment is given below.
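The following Python sketch assumes vertices numbered 0 to n − 1, a block (row-wise) distribution over p PEs, and an adjacency-list graph; the function names `owner` and `local_view` are illustrative, not from any particular library. It assigns each vertex to a PE and separates that PE's outgoing edges into local edges and cut edges, whose other endpoint is owned by a different PE and would therefore require communication:

```python
def owner(v, n, p):
    """PE that owns vertex v under a block 1D partition of n vertices over p PEs."""
    block = -(-n // p)          # ceil(n / p) vertices per PE
    return v // block

def local_view(adj, n, p, pe):
    """Split the outgoing edges of the vertices owned by `pe` into local and cut edges."""
    local_edges, cut_edges = [], []
    for v in range(n):
        if owner(v, n, p) != pe:
            continue
        for w in adj[v]:
            if owner(w, n, p) == pe:
                local_edges.append((v, w))
            else:
                cut_edges.append((v, w, owner(w, n, p)))  # record the remote PE's ID
    return local_edges, cut_edges

# Toy directed graph on 6 vertices, distributed over p = 2 PEs.
adj = {0: [1, 3], 1: [2], 2: [4], 3: [4], 4: [5], 5: [0]}
for pe in range(2):
    print(pe, local_view(adj, 6, 2, pe))
```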
2D partitioning: Every processor gets a submatrix of the adjacency matrix. Assume the processors are aligned in a rectangle $$ p = p_r \times p_c $$, where $$ p_r $$ and $$ p_c $$ are the number of processing elements in each row and column, respectively. Then each processor gets a submatrix of the adjacency matrix of dimension $$ (n/p_r)\times(n/p_c) $$. This can be visualized as a checkerboard pattern in a matrix. Therefore, each processing unit can only have outgoing edges to PEs in the same row and column. This bounds the number of communication partners for each PE to $$ p_r + p_c - 1 $$ out of $$ p = p_r \times p_c $$ possible ones.

## Compressed representations

Graphs with trillions of edges occur in machine learning, social network analysis, and other areas. Compressed graph representations have been developed to reduce I/O and memory requirements. General techniques such as Huffman coding are applicable, but the adjacency list or adjacency matrix can be processed in specific ways to increase efficiency.

## Graph traversal

### Breadth first search and depth first search

Breadth-first search (BFS) and depth-first search (DFS) are two closely related approaches that are used for exploring all of the nodes in a given connected component. Both start with an arbitrary node, the "root".
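As a minimal sketch of these two traversals over an adjacency-list graph stored as a Python dictionary (function names are illustrative): BFS visits vertices in order of increasing distance from the root using a queue, while the iterative DFS shown here follows one branch as deep as possible using a stack.

```python
from collections import deque

def bfs(adj, root):
    """Return the vertices reachable from `root` in breadth-first order."""
    visited, order, queue = {root}, [], deque([root])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj.get(v, ()):
            if w not in visited:
                visited.add(w)
                queue.append(w)
    return order

def dfs(adj, root):
    """Return the vertices reachable from `root` in depth-first (preorder) order."""
    visited, order, stack = set(), [], [root]
    while stack:
        v = stack.pop()
        if v in visited:
            continue
        visited.add(v)
        order.append(v)
        # Push unvisited neighbors; the last one pushed is explored first.
        stack.extend(w for w in adj.get(v, ()) if w not in visited)
    return order

adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(bfs(adj, 0))  # [0, 1, 2, 3]
print(dfs(adj, 0))  # [0, 2, 3, 1]
```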
https://en.wikipedia.org/wiki/Graph_%28abstract_data_type%29