# Schrödinger equation

In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics.

In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe not only atomic and subatomic systems, electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger, who discovered it in 1926.[1]

Schrödinger's equation can be mathematically transformed into Heisenberg's matrix mechanics, and into Feynman's path integral formulation. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is less severe in Heisenberg's formulation and completely absent in the path integral.

## Historical background and development

Einstein interpreted Planck's quanta as photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a mysterious wave–particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in relativity, it followed that the momentum of a photon is proportional to its wavenumber.

De Broglie hypothesized that this is true for all particles, for electrons as well as photons: the energy and momentum of an electron are the frequency and wavenumber of a wave. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves only for certain discrete frequencies, discrete energy levels which reproduced the old quantum condition.

Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system: the trajectories of light rays become sharp tracks which obey a principle of least action. Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves. This is what Schrödinger did.

Using this equation, Schrödinger computed the spectral lines for hydrogen by treating a hydrogen atom's single negatively charged electron as a wave, $\psi\;$, moving in a potential well, V, created by the positively charged proton. This computation reproduced the energy levels of the Bohr model.

But this was not enough, since Sommerfeld had already seemingly correctly reproduced relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):

$(E + {e^2\over r} )^2 \psi = - \nabla^2\psi + m^2 \psi$

He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover.

While there, Schrödinger decided that the earlier nonrelativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper published in 1926.[2] The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as the visualizable antidote to what he considered to be the overly formal matrix mechanics.

The Schrödinger equation defines the behaviour of $\psi\;$, but does not interpret what $\psi\;$ is. Schrödinger tried unsuccessfully, in his fourth paper, to interpret it as a charge density.[3] In 1926 Max Born, just a few days after Schrödinger's fourth and final paper was published, successfully interpreted $\psi\;$ as a probability amplitude.[4] Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities; like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory, Schrödinger was never reconciled to the Copenhagen interpretation.[5]

## Mathematical forms

There are several closely related equations that go under Schrödinger's name:

### Time-dependent Schrödinger equation

The time-dependent Schrödinger equation for a system with energy operator $\hat H$ is,

$\hat H \psi\left(\mathbf{r}, t\right) = i \hbar \frac{\partial \psi}{\partial t} \left(\mathbf{r}, t\right)$

where ψ is the wavefunction, $\hbar$ is the reduced Planck's constant and i is the imaginary unit. The form of the Hamiltonian is different for different systems.

For a non-relativistic particle moving in a potential, the Hamiltonian operator is the sum of the kinetic and potential energies:

$\hat H = - \frac{\hbar^2}{2m} \nabla^2 + V\left(\mathbf{r}\right)$

And the Schrödinger equation is a partial differential equation

$-\frac{\hbar^2}{2 m} \left[\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2} \right] + V \psi = i \hbar \frac{\partial \psi}{\partial t}$
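This partial differential equation can be integrated numerically. The following sketch (an illustration, not part of the original derivation; units $\hbar = m = 1$ and a harmonic potential are choices made here for concreteness) uses a Crank–Nicolson step, which is exactly unitary for a Hermitian finite-difference Hamiltonian and therefore conserves the norm of ψ:

```python
import numpy as np

# Minimal Crank-Nicolson sketch for the 1D time-dependent Schrodinger
# equation, i hbar dpsi/dt = H psi (illustrative units: hbar = m = 1).
hbar = m = 1.0
N, L = 300, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                      # illustrative harmonic potential

# H = -(hbar^2 / 2m) d^2/dx^2 + V via second-order finite differences
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx**2
H = -hbar**2 / (2 * m) * lap + np.diag(V)

dt = 0.005
# psi(t+dt) = (1 + i dt H / 2hbar)^(-1) (1 - i dt H / 2hbar) psi(t)
step = np.linalg.solve(np.eye(N) + 0.5j * dt / hbar * H,
                       np.eye(N) - 0.5j * dt / hbar * H)

psi = np.exp(-(x - 1.0)**2 / 2).astype(complex)   # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)       # normalize

for _ in range(200):
    psi = step @ psi

norm = np.sum(np.abs(psi)**2) * dx                # stays 1 (unitarity)
```

The Cayley form of the propagator is a standard choice here because an explicit Euler step would not preserve the norm.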

### Time-independent Schrödinger equation

When the Hamiltonian does not depend on time (which, in the case of a single particle, is when the potential energy does not change in time) there are special solutions of the time-dependent equation which form standing waves. These waves have a constant energy/frequency, oscillating in time without changing shape. They obey what is sometimes called the time-independent Schrödinger equation

$\hat H \psi = E \psi \,$

which means that

$i \hbar \frac{\partial \psi}{\partial t} = E \psi$

so that ψ has constant frequency.

$\psi\left(\mathbf{r}, t\right) = \phi\left(\mathbf{r}\right) e^{-iEt/\hbar}$

where $\phi\left(\mathbf{r}\right)$ is the value of ψ at t = 0. Such a solution describes a stationary state in quantum mechanics, a state with a definite value of the energy. In such a state, all the probabilities for the outcomes of any measurement do not depend on time.

For a particle in a one-dimensional potential, the standing-wave condition is:

$-\frac{\hbar^2}{2 m} \frac{d^2 \phi (x)}{dx^2} + V(x) \phi (x) = E \phi (x).$

In more dimensions, the only difference is more space derivatives:

$\left[-\frac{\hbar^2}{2 m} \nabla^2 + V(\mathbf{r}) \right] \phi (\mathbf{r}) = E \phi (\mathbf{r}),$

where $\nabla^2$ is the Laplacian.
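In practice the time-independent equation is often solved numerically as a matrix eigenvalue problem: discretizing the Laplacian on a grid turns $\hat H \phi = E \phi$ into a symmetric-matrix diagonalization. A minimal sketch (the harmonic potential and units $\hbar = m = \omega = 1$ are illustrative choices, for which the exact levels are $E_n = n + \tfrac12$):

```python
import numpy as np

# Finite-difference sketch of the time-independent Schrodinger equation
# for the 1D harmonic oscillator, V = x^2/2 (units hbar = m = omega = 1).
# Exact eigenvalues are E_n = n + 1/2.
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

# Discretized Hamiltonian: H = -(1/2) d^2/dx^2 + V(x)
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N)
       + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E, phi = np.linalg.eigh(H)     # eigenvalues in ascending order
lowest = E[:4]                 # approaches [0.5, 1.5, 2.5, 3.5]
```

The columns of `phi` are the discretized eigenfunctions $\phi_n(x)$; the accuracy of the eigenvalues improves as $O(dx^2)$.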

### Bra-ket versions

In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. A normalizable wavefunction is just an alternate name for a vector in Hilbert space; the two terms are synonyms. This is true even though in general the vectors do not describe the probability amplitudes for a particle to be in a certain position, so they don't "wave" in any physical sense. The only wavefunction that is a wave in space and time is the wavefunction for a single particle in the position representation.

Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state. The Schrödinger equation gives the rate of change of the state vector:

$i\hbar {d \over dt } |\psi\rangle = \hat H(t)|\psi\rangle$

where $|\psi\rangle$ is a ket, an infinite-component complex vector, and $\hat H(t)$ is the Hamiltonian, a linear map from kets to kets. The Hamiltonian should be a self-adjoint operator, so that its eigenvalues are real.

The nonzero elements of a Hilbert space are by definition normalizable and it is convenient to represent a state by an element of the ray which has unit length. For every time-independent Hamiltonian operator, $\hat H$, there exists a set of quantum states, $\left|\psi_n\right\rang$, known as energy eigenstates, and corresponding real numbers En satisfying the eigenvalue equation,

$\hat H \left|\psi_n\right\rang = E_n \left|\psi_n \right\rang.$

Alternatively, ψ is said to be an eigenstate (eigenket) of $\hat H$ with eigenvalue E. Such a state possesses a definite total energy, whose value En is the eigenvalue of the Hamiltonian. The corresponding eigenvector $\psi_n\,$ is normalizable to unity. This eigenvalue equation is referred to as the time-independent Schrödinger equation. We purposely left out the variable(s) on which the wavefunction $\psi_n\,$ depends.

Self-adjoint operators, such as the Hamiltonian, have the property that their eigenvalues are always real numbers, as we would expect, since the energy is a physically observable quantity. Sometimes more than one linearly independent state vector corresponds to the same energy En. If the maximum number of linearly independent eigenvectors corresponding to En equals k, we say that the energy level En is k-fold degenerate. When k=1 the energy level is called non-degenerate.

On inserting a solution of the time-independent Schrödinger equation into the full Schrödinger equation, we get

$\mathrm{i} \hbar \frac{\partial}{\partial t} \left| \psi_n \left(t\right) \right\rangle = E_n \left|\psi_n\left(t\right)\right\rang.$

It is relatively easy to solve this equation. One finds that the energy eigenstates (i.e., solutions of the time-independent Schrödinger equation) change as a function of time only trivially, namely, only by a complex phase:

$\left| \psi \left(t\right) \right\rangle = \mathrm{e}^{-\mathrm{i} Et / \hbar} \left|\psi\left(0\right)\right\rang.$

It immediately follows that the probability density,

$\psi(t)^*\psi(t) = \mathrm{e}^{\mathrm{i} Et / \hbar}\mathrm{e}^{-\mathrm{i} Et / \hbar}\psi(0)^*\psi(0) = |\psi(0)|^2,$

is time-independent. Because of a similar cancellation of phase factors in bra and ket, all average (expectation) values of time-independent observables (physical quantities) computed from $\psi(t)\,$ are time-independent.

Energy eigenstates are convenient to work with because they form a complete set of states. That is, the eigenvectors $\left\{\left|n\right\rang\right\}$ form a basis for the state space. We introduced here the short-hand notation $|\,n\,\rang = \psi_n$. Then any state vector that is a solution of the time-dependent Schrödinger equation (with a time-independent $\hat H$) $\left|\psi\left(t\right)\right\rang$ can be written as a linear superposition of energy eigenstates:

$\left|\psi\left(t\right)\right\rang = \sum_n c_n(t) \left|n\right\rang \quad,\quad \hat H \left|n\right\rang = E_n \left|n\right\rang \quad,\quad \sum_n \left|c_n\left(t\right)\right|^2 = 1.$

(The last equation enforces the requirement that $\left|\psi\left(t\right)\right\rang$, like all state vectors, may be normalized to a unit vector.) Applying the Hamiltonian operator to each side of the first equation, using the time-dependent Schrödinger equation on the left-hand side, and using the fact that the energy basis vectors are by definition linearly independent, we readily obtain

$\mathrm{i}\hbar \frac{\partial c_n}{\partial t} = E_n c_n\left(t\right).$

Therefore, if we know the decomposition of $\left|\psi\left(t\right)\right\rang$ into the energy basis at time t = 0, its value at any subsequent time is given simply by

$\left|\psi\left(t\right)\right\rang = \sum_n \mathrm{e}^{-\mathrm{i}E_nt/\hbar} c_n\left(0\right) \left|n\right\rang.$

Note that when some values $c_n(0)\,$ are not equal to zero for differing energy values $E_n\,$, the left-hand side is not an eigenvector of the energy operator $\hat H$. The left-hand side is an eigenvector when the only $c_n(0)\,$-values not equal to zero belong to the same energy, so that $\mathrm{e}^{-\mathrm{i}E_nt/\hbar}$ can be factored out. In many real-world applications this is the case and the state vector $\psi(t)\,$ (containing time only in its phase factor) is then a solution of the time-independent Schrödinger equation.
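The expansion in energy eigenstates is easy to exercise numerically. In the sketch below (a randomly generated finite-dimensional Hermitian matrix stands in for $\hat H$, and $\hbar = 1$; both are illustrative assumptions), the state is evolved by rotating each coefficient $c_n$ by its phase $\mathrm{e}^{-\mathrm{i}E_nt}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2          # random Hermitian "Hamiltonian", hbar = 1

E, U = np.linalg.eigh(H)          # H @ U[:, k] == E[k] * U[:, k]

def evolve(psi0, t):
    """|psi(t)> = sum_n exp(-i E_n t) c_n(0) |n>, with c_n(0) = <n|psi(0)>."""
    c0 = U.conj().T @ psi0        # decompose into the energy basis
    return U @ (np.exp(-1j * E * t) * c0)

psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

psi_a = evolve(psi0, 0.7)                 # evolve by t = 0.7 in one go
psi_b = evolve(evolve(psi0, 0.3), 0.4)    # 0.3 then 0.4: the same state
```

Evolving a single eigenstate `U[:, 0]` multiplies it only by the phase $\mathrm{e}^{-\mathrm{i}E_0t}$, and the norm of the coefficient vector is conserved, as in the text.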

## Properties

### Linearity

The Schrödinger equation (in any form) is linear in the wavefunction, meaning that if ψ(x,t) and φ(x,t) are solutions, then so is aψ + bφ, where a and b are any complex numbers. This property of the Schrödinger equation has important consequences.

### Conservation of probability

In order to describe how probability density changes with time, we define a probability current or probability flux. The probability flux represents a flowing of probability across space.

For example, consider a Gaussian probability curve centered around x0 with x0 moving at speed v to the right. One may say that the probability is flowing towards the right, i.e., there is a probability flux directed to the right.

The probability flux $\mathbf{j}$ is defined as:

$\mathbf{j} = {\hbar \over m} \cdot {1 \over {2 \mathrm{i}}} \left( \psi ^{*} \nabla \psi - \psi \nabla \psi^{*} \right) = {\hbar \over m} \operatorname{Im} \left( \psi ^{*} \nabla \psi \right)$

and is measured in units of (probability)/(area × time) = $r^{-2}t^{-1}$.

The probability flux satisfies the required continuity equation for a conserved quantity, i.e.:

${ \partial \over \partial t} P\left(x,t\right) + \nabla \cdot \mathbf{j} = 0$

where $P\left(x, t\right)$ is the probability density, measured in units of (probability)/(volume) = $r^{-3}$. This equation is the mathematical equivalent of the probability conservation law.

A standard calculation shows that for a plane wave described by the wavefunction,

$\psi (x,t) = \, A e^{ \mathrm{i} (k x - \omega t)}$

the probability flux is given by

$j\left(x,t\right) = \left|A\right|^2 {k \hbar \over m}$

showing that not only is the probability of finding the particle in a plane wave state the same everywhere at all times, but also that it is moving at constant speed everywhere.
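This plane-wave flux is easy to check numerically. The sketch below (illustrative parameters, with $\hbar = m = 1$) samples $\psi = A e^{ikx}$ on a grid, forms $j = (\hbar/m)\,\operatorname{Im}(\psi^*\,\partial_x\psi)$ with a finite-difference gradient, and compares with $|A|^2 \hbar k/m$:

```python
import numpy as np

# Check j = (hbar/m) Im(psi* dpsi/dx) = |A|^2 hbar k / m for a plane wave.
hbar = m = 1.0
A, k = 0.5, 2.0
x = np.linspace(0.0, 10.0, 2001)
dx = x[1] - x[0]

psi = A * np.exp(1j * k * x)              # plane wave at a fixed time
dpsi = np.gradient(psi, dx)               # second-order finite difference
j = (hbar / m) * np.imag(np.conj(psi) * dpsi)

expected = abs(A)**2 * hbar * k / m       # = 0.5 for these parameters
```

The computed flux is constant across the grid (up to finite-difference error), matching the statement that probability flows at the same constant rate everywhere.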

### Correspondence principle

Main article: Ehrenfest Theorem

The Schrödinger equation satisfies the correspondence principle.

## Solutions

Analytical solutions of the time-independent Schrödinger equation can be obtained for a variety of relatively simple conditions. These solutions provide insight into the nature of quantum phenomena and sometimes provide a reasonable approximation of the behavior of more complex systems (e.g., in statistical mechanics, molecular vibrations are often approximated as harmonic oscillators). Several of the more common analytical solutions can be found in the list of quantum mechanical systems with analytical solutions.

For many systems, however, there is no analytic solution to the Schrödinger equation. In these cases, one must resort to approximate solutions. Some of the common techniques are:

- Perturbation theory
- The variational principle
- Hartree–Fock and post-Hartree–Fock methods
- Quantum Monte Carlo
- Density functional theory
- The WKB approximation
- The discrete delta potential method

## Free Schrödinger equation

When the potential is zero, the Schrödinger equation is linear with constant coefficients:

$i \frac{\partial \psi}{\partial t}=-{1\over 2m}\nabla^2\psi$

where $\scriptstyle \hbar$ has been set to 1. The solution ψt(x) for any initial condition ψ0(x) can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave:

$\psi_0(x) = A e^{i k x}\,$

stays a plane wave. Only the coefficient changes. Substituting:

${dA \over dt} = -{i k^2 \over 2m} A\,$

So that A is also oscillating in time:

$A(t) = A e^{- i {k^2 \over 2m} t}\,$

and the solution is:

$\psi_t(x) = A e^{i k x - i \omega t}\,$

where $\omega = k^2/2m$, a restatement of de Broglie's relations.

To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:

$\psi_0(x) = \int_k \psi(k) e^{ikx}\,$

The equation is linear, so each plane wave evolves independently:

$\psi_t(x) = \int_k \psi(k)e^{-i\omega t} e^{ikx}\,$

This is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time: Fourier transform the initial conditions, multiply by a phase, and transform back.
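The three-step recipe can be sketched directly with the fast Fourier transform. Below (the Gaussian test state and grid parameters are illustrative; $\hbar = 1$ as in this section, with m kept explicit) the initial wavefunction is transformed, each mode is multiplied by $e^{-i\omega t}$ with $\omega = k^2/2m$, and the result is transformed back; it agrees with the analytic spreading Gaussian of the next subsection:

```python
import numpy as np

# Free-particle evolution by the three-step recipe:
# FFT the initial condition, multiply by a phase, FFT back (hbar = 1).
m, a, t = 1.0, 1.0, 1.0
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers of each mode

psi0 = np.exp(-x**2 / (2 * a))               # Gaussian initial condition
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 / (2 * m) * t))

# Analytic spreading Gaussian for comparison (principal branch of sqrt)
s = a + 1j * t / m
exact = np.sqrt(a / s) * np.exp(-x**2 / (2 * s))
```

The agreement is at the level of floating-point roundoff, since the FFT evolution is exact for a well-sampled, effectively band-limited initial condition.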

### Gaussian Wavepacket

An easy and instructive example is the Gaussian wavepacket:

$\psi_0(x) = e^{-x^2 / 2a}\,$

where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is:

$\langle \psi|\psi\rangle = \int_x \psi^* \psi = \sqrt{\pi a}$

The Fourier transform is a Gaussian again in terms of the wavenumber k:

$\psi_0(k) = (2\pi a)^{d/2} e^{- a k^2/2}\,$

with the physics convention which puts the factors of 2π in Fourier transforms in the k-measure:

$\psi_0(x) = \int_k \psi_0(k) e^{-ikx} = \int {d^dk \over (2\pi)^d} \psi_0(k) e^{-ikx}$

Each separate wave only phase-rotates in time, so that the time dependent Fourier-transformed solution is:

$\psi_t(k) = (2\pi a)^{d/2} e^{- a { k^2\over 2} - it {k^2\over 2m}} = (2\pi a)^{d/2} e^{-(a+it/m){k^2\over 2}}\,$

The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor.

$\psi_t(x) = \left({a \over a + i t/m}\right)^{d/2} e^{- {x^2\over 2(a + i t/m)} }\,$

The branch of the square root is determined by continuity in time--- it is the value which is nearest to the positive square root of a. It is convenient to rescale time to absorb m, replacing t/m by t.

The integral of ψ over all space is invariant, because it is the inner product of ψ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy state, with wavefunction η(x), the inner product:

$\langle \eta | \psi \rangle = \int_x \eta(x) \psi_t(x)$,

only changes in time in a simple way: its phase rotates with a frequency determined by the energy of η. When η has zero energy, like the infinite wavelength wave, it doesn't change at all.

The sum of the absolute square of ψ is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension:

$|\psi|^2 = \psi\psi^* = {a \over \sqrt{a^2+t^2} } e^{-{x^2 a \over a^2 + t^2}}$

This gives the norm:

$\int |\psi|^2 = \sqrt{\pi a}$

which has preserved its value, as it must.

The width of the Gaussian is the interesting quantity, and it can be read off from the form of $|\psi|^2$:

$\sqrt{a^2 + t^2 \over a}\,$.

The width eventually grows linearly in time, as $\scriptstyle t/\sqrt{a}$. This is wave-packet spreading--- no matter how narrow the initial wavefunction, a Schrödinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty--- the wavepacket is confined to a narrow width $\scriptstyle \sqrt{a}$ and so has a momentum which is uncertain by the reciprocal amount $\scriptstyle 1/\sqrt{a}$, a spread in velocity of $\scriptstyle 1/m\sqrt{a}$, and therefore in the future position by $\scriptstyle t/m\sqrt{a}$, where the factor of m has been restored by undoing the earlier rescaling of time.
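The spreading law can be verified numerically: evolve the Gaussian with the free equation and measure $\langle x^2\rangle$, which for $|\psi|^2 \propto e^{-x^2 a/(a^2+t^2)}$ equals $(a^2+t^2)/2a$. In the sketch below (illustrative grid; m is absorbed into t as in the text, i.e. $m = 1$, $\hbar = 1$):

```python
import numpy as np

# Wave-packet spreading: the variance of |psi|^2 grows as (a^2 + t^2)/(2a)
# (units hbar = m = 1, so t/m -> t as in the text).
a, t = 1.0, 2.0
N, L = 2048, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi0 = np.exp(-x**2 / (2 * a))                 # width-sqrt(a) Gaussian
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

prob = np.abs(psi_t)**2
prob /= np.sum(prob) * dx                      # normalize to a density
var = np.sum(x**2 * prob) * dx                 # <x^2>, since <x> = 0

expected = (a**2 + t**2) / (2 * a)             # = 2.5 for a = 1, t = 2
```

For large t the standard deviation $\sqrt{\langle x^2\rangle}$ grows like $t/\sqrt{2a}$, the linear spreading described above.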

### Galilean Invariance

Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity -v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics:

$p'= p + mv\,$
$x'= x + vt\,$

So that the phase factor of a free Schrödinger plane wave:

$p x - E t = (p' - mv)(x' - vt) - {(p'-mv)^2\over 2m} t = p' x' - E' t - m v x' + {mv^2\over 2}t\,$

is only different in the boosted coordinates by a phase which depends on x and t, but not on p.

An arbitrary superposition of plane wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x,t dependent phase factor. So any solution to the free Schrödinger equation, $\psi_t(x)$, can be boosted into other solutions:

$\psi'_t(x) = \psi_t(x - vt) e^{ i mv x - i {mv^2\over 2}t}\,$

Boosting a constant wavefunction produces a plane-wave. More generally, boosting a plane-wave:

$\psi_t(x) = e^{ipx - i {p^2\over 2m} t}\,$

produces a boosted wave:

$\psi'_t(x) = e^{ i p(x - vt) - i{p^2\over 2m}t + imv x - i {mv^2\over 2}t} = e^{i(p+mv)x - i {(p+mv)^2\over 2m}t }\,$

Boosting the spreading Gaussian solution:

$\psi_t(x) = {1\over \sqrt{a+it/m}} e^{ - {x^2\over 2(a+it/m)} }\,$

produces the moving Gaussian:

$\psi'_t(x) = {1\over \sqrt{a + it/m}} e^{ - {(x - vt)^2 \over 2(a+it/m)} + i m v x - i {mv^2\over 2} t } \,$

Which spreads in the same way.
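The boost rule can be verified symbolically. The following SymPy sketch (units m = ℏ = 1; it checks the Gaussian family rather than a general solution) confirms that the boosted spreading Gaussian still satisfies the free Schrödinger equation $i\partial_t\psi = -\tfrac{1}{2}\partial_x^2\psi$:

```python
import sympy as sp

# Symbolic check (m = hbar = 1) that the boosted Gaussian
#   psi'(x,t) = psi(x - v t, t) * exp(i v x - i v^2 t / 2)
# still solves the free Schrodinger equation i d(psi)/dt = -(1/2) d^2(psi)/dx^2.
x, t, v = sp.symbols('x t v', real=True)
a = sp.Symbol('a', positive=True)

z = a + sp.I * t
psi = sp.exp(-(x - v * t)**2 / (2 * z)) / sp.sqrt(z) \
      * sp.exp(sp.I * v * x - sp.I * v**2 * t / 2)

# residual of the free Schrodinger equation; should vanish identically
residual = sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) / 2
assert sp.simplify(residual / psi) == 0
```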

### Free Propagator

The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K. When a is the infinitesimal quantity ε, the Gaussian initial condition, rescaled so that its integral is one:

$\psi_0(x) = {1\over \sqrt{2\pi \epsilon} } e^{-{x^2\over 2\epsilon}}\,$

becomes a delta function, so that its time evolution:

$K_t(x) = {1\over \sqrt{2\pi (i t + \epsilon)}} e^{ - {x^2 \over 2(it+\epsilon)} }\,$

gives the propagator.

Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange--- the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct since the square of a delta function is divergent in the same way.

The factor of ε is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that ε becomes zero, K becomes purely oscillatory and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit $\scriptstyle \epsilon\rightarrow 0$ is only to be taken after the final state is calculated.

The propagator is the amplitude for reaching point x at time t, when starting at the origin, x=0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only translated:

$K_t(x,y) = K_t(x-y) = {1\over \sqrt{2\pi it}} e^{i(x-y)^2 \over 2t} \,$

In the limit when t is small, the propagator converges to a delta function:

$\lim_{t\rightarrow 0} K_t(x-y) = \delta(x-y)$

but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times:

$\int_x K_t(x) = 1\,$

since this integral is the inner-product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit $\epsilon\rightarrow 0$ is taken after everything else.

So the propagation kernel is the future time evolution of a delta function, and it is continuous in the sense that it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position x0:

$\psi_0(x) = \delta(x - x_0)\,$

it becomes the oscillatory wave:

$\psi_t(x) = {1\over \sqrt{2\pi i t}} e^{ i (x-x_0) ^2 /2t}\,$

Since every function can be written as a sum of narrow spikes:

$\psi_0(x) = \int_y \psi_0(y) \delta(x-y)\,$

the time evolution of every function is determined by the propagation kernel:

$\psi_t(x) = \int_y \psi_0(y) {1\over \sqrt{2\pi it}} e^{i (x-y)^2 / 2t}\,$

And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at y, times the amplitude that it went from y to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the initial condition:

$\psi_t = K * \psi_0\,$

Since the amplitude to travel from x to y after a time t + t' can be considered in two steps, the propagator obeys the identity:

$\int_y K(x-y;t)K(y-z;t') = K(x-z;t+t')\,$

Which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.
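The composition identity can be checked numerically for the regulated kernel. This NumPy sketch keeps a finite ε, as discussed above, so the oscillatory integrals converge absolutely; the grid and the sample points x0 are arbitrary choices:

```python
import numpy as np

# Numerical check of the propagator identity K_z * K_{z'} = K_{z+z'} for the
# regulated kernel K_z(x) = (2 pi z)^{-1/2} exp(-x^2 / 2z) with z = epsilon + i t.
# The small real part epsilon keeps the integrals absolutely convergent.
def K(x, t, eps):
    z = eps + 1j * t
    return np.exp(-x**2 / (2 * z)) / np.sqrt(2 * np.pi * z)

eps, t1, t2 = 0.5, 1.0, 2.0
y, dy = np.linspace(-25, 25, 50001, retstep=True)

# evaluate the convolution (K_{t1} * K_{t2})(x0) at a few sample points x0
for x0 in [0.0, 0.7, -1.3]:
    conv = np.sum(K(x0 - y, t1, eps) * K(y, t2, eps)) * dy
    exact = K(x0, t1 + t2, 2 * eps)      # the combined kernel K_{z1 + z2}
    assert abs(conv - exact) < 1e-5
```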

### Analytic Continuation to Diffusion

The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is random walking, the probability density function at any point satisfies the diffusion equation:

${\partial \over \partial t} \rho = {1\over 2} {\partial^2 \over \partial x^2 } \rho$

where the factor of 2, which can be removed by a rescaling of either time or space, is only for convenience.

A solution of this equation is the spreading Gaussian:

$\rho_t(x) = {1\over \sqrt{2\pi t}} e^{-x^2 \over 2t}$

and since the integral of $\rho_t$ is constant, while the width becomes narrow at small times, this function approaches a delta function at t=0:

$\lim_{t\rightarrow 0} \rho_t(x) = \delta(x)\,$

again, only in the sense of distributions, so that

$\lim_{t\rightarrow 0} \int_x f(x) \rho_t(x) = f(0)\,$

for any smooth test function f.
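The connection to random walks can be illustrated by direct simulation. In this NumPy sketch (the step count and walker number are arbitrary choices), the variance of a sum of independent steps matches the diffusion prediction $\langle x^2\rangle = t$:

```python
import numpy as np

# Monte-Carlo illustration: the position density of an unbiased random walk
# approaches the spreading Gaussian rho_t(x) = (2 pi t)^{-1/2} exp(-x^2 / 2t).
rng = np.random.default_rng(0)
t, n_steps, n_walkers = 4.0, 400, 200_000
dt = t / n_steps

# each step has variance dt, so after n_steps the total variance is t
steps = rng.normal(scale=np.sqrt(dt), size=(n_walkers, n_steps))
positions = steps.sum(axis=1)

# compare the empirical variance with the diffusion prediction <x^2> = t
assert abs(positions.var() - t) < 0.1
```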

The spreading Gaussian is the propagation kernel for the diffusion equation, and it obeys the identity:

$K_{t+t'}(x) = K_{t}*K_{t'}\,$

Which allows diffusion to be expressed as a path integral. The propagator is the exponential of an operator H:

$K_t = e^{-tH}\,$

where H is the infinitesimal diffusion operator:

$H= -{\nabla^2\over 2}\,$

The exponential can be defined over a range of complex parameters z, so long as integrals over the propagation kernel stay convergent:

$K_z = e^{-zH}\,$

As long as the real part of z is positive, K is exponentially decreasing for large values of x, and integrals over K are absolutely convergent.

The limit of this expression for z approaching the pure imaginary axis is the Schrödinger propagator:

$K_t = e^{-(it+\epsilon)H}\,$

and this gives a more conceptual explanation for the time evolution of Gaussians. The fundamental identity of exponentiation, or path integration:

$K_z * K_{z'} = K_{z+z'}\,$

holds for all complex z values where the integrals are absolutely convergent, so that the operators are well defined.

So quantum evolution starting from a Gaussian:

$\psi_0 = K_a(x)\,$

gives the time evolved state:

$\psi_t = K_{it} * K_a = K_{a+it}\,$

This explains the diffusive form of the Gaussian solutions:

$\psi_t(x) = {1\over \sqrt{a+it} } e^{- {x^2\over 2(a+it)} }\,$
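Both halves of the analytic continuation can be confirmed with a short SymPy check: the same Gaussian kernel solves the diffusion equation for a real parameter and the Schrödinger equation for the continued parameter a + it:

```python
import sympy as sp

# SymPy check of the analytic continuation: the kernel
# K_z(x) = (2 pi z)^{-1/2} exp(-x^2 / 2z) solves the diffusion equation
# dK/dt = (1/2) d^2K/dx^2 for real z = t, and the Schrodinger equation
# i dK/dt = -(1/2) d^2K/dx^2 for the continued parameter z = a + i t.
x, t = sp.symbols('x t', real=True)
a = sp.Symbol('a', positive=True)

def kernel(z):
    return sp.exp(-x**2 / (2 * z)) / sp.sqrt(2 * sp.pi * z)

# diffusion: real parameter
rho = kernel(t)
assert sp.simplify(sp.diff(rho, t) - sp.diff(rho, x, 2) / 2) == 0

# quantum: continued parameter a + i t
psi = kernel(a + sp.I * t)
assert sp.simplify(sp.I * sp.diff(psi, t) + sp.diff(psi, x, 2) / 2) == 0
```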

## Variational Principle

The variational principle asserts that for any Hermitian matrix A, the lowest eigenvalue minimizes the quantity:

$\langle v,Av \rangle = \sum_{ij} A_{ij} v^*_i v_j\,$

on the unit sphere $\langle v,v \rangle = 1$. This follows by the method of Lagrange multipliers: at the minimum, the gradient of the function is parallel to the gradient of the constraint:

${\partial\over \partial v_i} \langle v,Av\rangle = \lambda {\partial \over \partial v_i} \langle v,v\rangle\,$

which is the eigenvalue condition

$\sum_{j} A_{ij} v_j = \lambda v_i\,$

so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:

$\langle v,Av\rangle = \lambda\langle v,v\rangle = \lambda\,$

When the Hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level.

In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions ψ. The ground state minimizes

$\langle \psi | H |\psi \rangle = \int \psi^* H \psi = \int \psi^* \left(-{\nabla^2\over 2} + V(x)\right) \psi \,$

or, after an integration by parts,

$\langle \psi | H |\psi \rangle = \int {1\over 2}|\nabla \psi|^2 + V(x) |\psi|^2\,$

All the stationary values are real, since the integrand is real. In general, when a wavefunction ψ is an eigenstate of the Schrödinger equation in a potential, the real and imaginary parts of ψ are separately eigenstates with the same eigenvalue.

The lowest energy state has a positive definite wavefunction, because given a ψ which minimizes the integral, the absolute value |ψ| is also a minimizer. But this minimizer has sharp corners at places where ψ changes sign, and these sharp corners can be rounded out to reduce the gradient contribution.
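A finite-dimensional illustration (a NumPy sketch; the grid size and the trial state are arbitrary choices): discretizing $H = -\tfrac{1}{2}\nabla^2 + \tfrac{1}{2}\omega^2 x^2$ gives a Hermitian matrix whose lowest eigenvalue approximates the ground state energy ω/2, and any normalized trial vector gives a Rayleigh quotient at least that large:

```python
import numpy as np

# Grid illustration of the variational principle: discretize
# H = -(1/2) d^2/dx^2 + (1/2) omega^2 x^2 as a Hermitian matrix.  Its lowest
# eigenvalue is the ground state energy, and every normalized trial vector
# gives a Rayleigh quotient <v, Hv> at least as large.
omega = 1.0
x, dx = np.linspace(-8, 8, 1001, retstep=True)
n = x.size

# second-difference Laplacian (Dirichlet walls far away) plus diagonal potential
lap = (np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
H = -lap / 2 + np.diag(omega**2 * x**2 / 2)

E0 = np.linalg.eigvalsh(H)[0]
assert abs(E0 - omega / 2) < 1e-3     # zero point energy omega/2

v = np.exp(-x**2 / 3)                 # a non-optimal Gaussian trial state
v /= np.linalg.norm(v)
assert v @ H @ v >= E0                # variational bound
```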

## Potential and Ground State

For a particle in a positive definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrödinger, has led to many exact solutions.

A positive definite wavefunction:

$\psi = e^{-W(x)}\,$

is a solution to the time-independent Schrödinger equation with m=1 and potential:

$V(x) = {1\over 2} |\nabla W|^2 - {1\over 2} \nabla^2 W\,$

with zero total energy. W is the negative logarithm of the ground state wavefunction. The second derivative term is higher order in $\scriptstyle \hbar$, and ignoring it gives the semiclassical (WKB) approximation.

The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem, the probability for finding a particle diffusing in space with the free energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker–Planck equation for this diffusion is the Schrödinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation--- either as the energy levels of a quantum system, or the relaxation times for a stochastic equation.

### Harmonic Oscillator

W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is:

$W(x) = {\omega x^2 \over 2}\,$

with an arbitrary constant ω, which gives the potential:

$V(x) = {1\over 2} \omega^2 x^2 - {\omega \over 2}\,$

This potential describes a Harmonic oscillator, with the ground state wavefunction:

$\psi(x) = e^{-\omega x^2 / 2}\,$

The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted harmonic oscillator potential:

$V(x) = {\omega x^2 \over 2}\,$

is then:

$E_0 = {\omega\over 2}\,$

which is the zero point energy of the oscillator.
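The construction can be verified symbolically. This SymPy sketch checks that $W = \omega x^2/2$ reproduces the shifted harmonic potential and that $e^{-W}$ is an exact zero-energy eigenstate:

```python
import sympy as sp

# SymPy check of the superpotential construction for the harmonic oscillator:
# with W = omega x^2 / 2, the potential V = (W'^2 - W'')/2 is the shifted
# harmonic well and psi = exp(-W) is an exact zero-energy eigenstate.
x = sp.Symbol('x', real=True)
omega = sp.Symbol('omega', positive=True)

W = omega * x**2 / 2
V = (sp.diff(W, x)**2 - sp.diff(W, x, 2)) / 2
assert sp.simplify(V - (omega**2 * x**2 / 2 - omega / 2)) == 0

psi = sp.exp(-W)
H_psi = -sp.diff(psi, x, 2) / 2 + V * psi   # H psi should vanish (E = 0)
assert sp.simplify(H_psi) == 0
```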

### Coulomb Potential

Another simple but useful form is

$W(x) = 2a|x|\,$

where W is proportional to the radial coordinate. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, with a delta function at x = 0:

$V(x) = 2a^2 - 2a \delta(x)\,$

and, up to the additive constant 2a², this is the lowest energy state for the attractive delta function potential:

$V(x) = -2a \delta(x)\,$

with the ground state energy:

$E_0 = - 2a^2\,$

and the ground state wavefunction:

$\psi = e^{-2a|x|}\,$

In higher dimensions, the same form gives the potential:

$V(x) = 2a^2 - { a (d-1) \over r}\,$

which can be identified as the attractive Coulomb law, up to an additive constant which is the ground state energy. This is the superpotential that describes the lowest energy level of the Hydrogen atom, once the mass is restored by dimensional analysis:

$\psi_0 = e^{-r/r_0}\,$

where r0 is the Bohr radius, with energy

$E_0 = - 2a^2\,$

The ansatz

$W(x) = a r + b \log(r)\,$

modifies the Coulomb potential to include a quadratic term proportional to $1/r^2$, which is useful for nonzero angular momentum.
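The same check works in higher dimensions, using the radial Laplacian $W'' + {(d-1)\over r}W'$. This SymPy sketch confirms that W = 2ar yields a Coulomb well plus the constant 2a²:

```python
import sympy as sp

# SymPy check of the radial superpotential W = 2 a r in d dimensions:
# with the radial Laplacian L[W] = W'' + (d-1)/r W', the potential
# V = (|W'|^2 - L[W])/2 is an attractive Coulomb well plus the constant 2 a^2.
r = sp.Symbol('r', positive=True)
a, d = sp.symbols('a d', positive=True)

W = 2 * a * r
lap_W = sp.diff(W, r, 2) + (d - 1) / r * sp.diff(W, r)
V = (sp.diff(W, r)**2 - lap_W) / 2
assert sp.simplify(V - (2 * a**2 - a * (d - 1) / r)) == 0
```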

## Operator Formalism

### Galilean Invariance

Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px - Ht must have a very special form--- translations in p need to be compensated by a shift in H. This is only true when H is quadratic.

The infinitesimal generator of boosts in both the classical and quantum case is:

$B = \sum_i m_i x_i(t) - t \sum_i p_i\,$

where the sum is over the different particles, and B,x,p are vectors.

The Poisson bracket (or commutator) of $\scriptstyle B\cdot v$ with x and p generates infinitesimal boosts, with v the infinitesimal boost velocity vector:

$[B\cdot v ,x_i] = vt\,$
$[B\cdot v ,p_i] = v m_i\,$

Iterating these relations is simple, since they add a constant amount at each step. By iterating, the dv's incrementally sum up to the finite quantity V:

$x_i \rightarrow x_i + Vt$
$p_i \rightarrow p_i + m_i V$

B divided by the total mass is the current center of mass position minus the time times the center of mass velocity:

$B = M X_\mathrm{cm} - t P_\mathrm{cm}\,$

In other words, B/M is the current guess for the position that the center of mass had at time zero.

The statement that B doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.
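The conservation of B can be checked directly in the classical formalism. This SymPy sketch computes the Poisson bracket for a single particle and confirms that dB/dt vanishes:

```python
import sympy as sp

# Classical check of the center-of-mass theorem for a single particle:
# with B = m x - t p and H = p^2 / 2m, the Poisson bracket {B, H} plus the
# explicit time derivative of B vanishes, so B is conserved.
x, p, t, m = sp.symbols('x p t m', positive=True)

B = m * x - t * p
H = p**2 / (2 * m)

def poisson(f, g):
    # canonical Poisson bracket {f, g} in the single pair (x, p)
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

dB_dt = poisson(B, H) + sp.diff(B, t)
assert sp.simplify(dB_dt) == 0
```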

Since B is explicitly time dependent, H does not commute with B, rather:

${dB\over dt} = [H,B] + {\partial B \over \partial t} = 0\,$

this gives the transformation law for H under infinitesimal boosts:

$[B\cdot v,H] = - P_\mathrm{cm} \cdot v\,$

the interpretation of this formula is that the change in H under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.

The two quantities (H,P) form a representation of the Galilean group with central charge M, where only H and P are classical functions on phase-space or quantum mechanical operators, while M is a parameter. The transformation law for infinitesimal v:

$P' = P + M v\,$
$H' = H - P\cdot v\,$

can be iterated as before--- P goes from P to P+MV in infinitesimal increments of v, while H changes at each step by an amount proportional to P, which changes linearly. The final value of H is then changed by the value of P halfway between the starting value and the ending value:

$H' = H - (P+{MV\over 2})\cdot V = H - P\cdot V - {MV^2\over 2}\,$

The factors proportional to the central charge M are the extra wavefunction phases.

Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single particle. Given a multi-particle time dependent solution:

$\psi_t(x_1,x_2...,x_n)\,$

with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:

$\psi'_t = \psi_t(x_1 - v t, ..., x_n - vt) e^{i M v\cdot X_\mathrm{cm} - i {Mv^2\over 2}t}\,$

For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored.

## Relativistic generalisations

The Schrödinger equation does not take into account relativistic effects, meaning that the Schrödinger equation is invariant under a Galilean transformation, but not under a Lorentz transformation. Before the creation of quantum field theory, physicists attempted to formulate versions of the Schrödinger equation which were compatible with special relativity.

Relativistically valid generalisations incorporating ideas from special relativity include the Klein–Gordon equation and the Dirac equation.