| Property | Value |
|---|---|
| Parameters | $a,b \in (-\infty,\infty)$ |
| Support | $a \le x \le b$ |
| PDF (maximum convention) | $\frac{1}{b-a}$ for $a \le x \le b$; $0$ for $x < a$ or $x > b$ |
| CDF | $0$ for $x < a$; $\frac{x-a}{b-a}$ for $a \le x < b$; $1$ for $x \ge b$ |
| Mean | $\frac{a+b}{2}$ |
| Median | $\frac{a+b}{2}$ |
| Mode | any value in $[a,b]$ |
| Variance | $\frac{(b-a)^2}{12}$ |
| Skewness | $0$ |
| Excess kurtosis | $-\frac{6}{5}$ |
| Entropy | $\ln(b-a)$ |
| Moment-generating function | $\frac{e^{tb}-e^{ta}}{t(b-a)}$ |
| Characteristic function | $\frac{e^{itb}-e^{ita}}{it(b-a)}$ |

In probability theory and statistics, the continuous uniform distribution is a family of probability distributions such that for each member of the family, all intervals of the same length on the distribution's support are equally probable.
The support is defined by the two parameters, a and b, which are its minimum and maximum values. The distribution is often abbreviated U(a,b).

## Characterization

### Probability density function

The probability density function of the continuous uniform distribution is:

$f(x)=\left\{\begin{matrix} \frac{1}{b - a} & \ \ \ \mathrm{for}\ a \le x \le b, \\ \\ 0 & \mathrm{for}\ x < a\ \mathrm{or}\ x > b, \end{matrix}\right.$

The values of f(x) at the two boundaries a and b are usually unimportant because they do not alter the value of the integral of f(x) dx over any interval, nor of x f(x) dx or the like. Sometimes they are chosen to be zero, and sometimes chosen to be 1/(b − a). The latter is appropriate in the context of estimation by the method of maximum likelihood. In the context of Fourier analysis, one may take the value of f(a) or f(b) to be 1/(2(b − a)), since then the inverse transform of many integral transforms of this uniform function will yield back the function itself, rather than a function which is equal "almost everywhere", i.e. except on a set of points with zero measure. This choice is also consistent with the sign function, which has no such ambiguity.

### Cumulative distribution function

$F(x)=\left\{\begin{matrix} 0 & \mbox{for }x < a \\ \\ \frac{x-a}{b-a} & \ \ \ \mbox{for }a \le x < b \\ \\ 1 & \mbox{for }x \ge b \end{matrix}\right. \,\!$
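The density and distribution functions above translate directly into code. The following is a minimal sketch (the function names `uniform_pdf` and `uniform_cdf` are illustrative, not from any library), using the maximum convention f(a) = f(b) = 1/(b − a):

```python
def uniform_pdf(x, a, b):
    """Density of U(a, b): 1/(b - a) on [a, b], zero elsewhere."""
    return 1.0 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    """Cumulative distribution function of U(a, b)."""
    if x < a:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    return 1.0

# Example with U(2, 6): density is 1/4 on the support, CDF is linear on it.
print(uniform_pdf(3.0, 2.0, 6.0))  # 0.25
print(uniform_cdf(4.0, 2.0, 6.0))  # 0.5
```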

### Generating functions

#### Moment-generating function

$M_X(t) = \operatorname{E}(e^{tX}) = \frac{e^{tb}-e^{ta}}{t(b-a)} \quad \mbox{for } t \neq 0, \,\!$

from which we may calculate the raw moments $m_k$:

$m_1=\frac{a+b}{2}, \,\!$
$m_2=\frac{a^2+ab+b^2}{3}, \,\!$
$m_k=\frac{1}{k+1}\sum_{i=0}^k a^ib^{k-i}. \,\!$

For a random variable following this distribution, the expected value is then m1 = (a + b)/2 and the variance is m2 − m1² = (b − a)²/12.
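The closed form for $m_k$ can be checked with exact rational arithmetic; a short sketch (the helper name `raw_moment` is made up for this example):

```python
from fractions import Fraction

def raw_moment(k, a, b):
    """k-th raw moment of U(a, b): m_k = (1/(k+1)) * sum_{i=0}^{k} a^i b^(k-i)."""
    return Fraction(1, k + 1) * sum(
        Fraction(a) ** i * Fraction(b) ** (k - i) for i in range(k + 1)
    )

a, b = 2, 6
m1 = raw_moment(1, a, b)
m2 = raw_moment(2, a, b)
print(m1)            # (a + b)/2 = 4
print(m2 - m1 ** 2)  # (b - a)^2/12 = 4/3
```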

#### Cumulant-generating function

For n ≥ 2, the nth cumulant of the uniform distribution on the interval [0, 1] is $B_n/n$, where $B_n$ is the nth Bernoulli number.
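This can be verified for the first few cumulants from the raw moments $m_k = 1/(k+1)$ of U(0, 1), using the standard moment-to-cumulant relations and the values $B_2 = 1/6$, $B_3 = 0$, $B_4 = -1/30$:

```python
from fractions import Fraction

# Raw moments of U(0, 1): m_k = 1/(k+1), for k = 0..4.
m = [Fraction(1, k + 1) for k in range(5)]

# Cumulants 2..4 from raw moments (standard moment-cumulant relations).
k2 = m[2] - m[1] ** 2
k3 = m[3] - 3 * m[1] * m[2] + 2 * m[1] ** 3
k4 = m[4] - 4 * m[1] * m[3] - 3 * m[2] ** 2 + 12 * m[1] ** 2 * m[2] - 6 * m[1] ** 4

print(k2)  # B_2/2 = 1/12
print(k3)  # B_3/3 = 0
print(k4)  # B_4/4 = -1/120
```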

## Properties

### Generalization to Borel sets

This distribution can be generalized to more complicated sets than intervals. If S is a Borel set of positive, finite measure, the uniform probability distribution on S can be specified by defining the pdf to be zero outside S and constantly equal to 1/K on S, where K is the Lebesgue measure of S.

### Order statistics

Let X1, . . . , Xn be an i.i.d. sample from U(0,1). Let X(k) be the kth order statistic from this sample. Then the probability distribution of X(k) is a beta distribution with parameters k and n − k + 1. The expected value is

$\operatorname{E}(X_{(k)}) = {k \over n+1}.$

This fact is useful when making Q-Q plots.

The variances are

$\operatorname{Var}(X_{(k)}) = {k (n-k+1) \over (n+1)^2 (n+2)} .$
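Both formulas are easy to check with a seeded simulation; the sketch below (variable names are illustrative) estimates the mean and variance of the 2nd-smallest of 5 standard-uniform draws:

```python
import random

random.seed(0)
n, k = 5, 2            # k-th smallest of n i.i.d. U(0, 1) draws
trials = 200_000

samples = [sorted(random.random() for _ in range(n))[k - 1] for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

print(mean)  # close to k/(n+1) = 2/6 ≈ 0.333
print(var)   # close to k(n-k+1)/((n+1)^2 (n+2)) = 8/252 ≈ 0.032
```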

### 'Uniformity'

The probability that a uniformly distributed random variable falls within any interval of fixed length is independent of the location of the interval itself (but it is dependent on the interval size), so long as the interval is contained in the distribution's support.

To see this, if X ~ U(a,b) and [x, x+d] is a subinterval of [a,b] with fixed d > 0, then

$P\left(X\in\left[x,x+d\right]\right) = \int_{x}^{x+d} \frac{\mathrm{d}y}{b-a} = \frac{d}{b-a} \,\!$

which is independent of x. This fact motivates the distribution's name.
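A quick numerical illustration of this property: sliding an interval of fixed length d across the support of U(0, 10) leaves the probability F(x+d) − F(x) unchanged (variable names here are illustrative):

```python
a, b, d = 0.0, 10.0, 2.0

def cdf(x):
    """CDF of U(a, b), valid for x inside the support."""
    return (x - a) / (b - a)

# Probability of landing in [x, x+d] at several positions x: always d/(b-a) = 0.2.
probs = [cdf(x + d) - cdf(x) for x in (0.0, 1.5, 4.0, 7.9)]
print(probs)  # each entry equals 0.2 up to floating-point rounding
```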

## Standard uniform

Restricting a = 0 and b = 1, the resulting distribution U(0,1) is called the standard uniform distribution.

One interesting property of the standard uniform distribution is that if u1 has a standard uniform distribution, then so does 1-u1.
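This reflection symmetry can be seen empirically: the empirical CDF of 1 − u1 evaluated at x stays close to x, the standard-uniform CDF. A seeded sketch:

```python
import random

random.seed(42)
u = [random.random() for _ in range(100_000)]
reflected = [1.0 - x for x in u]

# Empirical CDF of 1 - U at a few points; for U(0, 1) the CDF is F(x) = x.
for x in (0.25, 0.5, 0.75):
    ecdf = sum(r <= x for r in reflected) / len(reflected)
    print(x, ecdf)  # ecdf ≈ x
```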

## Related distributions

If X has a standard uniform distribution,

• Y = -ln(X)/λ has an exponential distribution with (rate) parameter λ.
• Y = 1 - X^(1/n) has a beta distribution with parameters 1 and n. (Note this implies that the standard uniform distribution is a special case of the beta distribution, with parameters 1 and 1.)
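Both transformations can be checked against the known means of the target distributions (1/λ for the exponential, 1/(1+n) for Beta(1, n)); a seeded sketch with illustrative parameter choices:

```python
import math
import random

random.seed(1)
lam, n, trials = 2.0, 3, 200_000
x = [random.random() for _ in range(trials)]

exp_samples = [-math.log(v) / lam for v in x]      # should follow Exponential(rate=lam)
beta_samples = [1.0 - v ** (1.0 / n) for v in x]   # should follow Beta(1, n)

print(sum(exp_samples) / trials)   # close to 1/lam = 0.5
print(sum(beta_samples) / trials)  # close to 1/(1+n) = 0.25
```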

## Relationship to other functions

As long as the same conventions are followed at the transition points, the probability density function may also be expressed in terms of the Heaviside step function:

$f(x)=\frac{\operatorname{H}(x-a)-\operatorname{H}(x-b)}{b-a}, \,\!$

or in terms of the rectangle function

$f(x)=\frac{1}{b-a}\,\operatorname{rect}\left(\frac{x-\left(\frac{a+b}{2}\right)}{b-a}\right) .$

There is no ambiguity at the transition point of the sign function. Using the half-maximum convention at the transition points, the uniform distribution may be expressed in terms of the sign function as:

$f(x)=\frac{ \sgn{(x-a)}-\sgn{(x-b)}} {2(b-a)}.$

## Applications

In statistics, when a p-value is used as a test statistic for a simple null hypothesis, and the distribution of the test statistic is continuous, then the test statistic (p-value) is uniformly distributed between 0 and 1 if the null hypothesis is true.

### Sampling from a uniform distribution

There are many applications in which it is useful to run simulation experiments. Many programming languages have the ability to generate pseudo-random numbers which are effectively distributed according to the standard uniform distribution.

If u is a value sampled from the standard uniform distribution, then the value a + (ba)u follows the uniform distribution parametrised by a and b, as described above.
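This rescaling is a one-liner; the sketch below (the helper name `sample_uniform` is made up for this example) draws from U(−3, 5) and sanity-checks the result:

```python
import random

random.seed(7)

def sample_uniform(a, b):
    """Draw from U(a, b) by rescaling a standard-uniform draw: a + (b - a) * u."""
    u = random.random()          # u ~ U(0, 1)
    return a + (b - a) * u

draws = [sample_uniform(-3.0, 5.0) for _ in range(100_000)]
print(sum(draws) / len(draws))               # close to (a + b)/2 = 1.0
print(min(draws) >= -3.0, max(draws) <= 5.0) # all draws stay in [a, b]
```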

### Sampling from an arbitrary distribution

The uniform distribution is useful for sampling from arbitrary distributions. A general method is inverse transform sampling, which uses the cumulative distribution function (CDF) of the target random variable. This method is very useful in theoretical work. Since simulations using this method require inverting the CDF of the target variable, alternative methods have been devised for the cases where the CDF is not known in closed form. One such method is rejection sampling.
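As a concrete instance of inverse transform sampling, the exponential distribution has the invertible CDF F(y) = 1 − e^(−λy), so F⁻¹(u) = −ln(1 − u)/λ applied to a standard-uniform draw yields an exponential draw. A seeded sketch (function name illustrative):

```python
import math
import random

random.seed(3)

def sample_exponential(lam):
    """Inverse transform sampling: apply F^{-1}(u) = -ln(1 - u)/lam to u ~ U(0, 1)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

lam = 0.5
draws = [sample_exponential(lam) for _ in range(200_000)]
print(sum(draws) / len(draws))  # close to the exponential mean 1/lam = 2.0
```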

The normal distribution is an important example where the inverse transform method is not efficient. However, there is an exact method, the Box-Muller transformation, which converts two independent uniform random variables into two independent normally distributed random variables.
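The Box-Muller transformation can be sketched as follows: from u1, u2 ~ U(0, 1), the pair (√(−2 ln u1) cos(2π u2), √(−2 ln u1) sin(2π u2)) is two independent standard normal draws. A seeded check of the first two moments:

```python
import math
import random

random.seed(9)

def box_muller():
    """Box-Muller transform: two independent U(0,1) draws -> two independent N(0,1) draws."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2.0 * math.pi * u2), r * math.sin(2.0 * math.pi * u2)

pairs = [box_muller() for _ in range(100_000)]
z = [v for pair in pairs for v in pair]
mean = sum(z) / len(z)
var = sum(v * v for v in z) / len(z)
print(mean)  # close to 0 (standard normal mean)
print(var)   # close to 1 (standard normal variance)
```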