# Maximum likelihood


Maximum likelihood estimation (MLE) is a popular statistical method used for fitting a statistical model to data, and providing estimates for the model's parameters.

The method of maximum likelihood corresponds to many well-known estimation methods in statistics. For example, suppose you are interested in the heights of Americans. You have a sample of some number of Americans, but not the entire population, and record their heights. Further, you are willing to assume that heights are normally distributed with some unknown mean and variance. The sample mean is then the maximum likelihood estimator of the population mean, and the sample variance is a close approximation to the maximum likelihood estimator of the population variance (see examples below).

For a fixed set of data and underlying probability model, maximum likelihood picks the values of the model parameters that make the data "more likely" than any other values of the parameters would make them. Maximum likelihood estimation gives a unique and easily determined solution in the case of the normal distribution and many other problems, although in very complex problems this may not be the case. If a uniform prior distribution is assumed over the parameters, the maximum likelihood estimate coincides with the most probable values of the parameters.

## History

Maximum-likelihood estimation was recommended, analyzed and vastly popularized by R. A. Fisher between 1912 and 1922 (although it had been used earlier by Gauss, Laplace, Thiele, and F. Y. Edgeworth).[1] Reviews of the development of maximum likelihood have been provided by a number of authors.[2]

## Principles

Suppose there is a sample x1, x2, …, xn of n independent observations, drawn from an unknown probability density (or probability mass) f0(·). It is however known that the function f0 belongs to a certain family of distributions { f(·|θ), θ ∈ Θ }, called the parametric model, so that f0 corresponds to θ = θ0, which is called the “true value” of the parameter. It is desirable to find the value $\scriptstyle\hat\theta$ (the estimator) which would be as close to the true value θ0 as possible.

Both the observed variables xi and the parameters θ can be vectors.

The idea behind the method of maximum likelihood is to first find the joint density function of all observations. For an i.i.d. sample this density function is

$f(x_1,x_2,\ldots,x_n\;|\;\theta) = f(x_1|\theta)\cdot f(x_2|\theta)\cdots f(x_n|\theta)\,$

Now we look at this function from a different angle: let the observed values x1, x2, …, xn be fixed “parameters” of this function, whereas the value of θ is allowed to vary freely. From this point of view the function is called the likelihood:

$\mathcal{L}(\theta\,|\,x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i|\theta).$

In practice it is often more convenient to work with the scaled logarithm of the likelihood function, called the log-likelihood:

$\hat\ell(\theta\,|\,x_1,\ldots,x_n) = \frac1n\ln\mathcal{L} = \frac1n \sum_{i=1}^n \ln f(x_i|\theta).$

The method of maximum likelihood estimates θ0 by finding the value of θ that maximizes $\scriptstyle\hat\ell(\theta|x)$. This is the maximum likelihood estimator (MLE) of θ0:

$\hat\theta_\mathrm{mle} = \underset{\theta\in\Theta}{\operatorname{arg\,max}}\ \hat\ell(\theta\,|\,x_1,\ldots,x_n).$

Because the logarithm is a monotone transformation, the estimator is the same regardless of whether we maximize the likelihood or the log-likelihood function.

For selected models the maximum likelihood estimator can be found as an explicit function of the observed data x1, …, xn. More often however, the closed-form solution to this maximization problem doesn’t exist, and the solution has to be found numerically using various optimization algorithms. For certain problems the maximum likelihood estimates may not be unique, or even may not exist (meaning that the log-likelihood function goes to ∞ for certain values of the parameters θ).
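
As a minimal sketch of the numerical route (in Python with NumPy and SciPy; the exponential model, the simulated data, and the optimizer settings are assumptions made for this illustration, not part of the article), one can minimize the negative log-likelihood and compare the result with the closed-form estimate:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical data, assumed to be drawn from an exponential distribution
# with unknown rate lambda; the closed-form MLE of the rate is 1 / sample mean.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)   # true rate = 1/scale = 0.5

def neg_log_likelihood(lam):
    """Negative log-likelihood of the exponential(rate=lam) model for the sample x."""
    if lam <= 0:
        return np.inf
    return -(len(x) * np.log(lam) - lam * x.sum())

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print("numerical MLE  :", res.x)
print("closed-form MLE:", 1.0 / x.mean())   # the two should agree closely
```

In models without a closed-form solution only the numerical step is available, and the optimizer may need careful starting values or bounds.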

In the exposition above we have assumed that the data are independent and identically distributed. The method can be applied however to a broader setting, as long as it is possible to write the joint density function f(x1,…,xn | θ), and its parameter θ has a finite dimension which does not depend on the sample size n. In a simpler extension we may allow for data heterogeneity, so that the joint density is equal to f1(x1|θ) · f2(x2|θ) · … · fn(xn|θ). In a more complicated case of time series models we have to drop the independence assumption as well.

## Properties

Maximum likelihood is the extremum estimator based upon the objective function

$\ell(\theta) = \operatorname{E}[\, \ln f(x_i|\theta) \,]$

and its sample analogue, the log-likelihood $\scriptstyle\hat\ell(\theta|x)$. The expectation here is taken with respect to the true density f(·|θ0).

For a large class of problems, the maximum likelihood estimator possesses a number of attractive asymptotic properties:

• consistency: the estimator converges in probability to the value being estimated.
• asymptotic normality: as the sample size increases, the distribution of the MLE tends to a Gaussian distribution with mean θ0 and covariance matrix equal to the inverse of the Fisher information matrix.
• efficiency, i.e., it achieves the Cramér-Rao lower bound when the sample size tends to infinity. This means that no asymptotically unbiased estimator has lower asymptotic mean squared error than the MLE.
• and even second-order efficiency after correction for bias.

By the mathematical meaning of the word asymptotic, asymptotic properties are properties that are approached in the limit as the sample size goes to infinity; strictly speaking, they are therefore irrelevant for finite samples.[3] The theory does not say how large the sample needs to be for the approximation to be good. Many statisticians nevertheless treat asymptotic results as approximately valid once the sample size is "large enough", and some asymptotic properties do appear to hold approximately for moderately large samples; such heuristics, however, are supported by experience and simulation more than by mathematics.[4] When statisticians do not have a better alternative, they often make inferences about the estimated parameters by appealing to the asymptotic Gaussian distribution of the MLE, in which case the Fisher information matrix is often usefully estimated by the observed information matrix.

However, these properties hold only if certain regularity conditions for the model are satisfied; these regularity requirements are discussed in more detail below.

### Consistency

Under certain (fairly weak) conditions outlined below, the maximum likelihood estimator is consistent. Consistency means that, given a sufficiently large number of observations n, it is possible to find the value of θ0 with arbitrary precision. In mathematical terms this means that as n goes to infinity the estimator $\scriptstyle\hat\theta$ converges in probability to its true value:

$\hat\theta_\mathrm{mle}\ \xrightarrow{p}\ \theta_0\ .$

Under slightly stronger conditions, the estimator converges almost surely (or strongly) too:

$\hat\theta_\mathrm{mle}\ \xrightarrow{a.s.}\ \theta_0\ .$

The conditions for consistency mentioned above are:[5]

1. Identification of the model:
$\theta \neq \theta_0 \quad \Leftrightarrow \quad f(\cdot|\theta)\neq f(\cdot|\theta_0)\ .$
In other words, different parameter values θ correspond to different distributions within the model. If this condition did not hold, there would be some value θ1 such that θ0 and θ1 generate an identical distribution of the observable data. Then we wouldn’t be able to distinguish between these two parameters even with an infinite amount of data — these parameters would have been observationally equivalent.
The identification condition is absolutely necessary for the ML estimator to be consistent. When this condition holds, the limiting likelihood function ℓ(θ) has a unique global maximum at θ0.
2. Compactness: the parameter space Θ of the model is compact.

The identification condition establishes that the log-likelihood has a unique global maximum. Compactness, however, is needed to ensure that the likelihood cannot approach the maximal value arbitrarily closely at some other point.

Compactness is not a necessary condition, and can be replaced by some other requirements, such as:

• concavity of the log-likelihood function, or
• existence of a compact neighborhood N of θ0 such that outside of N the log-likelihood function is less than the maximum by at least some ε > 0.
3. Continuity: the function ln f(x|θ) is continuous in θ for almost all x’s:
$\Pr\!\big[\; \ln f(x\,|\,\theta) \;\in\; \mathbb{C}^0(\Theta) \;\big] = 1.$
The continuity here can be replaced with a slightly weaker condition of upper semi-continuity.
4. Dominance: there exists an integrable function d(x) such that
$\big|\ln f(x\,|\,\theta)\big| < d(x) \quad \text{for all}\ \theta\in\Theta.$
By the uniform law of large numbers, the dominance condition together with continuity establish the uniform convergence in probability of the log-likelihood:
$\sup_{\theta\in\Theta} \big|\,\hat\ell(x|\theta) - \ell(\theta)\,\big|\ \xrightarrow{p}\ 0.$

The dominance condition can be employed in the case of i.i.d. observations. In the non-i.i.d. case the uniform convergence in probability can be checked by showing that the sequence $\scriptstyle\hat\ell(x|\theta)$ is stochastically equicontinuous.

If one wants to demonstrate that the ML estimator $\scriptstyle\hat\theta$ converges to θ0 almost surely, then a stronger condition of uniform convergence almost surely has to be imposed:

$\sup_{\theta\in\Theta} \big|\,\hat\ell(x|\theta) - \ell(\theta)\,\big| \ \xrightarrow{a.s.}\ 0.$

### Asymptotic normality

Maximum-likelihood estimators can lack asymptotic normality and can be inconsistent if there is a failure of one (or more) of the above regularity conditions:

Estimate on boundary. Sometimes the maximum likelihood estimate lies on the boundary of the set of possible parameters, or (if the boundary is not, strictly speaking, allowed) the likelihood gets larger and larger as the parameter approaches the boundary. Standard asymptotic theory needs the assumption that the true parameter value lies away from the boundary. If we have enough data, the maximum likelihood estimate will keep away from the boundary too. But with smaller samples, the estimate can lie on the boundary. In such cases, the asymptotic theory clearly does not give a practically useful approximation. Examples here would be variance-component models, where each component of variance, σ², must satisfy the constraint σ² ≥ 0.

Data boundary parameter-dependent. For the theory to apply in a simple way, the set of data values which has positive probability (or positive probability density) should not depend on the unknown parameter. A simple example where such parameter-dependence does hold is the case of estimating θ from a set of independent identically distributed observations when the common distribution is uniform on the range (0,θ). For estimation purposes the relevant range of θ is such that θ cannot be less than the largest observation. Because the interval (0,θ) is not compact, there exists no maximum for the likelihood function: for any estimate of θ, there exists a greater estimate that also has greater likelihood. In contrast, the interval [0,θ] includes the end-point θ and is compact, in which case the maximum-likelihood estimator exists. However, in this case, the maximum-likelihood estimator is biased. Asymptotically, this maximum-likelihood estimator is not normally distributed.[6]

Nuisance parameters. For maximum likelihood estimation, a model may have a number of nuisance parameters. For the asymptotic behaviour outlined above to hold, the number of nuisance parameters should not increase with the number of observations (the sample size). A well-known example of this case is where observations occur as pairs, where the observations in each pair have a different (unknown) mean but otherwise the observations are independent and normally distributed with a common variance. Here for 2N observations, there are N+1 parameters. It is well known that the maximum likelihood estimate for the variance does not converge to the true value of the variance.

Increasing information. For the asymptotics to hold in cases where the assumption of independent identically distributed observations does not hold, a basic requirement is that the amount of information in the data increases indefinitely as the sample size increases. Such a requirement may not be met if either there is too much dependence in the data (for example, if new observations are essentially identical to existing observations), or if new independent observations are subject to an increasing observation error.

Some regularity conditions which ensure this behavior are:

1. The first and second derivatives of the log-likelihood function must be defined.
2. The Fisher information matrix must not be zero, and must be continuous as a function of the parameter.
3. The maximum likelihood estimator is consistent.

Suppose that conditions for consistency of maximum likelihood estimator are satisfied, and [7]

1. $\theta_0 \in \operatorname{interior}(\Theta)$;
2. $f(x|\theta) > 0$ and is twice continuously differentiable in θ in some neighborhood N of θ0;
3. $\textstyle\int \sup_{\theta\in N}\lVert \nabla_{\!\theta} f(x|\theta)\rVert\,dx < \infty$, and $\textstyle\int \sup_{\theta\in N}\lVert \nabla_{\!\theta\theta} f(x|\theta)\rVert\,dx < \infty$;
4. $I = \operatorname{E}\big[\,\nabla_{\!\theta}\ln f(x|\theta_0)\;\nabla_{\!\theta}\ln f(x|\theta_0)'\,\big]$ exists and is nonsingular;
5. $\operatorname{E}\big[\,\sup_{\theta\in N}\lVert \nabla_{\!\theta\theta}\ln f(x|\theta)\rVert\,\big] < \infty$.

Then the maximum likelihood estimator has asymptotically normal distribution:

$\sqrt{n}\big(\hat\theta_\mathrm{mle} - \theta_0\big)\ \xrightarrow{d}\ \mathcal{N}(0,\,I^{-1}).$

Proof, skipping the technicalities:

Since the log-likelihood function is differentiable and θ0 lies in the interior of the parameter set, at the maximum the first-order condition is satisfied:

$\nabla_{\!\theta}\, \hat\ell(\hat\theta|x) = \frac1n \sum_{i=1}^n \nabla_{\!\theta}\ln f(x_i|\hat\theta) = 0.$

When the log-likelihood is twice differentiable, this expression can be expanded into a Taylor series around the point θ = θ0:

$0 = \frac1n \sum_{i=1}^n \nabla_{\!\theta}\ln f(x_i|\theta_0) + \Bigg[\, \frac1n \sum_{i=1}^n \nabla_{\!\theta\theta}\ln f(x_i|\tilde\theta) \,\Bigg] (\hat\theta - \theta_0),$

where $\tilde\theta$ is some point intermediate between θ0 and $\hat\theta$. From this expression we can derive that

$\sqrt{n}(\hat\theta - \theta_0) = \Bigg[\, {- \frac1n \sum_{i=1}^n \nabla_{\!\theta\theta}\ln f(x_i|\tilde\theta)} \,\Bigg]^{-1} \frac{1}{\sqrt{n}} \sum_{i=1}^n \nabla_{\!\theta}\ln f(x_i|\theta_0)$

Here the expression in square brackets converges in probability to H = E[−∇θθln f(x|θ0)] by the law of large numbers. The continuous mapping theorem ensures that the inverse of this expression also converges in probability, to H−1. The second sum, by the central limit theorem, converges in distribution to a multivariate normal with mean zero and variance matrix equal to the Fisher information I. Thus, applying Slutsky’s theorem to the whole expression, we obtain that

$\sqrt{n}(\hat\theta - \theta_0)\ \ \xrightarrow{d}\ \ \mathcal{N}\big(0,\ H^{-1}IH^{-1}\big).$

Finally, the information equality guarantees that when the model is correctly specified, matrix H will be equal to the Fisher information I, so that the variance expression simplifies to just I−1.
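
As a rough sketch of how this result is used in practice (Python with NumPy; the exponential model and the finite-difference approximation of the observed information are assumptions for illustration only), the inverse of the observed information at the MLE provides an approximate variance, and hence a Wald-type confidence interval:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=1000)   # assumed exponential sample, true rate 0.5

def log_likelihood(lam):
    return len(x) * np.log(lam) - lam * x.sum()

lam_hat = 1.0 / x.mean()      # closed-form MLE of the rate

# Observed information: minus the second derivative of the log-likelihood at the MLE,
# approximated here by a central finite difference.
h = 1e-5
second_deriv = (log_likelihood(lam_hat + h) - 2 * log_likelihood(lam_hat)
                + log_likelihood(lam_hat - h)) / h**2
observed_info = -second_deriv

se = 1.0 / np.sqrt(observed_info)   # approximate standard error of the MLE
print("MLE:", lam_hat)
print("approx. 95% interval:", (lam_hat - 1.96 * se, lam_hat + 1.96 * se))
```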

### Functional invariance

The maximum likelihood estimator selects the parameter value which gives the observed data the largest possible probability (or probability density, in the continuous case). If the parameter consists of a number of components, then we define their separate maximum likelihood estimators, as the corresponding component of the MLE of the complete parameter. Consistent with this, if $\widehat{\theta}$ is the MLE for θ, and if g(θ) is any transformation of θ, then the MLE for α = g(θ) is by definition

$\widehat{\alpha} = g(\widehat{\theta}).\,\!$

It maximizes the so-called profile likelihood:

$\bar{L}(\alpha) = \sup_{\theta: \alpha = g(\theta)} L(\theta).$

The MLE is also invariant with respect to certain transformations of the data. If Y = g(X) where g is one to one and does not depend on the parameters to be estimated, then the density functions satisfy

$f_Y(y) = \frac{f_X(x)}{|g'(x)|}$

and hence the likelihood functions for X and Y differ only by a factor that does not depend on the model parameters.

For example, the MLE parameters of the log-normal distribution are the same as those of the normal distribution fitted to the logarithm of the data.
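
This can be checked numerically; the short sketch below (Python with SciPy; the simulated data and the location fixed at zero are assumptions of the illustration) fits a normal distribution to the logarithm of the data and compares it with SciPy's log-normal fit, whose shape and log-scale parameters play the roles of σ and μ:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.lognormal(mean=1.0, sigma=0.5, size=2000)   # assumed log-normal sample

# Maximum likelihood fit of a normal distribution to log(data).
mu_hat, sigma_hat = stats.norm.fit(np.log(data))

# Maximum likelihood fit of the log-normal itself, with location fixed at 0;
# for this parameterization shape = sigma and scale = exp(mu).
shape, loc, scale = stats.lognorm.fit(data, floc=0)

print(mu_hat, np.log(scale))   # should agree up to numerical tolerance
print(sigma_hat, shape)        # likewise
```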

### Higher-order properties

Standard asymptotic theory tells us that the maximum-likelihood estimator is √n-consistent and asymptotically efficient, meaning that it reaches the Cramér-Rao bound:

$\sqrt{n}(\hat\theta_\text{mle} - \theta_0)\ \ \xrightarrow{d}\ \ \mathcal{N}(0,\ I^{-1}),$

where I is the Fisher information matrix:

$I_{jk} = \operatorname{E} \bigg[\;{-\frac{\partial^2\ln f_{\theta_0}(x_t)}{\partial\theta_j\,\partial\theta_k}} \;\bigg].$

In particular, it means that the bias of the maximum-likelihood estimator is equal to zero up to order $n^{-1/2}$. However, when we consider the higher-order terms in the expansion of the distribution of this estimator, it turns out that $\hat\theta_\mathrm{mle}$ has bias of order $n^{-1}$. This bias is equal to (componentwise) [8]

$b_s \equiv \operatorname{E}[(\hat\theta_\mathrm{mle} - \theta_0)_s] = \frac1n \cdot I^{si}I^{jk} \big( \tfrac12 K_{ijk} + J_{j,ik} \big)$

where Einstein’s summation convention over the repeated indices has been adopted; $I^{jk}$ denotes the (j,k)-th component of the inverse Fisher information matrix $I^{-1}$, and

$\tfrac12 K_{ijk} + J_{j,ik} = \operatorname{E} \bigg[\; \frac12 \frac{\partial^3 \ln f_{\theta_0}(x_t)}{\partial\theta_i\,\partial\theta_j\,\partial\theta_k} + \frac{\partial\ln f_{\theta_0}(x_t)}{\partial\theta_j} \frac{\partial^2\ln f_{\theta_0}(x_t)}{\partial\theta_i\,\partial\theta_k} \;\bigg].$

Using these formulas it is possible to estimate the second-order bias of the maximum likelihood estimator, and correct for that bias by subtracting it:

$\hat\theta^*_\mathrm{mle} = \hat\theta_\mathrm{mle} - \hat b .$

This estimator is unbiased up to the terms of order $n^{-1}$, and is called the bias-corrected maximum likelihood estimator.

This bias-corrected estimator is second-order efficient (at least within the curved exponential family), meaning that it has minimal mean squared error among all second-order bias-corrected estimators, up to the terms of order $n^{-2}$. It is possible to continue this process, that is, to derive the third-order bias-correction term, and so on. However, as was shown by Kano (1996), the maximum-likelihood estimator is not third-order efficient.
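
As a concrete, low-dimensional instance (a sketch in Python; it uses the normal-variance example from the Examples section below rather than the general Cox–Snell formula, and the sample size and true variance are assumptions), subtracting the estimated first-order bias from the variance MLE removes most, though not all, of its bias:

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma2, reps = 10, 4.0, 200_000

mle = np.empty(reps)
corrected = np.empty(reps)
for r in range(reps):
    x = rng.normal(0.0, np.sqrt(sigma2), size=n)
    s2 = ((x - x.mean()) ** 2).mean()   # variance MLE; its exact bias is -sigma2/n
    mle[r] = s2
    corrected[r] = s2 - (-s2 / n)       # subtract the estimated bias, as in the text

print("true variance    :", sigma2)             # 4.0
print("mean of MLE      :", mle.mean())         # about (n-1)/n * 4     = 3.6
print("mean of corrected:", corrected.mean())   # about (n^2-1)/n^2 * 4 = 3.96
```

The residual bias of the corrected estimator is of order $n^{-2}$, in line with the statement above.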

## Examples

### Discrete uniform distribution

Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random (see uniform distribution); thus, the sample size is 1. If n is unknown, then the maximum-likelihood estimator $\hat{n}$ of n is the number m on the drawn ticket. (The likelihood is 0 for n < m, 1/n for n ≥ m, and this is greatest when n = m. Note that the maximum likelihood estimate of n occurs at the lower extreme of possible values {m, m + 1, ...}, rather than somewhere in the “middle” of the range of possible values, which would result in less bias.) The expected value of the number m on the drawn ticket, and therefore the expected value of $\hat{n}$, is (n + 1)/2. As a result, the maximum likelihood estimator for n will systematically underestimate n by (n − 1)/2 with a sample size of 1.
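
A short simulation (Python; the true n = 100 and the number of replications are assumptions of the illustration) makes the underestimation visible, and also shows that 2m − 1, which is not the MLE, is unbiased for a single draw:

```python
import numpy as np

rng = np.random.default_rng(4)
n_true, reps = 100, 100_000

# One ticket drawn per replication; the MLE of n is the drawn number m itself.
m = rng.integers(1, n_true + 1, size=reps)

print("E[MLE]  ~", m.mean())               # about (n + 1)/2 = 50.5
print("bias    ~", m.mean() - n_true)      # about -(n - 1)/2 = -49.5
print("E[2m-1] ~", (2 * m - 1).mean())     # about n = 100: unbiased for one draw
```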

### Discrete distribution, finite parameter space

Consider tossing an unfair coin 80 times: i.e., the sample might be something like x1 = H, x2 = T, ..., x80 = T, and the count of the number of HEADS "H" is observed. Call the probability of tossing a HEAD p, and the probability of tossing TAILS 1 − p (so here p is θ above). Suppose the outcome is 49 HEADS and 31 TAILS, and suppose the coin was taken from a box containing three coins: one which gives HEADS with probability p = 1/3, one which gives HEADS with probability p = 1/2, and another which gives HEADS with probability p = 2/3. The coins have lost their labels, so which one it was is unknown. Using maximum likelihood estimation, the coin that has the largest likelihood can be found, given the data that were observed. By using the probability mass function of the binomial distribution with sample size equal to 80 and number of successes equal to 49, but different values of p (the "probability of success"), the likelihood function (defined below) takes one of three values:

\begin{align} \Pr(\mathrm{H} = 49 \mid p=1/3) & = \binom{80}{49}(1/3)^{49}(1-1/3)^{31} \approx 0.000, \\[6pt] \Pr(\mathrm{H} = 49 \mid p=1/2) & = \binom{80}{49}(1/2)^{49}(1-1/2)^{31} \approx 0.012, \\[6pt] \Pr(\mathrm{H} = 49 \mid p=2/3) & = \binom{80}{49}(2/3)^{49}(1-2/3)^{31} \approx 0.054. \end{align}

The likelihood is maximized when p = 2/3, and so this is the maximum likelihood estimate for p.
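
The three values can be reproduced directly from the binomial probability mass function (a minimal check in Python with SciPy):

```python
from scipy.stats import binom

# Likelihood of 49 heads in 80 tosses under each of the three candidate coins.
for p in (1/3, 1/2, 2/3):
    print(f"p = {p:.3f}:  likelihood = {binom.pmf(49, 80, p):.3f}")
# The largest value occurs at p = 2/3, the maximum likelihood choice.
```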

### Discrete distribution, continuous parameter space

Now suppose that there was only one coin but its p could have been any value 0 ≤ p ≤ 1. The likelihood function to be maximised is

$L(p) = f_D(\mathrm{H} = 49 \mid p) = \binom{80}{49} p^{49}(1-p)^{31},$

and the maximisation is over all possible values 0 ≤ p ≤ 1.

*Figure: likelihood of different proportion parameter values for a binomial process with t = 3 and n = 10.*

One way to maximize this function is by differentiating with respect to p and setting to zero:

\begin{align} {0}&{} = \frac{\partial}{\partial p} \left( \binom{80}{49} p^{49}(1-p)^{31} \right) \\[8pt] & {}\propto 49p^{48}(1-p)^{31} - 31p^{49}(1-p)^{30} \\[8pt] & {}= p^{48}(1-p)^{30}\left[ 49(1-p) - 31p \right] \\[8pt] & {}= p^{48}(1-p)^{30}\left[ 49 - 80p \right] \end{align}

which has solutions p = 0, p = 1, and p = 49/80. The solution which maximizes the likelihood is clearly p = 49/80 (since p = 0 and p = 1 result in a likelihood of zero). Thus the maximum likelihood estimator for p is 49/80.

This result is easily generalized by substituting a letter such as t in the place of 49 to represent the observed number of 'successes' of our Bernoulli trials, and a letter such as n in the place of 80 to represent the number of Bernoulli trials. Exactly the same calculation yields the maximum likelihood estimator t / n for any sequence of n Bernoulli trials resulting in t 'successes'.
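
The same maximizer can be found numerically instead of by calculus; the sketch below (Python with SciPy; the optimizer and its bounds are implementation choices, not part of the article) minimizes the negative binomial log-likelihood over p and recovers t/n = 49/80:

```python
from scipy.optimize import minimize_scalar
from scipy.stats import binom

t, n = 49, 80   # observed successes and number of trials

def neg_log_likelihood(p):
    return -binom.logpmf(t, n, p)

res = minimize_scalar(neg_log_likelihood, bounds=(1e-9, 1 - 1e-9), method="bounded")
print(res.x, t / n)   # both approximately 0.6125
```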

### Continuous distribution, continuous parameter space

For the normal distribution $\mathcal{N}(\mu, \sigma^2)$ which has probability density function

$f(x\mid \mu,\sigma^2) = \frac{1}{\sqrt{2\pi}\ \sigma\ } \exp{\left(-\frac {(x-\mu)^2}{2\sigma^2} \right)},$

the corresponding probability density function for a sample of n independent identically distributed normal random variables (the likelihood) is

$f(x_1,\ldots,x_n \mid \mu,\sigma^2) = \prod_{i=1}^{n} f( x_{i}\mid \mu, \sigma^2) = \left( \frac{1}{2\pi\sigma^2} \right)^{n/2} \exp\left( -\frac{ \sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}\right),$

or more conveniently:

$f(x_1,\ldots,x_n \mid \mu,\sigma^2) = \left( \frac{1}{2\pi\sigma^2} \right)^{n/2} \exp\left(-\frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2}\right),$

where $\bar{x}$ is the sample mean.

This family of distributions has two parameters: θ = (μ, σ), so we maximize the likelihood, $\mathcal{L} (\mu,\sigma) = f(x_1,\ldots,x_n \mid \mu, \sigma)$, over both parameters simultaneously, or if possible, individually.

Since the logarithm is a continuous strictly increasing function over the range of the likelihood, the values which maximize the likelihood will also maximize its logarithm. Since maximizing the logarithm often requires simpler algebra, it is the logarithm which is maximized below. (Note: the log-likelihood is closely related to information entropy and Fisher information.)

\begin{align} 0 & = \frac{\partial}{\partial \mu} \log \left( \left( \frac{1}{2\pi\sigma^2} \right)^{n/2} \exp\left(-\frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2}\right) \right) \\[6pt] & = \frac{\partial}{\partial \mu} \left( \log\left( \frac{1}{2\pi\sigma^2} \right)^{n/2} - \frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2} \right) \\[6pt] & = 0 - \frac{-2n(\bar{x}-\mu)}{2\sigma^2} \end{align}

which is solved by

$\hat\mu = \bar{x} = \sum^n_{i=1}x_i/n.$

This is indeed the maximum of the function since it is the only turning point in μ and the second derivative is strictly less than zero. Its expectation value is equal to the parameter μ of the given distribution,

$E \left[ \widehat\mu \right] = \mu, \,$

which means that the maximum-likelihood estimator $\widehat\mu$ is unbiased.

Similarly we differentiate the log likelihood with respect to σ and equate to zero:

\begin{align} 0 & = \frac{\partial}{\partial \sigma} \log \left( \left( \frac{1}{2\pi\sigma^2} \right)^{n/2} \exp\left(-\frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2}\right) \right) \\[6pt] & = \frac{\partial}{\partial \sigma} \left( \frac{n}{2}\log\left( \frac{1}{2\pi\sigma^2} \right) - \frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{2\sigma^2} \right) \\[6pt] & = -\frac{n}{\sigma} + \frac{ \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\bar{x}-\mu)^2}{\sigma^3} \end{align}

which is solved by

$\widehat\sigma^2 = \sum_{i=1}^n(x_i-\widehat{\mu})^2/n.$

Inserting $\widehat\mu$ we obtain

$\widehat\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (x_{i} - \bar{x})^2 = \frac{1}{n}\sum_{i=1}^n x_i^2 -\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n x_i x_j.$

To calculate its expected value, it is convenient to rewrite the expression in terms of zero-mean random variables (statistical error) $\delta_i \equiv \mu - x_i$. Expressing the estimate in these variables yields

$\widehat\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} (\mu - \delta_i)^2 -\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n (\mu - \delta_i)(\mu - \delta_j).$

Simplifying the expression above, utilizing the facts that $E\left[\delta_i\right] = 0$ and $E[\delta_i^2] = \sigma^2$, allows us to obtain

$E \left[ \widehat{\sigma^2} \right]= \frac{n-1}{n}\sigma^2.$

This means that the estimator $\widehat\sigma^2$ is biased. However, $\widehat\sigma^2$ is consistent.

Formally we say that the maximum likelihood estimator for θ = (μ,σ2) is:

$\widehat{\theta} = \left(\widehat{\mu},\widehat{\sigma}^2\right).$

In this case the MLEs could be obtained individually. In general this may not be the case, and the MLEs would have to be obtained simultaneously.
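
In code, the two closed-form estimators coincide with what a numerical maximum likelihood fit returns; the sketch below (Python with SciPy; the simulated sample is an assumption) compares the explicit formulas with scipy.stats.norm.fit, which fits the normal distribution by maximum likelihood:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(loc=3.0, scale=2.0, size=1000)   # assumed normal sample

mu_hat = x.mean()                        # closed-form MLE of mu
sigma2_hat = ((x - mu_hat) ** 2).mean()  # closed-form MLE of sigma^2 (divides by n)

loc, scale = stats.norm.fit(x)           # SciPy's maximum likelihood fit
print(mu_hat, loc)                       # should match
print(np.sqrt(sigma2_hat), scale)        # should match
```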

### Non-independent variables

It may be the case that variables are correlated, that is, not independent. Two random variables X and Y are independent if and only if their joint probability density function is the product of the individual probability density functions, i.e.

$f(x,y)=f(x)f(y)\,$

Suppose one constructs an order-n Gaussian vector out of random variables $(x_1,\ldots,x_n)\,$, with means given by $(\mu_1, \ldots, \mu_n)\,$. Furthermore, let the covariance matrix be denoted by Σ.

The joint probability density function of these n random variables is then given by:

$f(x_1,\ldots,x_n)=\frac{1}{(2\pi)^{n/2}\sqrt{\text{det}(\Sigma)}} \exp\left( -\frac{1}{2} \left[x_1-\mu_1,\ldots,x_n-\mu_n\right]\Sigma^{-1} \left[x_1-\mu_1,\ldots,x_n-\mu_n\right]^T \right)$

In the two variable case, the joint probability density function is given by:

$f(x,y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left[ -\frac{1}{2(1-\rho^2)} \left(\frac{(x-\mu_x)^2}{\sigma_x^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y} + \frac{(y-\mu_y)^2}{\sigma_y^2}\right) \right]$

In this and other cases where a joint density function exists, the likelihood function is defined as above, under Principles, using this density.
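
For the multivariate normal case, the maximum likelihood estimates of the mean vector and covariance matrix are the sample mean and the covariance matrix computed with divisor n (not n − 1); the brief check below (Python with NumPy/SciPy; the simulated data are an assumption) also evaluates the joint log-density above at the fitted parameters:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
mu_true = np.array([1.0, -2.0])
Sigma_true = np.array([[2.0, 0.8],
                       [0.8, 1.0]])
X = rng.multivariate_normal(mu_true, Sigma_true, size=5000)

mu_hat = X.mean(axis=0)                          # MLE of the mean vector
Sigma_hat = np.cov(X, rowvar=False, bias=True)   # MLE of Sigma (divisor n, not n-1)

# Log-likelihood of the sample under the fitted parameters, using the joint density.
loglik = multivariate_normal(mean=mu_hat, cov=Sigma_hat).logpdf(X).sum()
print(mu_hat)
print(Sigma_hat)
print("log-likelihood at the MLE:", loglik)
```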

## Applications

Maximum likelihood estimation is used for a wide range of statistical models, and these uses arise in applications across a widespread set of fields.

## Notes

1. ^ Edgeworth (Sep 1908, Dec 1908)
2. ^ Savage (1976), Pratt (1976), Stigler (1978, 1986, 1999), Hald (1998, 1999), Aldrich (1997).
3. ^ Kolmogorov. "Tables of Random Numbers". 1954.
4. ^ See the discussion of the gap between the Berry-Esseen approximation bound, suggesting 10 thousand samples, and the usual heuristic that the normal approximation is a good approximation of the sample mean's distribution for 30 independent samples, in Jørgen Hoffmann-Jørgensen, Probability with a View Towards Statistics, Volume I, p. 399.
5. ^ Newey & McFadden (1994, Theorem 2.5.)
6. ^ Lehmann and Casella.
7. ^ Newey & McFadden (1994, Theorem 3.3.)
8. ^ Cox & Snell (1968, formula (20))

## References

• Aldrich, John (1997). "R.A. Fisher and the making of maximum likelihood 1912–1922". Statistical Science 12 (3): pp. 162–176. doi:10.1214/ss/1030037906. MR1617519.
• Andersen, Erling B. (1970). "Asymptotic Properties of Conditional Maximum Likelihood Estimators". Journal of the Royal Statistical Society B 32: 283–301.
• Andersen, Erling B. (1980). Discrete Statistical Models with Social Science Applications. North Holland.
• Basu, Debabrata (1988). Statistical Information and Likelihood: A Collection of Critical Essays by Dr. D. Basu (J.K. Ghosh, ed.). Lecture Notes in Statistics 45. Springer-Verlag.
• Cox, D.R.; Snell, E.J. (1968). "A general definition of residuals". Journal of the Royal Statistical Society. Series B (Methodological): pp. 248–275. JSTOR 2984505.
• Edgeworth, F.Y. (Sep 1908). "On the probable errors of frequency-constants". Journal of the Royal Statistical Society 71 (3): pp. 499–512. JSTOR 2339293.
• Edgeworth, F.Y. (Dec 1908). "On the probable errors of frequency-constants". Journal of the Royal Statistical Society 71 (4): pp. 651–678. JSTOR 2339378.
• Ferguson, Thomas S (1996). A course in large sample theory. Chapman & Hall.
• Hald, Anders (1998). A history of mathematical statistics from 1750 to 1930. New York: Wiley.
• Hald, Anders (1999). "On the history of maximum likelihood in relation to inverse probability and least squares". Statistical Science 14 (2): pp. 214–222. JSTOR 2676741.
• Kano, Y. (1996). "Third-order efficiency implies fourth-order efficiency". Journal of the Japan Statistical Society 26: 101–117.
• Le Cam, Lucien (1990). "Maximum likelihood — an introduction". ISI Review 58 (2): pp. 153–171.
• Le Cam, Lucien; Lo Yang, Grace (2000). Asymptotics in statistics: some basic concepts. Springer. ISBN 0-387-95036-2.
• Le Cam, Lucien (1986). Asymptotic methods in statistical decision theory. Springer-Verlag.
• Lehmann, E.L.; Casella, G. (1998). Theory of Point Estimation, 2nd ed. Springer. ISBN 0-387-98502-6.
• Newey, Whitney K.; McFadden, Daniel (1994). Large sample estimation and hypothesis testing. Handbook of econometrics, vol.IV, Ch.36. Elsevier Science. pp. 2111–2245.
• Pratt, John W. (1976). "F. Y. Edgeworth and R. A. Fisher on the efficiency of maximum likelihood estimation". The Annals of Statistics 4 (3): pp. 501–514. JSTOR 2958222.
• Savage, Leonard J. (1976). "On rereading R. A. Fisher". The Annals of Statistics 4 (3): pp. 441–500. JSTOR 2958221.
• Stigler, Stephen M. (1978). "Francis Ysidro Edgeworth, statistician". Journal of the Royal Statistical Society. Series A (General) 141 (3): pp. 287–322. JSTOR 2344804.
• Stigler, Stephen M. (1986). The history of statistics: the measurement of uncertainty before 1900. Harvard University Press. ISBN 0-674-40340-1.
• Stigler, Stephen M. (1999). Statistics on the table: the history of statistical concepts and methods. Harvard University Press. ISBN 0-674-83601-4.
• van der Vaart, A.W. (1998). Asymptotic Statistics. ISBN 0-521-78450-6.