Likelihood-ratio test

In statistics, a likelihood-ratio test is used to compare the fit of two models, one of which (the null model) is nested within the other (the alternative model).

Both models are fitted to the data and their log-likelihoods recorded. The test statistic (usually denoted D) is twice the difference in these log-likelihoods:

 \begin{align} D & = -2\left(\ln(\text{likelihood for null model}) - \ln(\text{likelihood for alternative model})\right) \\ & = -2\ln\left( \frac{\text{likelihood for null model}}{\text{likelihood for alternative model}} \right). \end{align}

The model with more parameters will always fit at least as well (have an equal or greater log-likelihood). Whether it fits significantly better, and should thus be preferred, can be determined by deriving the probability or p-value of the obtained difference D. In many cases, the probability distribution of the test statistic can be approximated by a chi-square distribution with (df2 − df1) degrees of freedom, where df1 and df2 are the numbers of free parameters of the null and alternative models respectively.

The test requires nested models, that is, models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters.

As a concrete example, if the null model has 1 free parameter and a log-likelihood of 8012, and the alternative model has 3 free parameters and a log-likelihood of 8024, then D = 2·(8024 − 8012) = 24, and the p-value is that of a chi-square value of 24 under 3 − 1 = 2 degrees of freedom. Certain assumptions must be met for the statistic to follow a chi-square distribution, and empirical p-values are often computed instead.
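
As an illustration, the arithmetic above can be reproduced in a few lines of Python; this sketch assumes SciPy is available and takes the two log-likelihoods as given:

    from scipy.stats import chi2

    # Hypothetical log-likelihoods from the example above.
    ll_null = 8012.0   # null model, 1 free parameter
    ll_alt = 8024.0    # alternative model, 3 free parameters

    D = 2.0 * (ll_alt - ll_null)   # test statistic: D = 24
    df = 3 - 1                     # difference in free parameters
    p_value = chi2.sf(D, df)       # upper-tail chi-square probability

    print(D, p_value)              # 24.0, about 6.1e-06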

Background

The likelihood ratio, often denoted by Λ (the capital Greek letter lambda), is the ratio of the likelihood function evaluated over two different sets of parameter values, one set in the numerator and the other in the denominator. A likelihood-ratio test is a statistical test for making a decision between two hypotheses based on the value of this ratio.

It is central to the Neyman–Pearson approach to statistical hypothesis testing, and, like statistical hypothesis testing generally, is both widely used and much criticized; see Criticism, below.

Simple-versus-simple hypotheses

A statistical model is often a parametrized family of probability density functions or probability mass functions f(x | θ). A simple-vs-simple hypothesis test has completely specified models under both the null and alternative hypotheses, which for convenience are written in terms of fixed values of a notional parameter θ:

 \begin{align} H_0 &:& \theta=\theta_0, \\ H_1 &:& \theta=\theta_1. \end{align}

Note that under either hypothesis, the distribution of the data is fully specified; there are no unknown parameters to estimate. The likelihood ratio test statistic can be written as[1]:

 \Lambda(x) = \frac{ L(\theta_0|x) }{ L(\theta_1|x) } = \frac{ f(x|\theta_0) }{ f(x|\theta_1) }

or

\Lambda(x)=\frac{L(\theta_0\mid x)}{\sup\{\,L(\theta\mid x):\theta\in\{\theta_0,\theta_1\}\}},

where L(θ | x) is the likelihood function. Note that some references may use the reciprocal as the definition.[2] In the form stated here, the likelihood ratio is small if the alternative model is better than the null model and the likelihood ratio test provides the decision rule as:

If Λ > c, do not reject H0;
If Λ < c, reject H0;
Reject with probability q if Λ = c.

The values c and q are usually chosen to obtain a specified significance level α, through the relation q\cdot P(\Lambda=c \mid H_0) + P(\Lambda < c \mid H_0) = \alpha. The Neyman–Pearson lemma states that this likelihood-ratio test is the most powerful among all level-α tests for this problem.
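
A minimal sketch of such a test in Python, using made-up data and two fully specified normal models; the threshold c here is a placeholder, not one calibrated to a particular α:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical simple-vs-simple problem: X ~ N(mu, 1) with
    # H0: mu = 0 versus H1: mu = 1 (both fully specified).
    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.3, scale=1.0, size=50)   # made-up data

    # Lambda = L(theta0 | x) / L(theta1 | x), computed on the log scale.
    log_lambda = norm.logpdf(x, loc=0.0).sum() - norm.logpdf(x, loc=1.0).sum()
    Lambda = np.exp(log_lambda)

    # Reject H0 when Lambda is small; since Lambda is continuous here,
    # the randomization at Lambda = c is never actually needed.
    c = 1.0   # placeholder threshold
    print("reject H0" if Lambda < c else "do not reject H0")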

Definition (likelihood ratio test for composite hypotheses)

A null hypothesis is often stated by saying the parameter θ is in a specified subset Θ0 of the parameter space Θ.

 \begin{align} H_0 &:& \theta \in \Theta_0, \\ H_1 &:& \theta \in \Theta_0^{\complement}. \end{align}

The likelihood function L(θ | x) = f(x | θ) (with f(x | θ) being the pdf or pmf) is a function of the parameter θ with x held fixed at the value that was actually observed, i.e., the data. The likelihood ratio test statistic is[3]

\Lambda(x)=\frac{\sup\{\,L(\theta\mid x):\theta\in\Theta_0\,\}}{\sup\{\,L(\theta\mid x):\theta\in\Theta\,\}}.

A likelihood ratio test is any test with critical region (or rejection region) of the form \{x|\Lambda \le c\} where c is any number satisfying 0\le c\le 1. Many common test statistics such as the Z-test, the F-test, Pearson's chi-square test and the G-test are tests for nested models and can be phrased as log-likelihood ratios or approximations thereof.
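
As a sketch of this definition in code, the two suprema can be computed explicitly for a one-parameter exponential model; the data and the null value below are hypothetical:

    import numpy as np
    from scipy.stats import expon

    # Hypothetical composite problem: X ~ Exponential with unknown scale theta,
    # H0: theta = 1 against theta ranging over all of Theta = (0, infinity).
    rng = np.random.default_rng(1)
    x = rng.exponential(scale=1.3, size=100)   # made-up data

    ll_null = expon.logpdf(x, scale=1.0).sum()        # sup over Theta0 = {1}
    theta_hat = x.mean()                              # MLE attains the sup over Theta
    ll_full = expon.logpdf(x, scale=theta_hat).sum()

    Lambda = np.exp(ll_null - ll_full)   # always between 0 and 1
    print(Lambda)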

Interpretation

Being a function of the data x, the likelihood ratio is a statistic. The likelihood-ratio test rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, i.e., on what probability of Type I error is considered tolerable (a Type I error is the rejection of a null hypothesis that is true).

The numerator corresponds to the maximum probability of the observed outcome under the null hypothesis. The denominator corresponds to the maximum probability of the observed outcome with the parameters varying over the whole parameter space. The numerator can never exceed the denominator, so the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values mean that the observed outcome was nearly as likely, or more likely, to occur under the null hypothesis as under the alternative, so the null hypothesis cannot be rejected.

Approximation

If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined, then it can be used directly to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result says that, as the sample size n approaches infinity, the test statistic −2 log(Λ) for a nested model will be asymptotically χ2 distributed with degrees of freedom equal to the difference in dimensionality of Θ and Θ0. This means that, for a great variety of hypotheses, a practitioner can compute the likelihood ratio Λ for the data and compare −2 log(Λ) to the chi-square value corresponding to a desired statistical significance as an approximate statistical test.
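
In code, the approximate test reduces to a single chi-square comparison; the numbers below are hypothetical:

    from scipy.stats import chi2

    neg2_log_lambda = 5.8   # -2 log(Lambda), as computed from the data
    df = 1                  # dim(Theta) minus dim(Theta0)
    alpha = 0.05

    critical = chi2.ppf(1.0 - alpha, df)     # about 3.84 for df = 1
    p_value = chi2.sf(neg2_log_lambda, df)   # about 0.016 here
    print("reject H0" if neg2_log_lambda > critical else "do not reject H0")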

Examples

Coin tossing

As an example, in the case of Pearson's test, we might try to compare two coins to determine whether they have the same probability of coming up heads. Our observations can be put into a contingency table with rows corresponding to the coin and columns corresponding to heads or tails. The elements of the contingency table will be the number of times each coin came up heads or tails. The contents of this table are our observation X.

        Heads  Tails
Coin 1  k1H    k1T
Coin 2  k2H    k2T

Here Θ consists of the parameters p1H, p1T, p2H, and p2T, where pij is the probability that coin i comes up with result j. The hypothesis space H is defined by the usual constraints on a distribution, 0 \le p_{ij} \le 1 and piH + piT = 1. The null hypothesis H0 is the sub-space where p1j = p2j. In all of these constraints, i = 1,2 and j = H,T.

Writing nij for the best values for pij under the hypothesis H, maximum likelihood is achieved with

n_{ij} = \frac{k_{ij}}{k_{iH}+k_{iT}}.

Writing mij for the best values for pij under the null hypothesis H0, maximum likelihood is achieved with

m_{ij} = \frac{k_{1j}+k_{2j}}{k_{1H}+k_{2H}+k_{1T}+k_{2T}},

which does not depend on the coin i.

The hypothesis and null hypothesis can be rewritten slightly so that they satisfy the constraints for the logarithm of the likelihood ratio to have the desired asymptotic distribution. Since the constraint causes the two-dimensional H to be reduced to the one-dimensional H0, the asymptotic distribution for the test will be χ2(1), the χ2 distribution with one degree of freedom.

For the general contingency table, we can write the log-likelihood ratio statistic as

-2 \log \Lambda = 2\sum_{i, j} k_{ij} \log \frac{n_{ij}}{m_{ij}}.
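
For concreteness, here is a short Python sketch of the whole coin-tossing calculation with made-up counts, using the maximum-likelihood fits n_{ij} and m_{ij} derived above:

    import numpy as np
    from scipy.stats import chi2

    # Hypothetical 2x2 table: rows are coins, columns are heads/tails.
    k = np.array([[30.0, 20.0],
                  [22.0, 28.0]])

    n = k / k.sum(axis=1, keepdims=True)          # n_ij: per-coin MLEs under H
    m = np.tile(k.sum(axis=0) / k.sum(), (2, 1))  # m_ij: pooled MLEs under H0

    stat = 2.0 * (k * np.log(n / m)).sum()        # -2 log Lambda
    p_value = chi2.sf(stat, df=1)                 # chi^2(1) by the dimension count
    print(stat, p_value)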

Criticism

Theoretical

Bayesian criticisms of classical likelihood ratio tests focus on two issues:[citation needed]

  1. the supremum function in the calculation of the likelihood ratio, saying that this takes no account of the uncertainty about θ and that using maximum likelihood estimates in this way can promote complicated alternative hypotheses with an excessive number of free parameters;
  2. testing the probability that the sample would produce a result as extreme or more extreme under the null hypothesis, saying that this bases the test on the probability of extreme events that did not happen.

Instead they put forward methods such as Bayes factors, which explicitly take uncertainty about the parameters into account, and which are based on the evidence that did occur.

There are two frequentist replies to this critique.[citation needed] The first is that in practice, likelihood ratio tests are used as the basis of confidence intervals, which do reflect the uncertainty about θ – though in turn there is Bayesian criticism of confidence intervals as themselves ill-conceived, and that credible intervals are a better alternative. The second is that likelihood ratio tests provide a practicable approach to statistical inference – they can easily be computed, by contrast to Bayesian posterior probabilities, which are more computationally intensive. The Bayesian reply to the latter is that computers obviate any such advantage.

Practical

It has been suggested that the results of diagnostic tests could be more accurately interpreted if presented in terms of likelihood ratios.[4] A large likelihood ratio, for example 10 or more, suggests the disease is present, while a small ratio, for example less than 0.1, helps rule out disease.[5]
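
As a sketch of the underlying arithmetic (with hypothetical numbers), a diagnostic likelihood ratio converts pre-test odds into post-test odds by simple multiplication:

    # Hypothetical test characteristics.
    sensitivity, specificity = 0.90, 0.95
    lr_positive = sensitivity / (1.0 - specificity)   # LR+ = 18

    pretest_prob = 0.10
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr_positive        # odds form of Bayes' rule
    posttest_prob = posttest_odds / (1.0 + posttest_odds)
    print(lr_positive, round(posttest_prob, 2))       # 18.0, 0.67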

In practice, however, physicians rarely make these calculations,[6] and when they do, they often make errors.[7] A randomized controlled trial that compared how well physicians interpreted diagnostic tests presented as sensitivity and specificity, a likelihood ratio, or an inexact graphic of the likelihood ratio found no difference between the three modes in the interpretation of test results.[8]

References

  1. ^ Mood, A. M.; Graybill, F. A.; Boes, D. C., Introduction to the Theory of Statistics (p. 410).
  2. ^ Cox, D. R.; Hinkley, D. V., Theoretical Statistics, Chapman and Hall, 1974 (p. 92).
  3. ^ Casella, G.; Berger, R. L., Statistical Inference, second edition (p. 375).
  4. ^ Jaeschke R, Guyatt GH, Sackett DL (1994). "Users’ guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? The Evidence-Based Medicine Working Group". JAMA 271 (9): 703–7. doi:10.1001/jama.271.9.703. PMID 8309035. 
  5. ^ McGee S (2002). "Simplifying likelihood ratios". Journal of General Internal Medicine 17 (8): 646–9. PMID 12213147.
  6. ^ Reid MC, Lane DA, Feinstein AR (1998). "Academic calculations versus clinical judgments: practicing physicians’ use of quantitative measures of test accuracy". Am. J. Med. 104 (4): 374–80. doi:10.1016/S0002-9343(98)00054-0. PMID 9576412. 
  7. ^ Steurer J, Fischer JE, Bachmann LM, Koller M, ter Riet G (2002). "Communicating accuracy of tests to general practitioners: a controlled study". BMJ 324 (7341): 824–6. doi:10.1136/bmj.324.7341.824. PMID 11934776. 
  8. ^ Puhan MA, Steurer J, Bachmann LM, ter Riet G (2005). "A randomized trial of ways to describe test accuracy: the effect on physicians' post-test probability estimates". Ann. Intern. Med. 143 (3): 184–9. PMID 16061916. 
