Generalized linear model


In statistics, the generalized linear model (GLM) is a flexible generalization of ordinary least squares regression. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.

Generalized linear models were formulated by John Nelder and Robert Wedderburn as a way of unifying various other statistical models, including linear regression, logistic regression and Poisson regression, under one framework.[1] This allowed them to develop a general algorithm for maximum likelihood estimation in all these models. It extends naturally to encompass many other models as well.

Overview

In a GLM, each outcome of the dependent variable, Y, is assumed to be generated from a particular distribution in the exponential family, a large range of probability distributions that includes the normal, binomial and Poisson distributions, among others. The mean, μ, of the distribution depends on the independent variables, X, through:

\operatorname{E}(\mathbf{Y}) = \boldsymbol{\mu} = g^{-1}(\mathbf{X}\boldsymbol{\beta})

where E(Y) is the expected value of Y; Xβ is the linear predictor, a linear combination of unknown parameters, β; g is the link function.

In this framework, the variance is typically a function, V, of the mean:

 \operatorname{Var}(\mathbf{Y}) = \operatorname{V}( \boldsymbol{\mu} ) = \operatorname{V}(g^{-1}(\mathbf{X}\boldsymbol{\beta})).

It is convenient if V follows from the exponential family distribution, but it may simply be that the variance is a function of the predicted value.

The unknown parameters, β, are typically estimated with maximum likelihood, maximum quasi-likelihood, or Bayesian techniques.

Model components

The GLM consists of three elements:

1. A probability distribution from the exponential family.
2. A linear predictor η = Xβ .
3. A link function g such that E(Y) = μ = g^{-1}(η).
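
As an illustration of how the three components come together in practice, here is a minimal sketch using the statsmodels Python library with made-up data; the library choice, the simulated data, and all variable names are assumptions, not part of the original article.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(0)
  X = sm.add_constant(rng.normal(size=(100, 2)))   # design matrix for the linear predictor eta = X beta
  y = rng.poisson(np.exp(X @ [0.5, 0.3, -0.2]))    # counts drawn from an exponential-family (Poisson) distribution

  # family selects the probability distribution; its default log link is the link function g
  result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
  print(result.params)                             # estimates of beta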

Probability distribution

The exponential family of distributions comprises those probability distributions, parameterized by θ and τ, whose density functions f (or probability mass functions, in the case of a discrete distribution) can be expressed in the form

 f_Y(y; \theta, \tau) = \exp\left(\frac{a(y)b(\theta) - c(\theta)}{h(\tau)} + d(y,\tau)\right).

τ, called the dispersion parameter, is typically known and is usually related to the variance of the distribution. The functions a, b, c, d, and h are known. Many, although not all, common distributions are in this family.

θ is related to the mean of the distribution. If a is the identity function, then the distribution is said to be in canonical form. If, in addition, b is the identity and τ is known, then θ is called the canonical parameter and is related to the mean through

 \mu = \operatorname{E}(Y) = c'(\theta).

Under this scenario, the variance of the distribution can be shown to be[2]

\operatorname{Var}(Y) = c''(\theta) h(\tau).
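
For example, the probability mass function of the Poisson distribution with mean μ,

 f_Y(y; \mu) = \frac{\mu^y e^{-\mu}}{y!} = \exp\left(y\ln\mu - \mu - \ln y!\right),

has this form with a(y) = y, b(\theta) = \theta where \theta = \ln\mu, c(\theta) = e^\theta, h(\tau) = 1, and d(y,\tau) = -\ln y!. The identities above then give \operatorname{E}(Y) = c'(\theta) = e^\theta = \mu and \operatorname{Var}(Y) = c''(\theta)h(\tau) = \mu, recovering the familiar Poisson mean and variance.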

Linear predictor

The linear predictor is the quantity which incorporates the information about the independent variables into the model. The symbol η (Greek "eta") is typically used to denote a linear predictor. It is related to the expected value of the data (thus, "predictor") through the link function.

η is expressed as a linear combination (thus, "linear") of unknown parameters β. The coefficients of the linear combination are represented as the matrix of independent variables X. η can thus be expressed as

 \eta = \mathbf{X}\boldsymbol{\beta}.

The elements of X are either measured by the experimenters or stipulated by them in the modeling design process.

Link function

The link function provides the relationship between the linear predictor and the mean of the distribution function. There are many commonly used link functions, and their choice can be somewhat arbitrary. It can be convenient to match the domain of the link function to the range of the distribution function's mean.

When using a distribution function with a canonical parameter θ, a link function exists which allows for X^T Y to be a sufficient statistic for β. This occurs when the link function equates θ and the linear predictor. Following is a table of canonical link functions and their inverses (sometimes referred to as the mean function, as done here) used for several distributions in the exponential family.

Canonical Link Functions

Distribution           Link Name        Link Function        Mean Function
Normal                 Identity         Xβ = μ               μ = Xβ
Exponential, Gamma     Inverse          Xβ = μ^{-1}          μ = (Xβ)^{-1}
Inverse Gaussian       Inverse squared  Xβ = μ^{-2}          μ = (Xβ)^{-1/2}
Poisson                Log              Xβ = ln(μ)           μ = exp(Xβ)
Binomial, Multinomial  Logit            Xβ = ln(μ/(1-μ))     μ = exp(Xβ)/(1+exp(Xβ)) = 1/(1+exp(-Xβ))

In the cases of the exponential and gamma distributions, the domain of the canonical link function is not the same as the permitted range of the mean. In particular, the linear predictor may be negative, which would give an impossible negative mean. When maximizing the likelihood, precautions must be taken to avoid this. An alternative is to use a noncanonical link function.

Fitting

Maximum likelihood

The maximum likelihood estimates can be found with an iteratively reweighted least squares algorithm, using either a Newton–Raphson method with updates of the form:

 \boldsymbol\beta^{(t+1)} = \boldsymbol\beta^{(t)} + \mathcal{J}^{-1}(\boldsymbol\beta^{(t)}) u(\boldsymbol\beta^{(t)}),

where \mathcal{J}(\boldsymbol\beta^{(t)}) is the observed information matrix (the negative of the Hessian matrix) and u(\boldsymbol\beta^{(t)}) is the score function; or a Fisher's scoring method:

 \boldsymbol\beta^{(t+1)} = \boldsymbol\beta^{(t)} + \mathcal{I}^{-1}(\boldsymbol\beta^{(t)}) u(\boldsymbol\beta^{(t)}),

where \mathcal{I}(\boldsymbol\beta^{(t)}) is the Fisher information matrix. Note that if the canonical link function is used, then the two methods are the same.[3]
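
As a concrete illustration, the following is a minimal sketch of Fisher scoring for logistic regression, where the canonical logit link makes it coincide with Newton–Raphson. The data arrays X and y are assumed inputs, not part of the original text.

  import numpy as np

  def irls_logistic(X, y, n_iter=25, tol=1e-8):
      # Iterates beta^(t+1) = beta^(t) + I^{-1}(beta^(t)) u(beta^(t))
      beta = np.zeros(X.shape[1])
      for _ in range(n_iter):
          eta = X @ beta                       # linear predictor
          mu = 1.0 / (1.0 + np.exp(-eta))      # inverse logit link, g^{-1}(eta)
          score = X.T @ (y - mu)               # score function u(beta)
          weights = mu * (1.0 - mu)            # binomial variance function
          info = X.T @ (weights[:, None] * X)  # Fisher information matrix
          step = np.linalg.solve(info, score)
          beta += step
          if np.max(np.abs(step)) < tol:       # stop once updates become negligible
              break
      return beta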

Bayesian methods

In general, the posterior distribution cannot be found in closed form and so must be approximated, usually using Laplace approximations or some type of Markov chain Monte Carlo method such as Gibbs sampling.

Examples

General linear models

A possible point of confusion has to do with the distinction between generalized linear models and the general linear model, two broad statistical models. The general linear model may be viewed as a special case of the generalized linear model with identity link. Because most exact results of interest are obtained only for the general linear model, it has undergone a somewhat longer historical development. Results for the generalized linear model with non-identity link are asymptotic (tending to work well with large samples).

Linear regression

A simple, very important example of a generalized linear model (also an example of a general linear model) is linear regression. Here the distribution function is the normal distribution with constant variance and the link function is the identity, which is the canonical link if the variance is known. Unlike most other GLMs, there is a closed form solution for the maximum likelihood parameter estimates.
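
Concretely, maximizing the normal likelihood with identity link reduces to ordinary least squares, whose closed-form solution is

 \hat{\boldsymbol\beta} = (\mathbf{X}^{\mathrm{T}}\mathbf{X})^{-1}\mathbf{X}^{\mathrm{T}}\mathbf{Y},

provided \mathbf{X}^{\mathrm{T}}\mathbf{X} is invertible; no iterative fitting is required.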

Binomial data

When the response data, Y, are binary (taking on only values 0 and 1), the distribution function is generally chosen to be the binomial distribution and the interpretation of μ_i is then the probability, p, of Y_i taking on the value one.

There are several popular link functions for binomial data; the most typical is the canonical logit link:

g(p) = \ln\left(\frac{p}{1-p}\right).

GLMs with this setup are logistic regression models.
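
A minimal fitting sketch, again assuming the statsmodels library and simulated binary data (all names here are illustrative assumptions):

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(1)
  X = sm.add_constant(rng.normal(size=(200, 1)))
  p = 1.0 / (1.0 + np.exp(-(X @ [-0.5, 1.2])))   # true probabilities through the logit link
  y = rng.binomial(1, p)                         # binary responses

  result = sm.GLM(y, X, family=sm.families.Binomial()).fit()
  print(result.summary())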

In addition, the inverse of any continuous cumulative distribution function (CDF) can be used for the link since the CDF's range is [0,1], the range of the binomial mean. The normal CDF Φ is a popular choice and yields the probit model. Its link is

g(p) = \Phi^{-1}(p).

The identity link is also sometimes used for binomial data to yield the linear probability model, but a drawback of this model is that the predicted probabilities can be greater than one or less than zero. In implementation it is possible to fix the nonsensical probabilities outside of [0,1], but interpreting the coefficients can be difficult. The model's primary merit is that near p = 0.5 it is approximately a linear transformation of the probit and logit; econometricians sometimes call this the Harvard model.

The variance function for binomial data is given by:

\operatorname{Var}(Y_i) = \tau\mu_i(1-\mu_i)

where the dispersion parameter τ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as binomial with overdispersion, or quasibinomial.

Count data

Another example of a generalized linear model is Poisson regression, which models count data using the Poisson distribution. The link is typically the logarithm, the canonical link.

The variance function is proportional to the mean

\operatorname{Var}(Y_i) = \tau\mu_i,

where the dispersion parameter τ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion, or quasi-Poisson.
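
A minimal sketch of the distinction, assuming statsmodels and simulated count data; the scale='X2' option, which estimates τ from the Pearson chi-squared statistic, is the assumed mechanism here:

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(2)
  X = sm.add_constant(rng.normal(size=(200, 1)))
  y = rng.poisson(np.exp(X @ [0.2, 0.7]))

  poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()          # tau fixed at 1
  quasi_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale='X2')  # tau estimated (quasi-Poisson)
  print(poisson_fit.scale, quasi_fit.scale)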

Extensions

Correlated or clustered data

The standard GLM assumes that the observations are uncorrelated. Extensions have been developed to allow for correlation between observations, as occurs for example in longitudinal studies and clustered designs:

  • Generalized estimating equations (GEEs) allow for the correlation between observations without the use of an explicit probability model for the origin of the correlations, so there is no explicit likelihood. They are suitable when the random effects and their variances are not of inherent interest, as they allow for the correlation without explaining its origin. The focus is on estimating the average response over the population ("population-averaged" effects) rather than the regression parameters that would enable prediction of the effect of changing one or more components of X on a given individual. GEEs are usually used in conjunction with Huber-White standard errors;[4][5] a minimal fitting sketch follows this list.
  • Generalized linear mixed models (GLMMs) are an extension to GLMs that includes random effects in the linear predictor, giving an explicit probability model that explains the origin of the correlations. The resulting "subject-specific" parameter estimates are suitable when the focus is on estimating the effect of changing one or more components of X on a given individual. GLMMs are a particular type of multilevel model (mixed model). In general, fitting GLMMs is more computationally complex and intensive than fitting GEEs.
  • Hierarchical generalized linear models (HGLMs) are similar to GLMMs apart from two distinctions:
  1. The random effects can have any distribution in the exponential family, whereas current GLMMs nearly always have normal random effects;
  2. They are not as computationally intensive, as instead of integrating out the random effects they are based on a modified form of likelihood known as the hierarchical likelihood or h-likelihood.
The theoretical basis and accuracy of the methods used in HGLMs have been the subject of some debate in the statistical literature. As of 2008, the method is only available in one statistical software package, namely Genstat.[6]
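
As promised above, here is a minimal GEE sketch assuming statsmodels and made-up clustered data; the exchangeable working correlation is one arbitrary choice among several:

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(3)
  groups = np.repeat(np.arange(50), 4)             # 50 clusters of 4 correlated observations
  X = sm.add_constant(rng.normal(size=(200, 1)))
  y = rng.poisson(np.exp(X @ [0.2, 0.5]))

  model = sm.GEE(y, X, groups=groups,
                 family=sm.families.Poisson(),
                 cov_struct=sm.cov_struct.Exchangeable())
  print(model.fit().summary())                     # robust (Huber-White) standard errors by default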

Generalized additive models

Generalized additive models (GAMs) are another extension to GLMs in which the linear predictor η is not restricted to be linear in the covariates X but is the sum of smoothing functions applied to the x_i:

\eta = \beta_0 + f_1(x_1) + f_2(x_2) + \ldots

The smoothing functions f_i are estimated from the data. In general this requires a large number of data points and is computationally intensive.[7][8]
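
A minimal sketch, assuming statsmodels' GAM module (statsmodels.gam.api) and simulated data; the B-spline basis dimension df=[10] is an arbitrary choice, and everything here is illustrative rather than part of the original text:

  import numpy as np
  from statsmodels.gam.api import GLMGam, BSplines

  rng = np.random.default_rng(4)
  x = rng.uniform(-2, 2, size=(300, 1))
  y = np.sin(2 * x[:, 0]) + rng.normal(scale=0.3, size=300)

  smoother = BSplines(x, df=[10], degree=[3])      # B-spline basis for the smooth term f_1(x_1)
  X_lin = np.ones((300, 1))                        # explicit intercept as the parametric part
  result = GLMGam(y, X_lin, smoother=smoother).fit()  # default Gaussian family, identity link
  print(result.summary())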

Multinomial regression

The binomial case may be easily extended to allow for a multinomial distribution as the response. There are two ways in which this is usually done:

Ordered response

If the response variable is an ordinal measurement, then one may fit a model function of the form:

 g(\mu_m) = \eta_m = \beta_0 + X_1\beta_1 + \ldots + X_p\beta_p + \gamma_2 + \ldots + \gamma_m = \eta_1 + \gamma_2 + \ldots + \gamma_m, \quad \text{where } \mu_m = \mathrm{P}(Y \leq m).

for m > 2. Different links g lead to proportional odds models or ordered probit models.

Unordered response

If the response variable is a nominal measurement, or if the data do not satisfy the assumptions of an ordered model, one may fit a model of the following form:

 g(\mu_m) = \eta_m = \beta_{m,0} + X_1\beta_{m,1} + \ldots + X_p\beta_{m,p}, \quad \text{where } \mu_m = \mathrm{P}(Y = m \mid Y \in \{1, m\}).

for m > 2. Different links g lead to multinomial logit or multinomial probit models. These are less efficient than the ordered response models, as more parameters are estimated.
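
A minimal sketch assuming statsmodels and made-up nominal data; since X and y are simulated independently, the fitted coefficients should be near zero, and the point is only the mechanics:

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(5)
  X = sm.add_constant(rng.normal(size=(300, 2)))
  y = rng.integers(0, 3, size=300)                 # nominal response in {0, 1, 2}

  result = sm.MNLogit(y, X).fit()                  # baseline-category (multinomial) logit
  print(result.params)                             # one coefficient column per non-reference category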

Etymology

The term "generalized linear model", and especially its abbreviation GLM, can be confused with general linear model. John Nelder has expressed regret about this in a conversation with Stephen Senn:

Senn: I must confess to having some confusion when I was a young statistician between general linear models and generalized linear models. Do you regret the terminology?

Nelder: I think probably I do. I suspect we should have found some more fancy name for it that would have stuck and not been confused with the general linear model, although general and generalized are not quite the same. I can see why it might have been better to have thought of something else.[9]

Notes

  1. ^ Nelder, John; Wedderburn, Robert (1972). "Generalized Linear Models". Journal of the Royal Statistical Society. Series A (General) 135 (3): 370–384. JSTOR 2344614.  
  2. ^ McCullagh and Nelder (1989), Chapter 2.
  3. ^ McCullagh and Nelder (1989), Page 43.
  4. ^ Zeger, Scott L.; Liang, Kung-Yee; Albert, Paul S. (1988). "Models for Longitudinal Data: A Generalized Estimating Equation Approach". Biometrics 44 (4): 1049–1060. JSTOR 2531734.  
  5. ^ Hardin, James; Hilbe, Joseph (2003). Generalized Estimating Equations. London: Chapman and Hall/CRC. ISBN 1584883073.  
  6. ^ Lee, Youngjo; Nelder, John; Pawitan, Yudi (2006). Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood. Chapman & Hall/CRC. ISBN 1584886315.  
  7. ^ Hastie, T. J.; Tibshirani, R. J. (1990). Generalized Additive Models. Chapman & Hall/CRC. ISBN 9780412343902.  
  8. ^ Wood, Simon (2006). Generalized Additive Models: An Introduction with R. Chapman & Hall/CRC. ISBN 1-584-88474-6.  
  9. ^ Senn, Stephen (2003). "A conversation with John Nelder". Statistical Science 18 (1): 118–131. doi:10.1214/ss/1056397489. http://projecteuclid.org/euclid.ss/1056397489.  

References

  • McCullagh, Peter; Nelder, John (1989). Generalized Linear Models (2nd ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 0412317605.

Further reading

  • Dobson, A.J.; Barnett, A.G. (2008). Introduction to Generalized Linear Models (3rd ed.). Boca Raton, FL: Chapman and Hall/CRC. ISBN 1584881658.  
  • Hardin, James; Hilbe, Joseph (2007). Generalized Linear Models and Extensions (2nd ed.). College Station: Stata Press. ISBN 1597180149.  
