# Binomial regression: Wikis


In statistics, binomial regression is a technique in which the response (often referred to as Y) is the result of a series of Bernoulli trials, that is, a series of trials each with one of two possible disjoint outcomes (traditionally denoted "success" or 1, and "failure" or 0).[1] In binomial regression, the probability of a success is related to the explanatory variables: the corresponding concept in ordinary regression is to relate the mean value of the unobserved response to the explanatory variables.

A binomial regression model is a special case of a generalised linear model.

## Example application

In one published example of an application of binomial regression,[2] the details were as follows. The observed outcome variable was whether or not a fault occurred in an industrial process. There were two explanatory variables: the first was a simple two-case factor representing whether or not a modified version of the process was used and the second was an ordinary quantitative variable measuring the purity of the material being supplied for the process.

## Specification of model

The results are assumed to be binomially distributed.[1] They are often fitted as a generalised linear model where the predicted values μ are the probabilities that any individual event will result in a success. The likelihood of the predictions is then given by

$L(Y\mid\boldsymbol{\mu})=\prod_{i=1}^n \left( 1_{y_i=1}\,\mu_i + 1_{y_i=0}\,(1-\mu_i) \right)$

where 1_A is the indicator function, which takes the value one when the event A occurs and zero otherwise: in this formulation, for any given observation y_i, exactly one of the two terms inside the product contributes, according to whether y_i = 0 or y_i = 1. The likelihood function is more fully specified by defining the formal parameters μ_i as parameterised functions of the explanatory variables: this defines the likelihood in terms of a much reduced number of parameters. Fitting of the model is usually achieved by employing the method of maximum likelihood to determine these parameters. In practice, the use of a formulation as a generalised linear model allows advantage to be taken of certain algorithmic ideas which are applicable across the whole class of more general models but which do not apply to all maximum likelihood problems.
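The maximum-likelihood fitting described above can be sketched as follows. This is a minimal illustration, not the algorithm used in any cited source: the data are synthetic, a logistic link is assumed, and the Bernoulli log-likelihood is maximised numerically with a general-purpose optimiser rather than the specialised GLM algorithms mentioned in the text.

```python
# Sketch: Bernoulli (binomial, one trial per observation) regression fitted
# by maximum likelihood.  Synthetic data; a logistic link is assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
beta_true = np.array([-0.5, 1.2])                      # illustrative "true" parameters
p = 1.0 / (1.0 + np.exp(-X @ beta_true))               # true success probabilities
y = rng.binomial(1, p)                                 # observed 0/1 outcomes

def neg_log_likelihood(beta):
    z = X @ beta
    # log-likelihood sum_i [ y_i log mu_i + (1 - y_i) log(1 - mu_i) ],
    # written stably as y_i z_i - log(1 + exp(z_i)), where mu_i = sigmoid(z_i)
    return np.sum(np.logaddexp(0.0, z) - y * z)

fit = minimize(neg_log_likelihood, x0=np.zeros(2), method="BFGS")
print(fit.x)  # estimates should be near beta_true
```

With 500 observations the estimates typically land within a few hundredths of the generating parameters; dedicated GLM software would instead use iteratively reweighted least squares for the same maximisation.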

Models used in binomial regression can often be extended to multinomial data.

There are many methods of generating the values of μ in systematic ways that allow for interpretation of the model; they are discussed below.

The model linking the probabilities μ to the explanatory variables is required to be of a form which only produces values in the range 0 to 1. Many models can be cast in the form

$\boldsymbol{\mu} = g(\boldsymbol{\eta}) \, .$

Here η is an intermediate variable representing a linear combination of the explanatory variables, containing the regression parameters. The function g is the cumulative distribution function of some probability distribution. Usually this probability distribution has support over the whole real line, so that any finite value of η is transformed by the function g to a value inside the range 0 to 1.
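The role of g can be checked numerically: applying a CDF to any finite linear predictor η yields a value strictly between 0 and 1, and applying the corresponding link (the inverse of g) recovers η. The grid of η values below is purely illustrative.

```python
# Sketch: mu = g(eta) for two common choices of g, the logistic CDF
# (logit link) and the standard normal CDF (probit link).
import numpy as np
from scipy.stats import norm

eta = np.linspace(-8.0, 8.0, 9)           # illustrative linear-predictor values
mu_logit = 1.0 / (1.0 + np.exp(-eta))     # logistic CDF: inverse of the logit link
mu_probit = norm.cdf(eta)                 # standard normal CDF: inverse of the probit link
eta_back = np.log(mu_logit / (1.0 - mu_logit))  # logit link recovers eta
```

Both transformed vectors lie inside (0, 1) even for the extreme values ±8, which is exactly the property required of the mapping from η to μ.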

In the case of logistic regression, the link function is the logit, the log of the odds, and g is the logistic function. In the case of probit, g is the cumulative distribution function of the standard normal distribution. The linear probability model is not a proper binomial regression specification, because predictions need not lie in the range zero to one; it is nevertheless sometimes used for this type of data when interpretation takes place on the probability scale, or when the analyst lacks the means to fit or calculate approximate linearizations of probabilities for interpretation.
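The defect of the linear probability model noted above is easy to exhibit by simulation. In this illustrative sketch (synthetic data, ordinary least squares fit of a binary response on one covariate), the fitted line produces a "probability" well above one at an extreme covariate value.

```python
# Sketch: a linear probability model (OLS on a 0/1 response) can predict
# outside [0, 1].  Data are synthetic, generated from a logistic model.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
p = 1.0 / (1.0 + np.exp(-2.0 * x))       # true success probabilities
y = rng.binomial(1, p)                    # observed binary outcomes
X = np.column_stack([np.ones_like(x), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit: the "LPM"
pred_extreme = beta_ols[0] + beta_ols[1] * 4.0    # fitted value at x = 4
print(pred_extreme)  # well above 1: not a valid probability
```

A logit or probit fit to the same data would, by construction, keep every fitted value inside (0, 1).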

## Latent variable interpretation / derivation

A latent variable model involving a binomial observed variable Y can be constructed such that Y is related to the latent variable Y* via

$Y = \begin{cases} 1, & \mbox{if }Y^*>0, \\ 0, & \mbox{otherwise.} \end{cases}$

The latent variable Y* is then related to a set of regression variables X by the model

$Y^* = X\beta + \epsilon \ .$

This results in a binomial regression model.

The variance of ε cannot be identified, and when it is not of interest it is often assumed to be equal to one. If ε is normally distributed, then a probit model is appropriate; if ε has a standard logistic distribution (arising, for example, as the difference of two independent log-Weibull errors), then a logit model is appropriate; and if ε is uniformly distributed, then a linear probability model is appropriate.
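The latent-variable construction can be verified by simulation. The sketch below uses standard normal errors, a single fixed regressor value, and the convention Y = 1 when Y* > 0 (sign conventions vary; the opposite convention gives the same probit model with β negated). All numbers are illustrative.

```python
# Sketch: latent-variable derivation of the probit model.
# Y* = x*beta + eps with standard normal eps; Y = 1 when Y* > 0.
# Then P(Y = 1 | x) = Phi(x*beta), checked here by simulation.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 200_000
x = 1.5                            # a fixed value of the single regressor
beta = 0.8                         # illustrative regression parameter
eps = rng.normal(size=n)           # normal errors -> probit is appropriate
y_star = x * beta + eps            # latent variable
y = (y_star > 0).astype(int)       # observed binary response
print(y.mean(), norm.cdf(x * beta))  # simulated frequency vs. Phi(x*beta)
```

Replacing the normal draws with logistic draws (`rng.logistic(size=n)`) and `norm.cdf` with the logistic CDF reproduces the logit case in the same way.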

## Notes

1. ^ a b Sanford Weisberg (2005). "Binomial Regression". Applied Linear Regression. Wiley-IEEE. pp. 253–254. ISBN 0471663794.
2. ^ Cox & Snell (1981), Example H, p. 91

## References

Cox, D.R., Snell, E.J. (1981) Applied Statistics: Principles and Examples, Chapman and Hall. ISBN 0-412-16570-8