The weighted mean is similar to an arithmetic mean (the most common type of average), except that instead of each data point contributing equally to the final average, some data points contribute more than others. The notion of weighted mean plays a role in descriptive statistics and also occurs in a more general form in several other areas of mathematics.
If all the weights are equal, then the weighted mean is the same as the arithmetic mean. While weighted means generally behave in a similar fashion to arithmetic means, they do have a few counterintuitive properties, as captured for instance in Simpson's paradox.
The term weighted average usually refers to a weighted arithmetic mean, but weighted versions of other means can also be calculated, such as the weighted geometric mean and the weighted harmonic mean.
Given two school classes, one with 20 students and one with 30 students, suppose the grades in each class on a test are such that the straight average for the morning class is 80 and the straight average of the afternoon class is 90. The straight average of 80 and 90 is 85, the mean of the two class means. However, this does not account for the difference in the number of students in each class, and the value of 85 does not reflect the average student grade (independent of class). The average student grade can be obtained by averaging all the grades without regard to classes (they sum to 4300):

$$\bar{x} = \frac{4300}{50} = 86.$$

Or, using a weighted mean of the class means:

$$\bar{x} = \frac{20 \times 80 + 30 \times 90}{20 + 30} = \frac{4300}{50} = 86.$$
The weighted mean makes it possible to find the average student grade also in the case where only the class means and the number of students in each class are available.
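For illustration, the computation can be carried out in a few lines of Python. This is a minimal sketch; only the class sizes and class means from the example above are used.

```python
# Class sizes and class mean grades from the example above.
sizes = [20, 30]
means = [80, 90]

# Unweighted mean of the two class means: (80 + 90) / 2 = 85.
straight = sum(means) / len(means)

# Weighted mean, weighting each class mean by its number of students:
# (20*80 + 30*90) / (20 + 30) = 86.
weighted = sum(n * m for n, m in zip(sizes, means)) / sum(sizes)

print(straight)  # 85.0
print(weighted)  # 86.0
```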
Formally, the weighted mean of a nonempty finite set of data

$$\{x_1, x_2, \dots, x_n\},$$

with corresponding nonnegative weights

$$\{w_1, w_2, \dots, w_n\},$$

is the quantity

$$\bar{x} = \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i},$$

which means:

$$\bar{x} = \frac{w_1 x_1 + w_2 x_2 + \cdots + w_n x_n}{w_1 + w_2 + \cdots + w_n}.$$
Therefore data elements with a high weight contribute more to the weighted mean than do elements with a low weight. The weights cannot be negative. Some may be zero, but not all of them (since division by zero is not allowed).
The formulas are simplified when the weights are normalized such that they sum up to 1, i.e. $\sum_{i=1}^{n} w_i' = 1$, with $w_i' = w_i / \sum_{j=1}^{n} w_j$. For such normalized weights the weighted mean is simply $\bar{x} = \sum_{i=1}^{n} w_i' x_i$.
The common mean $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ is a special case of the weighted mean where all data have equal weights, $w_i = w$.
[Figure: a weighting function applied to a response variable according to its dependence on a distance variable x.]
Since only the relative weights are relevant, any weighted mean can be expressed using coefficients that sum to one. Such a linear combination is called a convex combination.
Using the previous example, we would get the following normalized weights:

$$w_1' = \frac{20}{20 + 30} = 0.4, \qquad w_2' = \frac{30}{20 + 30} = 0.6.$$

This simplifies to:

$$\bar{x} = 0.4 \times 80 + 0.6 \times 90 = 86.$$
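As a sketch, the same result with explicitly normalized weights (values taken from the class example):

```python
sizes = [20, 30]
means = [80, 90]

total = sum(sizes)
# Normalize the weights so they sum to 1 (a convex combination).
norm_weights = [n / total for n in sizes]  # [0.4, 0.6]

# With normalized weights the weighted mean is a plain dot product.
weighted = sum(w * m for w, m in zip(norm_weights, means))
print(weighted)  # 86.0
```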
The weighted sample mean with normalized weights is itself a random variable. Its expected value and standard deviation are related to the expected values and standard deviations of the observations as follows.
If the observations have expected values $E(x_i) = \mu_i$, then the weighted sample mean has expectation $E(\bar{x}) = \sum_{i=1}^{n} w_i' \mu_i$. In particular, if the expectations of all observations are equal, $\mu_i = \mu$, then the expectation of the weighted sample mean will be the same, $E(\bar{x}) = \mu$.
For uncorrelated observations with standard deviations $\sigma_i$, the weighted sample mean has standard deviation

$$\sigma(\bar{x}) = \sqrt{\sum_{i=1}^{n} w_i'^2 \sigma_i^2}.$$

Consequently, when the standard deviations of all observations are equal, $\sigma_i = d$, the weighted sample mean will have standard deviation $\sigma(\bar{x}) = d \sqrt{V_2}$. Here $V_2$ is the quantity

$$V_2 = \sum_{i=1}^{n} w_i'^2,$$

such that $1/n \le V_2 \le 1$. It attains its minimum value for equal weights, and its maximum when all weights except one are zero. In the former case we have $\sigma(\bar{x}) = d/\sqrt{n}$, which is related to the central limit theorem.
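These formulas are easy to check numerically. The sketch below (assuming NumPy is available; the normalized weights and standard deviations are chosen arbitrarily for illustration) compares the empirical standard deviation of simulated weighted sample means against $\sqrt{\sum_i w_i'^2 \sigma_i^2}$:

```python
import numpy as np

rng = np.random.default_rng(0)

w = np.array([0.5, 0.3, 0.2])        # normalized weights (sum to 1), illustrative
sigma = np.array([1.0, 2.0, 0.5])    # per-observation standard deviations, illustrative

# Draw 100_000 independent replications of the three uncorrelated,
# zero-mean observations and form the weighted mean of each replication.
x = rng.normal(0.0, sigma, size=(100_000, 3))
wmeans = x @ w

print(wmeans.std())                      # empirical SD of the weighted mean
print(np.sqrt(np.sum(w**2 * sigma**2)))  # theoretical SD from the formula above
```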
For the weighted mean of a list of data for which each element $x_i$ comes from a different probability distribution with known variance $\sigma_i^2$, one possible choice for the weights is given by:

$$w_i = \frac{1}{\sigma_i^2}.$$

The weighted mean in this case is:

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i/\sigma_i^2}{\sum_{i=1}^{n} 1/\sigma_i^2},$$

and the variance of the weighted mean is:

$$\sigma^2_{\bar{x}} = \frac{1}{\sum_{i=1}^{n} 1/\sigma_i^2},$$

which reduces to $\sigma^2_{\bar{x}} = \sigma_0^2/n$ when all $\sigma_i = \sigma_0$.
The significance of this choice is that this weighted mean is the maximum likelihood estimator of the mean of the probability distributions under the assumption that they are independent and normally distributed with the same mean.
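As a sketch, inverse-variance weighting might look as follows in Python (the measurement values and variances are invented for illustration):

```python
import numpy as np

# Repeated measurements of the same quantity, each with its own known variance.
x = np.array([10.2, 9.8, 10.5])         # measurements (illustrative)
sigma2 = np.array([0.04, 0.09, 0.25])   # known variances sigma_i^2 (illustrative)

w = 1.0 / sigma2                        # w_i = 1 / sigma_i^2
xbar = np.sum(w * x) / np.sum(w)        # inverse-variance weighted mean
var_xbar = 1.0 / np.sum(w)              # variance of the weighted mean

print(xbar, var_xbar)
```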
Weighted means are typically used to find the weighted mean of experimental data, rather than theoretically generated data. In this case, there will be some error in the variance of each data point. Typically, experimental errors are underestimated because the experimenter does not take into account all sources of error in calculating the variance of each data point. In this event, the variance in the weighted mean must be corrected to account for the fact that $\chi^2$ is too large. The correction that must be made is

$$\hat{\sigma}^2_{\bar{x}} = \sigma^2_{\bar{x}} \, \chi^2_\nu,$$

where $\chi^2_\nu$ is $\chi^2$ divided by the number of degrees of freedom, in this case $n - 1$. This gives the variance in the weighted mean as:

$$\hat{\sigma}^2_{\bar{x}} = \frac{1}{\sum_{i=1}^{n} 1/\sigma_i^2} \times \frac{1}{n-1} \sum_{i=1}^{n} \frac{(x_i - \bar{x})^2}{\sigma_i^2}.$$
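A sketch of the correction, continuing with invented measurements: the reduced chi-squared is computed from the residuals about the weighted mean and used to scale its variance.

```python
import numpy as np

x = np.array([10.2, 9.8, 10.5])         # measurements (illustrative)
sigma2 = np.array([0.04, 0.09, 0.25])   # claimed variances (illustrative)

w = 1.0 / sigma2
xbar = np.sum(w * x) / np.sum(w)
var_xbar = 1.0 / np.sum(w)

# Reduced chi-squared: chi^2 per degree of freedom (n - 1 here).
chi2_nu = np.sum((x - xbar) ** 2 / sigma2) / (len(x) - 1)

# Corrected variance of the weighted mean.
var_corrected = var_xbar * chi2_nu
print(chi2_nu, var_corrected)
```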
Typically when a mean is calculated it is important to know the variance and standard deviation of that mean. When a weighted mean $\mu^*$ with normalized weights is used, the variance of the weighted sample is different from the variance of the unweighted sample. The biased weighted sample variance is defined similarly to the normal biased sample variance:

$$\hat{\sigma}^2 = \sum_{i=1}^{N} w_i' (x_i - \mu^*)^2, \qquad \sum_{i=1}^{N} w_i' = 1.$$
For small samples, it is customary to use an unbiased estimator for the population variance. In normal unweighted samples, the $N$ in the denominator (corresponding to the sample size) is changed to $N - 1$. While this is simple in unweighted samples, it is not straightforward when the sample is weighted. The unbiased estimator of a weighted population variance is given by:^{[1]}

$$s^2 = \frac{\sum_{i=1}^{N} w_i' (x_i - \mu^*)^2}{1 - V_2},$$

where $V_2 = \sum_{i=1}^{N} w_i'^2$, as introduced previously. The degrees of freedom of the weighted, unbiased sample variance vary accordingly from $N - 1$ down to 0.
The standard deviation is simply the square root of the variance above.
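The following sketch (with illustrative data and normalized weights) computes both the biased weighted sample variance and the unbiased estimator above:

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])      # data (illustrative)
w = np.array([0.1, 0.2, 0.3, 0.4])      # normalized weights: sum to 1 (illustrative)

mu_star = np.sum(w * x)                 # weighted mean
V2 = np.sum(w ** 2)

biased = np.sum(w * (x - mu_star) ** 2)  # biased weighted sample variance
unbiased = biased / (1.0 - V2)           # unbiased estimator from the formula above

print(mu_star, biased, unbiased)
```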
In the general case, suppose that $\mathbf{x} = [x_1, \dots, x_n]^T$, $C$ is the covariance matrix relating the quantities $x_i$, $\bar{x}$ is the common mean to be estimated, and $J$ is the design matrix $[1, \dots, 1]^T$ (of length $n$). The Gauss–Markov theorem states that the estimate of the mean having minimum variance is given by:

$$\sigma^2_{\bar{x}} = (J^T W J)^{-1}$$

and

$$\bar{x} = \sigma^2_{\bar{x}} (J^T W \mathbf{x}),$$

where $W = C^{-1}$ is the inverse of the covariance matrix.
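A numerical sketch of these two formulas using NumPy; the observations and covariance matrix below are invented (any symmetric positive-definite $C$ works):

```python
import numpy as np

x = np.array([10.1, 9.9, 10.4])          # observations of a common mean (illustrative)
C = np.array([[0.04, 0.01, 0.00],        # covariance matrix (illustrative)
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.25]])

W = np.linalg.inv(C)                     # weight matrix W = C^{-1}
J = np.ones(len(x))                      # design matrix [1, ..., 1]

var_xbar = 1.0 / (J @ W @ J)             # sigma^2_xbar = (J^T W J)^{-1}
xbar = var_xbar * (J @ W @ x)            # xbar = sigma^2_xbar (J^T W x)

print(xbar, var_xbar)
```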
Consider the time series of an independent variable $x$ and a dependent variable $y$, with $n$ observations sampled at discrete times $t_i$. In many common situations, the value of $y$ at time $t_i$ depends not only on $x_i$ but also on its past values. Commonly, the strength of this dependence decreases as the separation of observations in time increases. To model this situation, one may replace the independent variable by its sliding mean $z$ for a window size $m$:

$$z_k = \sum_{i=1}^{m} w_i x_{k+1-i}.$$
Weighted Mean Equivalence    Range
Strong                       3.34 – 5.00
Satisfactory                 1.67 – 3.33
Weak                         0.00 – 1.66
In the scenario described in the previous section, most frequently the decrease in interaction strength obeys a negative exponential law. If the observations are sampled at equidistant times, then exponential decrease is equivalent to decrease by a constant fraction $0 < \Delta < 1$ at each time step. Setting $w = 1 - \Delta$, we can define $m$ normalized weights by

$$w_i = \frac{w^{i-1}}{V_1},$$

where $V_1$ is the sum of the unnormalized weights. In this case $V_1$ is simply

$$V_1 = \sum_{i=1}^{m} w^{i-1} = \frac{1 - w^m}{1 - w},$$

approaching $V_1 = 1/(1 - w)$ for large values of $m$.
The damping constant $w$ must correspond to the actual decrease of interaction strength. If this cannot be determined from theoretical considerations, then the following properties of exponentially decreasing weights are useful in making a suitable choice: at step $(1 - w)^{-1}$, the weight approximately equals $e^{-1}(1 - w) \approx 0.37(1 - w)$, the tail area approximately equals $e^{-1} \approx 0.37$, and the head area approximately equals $1 - e^{-1} \approx 0.63$. The tail area at step $n$ is

$$\le e^{-n(1-w)}.$$

Where primarily the closest $n$ observations matter and the effect of the remaining observations can safely be ignored, choose $w$ such that the tail area is sufficiently small.
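The sketch below builds the $m$ normalized exponential weights for a chosen $w$, verifies $V_1$, and evaluates the tail-area bound used to pick $w$ (the values of $w$, $m$, and $n$ are arbitrary):

```python
import numpy as np

w, m = 0.9, 30                           # damping constant and window size (illustrative)

raw = w ** np.arange(m)                  # unnormalized weights w^0, ..., w^(m-1)
V1 = raw.sum()                           # equals (1 - w**m) / (1 - w)
weights = raw / V1                       # normalized weights, sum to 1

print(V1, (1 - w**m) / (1 - w))          # the two values agree
print(1 / (1 - w))                       # limit of V1 for large m

# Tail-area bound from above: weight mass beyond step n is at most exp(-n * (1 - w)).
n = 20
print(np.exp(-n * (1 - w)))              # choose w so that this is sufficiently small
```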
The concept of weighted average can be extended to functions.^{[2]}
