In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for more on quadrature rules.) An n-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the points x_{i} and weights w_{i} for i = 1, ..., n. The domain of integration for such a rule is conventionally taken as [−1, 1], so the rule is stated as

\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^n w_i f(x_i).
Gaussian quadrature as above will only produce accurate results if the function f(x) is well approximated by a polynomial function within the range [−1, 1]. The method is, for example, not suitable for functions with singularities. However, if the integrated function can be written as $f(x) = W(x)\, g(x)$, where g(x) is approximately polynomial and W(x) is known, then there are alternative weights $w_i'$ such that

\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} W(x)\, g(x)\,dx \approx \sum_{i=1}^n w_i'\, g(x_i).
Common weighting functions include $W(x) = (1 - x^2)^{-1/2}$ (Gauss–Chebyshev) and $W(x) = e^{-x^2}$ (Gauss–Hermite).
It can be shown (see Press et al., or Stoer and Bulirsch) that the evaluation points are just the roots of a polynomial belonging to a class of orthogonal polynomials.
For the integration problem stated above, the associated polynomials are Legendre polynomials, P_{n}(x). With the n^{th} polynomial normalized to give P_{n}(1) = 1, the i^{th} Gauss node, x_{i}, is the i^{th} root of P_{n}; its weight is given by (Abramowitz & Stegun 1972, p. 887)

w_i = \frac{2}{\left(1 - x_i^2\right)\left[P'_n(x_i)\right]^2}.
Some low-order rules for solving the integration problem are listed below.
Number of points, n | Points, x_i | Weights, w_i
1 | $0$ | $2$
2 | $\pm 1/\sqrt{3}$ | $1$
3 | $0$ | $8/9$
  | $\pm\sqrt{15}/5$ | $5/9$
4 | $\pm\sqrt{\big(3 - 2\sqrt{6/5}\big)/7}$ | $\tfrac{18 + \sqrt{30}}{36}$
  | $\pm\sqrt{\big(3 + 2\sqrt{6/5}\big)/7}$ | $\tfrac{18 - \sqrt{30}}{36}$
5 | $0$ | $128/225$
  | $\pm\tfrac{1}{3}\sqrt{5 - 2\sqrt{10/7}}$ | $\tfrac{322 + 13\sqrt{70}}{900}$
  | $\pm\tfrac{1}{3}\sqrt{5 + 2\sqrt{10/7}}$ | $\tfrac{322 - 13\sqrt{70}}{900}$
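As a quick check, the tabulated 3-point rule can be verified numerically: by construction it should integrate every polynomial of degree up to 2n − 1 = 5 exactly on [−1, 1]. A minimal sketch in Python (standard library only):

```python
# Check that the 3-point Gauss-Legendre rule from the table above is
# exact for monomials x^k up to degree 2n - 1 = 5 on [-1, 1].
import math

nodes = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

for k in range(6):  # degrees 0..5
    approx = sum(w * x**k for x, w in zip(nodes, weights))
    # Exact integral of x^k over [-1, 1]: 0 for odd k, 2/(k+1) for even k.
    exact = 0.0 if k % 2 else 2.0 / (k + 1)
    assert abs(approx - exact) < 1e-14, (k, approx, exact)
```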
An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way:
\int_a^b f(x)\,dx = \frac{b-a}{2} \int_{-1}^{1} f\left(\frac{b-a}{2}\,x + \frac{a+b}{2}\right)\,dx
After applying the Gaussian quadrature rule, the resulting approximation is:
\int_a^b f(x)\,dx \approx \frac{b-a}{2} \sum_{i=1}^n w_i\, f\left(\frac{b-a}{2}\,x_i + \frac{a+b}{2}\right)
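The change of interval above is straightforward to implement. The sketch below uses NumPy's `numpy.polynomial.legendre.leggauss` to supply the nodes and weights on [−1, 1]; the function name `gauss_legendre` is merely illustrative:

```python
# Sketch: Gauss-Legendre quadrature over an arbitrary interval [a, b],
# using the change-of-interval formula above.
import numpy as np

def gauss_legendre(f, a, b, n):
    """Approximate the integral of f over [a, b] with an n-point rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    # Map nodes to [a, b] and rescale by the Jacobian (b - a)/2.
    t = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * f(t))

# Example: the integral of exp(x) over [0, 1] is e - 1.
print(gauss_legendre(np.exp, 0.0, 1.0, 5))
```

Even with only five points, the result agrees with e − 1 to roughly machine precision, reflecting the rapid convergence for smooth integrands.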
The integration problem can be expressed in a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate

\int_a^b \omega(x)\, f(x)\, dx
for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).
Interval | ω(x) | Orthogonal polynomials | A & S | For more information, see ...
[−1, 1] | $1$ | Legendre polynomials | 25.4.29 | Section "Rules for the basic problem", above
(−1, 1) | $(1 - x)^\alpha (1 + x)^\beta, \quad \alpha, \beta > -1$ | Jacobi polynomials | 25.4.33 ($\beta = 0$) |
(−1, 1) | $\frac{1}{\sqrt{1 - x^2}}$ | Chebyshev polynomials (first kind) | 25.4.38 | Chebyshev–Gauss quadrature
[−1, 1] | $\sqrt{1 - x^2}$ | Chebyshev polynomials (second kind) | 25.4.40 | Chebyshev–Gauss quadrature
[0, ∞) | $e^{-x}$ | Laguerre polynomials | 25.4.45 | Gauss–Laguerre quadrature
(−∞, ∞) | $e^{-x^2}$ | Hermite polynomials | 25.4.46 | Gauss–Hermite quadrature
Let $p_n$ be a nontrivial polynomial of degree n such that
\int_a^b \omega(x) \, x^k\, p_n(x) \, dx = 0, \quad \text{for all } k = 0, 1, \ldots, n-1.
If we pick the n nodes x_{i} to be the zeros of p_{n}, then there exist n weights w_{i} which make the Gauss-quadrature-computed integral exact for all polynomials h(x) of degree 2n − 1 or less. Furthermore, all these nodes x_{i} will lie in the open interval (a, b) (Stoer & Bulirsch 2002, pp. 172–175).
The polynomial $p_n$ is said to be an orthogonal polynomial of degree n associated to the weight function $\omega(x)$. It is unique up to a constant normalization factor. The idea underlying the proof is that, because of its sufficiently low degree, $h(x)$ can be divided by $p_n(x)$ to produce a quotient $q(x)$ and a remainder $r(x)$, both of degree strictly lower than n, so that both will be orthogonal to $p_n(x)$, by the defining property of $p_n(x)$. Thus

\int_a^b \omega(x)\, h(x)\, dx = \int_a^b \omega(x) \big( p_n(x)\, q(x) + r(x) \big)\, dx = \int_a^b \omega(x)\, r(x)\, dx.
Because of the choice of nodes x_{i} (at which $p_n$ vanishes, so that $h(x_i) = r(x_i)$), the corresponding relation

\sum_{i=1}^n w_i\, h(x_i) = \sum_{i=1}^n w_i\, r(x_i)
holds also. The exactness of the computed integral for $h(x)$ then follows from the corresponding exactness for polynomials of degree n or less (as $r(x)$ is). This (superficially less exacting) requirement can now be satisfied by choosing the weights w_{i} equal to the integrals (using the same weight function $\omega$) of the Lagrange basis polynomials $\ell_i(x)$ of these particular n nodes x_{i}.
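The claim that the weights equal the integrals of the Lagrange basis polynomials can be checked numerically. The sketch below does so for ω(x) = 1 on [−1, 1] with the 3-point rule (nodes 0 and ±√(3/5), weights 8/9 and 5/9):

```python
# Verify numerically that each Gauss weight equals the integral over
# [-1, 1] of the Lagrange basis polynomial of the corresponding node.
import numpy as np

nodes = np.array([-np.sqrt(0.6), 0.0, np.sqrt(0.6)])
for i, wi in enumerate([5/9, 8/9, 5/9]):
    # Build the i-th Lagrange basis polynomial ell_i, with ell_i(x_j) = delta_ij.
    others = np.delete(nodes, i)
    ell = np.poly(others) / np.prod(nodes[i] - others)  # coefficients, high -> low
    # Integrate ell_i exactly via its antiderivative.
    integral = np.polyval(np.polyint(ell), 1.0) - np.polyval(np.polyint(ell), -1.0)
    assert abs(integral - wi) < 1e-12
```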
For computing the nodes $x_i$ and weights $w_i$ of Gaussian quadrature rules, the fundamental tool is the three-term recurrence relation satisfied by the set of orthogonal polynomials associated to the corresponding weight function. For n points, these nodes and weights can be computed in O(n^{2}) operations by the following algorithm.
If, for instance, $p_n$ is the monic orthogonal polynomial of degree n (the orthogonal polynomial of degree n with the highest-degree coefficient equal to one), one can show that such orthogonal polynomials are related through the recurrence relation

p_{k+1}(x) = (x - B_k)\, p_k(x) - A_k\, p_{k-1}(x), \quad k = 0, 1, \ldots, n-1,

with $p_{-1}(x) \equiv 0$ and $p_0(x) \equiv 1$.
From this, nodes and weights can be computed from the eigenvalues and eigenvectors of an associated linear algebra problem. This is usually known as the Golub–Welsch algorithm (Gil, Segura & Temme 2007).
The starting idea comes from the observation that, if $x_j$ is a root of the orthogonal polynomial $p_n$, then, using the previous recurrence formula for $k = 0, 1, \ldots, n-1$ and because $p_n(x_j) = 0$, we have
J \tilde{P} = x_j \tilde{P},
where
\tilde{P} = \left[ p_0(x_j),\, p_1(x_j),\, \ldots,\, p_{n-1}(x_j) \right]^T
and $J$ is the so-called Jacobi matrix:
\mathbf{J} = \begin{pmatrix}
B_0 & 1 & & & & \\
A_1 & B_1 & 1 & & & \\
 & A_2 & B_2 & \ddots & & \\
 & & \ddots & \ddots & \ddots & \\
 & & & A_{n-2} & B_{n-2} & 1 \\
 & & & & A_{n-1} & B_{n-1}
\end{pmatrix}.
The nodes of Gaussian quadrature can therefore be computed as the eigenvalues of a tridiagonal matrix.
For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix $\mathcal{J}$ with elements $\mathcal{J}_{i,i} = J_{i,i} = B_{i-1}$ for $i = 1, \ldots, n$ and $\mathcal{J}_{i-1,i} = \mathcal{J}_{i,i-1} = \sqrt{J_{i,i-1}\, J_{i-1,i}} = \sqrt{A_{i-1}}$ for $i = 2, \ldots, n$. $\mathbf{J}$ and $\mathcal{J}$ are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if $\phi^{(j)}$ is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated to the eigenvalue $x_j$, the corresponding weight can be computed from the first component of this eigenvector, namely:
w_j = \mu_0 \left( \phi_1^{(j)} \right)^2
where $\mu_0$ is the integral of the weight function
\mu_0 = \int_a^b \omega(x) \, dx.
See, for instance, (Gil, Segura & Temme 2007) for further details.
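As an illustration of the Golub–Welsch procedure, the sketch below builds the symmetric tridiagonal matrix for the Legendre weight ω(x) = 1 on [−1, 1], whose monic recurrence coefficients are the standard B_k = 0 and A_k = k²/(4k² − 1), with μ_0 = 2; the function name is illustrative:

```python
# Golub-Welsch sketch for the Legendre weight (omega(x) = 1 on [-1, 1]).
# The symmetric Jacobi matrix has zeros on the diagonal (B_k = 0) and
# sqrt(A_k) = k / sqrt(4k^2 - 1) on the off-diagonals.
import numpy as np

def golub_welsch_legendre(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # off-diagonal entries sqrt(A_k)
    J = np.diag(beta, 1) + np.diag(beta, -1)  # symmetric tridiagonal matrix
    nodes, vecs = np.linalg.eigh(J)           # eigenvalues are the nodes
    mu0 = 2.0                                 # integral of omega over [-1, 1]
    weights = mu0 * vecs[0, :]**2             # squared first eigenvector components
    return nodes, weights

# The 3-point rule should match the table: nodes 0, +/-sqrt(3/5); weights 8/9, 5/9.
nodes, weights = golub_welsch_legendre(3)
```

Note that `numpy.linalg.eigh` returns eigenvalues in ascending order with orthonormal eigenvector columns, which is exactly what the weight formula $w_j = \mu_0 (\phi_1^{(j)})^2$ requires.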
The error of a Gaussian quadrature rule can be stated as follows (Stoer & Bulirsch 2002, Thm 3.6.24). For an integrand which has 2n continuous derivatives,
\int_a^b \omega(x)\, f(x)\, dx - \sum_{i=1}^n w_i\, f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!} \, (p_n, p_n)
for some ξ in (a, b), where p_{n} is the monic orthogonal polynomial of degree n and where

(p_n, p_n) = \int_a^b \omega(x) \left[ p_n(x) \right]^2 dx.
In the important special case of ω(x) = 1, we have the error estimate (Kahaner, Moler & Nash 1989, §5.2)

\frac{(b-a)^{2n+1}\, (n!)^4}{(2n+1)\left[(2n)!\right]^3}\, f^{(2n)}(\xi), \quad a < \xi < b.
Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order 2n derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
An important consequence of the above equation is that Gaussian quadrature of order n is accurate for all polynomials up to degree 2n − 1.
If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at zero for odd numbers of points), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding $n+1$ points to an $n$-point rule in such a way that the resulting rule is of order $3n+1$. This allows for computing higher-order estimates while reusing the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.
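The error-estimation idea above can be sketched with two plain Gauss–Legendre rules of different orders (a true Gauss–Kronrod pair would additionally reuse the lower rule's function evaluations; this is only a stand-in):

```python
# Sketch: estimate the quadrature error as the difference between a
# low-order and a high-order Gauss-Legendre result on [-1, 1].
import numpy as np

def gauss_estimate(f, n):
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(x))

f = np.cos
low, high = gauss_estimate(f, 5), gauss_estimate(f, 10)
err_estimate = abs(high - low)            # proxy for the low-order rule's error
true_err = abs(low - 2.0 * np.sin(1.0))   # exact integral of cos over [-1, 1]
```

For this smooth integrand both the estimated and the true error of the 5-point rule are tiny, consistent with the error bound stated earlier.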
Also known as Lobatto quadrature (Abramowitz & Stegun 1972, p. 888), named after Dutch mathematician Rehuel Lobatto.
It is similar to Gaussian quadrature with the following differences:
1. The integration points include the end points of the integration interval.
2. It is accurate for polynomials up to degree 2n − 3, where n is the number of integration points.
Lobatto quadrature of function f(x) on interval [–1, +1]:
\int_{-1}^{1} f(x) \, dx = \frac{2}{n(n-1)} \left[ f(1) + f(-1) \right] + \sum_{i=2}^{n-1} w_i\, f(x_i) + R_n.
Abscissas: $x_i$ is the $(i-1)$^{st} zero of $P'_{n-1}(x)$.
Weights:
w_i = \frac{2}{n(n-1)\left[P_{n-1}(x_i)\right]^2} \quad (x_i \ne \pm 1).
Remainder:

R_n = \frac{-n\, (n-1)^3\, 2^{2n-1}\, [(n-2)!]^4}{(2n-1)\, [(2n-2)!]^3}\, f^{(2n-2)}(\xi), \quad -1 < \xi < 1.
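The abscissa and weight formulas above translate directly into code. The sketch below uses NumPy's `Legendre` class to supply P_{n−1} and its derivative; the function name `lobatto_rule` is illustrative:

```python
# Sketch: nodes and weights of the n-point Lobatto rule on [-1, 1],
# following the abscissa and weight formulas above.
import numpy as np
from numpy.polynomial.legendre import Legendre

def lobatto_rule(n):
    P = Legendre.basis(n - 1)          # P_{n-1}
    interior = P.deriv().roots()       # zeros of P'_{n-1}
    nodes = np.concatenate(([-1.0], interior, [1.0]))
    w_end = 2.0 / (n * (n - 1))        # weight at the endpoints +/-1
    w_int = 2.0 / (n * (n - 1) * P(interior)**2)
    weights = np.concatenate(([w_end], w_int, [w_end]))
    return nodes, weights

# The 3-point Lobatto rule has nodes -1, 0, 1 with weights 1/3, 4/3, 1/3.
nodes, weights = lobatto_rule(3)
```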
