The approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because the measurement of the data is not precise (for example, due to the instruments used), or because approximations are used instead of the real data (for example, 3.14 instead of π).
In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates how the error is propagated by the algorithm.
One commonly distinguishes between the relative error and the absolute error. The absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed per 100, i.e. as a percentage.
As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.1/50 = 0.002, giving a percent error of 0.2%. The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is, in most applications, much worse than approximating the number 1,000,000 with an absolute error of 3: in the first case the relative error is 0.003 and in the second it is only 0.000003.
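The worked example above can be sketched in a few lines of Python. The helper names below are my own, introduced only for illustration; they implement the definitions of absolute, relative, and percent error directly.

```python
def absolute_error(v, v_approx):
    """Magnitude of the difference between the exact value and the approximation."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error divided by the magnitude of the exact value (requires v != 0)."""
    return absolute_error(v, v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed per 100."""
    return 100 * relative_error(v, v_approx)

# The article's example: exact value 50, approximation 49.9.
print(absolute_error(50, 49.9))   # ~0.1
print(relative_error(50, 49.9))   # ~0.002
print(percent_error(50, 49.9))    # ~0.2
```

Note that because of floating-point rounding the printed values are only approximately 0.1, 0.002, and 0.2; comparisons in real code should use a tolerance rather than exact equality.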
Given some value v and its approximation v_{approx}, the absolute error is

    \epsilon = |v - v_{approx}|,

where the vertical bars denote the absolute value. If v \neq 0, the relative error is

    \eta = \frac{|v - v_{approx}|}{|v|} = \left|1 - \frac{v_{approx}}{v}\right|,

and the percent error is

    \delta = 100\% \times \eta = 100\% \times \frac{|v - v_{approx}|}{|v|}.
These definitions can be extended to the case when v and v_{approx} are n-dimensional vectors, by replacing the absolute value with an n-norm.^{[1]}
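As a minimal sketch of the vector extension, the code below replaces the absolute value with the Euclidean (2-)norm; any other n-norm could be substituted. The function names are my own, not from the article.

```python
import math

def norm(x):
    """Euclidean (2-)norm of a vector given as a sequence of numbers."""
    return math.sqrt(sum(c * c for c in x))

def absolute_error_vec(v, v_approx):
    """Norm of the componentwise difference between exact and approximate vectors."""
    return norm([a - b for a, b in zip(v, v_approx)])

def relative_error_vec(v, v_approx):
    """Absolute error divided by the norm of the exact vector (requires norm(v) != 0)."""
    return absolute_error_vec(v, v_approx) / norm(v)

v = [3.0, 4.0]          # exact vector, with norm 5
v_approx = [3.0, 3.9]   # approximation differing in one component
print(absolute_error_vec(v, v_approx))  # ~0.1
print(relative_error_vec(v, v_approx))  # ~0.02
```

Choosing a different norm (for example the max-norm) changes the numeric values but not the structure of the definitions.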
