In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best-known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem (detailed below). Unfortunately, when iteration begins far from the desired root, Newton's method can fail to converge with little warning; thus, implementations often include a routine that attempts to detect and overcome possible convergence failures.
Given a function ƒ(x) and its derivative ƒ'(x), we begin with a first guess x_{0}. Provided the function is reasonably well-behaved, a better approximation x_{1} is

$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}.$
The process is repeated until a sufficiently accurate value is reached:

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$
An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
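As a concrete illustration, here is a minimal Python sketch of Newton–Raphson division; the function name, starting guess, and iteration count are illustrative choices, not part of the original text:

```python
def reciprocal(a, x0, iterations=6):
    """Approximate 1/a via Newton's method on f(x) = 1/x - a.

    The update x_{n+1} = x_n * (2 - a * x_n) uses only
    multiplication and subtraction, which is why this scheme
    suits hardware division. x0 must be a rough estimate of 1/a
    (for 1 <= a < 2, x0 = 0.5 works)."""
    x = x0
    for _ in range(iterations):
        x = x * (2 - a * x)
    return x

print(reciprocal(1.7, 0.5))  # converges to 1/1.7 ≈ 0.5882352941...
```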
The algorithm is first in the class of Householder's methods, succeeded by Halley's method.
The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.
Suppose ƒ : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation x_{n}. Then we can derive the formula for a better approximation, x_{n+1}, by referring to the diagram on the right. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.
That is,

$f'(x_n) = \frac{\Delta y}{\Delta x} = \frac{f(x_n) - 0}{x_n - x_{n+1}}.$
Here, f' denotes the derivative of the function f. Then by simple algebra we can derive

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$
We start the process off with some arbitrary initial value x_{0}. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero, and that ƒ'(x_{0}) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly at least doubles in every step. More details can be found in the analysis section below.
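The iteration above translates directly into code. The following Python sketch is one possible implementation; the tolerance, iteration cap, and error handling are illustrative assumptions rather than part of the method itself:

```python
def newton(f, f_prime, x0, tol=1e-12, max_iter=50):
    """Find a root of f by Newton's method, starting from x0.

    Returns the approximate root, or raises if the derivative
    vanishes or the iteration fails to converge."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        d = f_prime(x)
        if d == 0:
            raise ZeroDivisionError("derivative is zero; Newton step undefined")
        x = x - fx / d   # x_{n+1} = x_n - f(x_n)/f'(x_n)
    raise RuntimeError("no convergence within max_iter iterations")
```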
Newton's method can also be used to find a minimum or maximum of a function. The derivative is zero at a minimum or maximum, so minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

$x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}.$
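A brief sketch of this optimization use, on a hypothetical toy function chosen for illustration:

```python
# Minimize f(x) = x**2 - 4*x by applying Newton's method to its
# derivative f'(x) = 2*x - 4, with f''(x) = 2 (toy example).
x = 10.0
for _ in range(20):
    x = x - (2 * x - 4) / 2.0   # x_{n+1} = x_n - f'(x_n)/f''(x_n)
print(x)  # 2.0, the minimizer of x**2 - 4*x
```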
Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De methodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials. He does not compute the successive approximations x_{n}, but computes a sequence of polynomials, and only at the end arrives at an approximation for the root x. Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus. Isaac Newton probably derived his method from a similar but less precise method by Vieta. The essence of Vieta's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve x^{P} − N = 0 to find roots of N (Ypma 1995). A special case of Newton's method for calculating square roots was known much earlier and is often called the Babylonian method.
Newton's method was used by the 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though the connection with calculus was missing.
Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson again viewed Newton's method purely as an algebraic method and restricted its use to polynomials, but he describes the method in terms of the successive approximations x_{n} instead of the more complicated sequence of polynomials used by Newton. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
In 1879, Arthur Cayley, in The Newton–Fourier imaginary problem, was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.
Newton's method is an extremely powerful technique: in general the convergence is quadratic, meaning the error is essentially squared (the number of accurate digits roughly doubles) at each step. However, the method also has difficulties: the derivative must be computed (or approximated), and, as the failure analysis below details, the iteration can fail to converge from a poor starting point.
Since the most serious of these problems is the possibility of a failure of convergence, Press et al. (1992) present a version of Newton's method that starts at the midpoint of an interval in which the root is known to lie and stops the iteration if an iterate is generated that lies outside the interval.
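A hedged sketch of such a safeguarded iteration, loosely modeled on that idea (the names and tolerances below are illustrative; this is not the exact Press et al. routine):

```python
def safe_newton(f, f_prime, a, b, tol=1e-12, max_iter=100):
    """Newton's method started at the midpoint of a bracket [a, b]
    known to contain a root; falls back to bisection whenever the
    Newton step would leave the bracket."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root is not bracketed by [a, b]")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # Maintain the bracket around the sign change.
        if fa * fx < 0:
            b = x
        else:
            a, fa = x, fx
        d = f_prime(x)
        step_ok = d != 0 and a < x - fx / d < b
        x = x - fx / d if step_ok else 0.5 * (a + b)
    return x
```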
Developers of large scale computer systems involving root finding tend to prefer the secant method over Newton's method because the use of a difference quotient in place of the derivative in Newton's method implies that the additional code to compute the derivative need not be maintained. In practice, the advantages of maintaining a smaller code base usually outweigh the superior convergence characteristics of Newton's method.
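For comparison, a minimal secant-method sketch (parameter choices again illustrative), which needs only evaluations of f itself and no derivative code:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with the derivative replaced
    by the difference quotient through the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1
```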
Suppose that the function ƒ has a zero at α, i.e., ƒ(α) = 0.
If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x_{0} in that neighborhood, the sequence {x_{n}} will converge to α.
If the function is continuously differentiable, its derivative is not 0 at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:

$\Delta x_{i+1} = \frac{f''(\alpha)}{2 f'(\alpha)} \left( \Delta x_i \right)^2 + O\left( \Delta x_i \right)^3,$

where $\Delta x_i \triangleq x_i - \alpha$.
If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if ƒ is twice continuously differentiable, ƒ'(α) = 0 and ƒ''(α) ≠ 0, then there exists a neighborhood of α such that for all starting values x_{0} in that neighborhood, the sequence of iterates converges linearly, with rate log_{10} 2 (Süli & Mayers, Exercise 1.6). Alternatively, if ƒ'(α) = 0 and ƒ'(x) ≠ 0 for x ≠ α in a neighborhood U of α, α being a zero of multiplicity r, and if ƒ ∈ C^{r}(U), then there exists a neighborhood of α such that for all starting values x_{0} in that neighborhood, the sequence of iterates converges linearly.
However, even linear convergence is not guaranteed in pathological situations.
In practice these results are local and the neighborhood of convergence is not known a priori, but there are also some results on global convergence: for instance, given a right neighborhood U_{+} of α, if f is twice differentiable in U_{+} and if $f' \ne 0$ and $f \cdot f'' > 0$ in U_{+}, then, for each x_{0} in U_{+}, the sequence x_{k} is monotonically decreasing to α.
According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about x_{n} is:

$f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + R_1 \quad\quad (1)$

where the Lagrange form of the Taylor series expansion remainder is

$R_1 = \frac{1}{2} f''(\xi_n) (\alpha - x_n)^2,$

where ξ_{n} is in between x_{n} and α.

Since α is the root, (1) becomes:

$0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \frac{1}{2} f''(\xi_n) (\alpha - x_n)^2 \quad\quad (2)$

Dividing equation (2) by $f'(x_n)$ and rearranging gives

$\frac{f(x_n)}{f'(x_n)} + (\alpha - x_n) = \frac{-f''(\xi_n)}{2 f'(x_n)} (\alpha - x_n)^2 \quad\quad (3)$

Remembering that x_{n+1} is defined by

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad\quad (4)$

one finds that

$\alpha - x_{n+1} = \frac{-f''(\xi_n)}{2 f'(x_n)} (\alpha - x_n)^2.$

That is, writing $\epsilon_n = \alpha - x_n$ for the error at step n,

$\epsilon_{n+1} = \frac{-f''(\xi_n)}{2 f'(x_n)} \epsilon_n^2 \quad\quad (5)$

Taking the absolute value of both sides gives

$\left| \epsilon_{n+1} \right| = \frac{\left| f''(\xi_n) \right|}{2 \left| f'(x_n) \right|} \epsilon_n^2 \quad\quad (6)$
Equation (6) shows that the rate of convergence is quadratic if the following conditions are satisfied:

1. $f'(x) \ne 0$, for all $x \in I$, where $I$ is the interval $[\alpha - |\epsilon_0|, \alpha + |\epsilon_0|]$;
2. $f''(x)$ is continuous, for all $x \in I$;
3. $x_0$ is sufficiently close to the root α.

The term sufficiently close in this context means the following:

(a) the Taylor approximation is accurate enough that we can ignore higher-order terms;
(b) $\frac{1}{2} \left| \frac{f''(x_n)}{f'(x_n)} \right| < C \left| \frac{f''(\alpha)}{f'(\alpha)} \right|$, for some $C < \infty$;
(c) $C \left| \frac{f''(\alpha)}{f'(\alpha)} \right| \left| \epsilon_n \right| < 1$, for $n \in \mathbb{Z}^+ \cup \{0\}$ and $C$ satisfying condition (b).
Finally, (6) can be expressed in the following way:

$\left| \epsilon_{n+1} \right| \le M \epsilon_n^2 \quad\quad (7)$

where M is the supremum of the variable coefficient of $\epsilon_n^2$ on the interval $I$ defined in condition 1, that is:

$M = \sup_{x \in I} \frac{1}{2} \left| \frac{f''(x)}{f'(x)} \right|.$

The initial point $x_0$ has to be chosen such that conditions 1 through 3 are satisfied, where the third condition requires that $M \left| \epsilon_0 \right| < 1.$
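A quick numerical check of this quadratic error decay (an illustrative script, using f(x) = x² − 2, whose root is √2):

```python
import math

# For f(x) = x**2 - 2, the root is a = sqrt(2) and the limit of
# |eps_{n+1}| / eps_n**2 is |f''(a)| / (2 |f'(a)|) = 1/(2*sqrt(2)).
a = math.sqrt(2)
x = 2.0
for n in range(4):
    eps = a - x
    x = x - (x * x - 2) / (2 * x)      # Newton step
    print(n, abs(a - x) / eps ** 2)    # 0.25, 0.333..., then ~0.3535
```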
Consider the problem of finding the square root of a number. There are many methods of computing square roots, and Newton's method is one.
For example, if one wishes to find the square root of 612, this is equivalent to finding the solution to

$x^2 = 612.$
The function to use in Newton's method is then

$f(x) = x^2 - 612,$
with derivative

$f'(x) = 2x.$
With an initial guess of 10, the sequence given by Newton's method is
$\begin{align}
x_1 &= x_0 - \frac{f(x_0)}{f'(x_0)} = 10 - \frac{10^2 - 612}{2 \cdot 10} = 35.6 \\
x_2 &= x_1 - \frac{f(x_1)}{f'(x_1)} = 35.6 - \frac{35.6^2 - 612}{2 \cdot 35.6} = \underline{2}6.3955056 \\
x_3 &= \underline{24.7}906355 \\
x_4 &= \underline{24.7386}883 \\
x_5 &= \underline{24.7386338}
\end{align}$
where the correct digits are underlined. With only a few iterations one can obtain a solution accurate to many decimal places.
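The same sequence can be reproduced with a few lines of Python (an illustrative sketch):

```python
x = 10.0                          # initial guess for sqrt(612)
for n in range(1, 6):
    x = x - (x * x - 612) / (2 * x)   # Newton step for f(x) = x**2 - 612
    print(n, x)
# 1 35.6
# 2 26.395505617977526
# 3 24.790635492455475
# 4 24.738688294075324
# 5 24.738633753705965
```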
Consider the problem of finding the positive number x with cos(x) = x^{3}. We can rephrase that as finding the zero of f(x) = cos(x) − x^{3}. We have f'(x) = −sin(x) − 3x^{2}. Since cos(x) ≤ 1 for all x and x^{3} > 1 for x > 1, we know that our zero lies between 0 and 1. We try a starting value of x_{0} = 0.5. (Note that a starting value of 0 will lead to an undefined result, showing the importance of using a starting point that is close to the zero.)
$\begin{align}
x_1 &= x_0 - \frac{f(x_0)}{f'(x_0)} = 0.5 - \frac{\cos(0.5) - (0.5)^3}{-\sin(0.5) - 3(0.5)^2} = 1.112141637097 \\
x_2 &= \underline{0.}909672693736 \\
x_3 &= \underline{0.86}7263818209 \\
x_4 &= \underline{0.86547}7135298 \\
x_5 &= \underline{0.8654740331}11 \\
x_6 &= \underline{0.865474033102}
\end{align}$
The correct digits are underlined in the above example. In particular, x_{6} is correct to the number of decimal places given. We see that the number of correct digits after the decimal point increases from 2 (for x_{3}) to 5 and 10, illustrating the quadratic convergence.
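If SciPy is available, its `scipy.optimize.newton` routine performs the same iteration; a brief sketch, assuming SciPy's standard interface:

```python
import math
from scipy.optimize import newton

# Solve cos(x) = x**3 by finding the root of f(x) = cos(x) - x**3.
root = newton(func=lambda x: math.cos(x) - x**3,
              x0=0.5,
              fprime=lambda x: -math.sin(x) - 3 * x**2)
print(root)  # ≈ 0.865474033102
```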
Newton's method is only guaranteed to converge if certain conditions are satisfied, so depending on the shape of the function and the starting point it may or may not converge.
In some cases the conditions on the function necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval where the method converges. In such cases a different method, such as bisection, should be used to obtain a better estimate for the zero to use as an initial point.
Consider the function:

$f(x) = 1 - x^2.$
It has a maximum at x = 0 and solutions of f(x) = 0 at x = ±1. If we start iterating from the stationary point x_{0} = 0 (where the derivative is zero), x_{1} will be undefined, since the tangent at (0, 1) is parallel to the x-axis:

$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 0 - \frac{1}{0}.$
The same issue occurs if, instead of the starting point, any iteration point is stationary. Even if the derivative is small but not zero, the next iteration will be a far worse approximation.
For some functions, some starting points may enter an infinite cycle, preventing convergence. Let

$f(x) = x^3 - 2x + 2$
and take 0 as the starting point. The first iteration produces 1 and the second iteration returns to 0 so the sequence will oscillate between the two without converging to a root. In general, the behavior of the sequence can be very complex. (See Newton fractal.)
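This two-cycle is easy to observe numerically (an illustrative sketch):

```python
# Newton iteration for f(x) = x**3 - 2*x + 2 starting at 0
# oscillates between 0 and 1 instead of converging to a root.
x = 0.0
for _ in range(6):
    x = x - (x**3 - 2 * x + 2) / (3 * x**2 - 2)
    print(x)   # prints 1.0, 0.0, 1.0, 0.0, ...
```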
If the function is not continuously differentiable in a neighborhood of the root then it is possible that Newton's method will always diverge and fail, unless the solution is guessed on the first try.
A simple example of a function where Newton's method diverges is the cube root, which is continuous and infinitely differentiable, except for x = 0, where its derivative is undefined (this, however, does not affect the algorithm, since it will never require the derivative if the solution is already found):

$f(x) = \sqrt[3]{x}.$
For any iteration point x_{n}, the next iteration point will be:

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^{1/3}}{\tfrac{1}{3} x_n^{-2/3}} = x_n - 3 x_n = -2 x_n.$
The algorithm overshoots the solution and lands on the other side of the y-axis, farther away than it initially was; applying Newton's method actually doubles the distance from the solution at each iteration.
In fact, the iterations diverge to infinity for every $f(x) = |x|^\alpha$, where $0 < \alpha < \tfrac{1}{2}$. In the limiting case of $\alpha = \tfrac{1}{2}$ (square root), the iterations will oscillate indefinitely between the points x_{0} and −x_{0}, so they do not converge in this case either.
If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function
$f(x) = \begin{cases} 0 & \text{if } x = 0, \\ x + x^2 \sin\left(\dfrac{2}{x}\right) & \text{if } x \neq 0. \end{cases}$
Its derivative is:
$f'(x) = \begin{cases} 1 & \text{if } x = 0, \\ 1 + 2x \sin\left(\dfrac{2}{x}\right) - 2 \cos\left(\dfrac{2}{x}\right) & \text{if } x \neq 0. \end{cases}$
Within any neighborhood of the root, this derivative keeps changing sign as x approaches 0 from the right (or from the left) while f(x) ≥ x − x^{2} > 0 for 0 < x < 1.
So f(x)/f'(x) is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though: the function is differentiable (and thus continuous) everywhere; the derivative at the root is nonzero; f is infinitely differentiable except at the root; and the derivative is bounded in a neighborhood of the root (unlike f(x)/f'(x)).
In some cases the iterates converge but do not converge as quickly as promised. In these cases simpler methods converge just as quickly as Newton's method.
If the first derivative is zero at the root, then convergence will not be quadratic. Indeed, let

$f(x) = x^2,$
then $f'(x) = 2x$ and consequently $x - f(x)/f'(x) = x/2$. So convergence is not quadratic, even though the function is infinitely differentiable everywhere.
Similar problems occur even when the root is only "nearly" double. For example, let

$f(x) = x^2 (x - 1000) + 1.$
Then the first few iterates starting at x_{0} = 1 are 1, 0.500250376, 0.251062828, 0.127507934, 0.067671976, 0.041224176, 0.032741218, 0.031642362; it takes six iterations to reach a point where the convergence appears to be quadratic.
If there is no second derivative at the root, then convergence may fail to be quadratic. Indeed, let

$f(x) = x + x^{4/3}.$
Then

$f'(x) = 1 + \tfrac{4}{3} x^{1/3}.$
And

$f''(x) = \tfrac{4}{9} x^{-2/3},$

except when $x = 0$, where it is undefined. Given $x_n$,

$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = \frac{\tfrac{1}{3} x_n^{4/3}}{1 + \tfrac{4}{3} x_n^{1/3}},$
which has approximately 4/3 times as many bits of precision as $x_n$ has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though the function is continuously differentiable everywhere, the derivative is not zero at the root, and $f$ is infinitely differentiable except at the desired root.
When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundary of the basins of attraction is a fractal. In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge.
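A hedged sketch of how such basins can be computed for f(z) = z³ − 1 (the grid, iteration count, and tolerance are arbitrary illustrative choices):

```python
import cmath

# The three cube roots of unity, the zeroes of f(z) = z**3 - 1.
roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def basin(z, max_iter=40, tol=1e-9):
    """Return the index of the root that Newton's method converges
    to from z, or -1 if it fails to converge."""
    for _ in range(max_iter):
        if abs(z) < tol:                 # derivative 3z**2 vanishes
            return -1
        z = z - (z**3 - 1) / (3 * z**2)  # Newton step
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return -1

# Sample a coarse grid; adjacent samples near the basin boundary
# land in different basins (the Newton fractal).
for y in range(-2, 3):
    print([basin(complex(x, y) + 0.01) for x in range(-2, 3)])
```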
One may also use Newton's method to solve systems of k (nonlinear) equations, which amounts to finding the zeroes of continuously differentiable functions F : R^{k} → R^{k}. In the formulation given above, one then has to left-multiply by the inverse of the k-by-k Jacobian matrix J_{F}(x_{n}) instead of dividing by f'(x_{n}). Rather than actually computing the inverse of this matrix, one can save time by solving the system of linear equations

$J_F(x_n) (x_{n+1} - x_n) = -F(x_n)$
for the unknown x_{n+1} − x_{n}. Again, this method only works if the initial value x_{0} is close enough to the true zero. Typically, a wellbehaved region is located first with some other method and Newton's method is then used to "polish" a root which is already known approximately.
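A compact multivariate sketch using NumPy (the example system and stopping rule are illustrative assumptions):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0 with Jacobian J, solving
    J(x_n) (x_{n+1} - x_n) = -F(x_n) instead of inverting J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Example: intersect the circle x^2 + y^2 = 1 with the line y = x.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1, v[1] - v[0]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [-1.0, 1.0]])
print(newton_system(F, J, [1.0, 0.5]))  # ≈ [0.70710678, 0.70710678]
```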
Another generalization is Newton's method to find a root of a function F defined in a Banach space. In this case the formulation is

$X_{n+1} = X_n - \left[ F'(X_n) \right]^{-1} F(X_n),$
where $F'(X_n)$ is the Fréchet derivative evaluated at $X_n$. One needs the Fréchet derivative to be boundedly invertible at each $X_n$ for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem.
