Interactive Educational Modules in
Scientific Computing

Newton's Method

This module demonstrates Newton's method for minimizing a nonlinear function. Given an approximate solution, a new approximate solution is obtained as the minimum of a local quadratic approximation to the objective function. The step to the new point is found by solving a linear system whose matrix is the Hessian of the objective function and whose right-hand side is the negative of its gradient. The process is repeated until convergence, which is usually very rapid.
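
Written out in standard notation (a sketch for reference, not part of the module itself, with x_k denoting the current iterate), the local quadratic model is

    f(x_k + s) \approx f(x_k) + \nabla f(x_k)^T s + \tfrac{1}{2}\, s^T H_f(x_k)\, s,

and setting its gradient with respect to s to zero yields the Newton step and update

    H_f(x_k)\, s_k = -\nabla f(x_k), \qquad x_{k+1} = x_k + s_k.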

The user selects a problem either by choosing a preset example or by typing in a desired objective function f(x, y). Contours of the objective function are drawn on the plot. An initial guess (x, y) can be selected either by typing it in or by clicking on the plot. The steps of Newton's method are then carried out sequentially by repeatedly clicking on NEXT or on the currently highlighted step. The current point (x, y) is indicated by a bullet on the plot, and all values are also shown numerically in the table below. At each iteration, the Newton step s is computed by solving the linear system H s = −∇f, where H is the Hessian matrix of f. The approximate solution is then updated by adding s, and the process is repeated. If the starting guess is close enough to the true minimum, Newton's method usually converges to it, typically at a quadratic convergence rate. For the special case of a quadratic objective function, Newton's method converges in a single iteration.
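
The iteration the module carries out can be sketched in a few lines of Python. The sketch below assumes NumPy; the objective, starting guess, and tolerance are chosen here only for illustration (a simple quadratic, so one Newton step suffices) and are not necessarily the module's presets.

    import numpy as np

    def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=20):
        """Newton's method for minimization: repeatedly solve H s = -grad f."""
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:        # gradient nearly zero: stop
                break
            s = np.linalg.solve(hess(x), -g)   # Newton step from H s = -grad f
            x = x + s                          # update the approximate solution
        return x, k

    # Illustrative quadratic objective: f(x, y) = 0.5*x**2 + 2.5*y**2
    grad = lambda v: np.array([v[0], 5.0 * v[1]])
    hess = lambda v: np.array([[1.0, 0.0], [0.0, 5.0]])

    x_min, iters = newton_minimize(grad, hess, x0=[5.0, 1.0])
    print(x_min, iters)   # approximately [0, 0], reached after a single step

Because the objective in this example is itself quadratic, its local quadratic model is exact, so the first step lands on the minimizer, illustrating the single-iteration behavior noted above.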

Reference: Michael T. Heath, Scientific Computing, An Introductory Survey, 2nd edition, McGraw-Hill, New York, 2002. See Section 6.5.3, especially Algorithm 6.4 and Example 6.12.

Developers: Jeffrey Naisbitt and Michael Heath