Interactive Educational Modules in Scientific Computing

Steepest Descent

This module demonstrates the method of steepest descent for minimizing a nonlinear function in two dimensions. From a given starting point, a one-dimensional minimization of the objective function is performed along the negative of its gradient vector. The process is then repeated from the new point until convergence, which can be very slow.
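In symbols (standard notation for the method, not quoted from the module itself), one iteration has the form

    x_{k+1} = x_k − α_k ∇f(x_k),   where α_k minimizes the one-dimensional function φ(α) = f(x_k − α ∇f(x_k)) over α ≥ 0.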

The user selects a problem either by choosing a preset example or by typing in a desired objective function f(x, y). Contours of the objective function are drawn on the plot. An initial guess (x, y) can be selected either by typing it in or by clicking on the plot. The steps of the method of steepest descent are then carried out sequentially by repeatedly clicking on NEXT or on the currently highlighted step. The current point (x, y) is indicated by a bullet on the plot, and all values are also shown numerically in the table below.

At each iteration, the negative gradient −∇f at the current point is taken as the search direction. The point that minimizes f along the line from the current point in that direction is taken as the next approximate solution, and the process is then repeated. If the starting guess is close enough to the true minimum, the method of steepest descent converges to it, typically at a linear rate.
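For readers who want to experiment outside the module, the following is a minimal sketch of steepest descent with an exact line search, written in Python. It is not the module's implementation: the function steepest_descent, the tolerance, and the example quadratic objective are illustrative assumptions, and scipy.optimize.minimize_scalar is used only as a convenient one-dimensional minimizer.

import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=100):
    # Minimize f starting from x0 by repeatedly performing a one-dimensional
    # minimization of f along the negative gradient direction.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:            # stop when the gradient is nearly zero
            break
        phi = lambda alpha: f(x - alpha * g)   # f restricted to the search line
        alpha = minimize_scalar(phi).x         # 1-D minimization along -grad f
        x = x - alpha * g                      # step to the line minimum
    return x, k

# Illustrative quadratic with elongated elliptical contours, which makes the
# slow linear convergence of steepest descent easy to observe.
f = lambda x: 0.5 * x[0]**2 + 2.5 * x[1]**2
grad = lambda x: np.array([x[0], 5.0 * x[1]])
x_min, iters = steepest_descent(f, grad, x0=[5.0, 1.0])
print(x_min, iters)

With an exact line search, consecutive search directions are orthogonal, which produces the characteristic zigzag path of the iterates toward the minimum.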

Reference: Michael T. Heath, Scientific Computing: An Introductory Survey, 2nd edition, McGraw-Hill, New York, 2002. See Section 6.5.2, especially Algorithm 6.3 and Example 6.11.

Developers: Jeffrey Naisbitt and Michael Heath