BFGS Method
This module demonstrates the BFGS method for minimizing a nonlinear
function in two dimensions. Given an approximate solution, a new
approximation is obtained as the minimizer of a local quadratic model
of the objective function. The step to the new point is found by
solving a linear system whose matrix is an approximation to the
Hessian of the objective function and whose right-hand side is the
negative of its gradient. The process is repeated until convergence,
which is usually fairly rapid.
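In symbols, with a subscript k marking the iteration (a notational
convenience here, not labeling used in the module itself), the local
quadratic model and the step it yields are

    f(x_k + s) ≈ f(x_k) + ∇f(x_k)^T s + (1/2) s^T B_k s,

which is minimized over s by solving

    B_k s_k = −∇f(x_k),    x_{k+1} = x_k + s_k,

where B_k is the current approximation to the Hessian matrix.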
The user selects a problem either by choosing a preset example or by
typing in a desired objective function f(x, y). Contours of the
objective function are drawn on the plot. An initial guess
(x, y) can be selected either by typing it in or by clicking on the
plot. The steps of the BFGS method are then carried out sequentially
by repeatedly clicking on NEXT or on the currently highlighted step.
The current point (x, y) is indicated by a bullet on the plot, and
all values are also shown numerically in the table below. At each
iteration, the quasi-Newton step s is computed by solving the linear
system B s = −∇f, where B is the approximate Hessian matrix of f.
The approximate solution is updated by s, the approximate Hessian
matrix B is updated based on the difference y in successive values of
the gradient, and the process is then repeated. If the starting guess
is close enough to the true minimum, then the BFGS method usually
converges to it, typically with a superlinear convergence rate.
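As a concrete illustration, the following minimal Python/NumPy sketch
carries out this full-step iteration (no line search), including the
standard BFGS rank-two update of B as in Heath's Algorithm 6.5. The
sample objective, starting point, and tolerance are illustrative
choices, not the module's presets.

import numpy as np

# Full-step BFGS iteration: solve B s = -grad f for the step, update x,
# then update the approximate Hessian B from the change y in the gradient.
def bfgs(grad, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                 # initial approximate Hessian
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:    # gradient small: declare convergence
            break
        s = np.linalg.solve(B, -g)     # quasi-Newton step from B s = -grad f
        x = x + s                      # update the approximate solution
        g_new = grad(x)
        y = g_new - g                  # difference in successive gradients
        Bs = B @ s
        # BFGS rank-two update of the approximate Hessian
        B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        g = g_new
    return x

# Example (illustrative): f(x, y) = 0.5 x^2 + 2.5 y^2 has gradient
# (x, 5 y) and its minimum at the origin.
grad = lambda v: np.array([v[0], 5.0 * v[1]])
print(bfgs(grad, [5.0, 1.0]))          # converges to (0, 0)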
Reference: Michael T. Heath, Scientific Computing: An Introductory
Survey, 2nd edition, McGraw-Hill, New York, 2002. See Section 6.5.5,
especially Algorithm 6.5 and Example 6.13.
Developers: Jeffrey Naisbitt and Michael Heath