Trajectory following optimization by gradient transformation differential equations

For minimizing a scalar-valued function, we develop and investigate a family of gradient transformation differential equation algorithms. This family includes as special cases steepest descent, Newton's method, Levenberg-Marquardt, and a gradient-enhanced Newton algorithm that we develop. Using Rosenbrock's "banana" function, we study the stiffness of the gradient transformation family in terms of Lyapunov exponent time histories. For this example function, Newton's method and the Levenberg-Marquardt modification do not yield global asymptotic stability, whereas steepest descent does. However, Newton's method, started from an initial point where it does converge, is not stiff and is approximately 100 times as fast as steepest descent. In contrast, the gradient-enhanced Newton method is globally convergent, is not stiff, is approximately 25 times faster than Newton's method, and is approximately 2500 times faster than steepest descent.
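
The abstract does not state the family's defining equation, but trajectory-following schemes of this kind are commonly written as the differential equation xdot = -T(x) grad f(x), where the choice of transformation T(x) selects the member: the identity gives steepest descent, the inverse Hessian gives Newton's method, and the inverse of (H + lambda*I) gives a Levenberg-Marquardt-style modification. The Python sketch below integrates this assumed form on Rosenbrock's function for those three standard choices; the gradient-enhanced Newton transformation is specific to the paper and is not reproduced here, and the start point and integration settings are illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact formulation): integrate the
# gradient transformation ODE xdot = -T(x) @ grad f(x) on Rosenbrock's
# "banana" function for three standard choices of T(x).
import numpy as np
from scipy.integrate import solve_ivp

def f(x):
    # Rosenbrock's "banana" function
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

def hess(x):
    return np.array([
        [1200.0 * x[0] ** 2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

def rhs(transform):
    # Right-hand side of the trajectory-following ODE for a given T(x).
    def ode(t, x):
        return -transform(x) @ grad(x)
    return ode

transforms = {
    "steepest descent":            lambda x: np.eye(2),
    "Newton":                      lambda x: np.linalg.inv(hess(x)),
    "Levenberg-Marquardt (lam=1)": lambda x: np.linalg.inv(hess(x) + 1.0 * np.eye(2)),
}

x0 = np.array([-1.2, 1.0])  # conventional Rosenbrock start point (an assumption)
for name, T in transforms.items():
    # A stiff-capable integrator; Newton/LM trajectories may still fail to reach
    # the minimizer from some start points, as the abstract notes.
    sol = solve_ivp(rhs(T), (0.0, 50.0), x0, method="LSODA", rtol=1e-8, atol=1e-10)
    x_end = sol.y[:, -1]
    print(f"{name:30s} end point {x_end}  f = {f(x_end):.3e}")
```

Under this assumed form, comparing the end points and the integrator's step-size behavior across the three transformations gives a rough sense of the convergence and stiffness contrasts the abstract reports.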