QN-like variable storage conjugate gradients

Both conjugate gradient (CG) and quasi-Newton (QN) methods are quite successful at minimizing smooth nonlinear functions of several variables, and each has its advantages. In particular, conjugate gradient methods require much less storage than a quasi-Newton code and therefore find application when storage is limited. They are, however, slower, so there have recently been attempts to combine CG and QN algorithms to obtain a method with good convergence properties and low storage requirements. One such method is the code CONMIN of Shanno and Phua; it has proven quite successful, but it has one limitation: it has no middle ground. It operates either as a quasi-Newton code using O(n²) storage locations or as a conjugate gradient code using 7n locations, but it cannot exploit the common situation in which more than 7n locations are available while a full quasi-Newton code would require too much storage. In this paper we present a way of looking at conjugate gradient algorithms that was in fact given by Shanno and Phua but that we carry further, emphasize, and clarify; this applies in particular to Beale's three-term recurrence relation. Using this point of view, we develop a new combined CG-QN algorithm that can use whatever storage is available, with CONMIN arising as a special case. We present numerical results demonstrating that the new algorithm is never worse than CONMIN and that it is almost always better when even a small amount of extra storage is provided.
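The storage figures above can be made concrete with the closely related limited-memory idea of Nocedal (reference [14]): instead of holding an n-by-n inverse-Hessian approximation, one stores only m update pairs (s, y) of length n and recovers the quasi-Newton direction with the two-loop recursion. The sketch below is an illustration of that generic limited-memory technique under the stated assumptions, not the variable-storage CG-QN algorithm of this paper; the function name and variables are mine.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: compute d = -H g, where H is the inverse-Hessian
    approximation implied by the m stored (s, y) pairs. Uses only O(mn)
    storage and work; the n-by-n matrix is never formed."""
    q = g.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * np.dot(s, q)
        alphas.append(a)
        q = q - a * y
    # Initial inverse-Hessian guess H0 = gamma * I (standard scaling choice).
    if s_list:
        gamma = np.dot(s_list[-1], y_list[-1]) / np.dot(y_list[-1], y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest.
    for s, y, rho, a in zip(s_list, y_list, rhos, reversed(alphas)):
        b = rho * np.dot(y, r)
        r = r + (a - b) * s
    return -r
```

With m pairs the storage is roughly 2mn locations, so m interpolates between a CG-like method at small m and a full quasi-Newton method as m grows, which is the kind of middle ground the abstract says CONMIN lacks. With m = 0 the recursion simply returns the steepest-descent direction -g.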

[1]  Roger Fletcher, et al.  A Rapidly Convergent Descent Method for Minimization, 1963, Comput. J.

[2]  C. M. Reeves, et al.  Function minimization by conjugate gradients, 1964, Comput. J.

[3]  J. Crank, et al.  Persistent discretization errors in partial differential equations of parabolic type, 1964, Comput. J.

[4]  Shmuel S. Oren, et al.  Optimal conditioning of self-scaling variable metric algorithms, 1976, Math. Program.

[5]  A. Perry.  A Modified Conjugate Gradient Algorithm for Unconstrained Nonlinear Optimization, 1975.

[6]  M. J. D. Powell, et al.  Restart procedures for the conjugate gradient method, 1977, Math. Program.

[7]  P. Toint.  Some numerical results using a sparse matrix updating formula in unconstrained optimization, 1978.

[8]  David F. Shanno, et al.  Conjugate Gradient Methods with Inexact Searches, 1978, Math. Oper. Res.

[9]  Albert G. Buckley, et al.  A combined conjugate-gradient quasi-Newton minimization algorithm, 1978, Math. Program.

[10]  D. Shanno, et al.  Numerical comparison of several variable-metric algorithms, 1978.

[11]  Albert G. Buckley, et al.  Extending the relationship between the conjugate gradient and BFGS algorithms, 1978, Math. Program.

[12]  L. Nazareth.  A Relationship between the BFGS and Conjugate Gradient Algorithms and Its Implications for New Algorithms, 1979.

[13]  David F. Shanno, et al.  Remark on "Algorithm 500: Minimization of Unconstrained Multivariate Functions [E4]", 1980, TOMS.

[14]  J. Nocedal.  Updating Quasi-Newton Matrices With Limited Storage, 1980.

[15]  A. Buckley.  A Portable Package for Testing Minimization Algorithms, 1982.