Convergence conditions, line search algorithms and trust region implementations for the Polak–Ribière conjugate gradient method

We study globally convergent implementations of the Polak–Ribière (PR) conjugate gradient method for the unconstrained minimization of continuously differentiable functions. More specifically, we first state sufficient convergence conditions, which imply that limit points produced by the PR iteration are stationary points of the objective function, and we prove that these conditions are satisfied, in particular, when the objective function has some generalized convexity property and exact line searches are performed. In the general case, we show that the convergence conditions can be enforced by means of various inexact line search schemes in which, in addition to the usual acceptance criteria, further conditions are imposed on the stepsize. We then define a new trust region implementation, which is compatible with the behavior of the PR method in the quadratic case and may perform different line searches depending on the norm of the search direction. In this framework, we also show that it is possible to define globally convergent modified PR iterations that permit exact line searches at every iteration. Finally, we report the results of numerical experiments on a set of large-scale problems.
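To illustrate the kind of iteration under study, the following is a minimal sketch of a PR conjugate gradient loop with a backtracking Armijo line search. It uses the common PR+ nonnegativity truncation and a steepest-descent restart as a descent safeguard; these choices and all parameter values are illustrative assumptions, not the specific acceptance criteria or stepsize conditions analyzed in the paper.

```python
import numpy as np

def polak_ribiere_cg(f, grad, x0, tol=1e-6, max_iter=200):
    """Sketch of a Polak-Ribiere CG method with an Armijo backtracking
    line search (illustrative parameters; not the paper's schemes)."""
    x = x0.astype(float)
    g = grad(x)
    d = -g                      # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search (hypothetical constants)
        alpha, c1, rho = 1.0, 1e-4, 0.5
        while f(x + alpha * d) > f(x) + c1 * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Polak-Ribiere coefficient, truncated at zero (PR+ variant)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:   # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic, such an iteration converges to the unique minimizer; the inexact line search and the restart safeguard are what the paper's convergence conditions formalize and strengthen.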
