GRADIENT METHOD WITH DYNAMICAL RETARDS FOR LARGE-SCALE OPTIMIZATION PROBLEMS

We consider a generalization of the gradient method with retards for the solution of large-scale unconstrained optimization problems. Recently, the gradient method with retards was introduced to find global minimizers of large-scale quadratic functions. The most interesting feature of this method is that it does not enforce a decrease in the objective function at every iteration, which allows fast local convergence. On the other hand, nonmonotone globalization strategies that preserve this local behavior in the nonquadratic case have proved to be very effective when associated with low-storage methods. In this work, the gradient method with retards is generalized and combined in a dynamical way with nonmonotone globalization strategies to obtain a new method for minimizing nonquadratic functions that can deal efficiently with large problems. Encouraging numerical experiments on well-known test problems are presented.
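To make the general scheme concrete, the following Python sketch combines a Barzilai-Borwein-type steplength computed from a delayed (retarded) iterate with the Grippo-Lampariello-Lucidi nonmonotone line search. It is a minimal illustration under stated assumptions, not the paper's exact algorithm: the random retard rule, the halving backtracking, and the parameter values (m, M, gamma) are all illustrative choices.

```python
import numpy as np

def gmr_nonmonotone(f, grad, x0, m=5, M=10, gamma=1e-4,
                    tol=1e-6, max_iter=1000, rng=None):
    """Sketch: gradient method with retards plus a nonmonotone
    (Grippo-Lampariello-Lucidi) globalization safeguard.
    All names and parameter values here are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    f_hist = [f(x)]            # recent f-values for the nonmonotone test
    s_hist, y_hist = [], []    # stored differences s_k = x_{k+1}-x_k, y_k = g_{k+1}-g_k
    alpha = 1.0 / max(np.linalg.norm(g), 1.0)   # safeguarded initial steplength
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        d = -g                 # steepest-descent direction
        # Nonmonotone test: compare against the max of the last
        # min(k, M) + 1 function values, not just f(x_k).
        f_ref = max(f_hist[-(M + 1):])
        lam = 1.0
        while f(x + lam * alpha * d) > f_ref + gamma * lam * alpha * (g @ d):
            lam *= 0.5         # simple halving; a safeguarded rule is used in practice
        x_new = x + lam * alpha * d
        g_new = grad(x_new)
        s_hist.append(x_new - x)
        y_hist.append(g_new - g)
        # Retard: pick the Barzilai-Borwein pair (s_j, y_j) from one of
        # the last m iterations (here chosen at random, as an illustration).
        j = rng.integers(max(0, len(s_hist) - m), len(s_hist))
        sj, yj = s_hist[j], y_hist[j]
        sty = sj @ yj
        alpha = (sj @ sj) / sty if sty > 0 else 1.0   # safeguard nonpositive curvature
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```

Delaying the Barzilai-Borwein pair typically makes the iterates highly nonmonotone, which is precisely why the acceptance test uses the maximum of the last M + 1 function values rather than a strict Armijo decrease: the relaxed condition preserves the fast local behavior of the retarded steps while still providing a globalization safeguard.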