Hybridization of accelerated gradient descent method

We present a gradient descent algorithm with a line search procedure for solving unconstrained optimization problems, defined by applying the Picard-Mann hybrid iterative process to the accelerated gradient descent SM method described in Stanimirović and Miladinović (Numer. Algor. 54, 503–520, 2010). Using the merged features of both analyzed models, we show that the new accelerated gradient descent model converges linearly and is faster than the original SM method, which is confirmed through the displayed numerical test results. Three main properties are tested: the number of iterations, CPU time, and the number of function evaluations. The efficiency of the proposed iteration is examined for several values of the correction parameter introduced in Khan (2013).
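To make the construction concrete, the following minimal Python sketch applies Khan's Picard-Mann hybrid scheme, y_k = (1 - alpha_k) x_k + alpha_k T(x_k), x_{k+1} = T(y_k), to an SM-style gradient step of the form x - t * gamma^{-1} * grad f(x). The function names are ours, and the fixed step size t and acceleration parameter gamma are illustrative simplifications: in the method itself, t_k comes from a line search and gamma_k is recomputed at every iteration.

    import numpy as np

    def picard_mann_step(x, T, alpha):
        # One Picard-Mann hybrid step (Khan, 2013):
        #   y_k     = (1 - alpha_k) x_k + alpha_k T(x_k)
        #   x_{k+1} = T(y_k)
        y = (1.0 - alpha) * x + alpha * T(x)
        return T(y)

    # Illustrative test problem: f(x) = 0.5 ||x||^2, so grad f(x) = x.
    grad = lambda x: x

    def sm_like_step(x, t=0.5, gamma=1.0):
        # SM-style gradient step x - t * gamma^{-1} * grad f(x); t and
        # gamma are held fixed here purely for illustration.
        return x - (t / gamma) * grad(x)

    x = np.array([4.0, -2.0])
    for k in range(20):
        x = picard_mann_step(x, sm_like_step, alpha=0.5)
    print(x)  # converges toward the minimizer at the origin

Intuitively, for a constant correction parameter alpha the two composed steps act like a single gradient step with an enlarged effective step length, which is consistent with the reported speed-up over the plain SM iteration while preserving linear convergence.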

[1] R. Fletcher and C. M. Reeves, Function minimization by conjugate gradients, Comput. J., 1964.

[2] B. Molina and M. Raydan, Preconditioned Barzilai-Borwein method for the numerical solution of partial differential equations, Numer. Algorithms, 1996.

[3] C. Lemaréchal, A view of line-searches, 1981.

[4] N. Andrei, An unconstrained optimization test functions collection, 2008.

[5] R. T. Rockafellar, Convex Analysis, Princeton Landmarks in Mathematics and Physics, 1970.

[6] D. G. Luenberger and Y. Ye, Linear and Nonlinear Programming, Springer, 2016.

[7] S. H. Khan, A Picard-Mann hybrid iterative process, Fixed Point Theory Appl., 2013.

[8] A. Goldstein, On steepest descent, 1965.

[9] M. J. Petrović et al., A transformation of accelerated double step size method for unconstrained optimization, 2015.

[10] D. G. Luenberger, Linear and Nonlinear Programming, 1984.

[11] F. A. Potra et al., Efficient line search algorithm for unconstrained optimization, 1995.

[12] L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math., 1966.

[13] M. J. Petrović, An accelerated double step size model in unconstrained optimization, Appl. Math. Comput., 2015.

[14] C. Zălinescu, Convex Analysis in General Vector Spaces, 2002.

[15] W. R. Mann, Mean value methods in iteration, Proc. Amer. Math. Soc., 1953.

[16] Z.-J. Shi et al., Convergence of line search methods for unconstrained optimization, Appl. Math. Comput., 2004.

[17] M. J. Petrović et al., Accelerated double direction method for solving unconstrained optimization problems, 2014.

[18] S. Ishikawa, Fixed points by a new iteration method, Proc. Amer. Math. Soc., 1974.

[19] H. H. Bauschke and P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces, CMS Books in Mathematics, Springer, 2011.

[20] P. Wolfe, Convergence conditions for ascent methods, SIAM Rev., 1969.

[21] D. Blackwell, An analog of the minimax theorem for vector payoffs, Pacific J. Math., 1956.

[22] N. Andrei, An acceleration of gradient descent algorithm with backtracking for unconstrained optimization, Numer. Algorithms, 2006.

[23] Y.-H. Dai and L.-Z. Liao, R-linear convergence of the Barzilai and Borwein gradient method, IMA J. Numer. Anal., 2002.

[24] W. Sun and Y.-X. Yuan, Optimization Theory and Methods: Nonlinear Programming, Springer, 2010.

[25] J. J. Moré and D. J. Thuente, Line search algorithms with guaranteed sufficient decrease, ACM Trans. Math. Softw., 1994.

[26] Y. Nesterov, A method for solving the convex programming problem with convergence rate O(1/k^2), 1983.

[27] J. A. Clarkson, Uniformly convex spaces, Trans. Amer. Math. Soc., 1936.

[28] P. S. Stanimirović and M. B. Miladinović, Accelerated gradient descent methods with line search, Numer. Algorithms 54, 503–520, 2010.

[29] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, 1970.

[30] P. Wolfe, Convergence conditions for ascent methods. II: Some corrections, SIAM Rev., 1971.

[31] É. Picard, Mémoire sur la théorie des équations aux dérivées partielles et la méthode des approximations successives, J. Math. Pures Appl., 1890.