An adaptive conic trust-region method for unconstrained optimization

It is well known that, among current methods for unconstrained optimization, quasi-Newton methods equipped with a globalization strategy are among the most efficient, and they achieve local superlinear convergence. However, when the iterate is far from the solution, a quasi-Newton method may progress slowly on general unconstrained problems. In this article an adaptive conic trust-region method for unconstrained optimization is presented. Both the gradient information and the values of the objective function are used to construct the local model at the current iterate. Moreover, we define the concept of a super steepest descent direction and embed its information into the local model. The amount of computation per iteration of this adaptive algorithm is the same as that of a standard trust-region quasi-Newton method. Numerical results show that the modified method requires fewer iterations than the standard methods to reach the solution of the optimization problem. Global and local convergence of the method is also analyzed.
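To make the setting concrete, the standard conic local model replaces the usual quadratic approximation with the rational form m(s) = f + gᵀs/(1 − aᵀs) + sᵀBs/(2(1 − aᵀs)²), where a is the so-called horizon vector; taking a = 0 recovers the quadratic model. The Python sketch below shows this model inside a bare-bones trust-region loop. It is only an illustration of the general framework, not the paper's method: the identity Hessian approximation, the fixed horizon vector, and the crude steepest-descent scan used for the subproblem are all simplifying assumptions, whereas the paper's contribution is precisely an adaptive, interpolation-based choice of the model.

```python
import numpy as np

def conic_model(f0, g, B, a, s):
    # Conic local model m(s) = f0 + g.s/(1 - a.s) + 0.5*s.B.s/(1 - a.s)^2;
    # a = 0 recovers the ordinary quadratic model.
    gamma = 1.0 - a @ s
    return f0 + (g @ s) / gamma + 0.5 * (s @ B @ s) / gamma ** 2

def solve_subproblem(f0, g, B, a, delta, n=200):
    # Illustrative subproblem "solver": scan step lengths along the steepest
    # descent direction, keeping ||s|| <= delta and the conic denominator
    # safely positive.  A serious implementation would solve the subproblem
    # exactly via its equivalent quadratic form; this scan is only a demo.
    d = -g / (np.linalg.norm(g) + 1e-16)
    best_s, best_m = np.zeros_like(g), f0
    for t in np.linspace(0.0, delta, n + 1)[1:]:
        s = t * d
        if 1.0 - a @ s <= 0.1:      # stay well inside the conic horizon
            break
        m = conic_model(f0, g, B, a, s)
        if m < best_m:
            best_s, best_m = s, m
    return best_s, best_m

def conic_trust_region(f, grad, x, a=None, delta=1.0, tol=1e-8, max_iter=200):
    # Basic trust-region loop around the conic model.  For simplicity B is
    # held at the identity and the horizon vector a is fixed; an actual
    # conic quasi-Newton method would update both adaptively.
    a = np.zeros_like(x) if a is None else a
    B = np.eye(len(x))
    fx, gx = f(x), grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(gx) < tol:
            break
        s, m_pred = solve_subproblem(fx, gx, B, a, delta)
        f_new = f(x + s)
        pred = fx - m_pred
        rho = (fx - f_new) / pred if pred > 0 else -1.0
        if rho > 0.75:              # good model agreement: enlarge region
            delta = min(2.0 * delta, 1e3)
        elif rho < 0.25:            # poor agreement: shrink region
            delta *= 0.5
        if rho > 1e-4:              # accept the step
            x = x + s
            fx, gx = f(x), grad(x)
    return x, fx
```

With a = 0 the loop reduces to an ordinary quadratic trust-region method, which is why the paper's per-iteration cost can match that of a standard trust-region quasi-Newton method: the conic model only changes how the local approximation is built, not the overall iteration structure.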
