Optimization of unconstrained functions with sparse Hessian matrices—Newton-type methods

Newton-type methods for unconstrained optimization problems have been very successful when coupled with a modified Cholesky factorization to account for the possible lack of positive definiteness in the Hessian matrix. In this paper we discuss the application of these methods to large problems whose sparse Hessian structure is known a priori.

Quite often it is difficult, if not impossible, to obtain an analytic representation of the Hessian matrix, and determining it by the standard method of finite differences is costly in terms of gradient evaluations for large problems. Automatic procedures that reduce the number of gradient evaluations by exploiting sparsity are examined, and a new procedure is suggested.

Once a sparse approximation to the Hessian matrix has been obtained, there still remains the problem of solving a sparse linear system of equations at each iteration. A modified Cholesky factorization can be used; however, many additional nonzeros (fill-in) may be created in the factors, and storage problems may arise. One way of approaching this problem is to ignore fill-in in a systematic manner. Such techniques are called partial factorization schemes. Various existing partial factorization schemes are analyzed and three new ones are developed.

The above algorithms were tested on a set of problems. The overall conclusion was that these methods perform well in practice.
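The role of the modified Cholesky factorization in a Newton-type method can be illustrated with a simple stand-in: if the Hessian is not positive definite, factor H + tau*I for a small shift tau instead, which guarantees a descent direction. This is a minimal sketch of the idea only, not the specific (Gill–Murray style) modification the paper builds on; the function name and the geometric shift schedule are illustrative assumptions.

```python
import numpy as np

def cholesky_with_added_identity(H, beta=1e-3, max_tries=60):
    """Hypothetical sketch: find a shift tau >= 0 such that H + tau*I is
    positive definite, and return its Cholesky factor together with tau.
    A crude stand-in for a modified Cholesky factorization."""
    # Start from a shift suggested by the most negative diagonal entry.
    tau = 0.0 if np.min(np.diag(H)) > 0 else -np.min(np.diag(H)) + beta
    for _ in range(max_tries):
        try:
            # numpy raises LinAlgError when the matrix is not positive definite
            return np.linalg.cholesky(H + tau * np.eye(H.shape[0])), tau
        except np.linalg.LinAlgError:
            tau = max(2.0 * tau, beta)  # grow the shift geometrically
    raise RuntimeError("could not make H positive definite")
```

With the factor in hand, the Newton step solves (H + tau*I) p = -g, so p is always a descent direction even when the true Hessian is indefinite.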
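The gradient-saving idea behind the automatic estimation procedures can be sketched as follows: columns of the Hessian that share no nonzero row can be estimated from a single gradient difference, so one groups structurally orthogonal columns and performs one gradient evaluation per group rather than per column. The greedy grouping below is an illustrative assumption in the spirit of the estimation literature the paper draws on, not the paper's own new procedure.

```python
import numpy as np

def group_columns(pattern):
    """Greedily group columns so no two columns in a group share a nonzero
    row; one gradient difference then recovers an entire group at once."""
    n = pattern.shape[1]
    groups = []  # each entry: (column indices, set of rows already occupied)
    for j in range(n):
        rows = set(np.nonzero(pattern[:, j])[0])
        for cols, occupied in groups:
            if not (rows & occupied):  # structurally orthogonal to the group
                cols.append(j)
                occupied |= rows
                break
        else:
            groups.append(([j], rows))
    return [cols for cols, _ in groups]

def estimate_hessian(grad, x, pattern, h=1e-6):
    """Finite-difference Hessian estimate using one gradient evaluation
    per column group instead of one per column."""
    n = len(x)
    H = np.zeros((n, n))
    g0 = grad(x)
    for cols in group_columns(pattern):
        d = np.zeros(n)
        d[cols] = h  # perturb all columns of the group simultaneously
        diff = (grad(x + d) - g0) / h
        for j in cols:
            rows = np.nonzero(pattern[:, j])[0]
            H[rows, j] = diff[rows]  # unscramble by sparsity pattern
    return 0.5 * (H + H.T)  # symmetrize

# Illustrative tridiagonal example (quadratic, so grad is linear)
A = np.diag([4.0] * 5) + np.diag([1.0] * 4, 1) + np.diag([1.0] * 4, -1)
pattern = A != 0

def grad(x):
    return A @ x
```

For this 5-variable tridiagonal pattern the greedy scheme needs only 3 gradient differences instead of 5; the savings grow with the problem size for fixed bandwidth.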
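The "ignore fill-in systematically" idea can also be sketched: run a Cholesky factorization but discard any entry of the factor that falls outside a prescribed sparsity pattern, in the spirit of incomplete factorizations such as IC(0). This dense-storage sketch assumes the restricted pivots stay positive (e.g. a diagonally dominant matrix); it is not one of the paper's three new partial factorization schemes.

```python
import numpy as np

def partial_cholesky(A, pattern):
    """Cholesky factorization restricted to a given lower-triangular
    sparsity pattern: fill-in outside `pattern` is simply discarded.
    Assumes the restricted pivots remain positive."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        d = A[j, j] - np.sum(L[j, :j] ** 2)
        L[j, j] = np.sqrt(d)  # assumed positive (e.g. diagonal dominance)
        for i in range(j + 1, n):
            if pattern[i, j]:  # keep only entries inside the pattern
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
            # entries outside the pattern (fill-in) are ignored
    return L
```

When the true factor has no fill-in (as for a tridiagonal matrix) the partial factorization is exact; otherwise L @ L.T only approximates A, but L is guaranteed to fit in the storage reserved for the original pattern, which is the point of the scheme.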
