On the convergence of the Newton/log-barrier method

Abstract. In the Newton/log-barrier method, Newton steps are taken on the log-barrier function for a fixed value of the barrier parameter until a certain convergence criterion is satisfied. The barrier parameter is then decreased and the Newton process is repeated. A naive analysis indicates that Newton's method does not exhibit superlinear convergence to the minimizer of each instance of the log-barrier function until it reaches a very small neighborhood, namely within O(μ^2) of the minimizer, where μ is the barrier parameter. By analyzing the structure of the barrier Hessian and gradient in terms of the subspace of active constraint gradients and the associated null space, we show that this neighborhood is in fact much larger, of size O(μ^σ) for any σ ∈ (1, 2], thus explaining why reasonably fast local convergence can be attained in practice. Moreover, we show that the overall convergence rate of the Newton/log-barrier algorithm is superlinear in the number of function/derivative evaluations, provided that the nonlinear program is formulated with a linear objective and that the schedule for decreasing the barrier parameter is related in a certain way to the step length and convergence criteria for each Newton process.
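The two-loop structure described above (an inner Newton loop for fixed μ, then a reduction of μ) can be sketched on a toy one-dimensional problem. This is an illustrative sketch only, not the paper's algorithm or analysis: it minimizes f(x) = x subject to x ≥ 1, for which the barrier function B(x; μ) = x − μ log(x − 1) has minimizer x = 1 + μ; the convergence tolerance |B'| ≤ μ and the reduction factor θ are assumed choices.

```python
def newton_log_barrier(mu=1.0, theta=0.1, mu_min=1e-8, x=3.0):
    """Minimize f(x) = x subject to x >= 1 via a log-barrier method.

    For fixed mu, B(x; mu) = x - mu*log(x - 1), with
    B'(x)  = 1 - mu/(x - 1)  and  B''(x) = mu/(x - 1)**2 > 0,
    so the exact minimizer of each barrier subproblem is x = 1 + mu.
    """
    while mu > mu_min:
        # Inner loop: damped Newton steps on B(.; mu) until |B'| is small.
        for _ in range(100):
            g = 1.0 - mu / (x - 1.0)          # barrier gradient B'
            h = mu / (x - 1.0) ** 2           # barrier Hessian B''
            step = g / h                      # Newton step
            t = 1.0
            while x - t * step <= 1.0:        # backtrack to stay strictly
                t *= 0.5                      # feasible (x > 1)
            x -= t * step
            if abs(g) <= mu:                  # inner stopping test (assumed)
                break
        mu *= theta                           # outer loop: shrink barrier parameter
    return x

print(newton_log_barrier())  # approaches the solution x* = 1 as mu -> 0
```

As μ shrinks, the barrier minimizers 1 + μ trace a path to the true solution x* = 1; the paper's analysis concerns how large a neighborhood around each such minimizer admits fast Newton convergence.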
