Simple examples for the failure of Newton’s method with line search for strictly convex minimization
In this paper, two simple examples of a twice continuously differentiable strictly convex function $f$ are presented for which Newton's method with line search converges to a point where the gradient of $f$ is not zero. The first example uses a line search based on the Wolfe conditions. For the second example, a strictly convex function $f$ is defined, together with a sequence of descent directions for which exact line searches do not converge to the minimizer of $f$. Then $f$ is perturbed so that these search directions coincide with the Newton directions for the perturbed function, while the exact line searches are left invariant.
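The method under study, Newton's method combined with a line search enforcing the Wolfe conditions, can be sketched as follows. This is a minimal illustrative implementation, not the paper's counterexample construction: the test function used here (a hypothetical, well-behaved strictly convex choice) has a positive definite Hessian everywhere, so the iteration does converge to the minimizer, in contrast to the pathological functions the paper exhibits.

```python
import numpy as np

def f(x):
    # Hypothetical strictly convex test function (not from the paper);
    # its unique minimizer is the origin.
    return np.exp(x[0]) + np.exp(-x[0]) + x[1] ** 2

def grad(x):
    return np.array([np.exp(x[0]) - np.exp(-x[0]), 2.0 * x[1]])

def hess(x):
    return np.diag([np.exp(x[0]) + np.exp(-x[0]), 2.0])

def wolfe_step(x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Bisection search for a step length t satisfying the (weak) Wolfe
    conditions along the descent direction d."""
    lo, hi, t = 0.0, np.inf, 1.0
    f0, slope0 = f(x), grad(x) @ d
    for _ in range(max_iter):
        if f(x + t * d) > f0 + c1 * t * slope0:
            # Sufficient-decrease (Armijo) condition violated: shrink t.
            hi = t
            t = 0.5 * (lo + hi)
        elif grad(x + t * d) @ d < c2 * slope0:
            # Curvature condition violated: grow t (or bisect the bracket).
            lo = t
            t = 2.0 * t if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return t
    return t

def newton_wolfe(x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Newton direction: solve H(x) d = -g(x).
        d = -np.linalg.solve(hess(x), g)
        x = x + wolfe_step(x, d) * d
    return x

x_star = newton_wolfe([1.0, 1.5])
```

The paper's point is that the Wolfe conditions alone do not rule out convergence to a non-stationary point for a carefully constructed strictly convex $f$; for the benign function above, the stopping test `np.linalg.norm(grad(x_star)) < tol` is reached at (approximately) the true minimizer.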