Convergence of Local Search
Surprisingly enough, this approach can be used to analyze deterministic as well as randomized optimization algorithms. To establish convergence it is enough that in each step the sufficient decrease condition is satisfied and that the search directions (the directions between two successive iterates) are not orthogonal to the gradient direction in all but finitely many iterations. We show that the sufficient decrease condition is satisfied, for instance, for the Random Pursuit algorithm [2] as well as for the Random Gradient method [3]. Both of these methods propose new search directions uniformly at random. We show that, at the expense of only a slight increase in the variance, other search directions, e.g., random standard unit vectors, can be used instead. This makes it possible to analyze a broad class of existing randomized local search algorithms. We conclude with an example of optimization over symmetric matrices.
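The following minimal Python sketch (not the authors' code; all names, step sizes, and the acceptance rule are hypothetical choices for illustration) shows the kind of randomized local search discussed above: in each step a random search direction is drawn, either uniformly from the unit sphere or as a random standard unit vector, and the step is accepted only if it decreases the objective.

```python
import numpy as np

def random_direction(n, rng, unit_coordinate=False):
    """Draw a search direction: a random standard unit vector or a
    direction distributed uniformly on the unit sphere."""
    if unit_coordinate:
        d = np.zeros(n)
        d[rng.integers(n)] = 1.0
        return d
    d = rng.standard_normal(n)
    return d / np.linalg.norm(d)

def randomized_local_search(f, x0, iters=2000, unit_coordinate=False, seed=0):
    """Illustrative randomized local search: try a few step sizes along a
    random direction and accept the best candidate only if it decreases f
    (a crude stand-in for a sufficient decrease condition)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    step_sizes = (1.0, 0.5, 0.1, 0.01)
    for _ in range(iters):
        d = random_direction(x.size, rng, unit_coordinate)
        candidates = [x + s * t * d for s in (+1.0, -1.0) for t in step_sizes]
        values = [f(c) for c in candidates]
        best = int(np.argmin(values))
        if values[best] < fx:  # accept only on decrease
            x, fx = candidates[best], values[best]
    return x

# Example: minimize a convex quadratic with minimizer at the all-ones vector.
f = lambda x: float(np.sum((x - 1.0) ** 2))
print(randomized_local_search(f, np.zeros(5)))  # close to [1, 1, 1, 1, 1]
```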
Acknowledgments

The project CG Learning acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 255827.

References

[1] V. G. Karmanov. Convergence estimates for iterative minimization methods. 1974.
[2] V. G. Karmanov. On Convergence of a Random Search Method in Convex Minimization Problems. 1975.
[3] A. Lewis et al. Randomized Hessian estimation and directional search. 2011.
[4] Christian L. Müller et al. Optimization of Convex Functions with Random Pursuit. SIAM J. Optim., 2011.