Convergence of Local Search

Surprisingly enough, this approach can be used to analyze deterministic as well as randomized optimization algorithms. To establish convergence it suffices that in each step the sufficient decrease condition is satisfied and that the search directions (the directions between two successive iterates) are not orthogonal to the gradient direction in all but finitely many iterations. We show that the sufficient decrease condition is satisfied, for instance, for the Random Pursuit algorithm [2] as well as for the Random Gradient method [3]. Both of these methods propose new search directions uniformly at random. We show that, at the expense of only slightly increasing the variance, different search directions, e.g., random standard unit vectors, can also be used instead. This opens the way to analyzing a broad class of existing randomized local search algorithms. We conclude with an example of optimization over symmetric matrices.

Acknowledgments

The project CG Learning acknowledges the financial support of the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission, under FET-Open grant number 255827.
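To make the convergence criterion above concrete, the following is a minimal Python sketch of a generic randomized local search loop with a sufficient decrease check; the function names, the step-size grid, and the particular form of the decrease condition are illustrative assumptions and are not the exact schemes of [2] or [3].

import numpy as np

def random_local_search(f, x0, n_iters=2000, mu=1e-4, step_grid=None, rng=None):
    # Sketch of a randomized local search loop: sample a uniformly random
    # direction, do a crude line search along it, and accept the step only
    # if it achieves a sufficient decrease (here: proportional to the step
    # length). Illustrative stand-in, not the method of [2] or [3].
    rng = np.random.default_rng() if rng is None else rng
    step_grid = np.logspace(-6, 1, 40) if step_grid is None else step_grid
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(n_iters):
        d = rng.standard_normal(x.shape)      # uniform random direction ...
        d /= np.linalg.norm(d)                # ... on the unit sphere
        # Try both orientations of the direction over a fixed step-size grid.
        f_new, t_best = min((f(x + t * d), t)
                            for t in np.concatenate((-step_grid, step_grid)))
        if fx - f_new >= mu * abs(t_best):    # sufficient decrease check
            x, fx = x + t_best * d, f_new
    return x, fx

# Example: minimize a simple convex quadratic from a fixed starting point.
x_star, f_star = random_local_search(lambda x: float(np.sum(x ** 2)), np.ones(5))

In this sketch the non-orthogonality requirement stated above holds almost surely, since a direction drawn uniformly from the unit sphere is orthogonal to the gradient with probability zero.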