Generalized drift analysis in continuous domain: linear convergence of (1 + 1)-ES on strongly convex functions with Lipschitz continuous gradients

We prove linear convergence of the (1+1)-Evolution Strategy (ES) with success-based step-size adaptation on a broad class of functions, including strongly convex functions with Lipschitz continuous gradients, a class commonly assumed in the analysis of gradient-based methods. Our proof builds on a methodology recently developed to analyze the same algorithm on the spherical function, namely additive drift analysis on an unbounded continuous domain. We derive an upper bound on the expected first hitting time, from which linear convergence follows. We then investigate the class of functions satisfying the assumptions of our main theorem, showing that strongly convex functions with Lipschitz continuous gradients, as well as their strictly increasing transformations, satisfy these assumptions. To the best of our knowledge, this is the first paper to show linear convergence of the (1+1)-ES on such a broad class of functions, which opens the possibility of theoretically comparing the (1+1)-ES with gradient-based methods.
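As a minimal sketch of the algorithm under analysis, the (1+1)-ES with a success-based (1/5-success-rule style) step-size adaptation can be written as follows. The update factor `alpha`, the iteration budget, and the function names are illustrative choices for this sketch, not the constants or notation used in the paper's proofs:

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, budget=2000, alpha=1.2, rng=None):
    """(1+1)-ES with a success-based step-size rule (1/5-success heuristic).

    On an accepted (successful) step, sigma is multiplied by alpha;
    on a rejected step, by alpha**(-1/4), so that sigma is stationary
    when roughly 1/5 of the offspring are successful.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    sigma = sigma0
    for _ in range(budget):
        y = x + sigma * rng.standard_normal(x.shape)  # isotropic Gaussian mutation
        fy = f(y)
        if fy <= fx:                   # success: accept offspring, enlarge step size
            x, fx = y, fy
            sigma *= alpha
        else:                          # failure: keep parent, shrink step size
            sigma *= alpha ** -0.25
    return x, fx, sigma
```

On the spherical function f(x) = ||x||^2, a strongly convex function with Lipschitz continuous gradient, this sketch exhibits the linear convergence behavior the main theorem guarantees: log f(x) decreases at a roughly constant rate per iteration.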
