Algorithms (X, sigma, eta): Quasi-random Mutations for Evolution Strategies

Randomization is an efficient tool for global optimization. We define here a method that keeps: (i) the order-0 nature of evolutionary algorithms (no gradient); (ii) the stochastic aspect of evolutionary algorithms; (iii) the efficiency of so-called low-dispersion points; and that ensures, under mild assumptions, global convergence at a linear convergence rate. We use (i) sampling in a ball instead of Gaussian sampling (in a way inspired by trust regions), (ii) an original rule for step-size adaptation, and (iii) quasi-Monte Carlo sampling (low-dispersion points) instead of Monte Carlo sampling. In this framework we prove linear convergence rates (i) for global optimization, and not only local optimization, (ii) under very mild assumptions on the regularity of the function (existence of derivatives is not required). Though the main scope of this paper is theoretical, numerical experiments are made to back up the mathematical results.
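The three ingredients above (mutations drawn in a ball rather than from a Gaussian, a step-size adaptation rule, and low-dispersion quasi-Monte Carlo points) can be sketched as follows. This is only an illustrative toy, not the paper's algorithm: the Halton-plus-rejection construction of ball points and the "halve the radius on failure" rule are stand-in assumptions for the low-dispersion sampling and the original step-size rule described in the paper.

```python
def halton(index, base):
    """Radical-inverse (van der Corput) coordinate of a Halton sequence."""
    f, result = 1.0, 0.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def ball_points(n, dim, bases=(2, 3)):
    """Low-dispersion points in the unit ball: Halton points in [-1,1]^dim,
    rejecting those outside the ball. (Illustrative construction only.)"""
    pts, i = [], 1
    while len(pts) < n:
        p = [2.0 * halton(i, bases[k]) - 1.0 for k in range(dim)]
        if sum(x * x for x in p) <= 1.0:
            pts.append(p)
        i += 1
    return pts

def es_minimize(f, x0, sigma=1.0, lam=8, iters=60):
    """Toy derandomized ES: lam offspring are placed quasi-randomly in the
    ball of radius sigma around the parent (trust-region flavor). The radius
    is halved whenever no offspring improves; this simple rule is a stand-in
    for the paper's step-size adaptation."""
    x, fx = list(x0), f(x0)
    for _ in range(iters):
        offspring = [[xi + sigma * di for xi, di in zip(x, d)]
                     for d in ball_points(lam, len(x))]
        best = min(offspring, key=f)
        if f(best) < fx:
            x, fx = best, f(best)     # accept the improving offspring
        else:
            sigma *= 0.5              # shrink the ball and retry
    return x, fx
```

On a smooth test function such as the sphere, the ball radius shrinks geometrically once progress stalls, which is the mechanism behind the linear convergence rates studied in the paper.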
