Variable metric evolution strategies by mutation matrix adaptation

Abstract The covariance matrix adaptation evolution strategy (CMA-ES) is one of the most successful evolutionary algorithms. CMA-ES incrementally learns a variable metric by evolving a full covariance matrix, but it suffers from high computational overhead. In this paper, we propose two efficient variants of CMA-ES, termed mutation matrix adaptation (MMA-ES) and exponential MMA-ES (xMMA-ES). Both variants are derived by taking a first-order approximation of the covariance matrix update in CMA-ES. They avoid the computationally costly matrix decomposition while retaining the simplicity of the CMA-ES update scheme. We analyze the properties of MMA-ES and xMMA-ES and their connections to other variants of evolution strategies. We experimentally study the behavior and performance of the proposed algorithms: xMMA-ES and MMA-ES generally outperform CMA-ES or perform competitively with it. We also investigate the performance of MMA-ES with the BIPOP restart strategy on the BBOB benchmarks. The experimental results validate the effectiveness of the proposed algorithms.
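The abstract does not reproduce the update equations, so the following is only a minimal sketch of the general idea it describes: adapting the mutation matrix M (with C = M Mᵀ) through a first-order multiplicative update, in the spirit of Beyer and Sendhoff's matrix adaptation ES, so that no eigen- or Cholesky decomposition of the covariance matrix is ever needed. All strategy constants below follow standard CMA-ES defaults; the names (`mma_es_sketch`, `f`, `x0`) are illustrative, and the exact MMA-ES/xMMA-ES rules (including the matrix exponential used by xMMA-ES) are those specified in the paper itself, not reproduced here.

```python
import numpy as np

def mma_es_sketch(f, x0, sigma=0.3, max_iter=500):
    """Minimal sketch: evolve the mutation matrix M (C = M @ M.T) directly,
    so the covariance matrix is never decomposed."""
    n = len(x0)
    lam = 4 + int(3 * np.log(n))                    # offspring population size
    mu = lam // 2
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                    # positive recombination weights
    mu_eff = 1.0 / np.sum(w ** 2)
    c_s = (mu_eff + 2) / (n + mu_eff + 5)           # path cumulation rate
    d_s = 1 + 2 * max(0.0, np.sqrt((mu_eff - 1) / (n + 1)) - 1) + c_s
    c_1 = 2.0 / ((n + 1.3) ** 2 + mu_eff)           # rank-one learning rate
    c_mu = min(1 - c_1,
               2 * (mu_eff - 2 + 1 / mu_eff) / ((n + 2) ** 2 + mu_eff))
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))

    m = np.asarray(x0, dtype=float)                 # distribution mean
    M = np.eye(n)                                   # mutation matrix
    s = np.zeros(n)                                 # search path (in z-space)
    I = np.eye(n)

    for _ in range(max_iter):
        Z = np.random.randn(lam, n)                 # z ~ N(0, I)
        X = m + sigma * Z @ M.T                     # x = m + sigma * M @ z
        idx = np.argsort([f(x) for x in X])[:mu]    # select the mu best
        z_w = w @ Z[idx]                            # weighted mean of selected z

        m = m + sigma * M @ z_w                     # move the mean
        s = (1 - c_s) * s + np.sqrt(c_s * (2 - c_s) * mu_eff) * z_w
        sigma *= np.exp((c_s / d_s) * (np.linalg.norm(s) / chi_n - 1))

        # first-order multiplicative update of M -- no decomposition needed
        rank_one = np.outer(s, s) - I
        rank_mu = (Z[idx].T * w) @ Z[idx] - I       # sum_i w_i z_i z_i^T - I
        M = M @ (I + 0.5 * c_1 * rank_one + 0.5 * c_mu * rank_mu)
    return m

# Usage: minimize the 10-D sphere function.
# x_best = mma_es_sketch(lambda x: float(np.sum(x ** 2)), np.ones(10))
```

The point of the sketch is the last line of the loop: because M is updated multiplicatively by a factor close to the identity (the first-order approximation the abstract refers to), sampling can use M directly and the eigendecomposition that standard CMA-ES performs on C is avoided entirely.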
