REMEDA: Random Embedding EDA for Optimising Functions with Intrinsic Dimension

It has been observed that in many real-world large-scale problems only a few variables have a major impact on the function value: while the function has many inputs, it has only a few effective degrees of freedom. We refer to such functions as having a low intrinsic dimension. In this paper we devise an Estimation of Distribution Algorithm (EDA) for continuous optimisation that exploits low intrinsic dimension without knowing the influential subspace of the input space, or its dimension, by employing the idea of random embedding. While the idea is applicable to any optimiser, EDAs are known to be remarkably successful on low-dimensional problems but prone to the curse of dimensionality on larger ones, because their model-building step requires large population sizes. Our method, Random Embedding in Estimation of Distribution Algorithm (REMEDA), remedies this weakness and is able to optimise very high-dimensional problems as long as their intrinsic dimension is low.
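The core idea above can be illustrated with a minimal sketch: draw a random matrix A that embeds a low-dimensional search space into the full input space, run a simple Gaussian EDA in the low-dimensional space, and evaluate candidates only after mapping them up through A. The function, parameter names, and the plain truncation-selection EDA below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def remeda_sketch(f, D, d, iters=100, pop=50, elite=10, seed=0):
    """Minimise f: R^D -> R by searching a random d-dimensional embedding.

    Illustrative sketch: a basic Gaussian EDA with truncation selection
    runs in R^d; each candidate y is mapped to the full space as x = A y.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(D, d))          # random embedding matrix
    mean, cov = np.zeros(d), np.eye(d)   # search distribution in R^d
    for _ in range(iters):
        Y = rng.multivariate_normal(mean, cov, size=pop)  # sample low-dim points
        X = Y @ A.T                                       # embed into R^D
        fitness = np.array([f(x) for x in X])
        best = Y[np.argsort(fitness)[:elite]]             # keep the elite (minimisation)
        # Refit the Gaussian model on the elite; jitter keeps cov positive definite
        mean = best.mean(axis=0)
        cov = np.cov(best, rowvar=False) + 1e-6 * np.eye(d)
    return mean @ A.T  # best-found point, mapped back to the original space
```

Note that the model-building cost depends only on d, not on D, which is why a low intrinsic dimension sidesteps the population-size blow-up of a full-dimensional EDA.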
