Convergence phases, variance trajectories, and runtime analysis of continuous EDAs

Considering the available body of literature on continuous EDAs, one must state that many important questions are still unanswered, e.g.: How do continuous EDAs really work, and how can we increase their efficiency further? The first question must be answered on the basis of formal models, but despite some recent results, the majority of contributions to the field are experimental. The second question should be answered by exploiting the insights that have been gained from formal models. We contribute to the theoretical literature on continuous EDAs by focusing on a simple, yet important, question: How should the variances used to sample offspring change over an EDA run? To answer this question, the convergence process is separated into three phases, and it is shown that for each phase a preferable strategy exists for setting the variances. It is highly likely that the use of variances that have been estimated with maximum likelihood is not optimal. Thus, variance modification policies are not just a nice add-on. In light of our findings, they become an integral component of continuous EDAs, and they should account for the specific requirements of all phases of the optimization process.
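
The effect of maximum-likelihood variance estimation can be made concrete with a toy experiment. The sketch below is an illustrative assumption, not the algorithm analyzed in the paper: a 1-D UMDAc-style EDA with truncation selection on the sphere function, where the hypothetical parameter var_scale mimics a simple variance-scaling policy by inflating the maximum-likelihood variance each generation. The function name umdac_1d and all parameter values are chosen for illustration only. With var_scale = 1.0 the sampled variance typically shrinks geometrically and the run can stall far from the optimum; with var_scale > 1.0 the variance is sustained during the approach phase.

    # Minimal 1-D UMDAc-style sketch (illustrative only).
    import numpy as np

    def umdac_1d(f, mu0=10.0, sigma0=1.0, pop_size=100, tau=0.3,
                 generations=50, var_scale=1.0, rng=None):
        """Run a simple 1-D continuous EDA and return the final mean
        and the variance trajectory.

        var_scale > 1.0 mimics an (assumed) variance-scaling policy by
        inflating the maximum-likelihood variance estimate each generation.
        """
        rng = np.random.default_rng() if rng is None else rng
        mu, sigma = mu0, sigma0
        trajectory = []
        n_sel = max(2, int(tau * pop_size))
        for _ in range(generations):
            x = rng.normal(mu, sigma, pop_size)        # sample offspring
            selected = x[np.argsort(f(x))[:n_sel]]     # truncation selection (minimization)
            mu = selected.mean()                       # maximum-likelihood mean
            sigma = max(selected.std(), 1e-12)         # maximum-likelihood std. dev.
            sigma *= np.sqrt(var_scale)                # optional variance scaling
            trajectory.append(sigma ** 2)
        return mu, trajectory

    if __name__ == "__main__":
        sphere = lambda x: x ** 2
        m_plain, var_plain = umdac_1d(sphere, rng=np.random.default_rng(1))
        m_scaled, var_scaled = umdac_1d(sphere, var_scale=1.5,
                                        rng=np.random.default_rng(1))
        print("ML variances only:     final mean %.4f, final variance %.2e"
              % (m_plain, var_plain[-1]))
        print("with variance scaling: final mean %.4f, final variance %.2e"
              % (m_scaled, var_scaled[-1]))

Comparing the two variance trajectories printed (or plotted) from this sketch illustrates why a variance policy should be phase-dependent: early in the run the distribution must travel toward the optimum, while later it must contract to allow convergence.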
