Approximating the search distribution to the selection distribution in EDAs

In an Estimation of Distribution Algorithm (EDA) with an infinite population, the selection distribution equals the search distribution; for a finite population, the two distributions differ. In practical EDAs, the goal of the search-distribution learning algorithm is to approximate the selection distribution. Its source data is the selected set, derived from the population by applying a selection operator. The new approach described here eliminates the explicit use of the selection operator and the selected set. We rewrite, for a finite population, the selection distribution equations of four selection operators; the resulting equation is called the empirical selection distribution. We then show how to build the search distribution that best approximates the empirical selection distribution. Our approach gives rise to practical EDAs that can be easily and directly implemented from well-established theoretical results. This paper also shows how common EDAs with discrete and real variables can be adapted to take advantage of the empirical selection distribution. A comparison and discussion of performance is presented.
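The idea of replacing the explicit selected set with per-individual selection probabilities can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it uses proportional (fitness-proportionate) selection as one example operator, a univariate Bernoulli model (UMDA-style) as the search distribution, and OneMax as the fitness function. Each individual is weighted by its empirical selection probability, and the marginals are fit by a weighted maximum-likelihood estimate, so no selected set is ever materialized.

```python
import random

def empirical_selection_weights(fitness):
    # Proportional selection: the empirical selection probability of
    # individual i in a finite population is f(x_i) / sum_j f(x_j).
    total = sum(fitness)
    return [f / total for f in fitness]

def fit_weighted_marginals(population, weights):
    # Weighted maximum-likelihood estimate of each bit's marginal
    # probability under the empirical selection distribution.
    n_vars = len(population[0])
    return [sum(w * x[i] for x, w in zip(population, weights))
            for i in range(n_vars)]

def sample(marginals, rng):
    # Draw one individual from the learned univariate search distribution.
    return [1 if rng.random() < p else 0 for p in marginals]

# One EDA generation on OneMax, with no explicit selection step.
rng = random.Random(0)
pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(50)]
fit = [sum(x) + 1e-9 for x in pop]   # epsilon guards an all-zero population
w = empirical_selection_weights(fit)
marg = fit_weighted_marginals(pop, w)
new_pop = [sample(marg, rng) for _ in range(50)]
```

Because the weights sum to one and each bit is 0 or 1, every fitted marginal lies in [0, 1], so the weighted estimate is a valid probability vector for the next sampling step.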
