Population-based incremental learning with memory scheme for changing environments

In recent years there has been a growing interest in studying evolutionary algorithms for dynamic optimization problems due to their importance in real-world applications. Several approaches have been developed, such as the memory scheme. This paper investigates the application of the memory scheme to population-based incremental learning (PBIL) algorithms, a class of evolutionary algorithms, for dynamic optimization problems. A PBIL-specific memory scheme is proposed to improve the algorithm's adaptability in dynamic environments. In this memory scheme, the working probability vector is stored in the memory together with the best sample it creates and is used to reactivate old environments when a change occurs. An experimental study based on a series of dynamic environments shows the efficiency of the memory scheme for PBILs in dynamic environments. The paper also investigates the relationship between the memory scheme and the multi-population scheme for PBILs in dynamic environments. The experimental results indicate a negative interaction of the multi-population scheme with the memory scheme for PBILs on the dynamic test environments.
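The abstract only outlines the mechanism at a high level. As a rough illustration, the sketch below shows one plausible way a PBIL could store its working probability vector together with the best sample it generates, and reactivate a stored entry when an environment change is detected. All names, parameter values, the storage interval, and the FIFO replacement policy are assumptions made for illustration, not the paper's exact design.

```python
import random

# Minimal sketch of a PBIL with a memory scheme for a binary-encoded
# maximization problem. Parameter values and the memory-update policy
# are illustrative assumptions, not the settings used in the paper.

GENOME_LEN = 100
POP_SIZE = 50
LEARN_RATE = 0.25
MEMORY_SIZE = 5


def sample(prob_vector):
    """Draw one binary solution from the probability vector."""
    return [1 if random.random() < p else 0 for p in prob_vector]


def pbil_with_memory(fitness, generations, change_detected):
    prob = [0.5] * GENOME_LEN   # working probability vector
    memory = []                 # list of (probability vector, best sample) pairs

    for gen in range(generations):
        if change_detected(gen) and memory:
            # On an environment change, re-evaluate the stored best samples
            # in the new environment and reactivate the probability vector
            # whose associated sample now scores highest.
            best_entry = max(memory, key=lambda m: fitness(m[1]))
            prob = list(best_entry[0])

        # Sample a population and pick the best individual.
        population = [sample(prob) for _ in range(POP_SIZE)]
        best = max(population, key=fitness)

        # Learn the probability vector towards this generation's best sample.
        prob = [(1 - LEARN_RATE) * p + LEARN_RATE * b
                for p, b in zip(prob, best)]

        # Periodically store the working probability vector together with
        # the best sample it created (simple FIFO replacement here).
        if gen % 10 == 0:
            memory.append((list(prob), best))
            if len(memory) > MEMORY_SIZE:
                memory.pop(0)

    return prob, memory


# Example usage with a toy OneMax fitness and a change every 50 generations
# (purely illustrative).
if __name__ == "__main__":
    prob, memory = pbil_with_memory(
        fitness=lambda x: sum(x),
        generations=200,
        change_detected=lambda gen: gen > 0 and gen % 50 == 0,
    )
```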
