Novel Associative Memory Retrieving Strategies for Evolutionary Algorithms in Dynamic Environments

Recently, Evolutionary Algorithms (EAs) with associative memory schemes have been developed to solve Dynamic Optimization Problems (DOPs). Existing associative memory schemes always retrieve the best memory individual together with its associated environmental information. However, the memory individual with the best fitness may not be the most appropriate one for the new environment. In this paper, two novel associative memory retrieving strategies are proposed to obtain the most appropriate memory environmental information. In both strategies, the two best individuals are first selected from among the two best memory individuals and the current best individual; the environmental information associated with these two individuals is then evaluated according to either its survivability or its diversity, and the better one is retrieved. In the experiments, the two proposed strategies were embedded into the state-of-the-art MPBIL algorithm and tested on three dynamic functions in cyclic environments. Experimental results demonstrate that the proposed retrieving strategies enhance the search ability of the algorithm in cyclic environments.
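The selection-then-retrieval step described above can be made concrete with a short sketch. The Python code below is a minimal, hypothetical illustration rather than the paper's implementation: the names (retrieve, sample_from, diversity) and the concrete measures chosen for survivability (fitness of the paired individual re-evaluated in the new environment) and diversity (mean Hamming distance between a sample of the stored information and the current population) are assumptions made only to show the control flow.

```python
import random

def sample_from(prob_vector):
    """Draw one binary individual from a PBIL-style probability vector."""
    return [1 if random.random() < p else 0 for p in prob_vector]

def diversity(env_info, population):
    """Mean Hamming distance between a sample drawn from env_info and the
    current population (one plausible stand-in for a diversity measure)."""
    s = sample_from(env_info)
    total = sum(sum(a != b for a, b in zip(s, ind)) for ind in population)
    return total / (len(population) * len(env_info))

def retrieve(memory, current_best, current_env, population, fitness,
             mode="survivability"):
    """Choose which stored environmental information to reuse after a change.

    memory       : list of (individual, env_info) pairs saved in earlier environments
    current_best : best individual of the current population
    current_env  : environmental information paired with current_best
                   (for MPBIL this would be the working probability vector)
    fitness      : callable evaluating an individual in the NEW environment
    mode         : "survivability" or "diversity"
    """
    # Step 1: pool the two best memory pairs with the current best pair,
    # then keep the two pairs whose individuals score best in the new environment.
    memory_best = sorted(memory, key=lambda p: fitness(p[0]), reverse=True)[:2]
    pool = memory_best + [(current_best, current_env)]
    top_two = sorted(pool, key=lambda p: fitness(p[0]), reverse=True)[:2]

    # Step 2: evaluate the environmental information paired with those two
    # individuals and retrieve one of them.
    if mode == "survivability":
        # Survivability: prefer the information whose paired individual
        # re-evaluates best in the new environment.
        chosen = max(top_two, key=lambda p: fitness(p[0]))
    else:
        # Diversity: prefer the information that differs most from the
        # current population.
        chosen = max(top_two, key=lambda p: diversity(p[1], population))
    return chosen[1]
```

In an MPBIL-style setting, the returned environmental information would typically replace or be merged with the working probability vector before sampling the next population; that downstream step is omitted here.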
