Estimating the Advantage of Age-Layering in Evolutionary Algorithms

In an age-layered evolutionary algorithm, candidates are evaluated on a small number of samples first; if they seem promising, they are evaluated on more samples, up to the entire training set. In this manner, weak candidates can be eliminated quickly, and evolution can proceed faster. In this paper, the fitness-level method is used to derive a theoretical upper bound for the runtime of the (k+1) age-layered evolutionary strategy, showing a significant potential speedup compared to a non-layered counterpart. The parameters of the upper bound are estimated experimentally in the 11-Multiplexer problem, verifying that the theory can be useful in configuring age layering for maximum advantage. The predictions are validated in a practical implementation of age layering, confirming that 60-fold speedups are possible with this technique.
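The staged-evaluation idea in the abstract can be made concrete with a minimal sketch (Python, illustrative only): candidates are scored on progressively larger sample subsets, and the weakest fraction is culled before the next, more expensive layer. The names layer_sizes, evaluate_on, and cull_fraction are assumptions introduced for illustration, not the paper's implementation.

```python
def age_layered_selection(candidates, samples, layer_sizes, evaluate_on,
                          cull_fraction=0.5):
    """Evaluate candidates on progressively larger sample subsets,
    discarding the weakest fraction at each layer.

    candidates    -- list of candidate solutions
    samples       -- full training set
    layer_sizes   -- increasing subset sizes, e.g. [10, 100, len(samples)]
    evaluate_on   -- callable (candidate, subset) -> fitness estimate
    cull_fraction -- fraction of survivors dropped after each layer
    """
    survivors = list(candidates)
    for n in layer_sizes:
        subset = samples[:n]                      # larger subset at each layer
        scored = sorted(survivors,
                        key=lambda c: evaluate_on(c, subset),
                        reverse=True)             # best estimated fitness first
        keep = max(1, int(len(scored) * (1 - cull_fraction)))
        survivors = scored[:keep]                 # weak candidates eliminated early
    return survivors
```

A full (k+1) age-layered strategy would additionally promote candidates through layers as they age and re-insert survivors into the next generation; this sketch captures only the staged-evaluation culling on which the speedup argument rests.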
