On promising regions and optimization effectiveness of continuous and deceptive functions

This paper evaluates the performance of three evolutionary algorithms for the global optimization of complex continuous functions. Performance is measured by the algorithms' success rate in finding the global optimum over several trials. At each set of trials, the search space is reduced around the global optimum, so that the starting population is generated in an increasingly promising region. According to the results, it is possible to conclude that classical evolutionary algorithms cannot be expected to perform well on highly complex problems. The paper also evaluates the performance of an evolutionary algorithm on a deceptive function; in this case, the reduced search space is the model that generates the deceptive function, and the success rates with and without the use of the starting model were compared. Here, the use of a better starting model substantially increases performance.
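As a rough illustration of the evaluation protocol described above (not the authors' actual code), the following Python sketch shrinks the search bounds around a known global optimum and records the success rate of SciPy's differential evolution over repeated trials. The test function (Rastrigin), the bound-shrinking schedule, the trial count, and the success tolerance are all assumptions chosen for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin: a standard multimodal benchmark whose global optimum is at the origin.
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

DIM = 10
X_STAR = np.zeros(DIM)   # known global optimum (assumed: the origin)
TRIALS = 30              # trials per search-space size (assumed)
TOL = 1e-3               # success if the best value is within TOL of the optimum (assumed)

# Progressively smaller boxes centred on the optimum: each one is a "more promising region".
for half_width in [5.12, 2.56, 1.28, 0.64]:
    successes = 0
    for seed in range(TRIALS):
        bounds = [(x - half_width, x + half_width) for x in X_STAR]
        result = differential_evolution(rastrigin, bounds, seed=seed,
                                        maxiter=200, polish=False)
        if result.fun - rastrigin(X_STAR) < TOL:
            successes += 1
    print(f"half-width {half_width:5.2f}: success rate {successes / TRIALS:.2f}")
```

Under this setup, the success rate would be expected to rise as the box shrinks, mirroring the paper's observation that classical evolutionary algorithms only perform well once the starting population already lies in a promising region.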
