How does Selecting a Benchmark Function Suite Influence the Estimation of an Algorithm's Quality?

This paper focuses on answering the question of how the selection of a testbed on which newly proposed algorithms are evaluated influences the estimation of an algorithm's quality. New algorithms are usually tested on well-known benchmark function suites, where the goal is for the algorithm to achieve the best results in the shortest time. Many questions arise when searching for the most suitable testbed, for instance, which benchmark to choose and which version of it is the most representative for determining the best algorithms. In this study, newly proposed algorithms introducing the coalition game concept for solving global optimization problems were tested on two different benchmark function suites, CEC-14 and CEC-18, in order to show that selecting a different CEC benchmark suite does not have a crucial impact on estimating the algorithm's quality.
