Using Well-Understood Single-Objective Functions in Multiobjective Black-Box Optimization Test Suites

Several test function suites are used for the numerical benchmarking of multiobjective optimization algorithms. While they have some desirable properties, such as well-understood Pareto sets and Pareto fronts of various shapes, most of the currently used functions possess characteristics that are arguably under-represented in real-world problems. These characteristics mainly stem from the relative ease of constructing such functions, and they result in improbable properties such as separability, optima located exactly on the boundary constraints, and the existence of variables that solely control the distance between a solution and the Pareto front. Here, we propose an alternative way of constructing multiobjective problems: combining existing single-objective problems from the literature. In particular, we describe the bbob-biobj test suite with 55 bi-objective functions in the continuous domain, and its extended version with 92 bi-objective functions (bbob-biobj-ext). Both test suites have been implemented in the COCO platform for black-box optimization benchmarking. Finally, we recommend a general procedure for creating test suites with an arbitrary number of objectives. Besides providing the formal function definitions and presenting their known properties, this paper also gives the rationale behind our approach in terms of groups of functions with similar properties, objective-space normalization, and problem instances. The latter allows us to easily compare the performance of deterministic and stochastic solvers, an issue that is often overlooked in benchmarking.
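To illustrate the construction principle, the minimal sketch below pairs two classical single-objective functions into one bi-objective problem. This is an assumption-laden illustration of the general idea, not the actual COCO implementation: the names sphere, rosenbrock, and make_biobjective are hypothetical, and the real bbob-biobj suite additionally applies instance-specific transformations such as variable shifts and rotations to obtain distinct problem instances.

```python
import numpy as np

def sphere(x, x_opt):
    """Sphere function shifted so its optimum lies at x_opt."""
    z = np.asarray(x) - x_opt
    return float(np.dot(z, z))

def rosenbrock(x):
    """Classical Rosenbrock function (optimum at the all-ones vector)."""
    x = np.asarray(x)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def make_biobjective(f1, f2):
    """Combine two single-objective functions into a bi-objective function.

    The Pareto set of the combined problem connects the optima of f1 and
    f2; its shape follows from the two single-objective landscapes, so
    well-understood components yield a well-understood multiobjective
    problem.
    """
    def f(x):
        return np.array([f1(x), f2(x)])
    return f

# Usage: a toy 5-dimensional bi-objective problem (hypothetical instance).
dim = 5
x_opt1 = np.full(dim, 0.5)  # optimum location of the first objective
problem = make_biobjective(lambda x: sphere(x, x_opt1), rosenbrock)
print(problem(np.zeros(dim)))  # objective vector at the origin
```

Under this reading, varying the optima locations and the applied transformations produces different instances of the same function, which is what lets deterministic solvers be run across several instances in place of the independent restarts used for stochastic solvers.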
