COCO: a platform for comparing continuous optimizers in a black-box setting

We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims to automate, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology allow benchmarking, within the same framework, deterministic and stochastic solvers for both single- and multiobjective optimization. We present the rationale behind the (decade-long) development of the platform as a general proposition of guidelines towards better benchmarking. We detail fundamental concepts underlying COCO, such as the definition of a problem as a function instance, the idea behind instances, the use of target values, and runtime, defined as the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
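For illustration, a benchmarking experiment with COCO essentially reduces to a loop over the problems of a test suite, with an observer logging every function evaluation. The following is a minimal Python sketch, assuming the platform's cocoex module is installed; SciPy's Nelder-Mead fmin serves as a stand-in solver, and the result-folder name is illustrative:

    import cocoex            # COCO experimentation module
    import scipy.optimize    # SciPy's fmin (Nelder-Mead) as a stand-in solver

    # Each element of the suite is one problem, i.e., one instance of a
    # parametrized test function in a given dimension.
    suite = cocoex.Suite("bbob", "", "")
    observer = cocoex.Observer("bbob", "result_folder: fmin-on-bbob")

    for problem in suite:
        problem.observe_with(observer)  # log every evaluation of this problem
        # problem(x) evaluates the objective; COCO counts the calls
        scipy.optimize.fmin(problem, problem.initial_solution, disp=False)

The logged data are then processed by COCO's post-processing module to extract, for each target value, the runtime until that target was first reached.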

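Runtime, measured in number of function calls until a target value is reached, is the central performance measure. Over several runs (e.g., on different instances of a function), successful and unsuccessful trials are commonly aggregated into an expected runtime. A small sketch of that aggregation, using our own conventions (the function name and the None-encoding of unsuccessful runs are illustrative, not COCO's API):

    def expected_runtime(evals_to_target, budget):
        """Expected runtime (ERT): evaluations summed over all runs,
        divided by the number of runs that reached the target.

        evals_to_target -- per run, evaluations until the target was hit,
                           or None if the run never reached it
        budget          -- evaluations consumed by each unsuccessful run
        """
        successes = [e for e in evals_to_target if e is not None]
        total = sum(successes) + budget * (len(evals_to_target) - len(successes))
        return total / len(successes) if successes else float("inf")

    # Example: three runs with a budget of 1000 evaluations; two hit the target.
    print(expected_runtime([120, None, 480], budget=1000))  # -> 800.0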