Quantifying and comparing the performance of numerical optimization algorithms is an important aspect of research in search and optimization. However, this task turns out to be tedious and difficult to realize even in the single-objective case, at least if one is willing to accomplish it in a scientifically decent and rigorous way. The COCO software used for the BBOB workshops (2009, 2010, and 2012) takes care of most of this tedious task for its participants: (1) the choice and implementation of a well-motivated single-objective benchmark function testbed, (2) the design of an experimental set-up, (3) the generation of data output for (4) post-processing and presentation of the results in graphs and tables.

What remains to be done for practitioners is to allocate CPU time, run their favorite black-box real-parameter optimizer in a few dimensions a few hundred times, and execute the provided post-processing scripts (see the sketch below). Two testbeds are provided:

• noise-free functions
• noisy functions

and practitioners can freely choose either or both of them. The post-processing provides a quantitative performance assessment in graphs and tables, categorized by function properties such as multi-modality, ill-conditioning, global structure, and separability.

This document describes the experimental setup and touches on the question of how the results are displayed. The benchmark function definitions, the source code for the benchmark functions and the post-processing, and this report are available at http://coco.gforge.inria.fr/.

∗NH is with the TAO Team of INRIA Saclay–Île-de-France at the LRI, Université Paris-Sud, 91405 Orsay cedex, France.
†AA is with the TAO Team of INRIA Saclay–Île-de-France at the LRI, Université Paris-Sud, 91405 Orsay cedex, France.
‡SF is with the Research Center PPE, University of Applied Science Vorarlberg, Hochschulstrasse 1, 6850 Dornbirn, Austria.
§RR has been with the TAO Team of INRIA Saclay–Île-de-France at the LRI, Université Paris-Sud, 91405 Orsay cedex, France.
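For illustration, the following is a minimal sketch of such an experiment loop. It is written against the later cocoex Python module rather than the fgeneric interface distributed with this setup, so the module name, the "bbob" suite string, and the observer options used here are assumptions; pure random search stands in for the practitioner's favorite optimizer.

# A minimal sketch of a COCO experiment, assuming the later `cocoex`
# Python module (not the fgeneric interface of the 2009-era setup).
import numpy as np
import cocoex

suite = cocoex.Suite("bbob", "", "")                    # the noise-free testbed
observer = cocoex.Observer("bbob", "result_folder: my-optimizer")

for problem in suite:                                   # functions x dimensions x instances
    problem.observe_with(observer)                      # log data for the post-processing
    budget = 100 * problem.dimension                    # small budget, for the sketch only
    f_best = float("inf")
    for _ in range(budget):                             # pure random search in [-5, 5]^D
        x = np.random.uniform(-5, 5, problem.dimension)
        f_best = min(f_best, problem(x))
        if problem.final_target_hit:                    # stop once the final target is reached
            break

Running the whole suite this way and then invoking the provided post-processing scripts on the observer's output folder would produce the graphs and tables described above.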