Quantitative Performance Assessment of Multiobjective Optimizers: The Average Runtime Attainment Function

Numerical benchmarking of multiobjective optimization algorithms is an important task for understanding and recommending algorithms. So far, two main approaches to assessing algorithm performance have been pursued: set quality indicators, and the empirical attainment function together with its higher-order moments, which generalizes empirical cumulative distributions of function values. Both approaches have their advantages, but they rely on the choice of a quality indicator and/or take into account only the location of the resulting solution sets, not when certain regions of the objective space are attained. In this paper, we propose the average runtime attainment function as a quantitative measure of the performance of a multiobjective algorithm: it estimates, for any point in the objective space, the expected runtime until a solution weakly dominating that point is found. After defining the average runtime attainment function and detailing its relation to the empirical attainment function, we illustrate how plots of the average runtime attainment function display performance, and differences in performance, for several algorithms previously run on the biobjective bbob-biobj test suite of the COCO platform.
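
The quantity described in the abstract can be illustrated with a few lines of code. Below is a minimal sketch, not the COCO implementation: it assumes minimization, assumes each run is recorded as a chronological list of (evaluation count, objective vector) pairs, and the names `weakly_dominates` and `average_runtime` are illustrative. Unsuccessful runs contribute their full budget to the numerator, in line with the usual average-runtime (aRT) convention.

```python
import math

def weakly_dominates(a, b):
    """Return True if objective vector `a` weakly dominates `b`
    under minimization, i.e. a_i <= b_i in every objective."""
    return all(ai <= bi for ai, bi in zip(a, b))

def average_runtime(runs, z):
    """Empirical average runtime to attain the objective-space point `z`.

    `runs` is a list of independent runs; each run is a chronological
    list of (evaluation_count, objective_vector) pairs.  The total
    number of evaluations spent over all runs (unsuccessful runs
    contribute their full budget) is divided by the number of runs
    that found a solution weakly dominating `z`; the result is
    infinite if no run succeeded.
    """
    total_evals = 0
    successes = 0
    for run in runs:
        # first evaluation count at which z is weakly dominated, if any
        hit = next((evals for evals, f in run
                    if weakly_dominates(f, z)), None)
        if hit is not None:
            successes += 1
            total_evals += hit
        elif run:
            total_evals += run[-1][0]  # full budget of an unsuccessful run
    return total_evals / successes if successes else math.inf

# Hypothetical example: three runs on a biobjective problem, z = (1.0, 1.0)
runs = [
    [(10, (3.0, 0.5)), (40, (0.8, 0.9))],  # attains z after 40 evaluations
    [(25, (0.9, 1.0))],                    # attains z after 25 evaluations
    [(100, (2.0, 2.0))],                   # never attains z within its budget
]
print(average_runtime(runs, (1.0, 1.0)))   # (40 + 25 + 100) / 2 = 82.5
```

Evaluating such a function on a grid of points in the objective space yields the kind of average runtime attainment plot the paper uses to compare algorithms.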
