DSCTool: A web-service-based framework for statistical comparison of stochastic optimization algorithms

Abstract DSCTool is a statistical tool for comparing the performance of stochastic optimization algorithms on a single benchmark function (i.e., single-problem analysis) or on a set of benchmark functions (i.e., multiple-problem analysis). DSCTool implements a recently proposed approach, called Deep Statistical Comparison (DSC), and its variants. DSC ranks optimization algorithms by comparing the distributions of the solutions obtained for a problem, instead of using a simple descriptive statistic such as the mean or the median. The rankings obtained for an individual problem give the relations between the performances of the applied algorithms. To compare optimization algorithms in the multiple-problem scenario, an appropriate statistical test must be applied to the rankings obtained over the set of problems. The main advantage of DSCTool is its REST web services, which means that all of its functionalities can be accessed from any programming language. In this paper, we present DSCTool in detail, with examples of its usage.
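Because DSCTool is exposed through REST web services, a client in any language only needs to build a JSON request describing the algorithms and their obtained solutions, POST it to the service, and parse the returned rankings. The following minimal Python sketch illustrates this idea; the payload schema, field names, and the endpoint path shown in the comment are assumptions for illustration and should be checked against the DSCTool documentation.

```python
import json

def make_rank_request(test_name, alpha, algorithms):
    """Build a hypothetical JSON body for a single-problem DSC ranking call.

    test_name  -- name of the underlying two-sample statistical test
                  (field name is an assumption, not the documented schema)
    alpha      -- significance level for the comparison, e.g. 0.05
    algorithms -- dict mapping an algorithm name to the list of solution
                  values it obtained over independent runs on one problem
    """
    return json.dumps({
        "method": {"name": test_name, "alpha": alpha},
        "data": [
            {"algorithm": name, "result": values}
            for name, values in algorithms.items()
        ],
    })

# Example: two algorithms, three runs each, on one benchmark problem.
body = make_rank_request("AD", 0.05, {
    "DE": [0.012, 0.015, 0.011],
    "GA": [0.020, 0.018, 0.022],
})
# This body would then be POSTed to the DSCTool ranking web service
# (e.g. a /rank-style endpoint) with any HTTP client; the response would
# contain the DSC rankings, which can in turn be fed to the omnibus and
# post-hoc services for the multiple-problem scenario.
```

The point of the sketch is that the whole interaction is plain JSON over HTTP, which is why the tool is not tied to any particular programming language or statistics package.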
