The Impact of Initial Designs on the Performance of MATSuMoTo on the Noiseless BBOB-2015 Testbed: A Preliminary Study

Most surrogate-assisted algorithms for expensive optimization follow the same framework: after an initial design phase, in which the true objective function is evaluated at a few search points, an iterative process builds a surrogate model of the expensive function and, based on the current model, a so-called infill criterion suggests one or more points to be evaluated on the true problem. These evaluations are then used to successively update and refine the model. Implementing a surrogate-assisted algorithm requires several design choices, and understanding their impact on the algorithm's performance is of practical relevance. Here, we focus on the initial design phase and experimentally investigate the performance of the freely available MATLAB Surrogate Model Toolbox (MATSuMoTo) with respect to the initial design. The results are preliminary in the sense that not all possible choices are investigated, but we can already make first well-founded statements about whether Latin Hypercube or uniform random sampling should be preferred, and about the effect of the size of the initial design on the performance of MATSuMoTo on the 24 noiseless test functions of the BBOB-2015 test suite.
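To illustrate the two initial-design strategies compared in this study, the following is a minimal, self-contained sketch of Latin Hypercube sampling versus plain uniform random sampling in the unit hypercube. It is an illustration only, not the sampling code used by MATSuMoTo; the function names and the stratified-shuffle construction are our own.

```python
import random


def latin_hypercube(n_points, dim, seed=42):
    """Illustrative Latin Hypercube sample in [0, 1)^dim.

    Each coordinate axis is split into n_points equal strata; every
    stratum receives exactly one point, and the stratum order is
    shuffled independently per dimension. This guarantees good
    one-dimensional coverage of every axis.
    """
    rng = random.Random(seed)
    columns = []
    for _ in range(dim):
        strata = list(range(n_points))
        rng.shuffle(strata)
        # One point drawn uniformly inside each stratum of width 1/n_points.
        columns.append([(s + rng.random()) / n_points for s in strata])
    return [tuple(col[i] for col in columns) for i in range(n_points)]


def uniform_random(n_points, dim, seed=42):
    """Plain i.i.d. uniform sample in [0, 1)^dim (no coverage guarantee)."""
    rng = random.Random(seed)
    return [tuple(rng.random() for _ in range(dim)) for _ in range(n_points)]
```

The key structural difference is visible in one dimension at a time: projecting a Latin Hypercube design onto any single axis yields exactly one point per stratum, whereas a uniform random sample may leave whole strata empty, which is precisely why the two designs can lead to different surrogate quality for the same budget.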
