Two-layered Surrogate Modeling for Tuning Optimization Metaheuristics

The problem of finding suitable parameters for metaheuristic optimization algorithms has long been recognized. As these non-deterministic methods, e.g. evolution strategies (ES) [1], are highly adaptable to a specific application, finding good parameter settings is vital for their success. Automated tuning methods often yield performance differences of several orders of magnitude (in runtime and/or solution quality). In recent years, several such methods have been proposed, many of which incorporate surrogate models of the algorithm parameter space. Although tuning methods are reliable tools for constructing specialized optimization algorithms by modifying the parameters of canonical ones, their use is largely restricted to objective functions that are relatively cheap to evaluate (in terms of computational cost). Since they require a large number of algorithm runs, they are simply not applicable once the evaluation time of the objective function exceeds a certain level. If the objective function is too expensive to apply tuning methods directly, one can resort to simpler approaches: