Stagnation detection in highly multimodal fitness landscapes

Stagnation detection has been proposed as a mechanism for randomized search heuristics to escape local optima by automatically increasing the neighborhood size until it matches the so-called gap size, i.e., the distance to the next improvement. Its usefulness has mostly been demonstrated in simple multimodal landscapes with few local optima that can be crossed one after another. In multimodal landscapes with a more complex arrangement of local optima of similar gap size, stagnation detection suffers from the fact that the neighborhood size is frequently reset to 1, discarding gap sizes that were promising in the past. In this paper, we investigate a new mechanism called radius memory, which can be added to stagnation detection to control the search radius more carefully by giving preference to values that were successful in the past. We implement this idea in an algorithm called SD-RLSm and show that, compared to previous variants of stagnation detection, it yields speed-ups on linear functions under uniform constraints and on the minimum spanning tree problem. Moreover, its running time does not significantly deteriorate on unimodal functions or on a generalization of the Jump benchmark. Finally, we present experimental results studying SD-RLSm and comparing it with other algorithms.

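To make the mechanism concrete, the following is a minimal Python sketch of stagnation detection with a radius-memory rule on bitstrings. The phase length C(n, r) * ln(R) per radius r, the jump-back rule, and all names (sd_rls_m, memory, R) are illustrative assumptions made for this sketch, not the paper's exact SD-RLSm.

import math
import random

def sd_rls_m(f, n, max_evals=100_000, R=None):
    # Minimal sketch of stagnation detection with radius memory.
    # Illustrative assumptions (not the paper's exact SD-RLSm):
    # a phase of C(n, r) * ln(R) unsuccessful steps per radius r,
    # and a rule that jumps straight back to the last successful
    # radius once smaller radii have been exhausted.
    if R is None:
        R = max(n, 2)  # parameter controlling the failure probability
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    r = 1          # current search radius (bits flipped per step)
    memory = 1     # last radius that yielded an improvement
    counter = 0    # unsuccessful steps at the current radius
    evals = 1

    while evals < max_evals:
        # Flip exactly r distinct bits: an r-bit neighborhood move.
        y = x[:]
        for i in random.sample(range(n), r):
            y[i] ^= 1
        fy = f(y)
        evals += 1

        if fy > fx:
            x, fx = y, fy
            if r > 1:
                memory = r  # remember the gap size that just worked
            r = 1           # optimistically retry small steps first
            counter = 0
        else:
            counter += 1
            # Phase exhausted: an improvement at radius r is now unlikely.
            if counter > math.comb(n, r) * math.log(R):
                counter = 0
                # Radius memory: prefer the previously successful radius
                # over slowly incrementing the radius from scratch.
                r = memory if r < memory else min(r + 1, n)
    return x, fx

# Example: maximizing OneMax (the number of one-bits).
best, value = sd_rls_m(sum, n=50)

Plain stagnation detection corresponds to always taking the r + 1 branch after an exhausted phase; the memory branch is what lets the search return directly to a gap size that paid off before.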