The standard adaptive pursuit technique (AP) favors a single operator at a time and cannot pursue multiple operators simultaneously. We generalize AP by allowing any target distribution over the operator selection probabilities to be pursued, and call the result the generalized adaptive pursuit algorithm (GAPA). We show that the probability matching and multi-armed bandit strategies can, with particular settings, be integrated into the GAPA framework. We propose and experimentally test two instances of GAPA: the multi-operator AP, which assumes there are several useful operators and pursues them all simultaneously, and the multi-layer AP, which is intended to scale the pursuit algorithm to a large set of operators. To test the proposed GAPA instances, we introduce the adaptive genetic Pareto local search (aGPLS), which selects genetic operators on-line to restart the Pareto local search. On a bi-objective quadratic assignment problem (bQAP) instance with a large number of facilities and high correlation, aGPLS is the best-performing of the tested algorithms.
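To make the pursuit mechanism concrete, below is a minimal Python sketch of a generalized adaptive pursuit update in the spirit described above: selection probabilities are pulled toward a target distribution derived from running quality estimates. The learning rates alpha and beta, the floor probability p_min, and the single-best target distribution (which recovers standard AP) are illustrative assumptions, not the paper's exact settings; a multi-operator variant would only change how the target distribution is built.

```python
# Sketch of a generalized adaptive pursuit (GAPA-style) operator selector.
# All parameter values are illustrative assumptions, not the paper's settings.
import random


class GeneralizedAdaptivePursuit:
    def __init__(self, num_ops, p_min=0.1, alpha=0.8, beta=0.8):
        self.num_ops = num_ops
        self.p_min = p_min                     # floor so no operator starves
        self.alpha = alpha                     # learning rate for quality estimates
        self.beta = beta                       # pursuit rate toward the target
        self.p = [1.0 / num_ops] * num_ops     # operator selection probabilities
        self.q = [1.0] * num_ops               # estimated operator qualities

    def select(self):
        # Sample an operator index according to the current probabilities.
        return random.choices(range(self.num_ops), weights=self.p)[0]

    def target_distribution(self):
        # Standard AP puts the maximal probability on the single best operator;
        # a multi-operator variant could instead spread the mass over the top-k
        # operators. Here we use the single-best target for simplicity.
        best = max(range(self.num_ops), key=lambda i: self.q[i])
        p_max = 1.0 - (self.num_ops - 1) * self.p_min
        return [p_max if i == best else self.p_min for i in range(self.num_ops)]

    def update(self, op, reward):
        # Recency-weighted update of the applied operator's quality estimate,
        # then pull every selection probability toward the target distribution.
        self.q[op] += self.alpha * (reward - self.q[op])
        target = self.target_distribution()
        for i in range(self.num_ops):
            self.p[i] += self.beta * (target[i] - self.p[i])
```

In use, the caller would pick an operator with `select()`, apply it (e.g., to restart a Pareto local search), measure a reward such as the improvement it produced, and feed that back through `update()`; because the update is a convex combination of two distributions, the probabilities always remain a valid distribution.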