Multi-objective Optimization with Unbounded Solution Sets

Machine learning often requires optimizing multiple, partially conflicting objectives. True multi-objective optimization (MOO) methods avoid the need to choose a weighting of the objectives a priori and provide insight into the tradeoffs between the objectives. We extend a state-of-the-art derivative-free Monte Carlo method for MOO, the MO-CMA-ES, to operate on an unbounded set of (non-dominated) candidate solutions. The resulting algorithm, UP-MO-CMA-ES, performed well in two recent benchmark comparisons of MOO methods.
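
To illustrate the core idea of maintaining an unbounded set of non-dominated solutions, the following is a minimal Python sketch of such an archive for a bi-objective minimization problem. It is not the UP-MO-CMA-ES implementation itself (which additionally adapts a search distribution per archive member); the class and function names are illustrative assumptions.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


class UnboundedArchive:
    """Keeps every mutually non-dominated (solution, objectives) pair;
    the archive grows without a fixed population-size limit."""

    def __init__(self):
        self.entries = []  # list of (solution, objective_vector) tuples

    def insert(self, solution, objectives):
        # Reject the candidate if some archived point dominates it.
        if any(dominates(obj, objectives) for _, obj in self.entries):
            return False
        # Otherwise remove all archived points the candidate dominates ...
        self.entries = [(s, obj) for s, obj in self.entries
                        if not dominates(objectives, obj)]
        # ... and add the candidate; the archive size is unbounded.
        self.entries.append((solution, objectives))
        return True


# Example usage (hypothetical objective values):
archive = UnboundedArchive()
archive.insert("x1", (1.0, 3.0))   # accepted
archive.insert("x2", (2.0, 2.0))   # accepted, mutually non-dominated with x1
archive.insert("x3", (1.5, 3.5))   # rejected: dominated by x1
```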
