Prediction-Guided Multi-Objective Reinforcement Learning for Continuous Robot Control

Many real-world control problems involve conflicting objectives, for which we desire a dense and high-quality set of control policies, each optimal for a different preference over the objectives (i.e., Pareto-optimal). While extensive research in multi-objective reinforcement learning (MORL) has been conducted to tackle such problems, multi-objective optimization for complex continuous robot control remains under-explored. In this work, we propose an efficient evolutionary learning algorithm that finds a Pareto set approximation for continuous robot control problems by extending a state-of-the-art RL algorithm and introducing a novel prediction model to guide the learning process. In addition to efficiently discovering individual policies on the Pareto front, we construct a continuous set of Pareto-optimal solutions through Pareto analysis and interpolation. Furthermore, we design seven multi-objective RL environments with continuous action spaces, forming the first benchmark platform for evaluating MORL algorithms on a variety of robot control problems. We evaluate prior methods on the proposed benchmark problems, and the experiments show that our approach finds a much denser and higher-quality set of Pareto policies than existing algorithms.
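For context on the notion of Pareto-optimal policies used above, the following is a minimal illustrative sketch (not the paper's implementation) of non-dominated filtering over candidate policies' objective vectors; the pareto_front helper and the example performance numbers are assumptions made purely for illustration.

```python
import numpy as np

def pareto_front(objectives: np.ndarray) -> np.ndarray:
    """Return a boolean mask of non-dominated rows, assuming all objectives are maximized.

    Row i is dominated if some other row is >= on every objective and strictly > on at least one.
    """
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        if not non_dominated[i]:
            continue
        others = objectives[non_dominated]
        dominates_i = np.all(others >= objectives[i], axis=1) & np.any(others > objectives[i], axis=1)
        if dominates_i.any():
            non_dominated[i] = False
    return non_dominated

# Hypothetical performance vectors of candidate policies,
# e.g., (forward speed, energy efficiency) for a locomotion task.
perf = np.array([
    [3.0, 1.0],
    [2.5, 2.0],
    [1.0, 3.0],
    [2.0, 1.5],  # dominated by [2.5, 2.0]
])
mask = pareto_front(perf)
print(perf[mask])  # the three non-dominated (Pareto-optimal) policies
```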
