Proximal Distilled Evolutionary Reinforcement Learning

Reinforcement Learning (RL) has achieved impressive performance in many complex environments thanks to its integration with Deep Neural Networks (DNNs). At the same time, Genetic Algorithms (GAs), often seen as a competing approach to RL, have had limited success in scaling up to the DNNs required to solve challenging tasks. Contrary to this dichotomous view, in the physical world evolution and learning are complementary processes that continuously interact. The recently proposed Evolutionary Reinforcement Learning (ERL) framework has demonstrated the mutual performance benefits of combining the two methods. However, ERL has not fully addressed the scalability problem of GAs. In this paper, we show that this problem is rooted in an unfortunate combination: a simple genetic encoding for DNNs paired with traditional biologically inspired variation operators. When applied to this encoding, the standard operators are destructive and cause catastrophic forgetting of the traits the networks have acquired. We propose a novel algorithm called Proximal Distilled Evolutionary Reinforcement Learning (PDERL), characterised by a hierarchical integration between evolution and learning. The main innovation of PDERL is the use of learning-based variation operators that compensate for the simplicity of the genetic representation. Unlike traditional operators, our proposals meet the functional requirements of variation operators when applied to directly encoded DNNs. We evaluate PDERL in five robot locomotion settings from the OpenAI Gym. Our method outperforms ERL, as well as two state-of-the-art RL algorithms, PPO and TD3, in all tested environments.
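To make the idea of learning-based variation concrete, the sketch below shows, in PyTorch, what a distillation-style crossover and a gradient-scaled ("proximal") mutation can look like for directly encoded policy networks. This is a minimal illustration under simplifying assumptions, not the paper's implementation: the function names, the single batch of states standing in for the parents' experience buffers, and the crude per-parameter sensitivity estimate are all illustrative choices.

```python
# Minimal sketch of learning-based variation operators for directly
# encoded policy networks (illustrative; not the paper's exact code).
import copy
import torch
import torch.nn.functional as F


def distillation_crossover(parent_a, parent_b, critic, states,
                           epochs=100, lr=1e-3):
    """Breed a child policy by imitation: at every state, imitate the
    action of whichever parent the critic rates more highly."""
    child = copy.deepcopy(parent_a)  # child starts from one parent's weights
    optimiser = torch.optim.Adam(child.parameters(), lr=lr)
    with torch.no_grad():
        acts_a, acts_b = parent_a(states), parent_b(states)
        prefer_a = critic(states, acts_a) >= critic(states, acts_b)
        targets = torch.where(prefer_a, acts_a, acts_b)
    for _ in range(epochs):
        loss = F.mse_loss(child(states), targets)  # behavioural cloning loss
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
    return child


def proximal_mutation(policy, states, sigma=0.1):
    """Perturb weights with Gaussian noise, scaled down where the policy's
    output is most sensitive, so the behaviour change stays bounded."""
    mutant = copy.deepcopy(policy)
    actions = mutant(states)
    params = list(mutant.parameters())
    # Crude sensitivity estimate: gradient magnitude of the summed
    # actions with respect to each parameter over the state batch.
    grads = torch.autograd.grad(actions.sum(), params)
    with torch.no_grad():
        for param, grad in zip(params, grads):
            sensitivity = grad.abs().clamp(min=1e-2)  # numerical guard
            param.add_(sigma * torch.randn_like(param) / sensitivity)
    return mutant
```

In use, `states` would be a batch sampled from the agents' recent experience; dividing the noise by the sensitivity is what keeps the offspring's behaviour close to the parent's, in contrast to naive Gaussian mutation, which perturbs all weights uniformly and can destroy learned traits.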
