Distributional Reinforcement Learning for Multi-Dimensional Reward Functions

A growing trend in value-based reinforcement learning (RL) is to capture more information in the value network than a scalar value function. One of the best-known methods in this branch is distributional RL, which models the return distribution instead of a scalar value. In another line of work, hybrid reward architectures (HRA) learn a source-specific value function for each reward source, which has also been shown to improve performance. To inherit the benefits of both distributional RL and hybrid reward architectures, we introduce Multi-Dimensional Distributional DQN (MD3QN), which extends distributional RL to model the joint return distribution over multiple reward sources. As a by-product of joint distribution modeling, MD3QN captures not only the randomness in the return from each reward source but also the rich correlations between the returns of different sources. We prove convergence of the joint distributional Bellman operator and build our practical algorithm by minimizing the Maximum Mean Discrepancy (MMD) between the joint return distribution and its Bellman target. In experiments, our method accurately models the joint return distribution in environments with richly correlated reward functions, and it outperforms previous RL methods that exploit multi-dimensional reward functions in the control setting.
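
To make the MMD objective concrete, below is a minimal sketch, not the paper's implementation: it assumes the joint return distribution is approximated by N particles in R^K (K reward sources) and compares them against particles of the distributional Bellman target with a Gaussian kernel. All names, shapes, and the bandwidth choice are illustrative assumptions.

```python
import torch

def gaussian_kernel(x, y, bandwidth=1.0):
    # x: (N, K) particles, y: (M, K) particles -> (N, M) kernel matrix.
    sq_dists = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-sq_dists / (2.0 * bandwidth ** 2))

def squared_mmd(pred_particles, target_particles, bandwidth=1.0):
    # Biased estimate of squared MMD between two particle sets in R^K.
    k_pp = gaussian_kernel(pred_particles, pred_particles, bandwidth)
    k_tt = gaussian_kernel(target_particles, target_particles, bandwidth)
    k_pt = gaussian_kernel(pred_particles, target_particles, bandwidth)
    return k_pp.mean() + k_tt.mean() - 2.0 * k_pt.mean()

if __name__ == "__main__":
    # Hypothetical usage: predicted joint-return particles for (s, a) and a
    # Bellman target r + gamma * Z_target(s', a'), both shaped (N, K).
    torch.manual_seed(0)
    pred = torch.randn(32, 3, requires_grad=True)    # N=32 particles, K=3 reward sources
    rewards = torch.randn(1, 3)                      # multi-dimensional reward r
    target_next = torch.randn(32, 3)                 # particles of Z_target(s', a')
    bellman_target = rewards + 0.99 * target_next    # joint distributional Bellman target
    loss = squared_mmd(pred, bellman_target.detach())
    loss.backward()
    print(loss.item())
```

In practice one would draw `pred` from the online network, `target_next` from a target network, and backpropagate the MMD loss only through the online particles; those training details follow standard distributional DQN practice rather than anything shown here.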
