Particle Value Functions

The policy gradients of the expected return objective can react slowly to rare rewards. Yet, in some cases agents may wish to emphasize the low or high returns regardless of their probability. Borrowing from the economics and control literature, we review the risk-sensitive value function that arises from an exponential utility and illustrate its effects on an example. This risk-sensitive value function is not always applicable to reinforcement learning problems, so we introduce the particle value function defined by a particle filter over the distributions of an agent's experience, which bounds the risk-sensitive one. We illustrate the benefit of the policy gradients of this objective in Cliffworld.
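
As context for the abstract, here is a minimal sketch (notation assumed, not taken from this page) of the two objects it contrasts. The risk-sensitive value function induced by an exponential utility with risk parameter \beta is commonly written

    V_\beta^\pi(s) = (1/\beta) \log E_\pi[ \exp(\beta R) \mid s_0 = s ],    R = \sum_t r_t,

which recovers the expected return as \beta \to 0 and emphasizes high returns for \beta > 0 and low returns for \beta < 0. A K-particle analogue, shown here in its simplest form without resampling, replaces the expectation inside the logarithm with a Monte Carlo average over K i.i.d. returns R_1, ..., R_K sampled under \pi:

    V^{\pi,K}(s) = E[ (1/\beta) \log ( (1/K) \sum_{k=1}^K \exp(\beta R_k) ) ].

By Jensen's inequality this quantity bounds the risk-sensitive value from below when \beta > 0 and from above when \beta < 0, and under standard conditions it approaches V_\beta^\pi as K grows; the particle value function of the paper builds on this idea by defining the estimate through a particle filter over the agent's experience.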
