Predictive representations for policy gradient in POMDPs

We consider the problem of estimating the policy gradient in Partially Observable Markov Decision Processes (POMDPs) for a special class of policies based on Predictive State Representations (PSRs). We compare PSR policies to Finite-State Controllers (FSCs), the standard policy model for policy-gradient methods in POMDPs. We present a general actor-critic algorithm for learning both FSCs and PSR policies. The critic computes a value function whose variables are the parameters of the policy; these parameters are gradually updated to maximize the value function. We show that the value function is polynomial in the parameters for both FSCs and PSR policies, with a potentially smaller degree in the case of PSR policies. The value function of a PSR policy can therefore have fewer local optima than that of the equivalent FSC, and the gradient algorithm is consequently more likely to converge to a globally optimal solution.
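The actor-critic scheme described above can be sketched as gradient ascent on a critic-supplied value function of the policy parameters. The snippet below is a hypothetical illustration, not the paper's algorithm: `V` stands in for a learned low-degree polynomial critic, and the actor climbs its gradient.

```python
import numpy as np

def V(theta):
    """Hypothetical stand-in for the critic: a low-degree polynomial
    value function of the policy parameters theta."""
    return -(theta[0] - 0.5) ** 2 - (theta[1] + 0.2) ** 2 + 1.0

def numerical_grad(f, theta, eps=1e-6):
    """Central-difference gradient estimate of f at theta."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return g

theta = np.zeros(2)              # initial policy parameters
for _ in range(200):             # actor update loop
    theta += 0.1 * numerical_grad(V, theta)
```

Because this toy critic has degree 2 (a single optimum), gradient ascent reaches the global maximizer at (0.5, -0.2); with a higher-degree polynomial, as for an FSC, ascent could instead stall in a local optimum.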
