Active Policy Learning for Robot Planning and Exploration under Uncertainty

This paper proposes a simulation-based active policy learning algorithm for finite-horizon, partially observed sequential decision processes. The algorithm is tested in the domain of robot navigation and exploration under uncertainty. In this setting, the expected cost, which must be minimized, is a function of the belief state (filtering distribution). The filtering distribution is in turn nonlinear and subject to discontinuities, which arise because of constraints in the robot motion and control models. As a result, the expected cost is non-differentiable and very expensive to evaluate by simulation. The new algorithm overcomes the first difficulty and reduces the number of required simulations as follows. First, it assumes that previous simulations have been carried out, returning values of the expected cost for the corresponding policy parameters. Second, it fits a Gaussian process (GP) regression model to these values, so as to approximate the expected cost as a function of the policy parameters. Third, it uses the GP predictive mean and variance to construct a statistical measure that determines which policy parameters should be used in the next simulation. The process is then repeated using the new parameters and the newly gathered expected-cost observation. Since the objective is to find the policy parameters that minimize the expected cost, this iterative active learning approach effectively trades off exploration (in regions where the GP variance is large) against exploitation (in regions where the GP mean is low). In our experiments, a robot uses the proposed algorithm to plan an optimal path for accomplishing a series of tasks, while maximizing the information about its pose and map estimates. These estimates are obtained with a standard filter for simultaneous localization and mapping. Upon gathering new observations, the robot updates its state estimates and replans its path in the spirit of open-loop feedback control.
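To make the iterative loop concrete, below is a minimal sketch in Python of this kind of GP-based policy search, using expected improvement as one standard choice of statistical measure built from the GP predictive mean and variance. The simulate_expected_cost function is a hypothetical stand-in for the expensive Monte Carlo simulation of the expected cost; it is not the cost model used in the paper.

# Minimal sketch: GP surrogate + expected improvement for policy search.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def simulate_expected_cost(theta):
    # Hypothetical placeholder for a noisy Monte Carlo estimate of the
    # expected cost J(theta) of the policy parameterized by theta.
    return float(np.sum(np.sin(3.0 * theta) + theta**2)
                 + 0.05 * rng.standard_normal())

def expected_improvement(gp, X_cand, best_y, xi=0.01):
    # Statistical measure from the GP mean and variance: large where the
    # predicted cost is low (exploitation) or the predictive uncertainty
    # is high (exploration).
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu - xi) / sigma
    return (best_y - mu - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial design: a few simulations at random policy parameters.
dim, bounds = 2, (-2.0, 2.0)
X = rng.uniform(*bounds, size=(5, dim))
y = np.array([simulate_expected_cost(t) for t in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(25):
    gp.fit(X, y)                                      # surrogate of J(theta)
    X_cand = rng.uniform(*bounds, size=(2000, dim))   # cheap candidate set
    ei = expected_improvement(gp, X_cand, y.min())
    theta_next = X_cand[np.argmax(ei)]                # next parameters to try
    X = np.vstack([X, theta_next])
    y = np.append(y, simulate_expected_cost(theta_next))

print("best policy parameters:", X[np.argmin(y)], "with cost:", y.min())

A random candidate set is used above for brevity; in practice the statistical measure is typically maximized with a deterministic global optimizer (e.g., DIRECT) before committing to each new expensive simulation.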
