Estimation and control using sampling-based Bayesian reinforcement learning

Real-world autonomous systems operate under uncertainty about both their pose and dynamics. Such systems must perform estimation and control simultaneously to remain robust to changing dynamics and modeling errors. However, information-gathering actions often conflict with the actions that best advance the control objective, requiring a trade-off between exploration and exploitation. The problem setting considered here is discrete-time nonlinear systems with process noise, input constraints, and parameter uncertainty. This article frames the problem as a Bayes-adaptive Markov decision process and solves it online using Monte Carlo tree search with an unscented Kalman filter to account for process noise and parameter uncertainty. The method is compared with certainty-equivalent model predictive control and a tree-search method that approximates the QMDP solution, providing insight into when information gathering is useful. Discrete-time simulations characterize performance over a range of process noise levels and bounds on the unknown parameters. An offline optimization method is used to select the Monte Carlo tree search parameters without hand-tuning. In lieu of recursive feasibility guarantees, a probabilistic bounding heuristic is proposed that increases the probability of keeping the state within a desired region.
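
To make the approach concrete, the sketch below shows one way the Bayes-adaptive formulation can be realized: the planner's state carries both the physical state and a belief over the unknown dynamics parameter, and the tree search samples from that belief during simulation. This is an illustration under stated assumptions, not the paper's implementation: it uses a hypothetical scalar system x' = x + θu + w with an unknown gain θ, replaces the unscented Kalman filter with a scalar Kalman update (exact here, since the displacement is linear in θ), and uses a simple open-loop UCT variant with a fixed discrete action set. All constants and names are illustrative.

```python
import math
import random

# Toy Bayes-adaptive problem (illustration only; all constants are assumptions):
#   dynamics  x' = x + theta * u + w,  w ~ N(0, Q_NOISE), gain theta unknown
#   belief    theta ~ N(mu, var), carried alongside x in the search state
GAMMA = 0.95                 # discount factor
Q_NOISE = 0.05               # process noise variance
ACTIONS = [-1.0, 0.0, 1.0]   # constrained input set

def step(x, mu, var, u):
    """Sample a transition and update the Gaussian belief over theta.
    A scalar Kalman update stands in for the UKF: the displacement
    y = x' - x = theta*u + w is linear in theta, so the update is exact."""
    theta = random.gauss(mu, math.sqrt(var))
    x_next = x + theta * u + random.gauss(0.0, math.sqrt(Q_NOISE))
    if abs(u) > 1e-9:
        s = u * u * var + Q_NOISE            # innovation variance
        k = var * u / s                      # Kalman gain
        mu = mu + k * ((x_next - x) - u * mu)
        var = (1.0 - k * u) * var
    return x_next, mu, var, -(x_next ** 2)   # quadratic cost: regulate to origin

class Node:
    """Search node keyed by action history (open-loop UCT simplification)."""
    def __init__(self):
        self.n = 0
        self.na = {a: 0 for a in ACTIONS}    # per-action visit counts
        self.q = {a: 0.0 for a in ACTIONS}   # per-action mean returns
        self.children = {}

def ucb_action(node, c=2.0):
    """UCB1 selection; try each action once before exploiting."""
    for a in ACTIONS:
        if node.na[a] == 0:
            return a
    return max(ACTIONS,
               key=lambda a: node.q[a] + c * math.sqrt(math.log(node.n) / node.na[a]))

def rollout(x, mu, var, depth):
    """Random-policy rollout used to evaluate newly expanded nodes."""
    total, disc = 0.0, 1.0
    for _ in range(depth):
        x, mu, var, r = step(x, mu, var, random.choice(ACTIONS))
        total += disc * r
        disc *= GAMMA
    return total

def simulate(node, x, mu, var, depth):
    """One MCTS iteration: select by UCB, expand, back up the return."""
    if depth == 0:
        return 0.0
    a = ucb_action(node)
    x2, mu2, var2, r = step(x, mu, var, a)
    if a in node.children:
        ret = r + GAMMA * simulate(node.children[a], x2, mu2, var2, depth - 1)
    else:
        node.children[a] = Node()
        ret = r + GAMMA * rollout(x2, mu2, var2, depth - 1)
    node.n += 1
    node.na[a] += 1
    node.q[a] += (ret - node.q[a]) / node.na[a]
    return ret

def plan(x, mu, var, iters=2000, depth=10):
    """Replan from the current belief state and return the best root action."""
    root = Node()
    for _ in range(iters):
        simulate(root, x, mu, var, depth)
    return max(ACTIONS, key=lambda a: root.q[a])

if __name__ == "__main__":
    x, mu, var = 2.0, 0.0, 1.0   # start far from origin, vague prior on theta
    for t in range(15):
        u = plan(x, mu, var)
        x, mu, var, _ = step(x, mu, var, u)
        print(f"t={t:2d}  u={u:+.0f}  x={x:+.3f}  mu_theta={mu:+.3f}  var={var:.3f}")
```

In this toy setting the dual-control effect the abstract describes is visible: under the prior θ ~ N(0, 1), a certainty-equivalent controller predicts that no input has any effect (the expected gain is zero), whereas the tree search, by sampling θ from the belief inside its simulations, can credit probing inputs that shrink the belief variance and improve control on later steps.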
