Minimum uncertainty robot path planning using a POMDP approach

We propose a new minimum-uncertainty planning technique for mobile robots that localize using beacons. We model the system as a partially observable Markov decision process (POMDP) and use a sampling-based method in the belief space (the space of posterior probability density functions over the state space) to find a belief-feedback policy. This approach lets us analyze the evolution of the belief more accurately, which can yield improved policies in cases where common approximations (such as Gaussian belief models) do not capture the true behavior of the system. We demonstrate that our method performs comparably to, and in certain cases better than, current methods in the literature.
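
To make the ingredients concrete, the sketch below shows one simple way to realize belief-space minimum-uncertainty action selection; it is an illustration under stated assumptions, not the authors' algorithm. It assumes a hypothetical 2-D robot with noisy odometry and range measurements to known beacons, represents the belief as a particle set, and replaces a full sampled policy search with a greedy one-step lookahead that picks the motion minimizing a Monte Carlo estimate of the expected posterior covariance trace. All names and parameters (BEACONS, MOTION_NOISE, expected_uncertainty, and so on) are invented for this sketch.

```python
# Illustrative sketch only: particle-based belief update for a robot
# localizing with range beacons, plus greedy minimum-uncertainty action
# selection. All models and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

BEACONS = np.array([[0.0, 0.0], [10.0, 0.0]])  # known beacon positions
MOTION_NOISE = 0.1   # std. dev. of additive odometry noise
RANGE_NOISE = 0.5    # std. dev. of beacon range measurements

def predict(particles, action):
    """Propagate every particle through the noisy motion model."""
    return particles + action + rng.normal(0.0, MOTION_NOISE, particles.shape)

def update(particles, z):
    """Reweight and resample particles given range measurements z."""
    dists = np.linalg.norm(particles[:, None, :] - BEACONS[None, :, :], axis=2)
    w = np.exp(-0.5 * np.sum(((dists - z) / RANGE_NOISE) ** 2, axis=1))
    w = (w + 1e-300) / (w + 1e-300).sum()   # guard against all-zero weights
    return particles[rng.choice(len(particles), len(particles), p=w)]

def expected_uncertainty(particles, action, n_sim=20):
    """Monte Carlo estimate of the expected posterior covariance trace
    after executing `action`, simulating measurements from sampled states."""
    total = 0.0
    for _ in range(n_sim):
        pred = predict(particles, action)
        true_state = pred[rng.integers(len(pred))]    # sample a "true" state
        z = (np.linalg.norm(true_state - BEACONS, axis=1)
             + rng.normal(0.0, RANGE_NOISE, len(BEACONS)))
        post = update(pred, z)
        total += np.trace(np.cov(post, rowvar=False))
    return total / n_sim

def greedy_policy(particles, candidate_actions):
    """Belief-feedback rule: choose the action with the lowest expected
    posterior uncertainty (a one-step-lookahead simplification)."""
    return min(candidate_actions,
               key=lambda a: expected_uncertainty(particles, np.asarray(a)))

if __name__ == "__main__":
    belief = rng.normal([5.0, 5.0], 2.0, (500, 2))   # initial belief samples
    actions = [(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]
    print("chosen action:", greedy_policy(belief, actions))
```

The trace of the posterior covariance is only one choice of uncertainty measure; belief entropy or a task-dependent cost could be substituted inside expected_uncertainty without changing the overall structure.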
