Non-Gaussian SLAP: Simultaneous Localization and Planning Under Non-Gaussian Uncertainty in Static and Dynamic Environments

Simultaneous Localization and Planning (SLAP) under process and measurement uncertainties is a challenging problem. In its most general form it requires solving a stochastic control problem modeled as a Partially Observable Markov Decision Process (POMDP). For a convex environment, we propose an optimization-based open-loop optimal control problem coupled with a receding horizon control strategy to plan high-quality trajectories along which the localization uncertainty is reduced while the system reaches a goal state with minimum control effort. In a static environment with non-convex state constraints, the optimization is modified with barrier functions to obtain collision-free paths while preserving the original objectives. By initializing the optimization with trajectories in different homotopy classes and comparing the resulting costs, we improve the quality of the solution in the presence of action and measurement uncertainties. In dynamic environments with time-varying constraints, such as moving obstacles or forbidden regions, the approach is extended to find collision-free trajectories. Throughout, the underlying spaces are continuous and beliefs are non-Gaussian. In the absence of obstacles the optimization is globally convex, while in the presence of obstacles it is only locally convex. We demonstrate the performance of the method in several scenarios.
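
To make the receding-horizon idea concrete, the sketch below shows what a single planning step might look like for a 2D point robot whose belief is represented by particles (a non-Gaussian representation), with log-barrier penalties keeping the planned trajectory away from circular obstacles. This is a minimal illustration under assumed single-integrator dynamics, assumed cost weights, and a hypothetical `plan_step` function; it is not the paper's implementation.

```python
# Hypothetical sketch: one receding-horizon planning step over a particle belief.
# Dynamics model, cost weights, and obstacle shapes are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def plan_step(particles, goal, obstacles, horizon=8, dt=0.1,
              effort_w=1.0, goal_w=10.0, barrier_w=0.5):
    """Optimize an open-loop control sequence against the current particle
    belief and return only the first control (receding-horizon execution)."""

    def cost(u_flat):
        u = u_flat.reshape(horizon, 2)
        x = particles.copy()                       # propagate every belief sample
        c = effort_w * np.sum(u ** 2)              # control-effort term
        for k in range(horizon):
            x = x + dt * u[k]                      # single-integrator dynamics (assumed)
            for center, radius in obstacles:
                d = np.linalg.norm(x - center, axis=1) - radius
                safe = np.maximum(d, 1e-3)
                # log-barrier: expected penalty for approaching the obstacle
                c += barrier_w * np.mean(np.where(d > 1e-3, -np.log(safe), 1e6))
        c += goal_w * np.mean(np.sum((x - goal) ** 2, axis=1))   # terminal goal term
        return c

    res = minimize(cost, np.zeros(2 * horizon), method="Powell")
    return res.x.reshape(horizon, 2)[0]

# Example: a 500-particle belief around the origin, one circular obstacle
# lying on the straight line to the goal so the barrier term is active.
rng = np.random.default_rng(0)
belief = rng.normal(loc=[0.0, 0.0], scale=0.2, size=(500, 2))
u0 = plan_step(belief, goal=np.array([2.0, 2.0]),
               obstacles=[(np.array([1.0, 1.0]), 0.3)])
print("first control to apply:", u0)
```

In a full receding-horizon loop, this step would be repeated after each control is applied and the particle belief is updated with the new measurement; restarting the optimization from seed trajectories in different homotopy classes and keeping the cheapest result corresponds to the initialization strategy described above.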
