Towards high-speed autonomous navigation of unknown environments

In this paper, we summarize recent research enabling high-speed navigation of unknown environments by dynamic robots that perceive the world through onboard sensors. Many existing solutions guarantee safety by conservatively assuming that any unknown portion of the map may contain an obstacle, and therefore constrain planned motions to lie entirely within known free space. We observe that this safety constraint can significantly limit performance, and that faster navigation is possible if the planner reasons probabilistically about collisions with unobserved obstacles. Our overall approach is to use machine learning to approximate the expected cost of collision from the current state of the map and the planned trajectory. Our contribution is to demonstrate fast yet safe planning using a learned function that predicts future collision probabilities.
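As a rough illustration of the planning idea described above, the sketch below scores candidate trajectories by expected cost, combining each trajectory's nominal cost with a collision penalty weighted by a learned collision-probability estimate. This is a minimal sketch under assumed interfaces: the feature representation, the `p_collision` model interface, and all function and parameter names are hypothetical, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): select among candidate
# trajectories by expected cost, where the collision probability for each
# candidate comes from a learned estimator supplied by the caller.
from dataclasses import dataclass
from typing import Callable, Sequence
import numpy as np


@dataclass
class Candidate:
    trajectory: np.ndarray   # (T, 2) planned positions
    nominal_cost: float      # e.g., time to reach the goal along this trajectory


def expected_cost(cand: Candidate,
                  map_features: np.ndarray,
                  p_collision: Callable[[np.ndarray, np.ndarray], float],
                  collision_penalty: float) -> float:
    """Expected cost = P(collision) * penalty + (1 - P(collision)) * nominal cost."""
    p = float(p_collision(map_features, cand.trajectory))  # learned estimate in [0, 1]
    return p * collision_penalty + (1.0 - p) * cand.nominal_cost


def select_trajectory(candidates: Sequence[Candidate],
                      map_features: np.ndarray,
                      p_collision: Callable[[np.ndarray, np.ndarray], float],
                      collision_penalty: float = 100.0) -> Candidate:
    """Return the candidate with the lowest expected cost under the learned model."""
    return min(candidates,
               key=lambda c: expected_cost(c, map_features, p_collision,
                                           collision_penalty))
```

In this framing, a planner that constrains motions to known free space corresponds to treating any nonzero collision probability as infinite cost; replacing that hard constraint with the learned probability lets the planner trade a small risk of collision for substantially higher speed.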
