Learning to Plan for Visibility in Navigation of Unknown Environments

For robots navigating unknown environments, naively following the shortest path toward the goal often yields poor visibility of free space, limiting navigation speed or even preventing forward progress altogether. In this work, we train a guidance function that gives the robot greater visibility into unknown parts of the environment. Unlike exploration techniques that aim to observe as much of the map as possible for its own sake, we reason about the value of future observations directly in terms of expected cost-to-goal. We show significant improvements in navigation speed and success rate for narrow field-of-view sensors such as popular RGB-D cameras. However, contrary to our expectations, our strategy makes little difference for sensors with fields of view greater than 80°, and we discuss why the naive strategy is hard to beat.
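
As a minimal sketch of the idea (not the paper's implementation), suppose the planner repeatedly selects a frontier subgoal by minimizing the traversal cost through known free space plus an estimate of the remaining cost-to-goal. The naive baseline estimates that remainder with straight-line distance through unknown space; the learned guidance function replaces it with a prediction of expected cost-to-goal that reflects what the sensor is likely to reveal from that subgoal. The names below (`choose_subgoal`, `cost_to_go_model`) and the hard-coded stub predictor are illustrative assumptions.

```python
import math

def choose_subgoal(candidates, cost_to_reach, goal, cost_to_go_model, use_guidance=True):
    """Pick the next subgoal on the frontier of known space.

    Each candidate is scored by the cost of traveling to it through known
    free space plus an estimate of the remaining cost-to-goal:
      - naive:  straight-line distance through unknown space to the goal
      - guided: a learned predictor of expected cost-to-goal that accounts
                for what the robot is likely to observe from that subgoal
    """
    def naive_cost_to_go(p):
        return math.dist(p, goal)

    best, best_score = None, float("inf")
    for p in candidates:
        remaining = cost_to_go_model(p) if use_guidance else naive_cost_to_go(p)
        score = cost_to_reach[p] + remaining
        if score < best_score:
            best, best_score = p, score
    return best


if __name__ == "__main__":
    # Toy example: two frontier cells. The one pointing straight at the goal
    # leads into a dead end; the stub model (standing in for a trained
    # guidance function) predicts that the detour exposes open space and is
    # cheaper in expectation.
    goal = (10.0, 0.0)
    candidates = [(4.0, 0.0), (3.0, 3.0)]
    cost_to_reach = {(4.0, 0.0): 4.0, (3.0, 3.0): 4.2}
    stub_model = lambda p: 20.0 if p == (4.0, 0.0) else 8.5

    print("naive choice :", choose_subgoal(candidates, cost_to_reach, goal,
                                           stub_model, use_guidance=False))
    print("guided choice:", choose_subgoal(candidates, cost_to_reach, goal,
                                           stub_model, use_guidance=True))
```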
