Visually Adaptive Geometric Navigation

While classical autonomous navigation systems can move robots from one point to another in a collision-free manner thanks to explicit geometric modeling, recent approaches to visual navigation allow robots to also consider semantic information. However, most visual navigation systems do not explicitly reason about geometry, which can lead to collisions. This paper presents Visually Adaptive Geometric Navigation (VAGN), which marries the two schools of navigation to produce a system that adapts to the visual appearance of the environment while maintaining collision-free behavior. Employing a classical geometric navigation system to ensure geometric safety and efficiency, VAGN consults visual perception to dynamically adjust the classical planner's hyperparameters (e.g., maximum speed, inflation radius), enabling navigational behaviors not possible with purely geometric reasoning. VAGN is implemented on two physical ground robots with different action spaces, navigation systems, and parameter sets. Compared to other navigation baselines using visual and/or geometric input, VAGN demonstrates superior performance both on a test course with rich semantic and geometric features and in a real-world deployment.
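The core idea above — letting visual perception modulate a classical planner's hyperparameters within safe bounds — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter names follow common ROS planner conventions (`max_vel_x`, `inflation_radius`), while the bounds, the scalar `traversability` score, and the linear mapping are all hypothetical assumptions.

```python
def clamp(value, lo, hi):
    """Keep a predicted parameter inside its safe range."""
    return max(lo, min(hi, value))

# Hypothetical safe ranges for two common planner hyperparameters.
PARAM_BOUNDS = {
    "max_vel_x": (0.2, 2.0),         # m/s
    "inflation_radius": (0.05, 0.5),  # m
}

def adapt_params(visual_features):
    """Map a visual context score in [0, 1] (assumed to come from a
    learned perception module) to planner hyperparameters.  A higher
    score means the scene looks safe to traverse quickly, so speed
    goes up and the obstacle inflation radius shrinks."""
    score = clamp(visual_features.get("traversability", 0.0), 0.0, 1.0)
    return {
        "max_vel_x": clamp(0.2 + 1.8 * score,
                           *PARAM_BOUNDS["max_vel_x"]),
        "inflation_radius": clamp(0.5 - 0.45 * score,
                                  *PARAM_BOUNDS["inflation_radius"]),
    }

# A scene rated highly traversable yields a fast, tight configuration;
# an uncertain scene falls back to slow, conservative parameters.
print(adapt_params({"traversability": 1.0}))
print(adapt_params({"traversability": 0.0}))
```

Because the predictions are clamped to bounds that the underlying geometric planner is known to handle safely, the adaptation layer can change *how* the robot navigates without giving up the planner's collision-avoidance guarantees — the division of labor the abstract describes.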
