To Go or Not To Go? A Near Unsupervised Learning Approach For Robot Navigation

As robots navigate dynamic environments, it is important that they can decide whether or not they can go through a space. This capability helps them avoid injury or serious damage, e.g., from running into people or obstacles, getting stuck, or falling off an edge. To this end, we propose an unsupervised and a near-unsupervised method, both based on Generative Adversarial Networks (GANs), that classify scenarios as traversable or not from visual data. Our methods are inspired by the recent success of data-driven approaches to computer vision and anomaly detection, and they reduce the need for large amounts of negative examples at training time. Collecting negative data showing that a robot should not go through a space is typically hard and dangerous because it involves collisions, whereas collecting positive data can be automated and done safely from the robot's own traveling experience. We verify the generality and effectiveness of the proposed approach on a test dataset collected with a mobile robot in a previously unseen environment. Furthermore, we show that our method can be used to build costmaps (which we call "GoNoGo" costmaps) for robot path planning using visual data only.
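The abstract does not fix the exact scoring rule, but one common way to realize this positive-data-only idea is GAN-based anomaly detection: train a generator only on images of spaces the robot actually traversed, then flag inputs it cannot reconstruct well. Below is a minimal PyTorch sketch under that assumption; the network sizes, the latent-search settings, and the threshold tau are illustrative placeholders, not the paper's values.

import torch
import torch.nn as nn

class Generator(nn.Module):
    # DCGAN-style decoder: latent code z -> 64x64 RGB image in [-1, 1].
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def anomaly_score(image, gen, z_dim=100, steps=200, lr=0.05):
    # Search for the latent code whose generated image best matches `image`.
    # The generator was trained only on traversable scenes, so untraversable
    # scenes should reconstruct poorly and receive a high residual.
    gen.eval()
    for p in gen.parameters():
        p.requires_grad_(False)
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean(torch.abs(image - gen(z)))  # L1 reconstruction residual
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.mean(torch.abs(image - gen(z))).item()

def go_no_go(image, gen, tau=0.15):
    # "Go" if the scene resembles the traversable training data, i.e. the
    # anomaly score falls below the (hypothetical) threshold `tau`.
    return anomaly_score(image, gen) < tau

# Usage sketch: `image` is a 1x3x64x64 tensor scaled to [-1, 1]; `gen` is
# assumed pretrained on positive (traversable) images only.
# decision = go_no_go(image, pretrained_generator)

At planning time, such scores computed for image regions or candidate headings could be assembled into a GoNoGo-style costmap; this sketch only illustrates the per-image go/no-go decision.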
