Leveraging Deep Learning Based Object Detection for Localising Autonomous Personal Mobility Devices in Sparse Maps

This paper presents a low-cost, resource-efficient localisation approach for autonomous driving in GPS-denied environments. One of the most challenging aspects of traditional landmark-based localisation for autonomous driving is the need to detect landmarks accurately and frequently. We leverage the state-of-the-art deep learning framework YOLO (You Only Look Once) to carry out this perceptual task using data from monocular cameras. Bearing-only information extracted from the YOLO detections is fused with vehicle odometry in an Extended Kalman Filter (EKF) to estimate the location of the autonomous vehicle together with its associated uncertainty. This approach achieves real-time sub-metre localisation accuracy using only a sparse map of an outdoor urban environment. The broader motivation of this research is to improve the safety and reliability of Personal Mobility Devices (PMDs) through autonomous technology; accordingly, all the ideas presented here are demonstrated on an instrumented mobility scooter platform.
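The paper itself does not provide code, but the pipeline it describes (odometry prediction plus bearing-only corrections against a sparse landmark map) follows the standard EKF recipe. The sketch below is a minimal Python illustration of that recipe, not the authors' implementation: the unicycle motion model, function names, and noise parameters are assumptions, and the bearing measurement stands in for an angle derived from a YOLO bounding box and the camera intrinsics.

```python
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose x = [px, py, theta] with a simple unicycle odometry model (assumed)."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       wrap(th + w * dt)])
    # Jacobian of the motion model with respect to the pose
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_bearing_update(x, P, z_bearing, landmark, r_var):
    """Correct the pose with one bearing-only observation of a mapped landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = wrap(np.arctan2(dy, dx) - x[2])        # expected bearing in the vehicle frame
    H = np.array([[dy / q, -dx / q, -1.0]])        # Jacobian of the bearing w.r.t. the pose
    S = H @ P @ H.T + r_var                        # innovation covariance (1x1)
    K = P @ H.T / S                                # Kalman gain
    innov = wrap(z_bearing - z_hat)
    x_new = x + K.flatten() * innov
    x_new[2] = wrap(x_new[2])
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

In this formulation each landmark contributes only an angle, so a single detection constrains the pose weakly; the filter relies on odometry between detections and on observing several mapped landmarks over time to keep the position uncertainty below the sub-metre level reported in the paper.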
