Localising PMDs through CNN Based Perception of Urban Streets

The main contribution of this paper is a novel Extended Kalman Filter (EKF) based localisation scheme that fuses two complementary approaches to outdoor vision-based localisation. The EKF is aided by a front end consisting of two Convolutional Neural Networks (CNNs) that extract the necessary perceptual information from camera images. The first approach uses a CNN to extract information corresponding to artefacts such as curbs, lane markings, and manhole covers, and localises against a vector distance transform representation of a binary image of these ground surface boundaries. The second approach uses a CNN to detect common environmental landmarks such as tree trunks and light poles, which are represented as point features on a sparse map. Using CNNs to obtain higher-level information about the environment allows this framework to avoid the typical pitfalls of common vision-based approaches that rely on low-level hand-crafted features for localisation. The EKF framework makes it possible to deal with the false positives and missed detections that are inevitable in a practical CNN, and produces a location estimate together with its associated uncertainty. Experiments using a Personal Mobility Device (PMD) driven on typical suburban streets demonstrate the effectiveness of the proposed localiser.
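To make the fusion idea concrete, the sketch below shows a minimal 2-D pose EKF that combines the two kinds of updates described above: a range/bearing update against a point landmark (e.g. a detected pole or tree trunk) and an implicit zero-distance update of a detected ground-boundary point against a precomputed distance transform of the boundary map. This is an illustrative sketch only, not the paper's implementation; the state parameterisation, noise values, and the `dist_fn`/`grad_fn` map interface are assumptions introduced here for clarity.

```python
# Minimal illustrative 2-D pose EKF (state: [x, y, theta]).
# All symbols and interfaces below are assumptions for illustration.
import numpy as np

def wrap(a):
    """Wrap an angle to [-pi, pi)."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

class PoseEKF:
    def __init__(self, x0, P0, Q):
        self.x = np.asarray(x0, dtype=float)   # pose estimate [x, y, theta]
        self.P = np.asarray(P0, dtype=float)   # 3x3 covariance
        self.Q = np.asarray(Q, dtype=float)    # per-step process noise

    def predict(self, v, w, dt):
        """Propagate the pose with a simple unicycle odometry model."""
        x, y, th = self.x
        self.x = np.array([x + v * dt * np.cos(th),
                           y + v * dt * np.sin(th),
                           wrap(th + w * dt)])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update_landmark(self, z, lm, R):
        """Range/bearing update against a known point landmark lm = [lx, ly]."""
        x, y, th = self.x
        dx, dy = lm[0] - x, lm[1] - y
        q = dx * dx + dy * dy
        r = np.sqrt(q)
        h = np.array([r, wrap(np.arctan2(dy, dx) - th)])
        H = np.array([[-dx / r, -dy / r,  0.0],
                      [ dy / q, -dx / q, -1.0]])
        nu = z - h
        nu[1] = wrap(nu[1])
        self._kalman_step(nu, H, R)

    def update_boundary_point(self, p_body, dist_fn, grad_fn, R):
        """Implicit update: a boundary point detected in the body frame should
        lie on the mapped boundary, i.e. its distance-field value should be 0."""
        x, y, th = self.x
        c, s = np.cos(th), np.sin(th)
        # Transform the detected point into the map frame.
        pm = np.array([x + c * p_body[0] - s * p_body[1],
                       y + s * p_body[0] + c * p_body[1]])
        d = dist_fn(pm)      # distance to nearest boundary (assumed map lookup)
        g = grad_fn(pm)      # 2-vector gradient of the distance field at pm
        # Chain rule: d(distance)/d(pose) = grad(distance) . d(pm)/d(pose)
        dpm_dpose = np.array([[1.0, 0.0, -s * p_body[0] - c * p_body[1]],
                              [0.0, 1.0,  c * p_body[0] - s * p_body[1]]])
        H = g.reshape(1, 2) @ dpm_dpose
        nu = np.array([0.0 - d])     # the "measurement" is zero distance
        self._kalman_step(nu, H, np.atleast_2d(R))

    def _kalman_step(self, nu, H, R):
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ nu
        self.x[2] = wrap(self.x[2])
        self.P = (np.eye(3) - K @ H) @ self.P
```

In practice, each innovation would also be gated (e.g. with a chi-squared test on `nu` against `S`) before the update is applied, which is one way a filter of this form can reject the false positives and missed detections mentioned above.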
