SeqSLAM with Bag of Visual Words for Appearance Based Loop Closure Detection

The detection of pre-visited areas along a robot's traversed path, widely known as loop closure detection, is vital for drift and position correction in robotic applications such as simultaneous localization and mapping. In this paper, we present a sequence-based approach for pose estimation that extends the well-known SeqSLAM algorithm with a Bag of Words (BoW) model. A visual vocabulary is produced in an offline procedure, enabling the system to describe the incoming image stream with visual words during the online process. Image similarity is then computed through BoW histogram comparisons instead of the sum-of-absolute-differences metric. Comparative results on several publicly available datasets show the benefits of the proposed method, which achieves higher recall scores at 100% precision than the original algorithm.
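To make the pipeline concrete, the sketch below illustrates the two stages described above: an offline visual-vocabulary construction step and an online step that replaces SeqSLAM's sum-of-absolute-differences scoring with BoW histogram comparisons. This is a minimal illustration only; the choice of ORB features, plain k-means clustering, and cosine similarity, as well as all function names and parameters, are assumptions made for a self-contained example and are not taken from the paper's implementation.

```python
import cv2
import numpy as np

# Feature extractor used for this sketch (assumption: ORB with 500 keypoints).
orb = cv2.ORB_create(nfeatures=500)

def extract_descriptors(image):
    """Detect ORB keypoints and return their descriptors as float32."""
    _, desc = orb.detectAndCompute(image, None)
    return np.zeros((0, 32), np.float32) if desc is None else desc.astype(np.float32)

def build_vocabulary(training_images, vocab_size=256):
    """Offline step: cluster descriptors from a training set into visual words."""
    all_desc = np.vstack([extract_descriptors(img) for img in training_images])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, _, centers = cv2.kmeans(all_desc, vocab_size, None, criteria, 5,
                               cv2.KMEANS_PP_CENTERS)
    return centers  # (vocab_size, 32) visual-word centroids

def bow_histogram(image, vocabulary):
    """Online step: describe one frame as an L1-normalised visual-word histogram."""
    desc = extract_descriptors(image)
    if len(desc) == 0:
        return np.zeros(len(vocabulary), np.float32)
    # Assign each descriptor to its nearest visual word (Euclidean distance).
    dists = np.linalg.norm(desc[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocabulary)).astype(np.float32)
    return hist / hist.sum()

def similarity_matrix(query_hists, reference_hists):
    """Pairwise cosine similarity between BoW histograms. A SeqSLAM-style
    sequence search would then look for coherent diagonals in this matrix
    instead of in the SAD-based difference matrix."""
    Q = np.asarray(query_hists, np.float32)
    R = np.asarray(reference_hists, np.float32)
    Q = Q / (np.linalg.norm(Q, axis=1, keepdims=True) + 1e-12)
    R = R / (np.linalg.norm(R, axis=1, keepdims=True) + 1e-12)
    return Q @ R.T
```

In this sketch the vocabulary is built once from a training traverse, each incoming frame is reduced to a single histogram, and the resulting similarity matrix feeds the unchanged sequence-matching stage of SeqSLAM.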
