Mismatch Removal for Visual Odometry Using KLT Danger-Point Tracking and Suppression

Visual odometry (VO) is a technique that estimates pose transformations from front-end visual observations. In simultaneous localization and mapping (SLAM) based on visual odometry, feature mismatches can lead to high uncertainty and inaccurate state estimation. Although RANSAC (RANdom SAmple Consensus) can reject outliers by iteratively sampling among all feature points, it only eliminates mismatches rather than finding better matches to replace them. In this paper, we introduce an algorithm that rejects mismatches in visual odometry and, where possible, finds a better match. Our approach starts with a self-match of the latest camera frame to detect, for every feature, the danger points that are likely to cause mismatches. The KLT (Kanade-Lucas-Tomasi) optical flow tracking method is then used to predict the motion of each danger point in the next frame, where we form a danger area of potential mismatch. We additionally apply suppression in this area by adding an extra, Gaussian-distributed Hamming distance penalty to the points inside it. With this penalty added, mismatches can be removed during descriptor matching. We integrate the algorithm into ROS (Robot Operating System) and record a series of video datasets. Applying our algorithm to these video streams, we successfully remove mismatches that are difficult for RANSAC to reject.
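
The sketch below illustrates one possible reading of this pipeline, built on OpenCV's ORB detector, brute-force Hamming matcher, and pyramidal Lucas-Kanade tracker. The helper names (find_danger_points, predict_danger_areas, penalized_match) and all thresholds and penalty constants are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the danger-point detection and suppression steps, using
# OpenCV's ORB detector, brute-force Hamming matcher, and pyramidal
# Lucas-Kanade (KLT) tracker. All helper names and the threshold/penalty
# constants below are illustrative assumptions, not the paper's exact values.
import cv2
import numpy as np

SELF_MATCH_THRESH = 40   # same-frame Hamming distance below which two features are confusable
DANGER_RADIUS = 15.0     # radius (px) of the danger area around a predicted danger point
PENALTY_SIGMA = 7.0      # spread of the Gaussian-shaped extra Hamming penalty
PENALTY_SCALE = 30.0     # peak value of the extra Hamming penalty

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)

def find_danger_points(kps, des):
    """Self-match the latest frame: features with a very similar neighbour in
    the SAME frame are danger points that may later cause mismatches."""
    danger = []
    for i, pair in enumerate(bf.knnMatch(des, des, k=2)):
        if len(pair) < 2:
            continue
        other = pair[1]  # nearest *different* feature (pair[0] is the point itself)
        if other.distance < SELF_MATCH_THRESH:
            danger.append(kps[i].pt)
            danger.append(kps[other.trainIdx].pt)
    return np.float32(danger).reshape(-1, 1, 2)

def predict_danger_areas(prev_gray, next_gray, danger_pts):
    """Track danger points into the next frame with KLT optical flow; each
    surviving prediction is the centre of a circular danger area."""
    if len(danger_pts) == 0:
        return np.empty((0, 2), np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, danger_pts, None)
    return nxt.reshape(-1, 2)[status.ravel() == 1]

def penalized_match(des_prev, kps_next, des_next, danger_centers):
    """Match previous-frame descriptors to the next frame, adding an extra
    Gaussian-distributed Hamming penalty to candidates inside a danger area,
    so that a cleaner match outside the area can win instead."""
    good = []
    for pair in bf.knnMatch(des_prev, des_next, k=2):
        if len(pair) < 2:
            continue
        scored = []
        for cand in pair:
            pt = np.float32(kps_next[cand.trainIdx].pt)
            penalty = 0.0
            if len(danger_centers):
                d = np.linalg.norm(danger_centers - pt, axis=1).min()
                if d < DANGER_RADIUS:
                    penalty = PENALTY_SCALE * np.exp(-d * d / (2.0 * PENALTY_SIGMA ** 2))
            scored.append((cand.distance + penalty, cand))
        scored.sort(key=lambda s: s[0])
        if scored[0][0] < 0.8 * scored[1][0]:  # Lowe-style ratio test on penalized scores
            good.append(scored[0][1])
    return good
```

In a full VO front end, these helpers would be chained per frame pair: danger points detected in the previous frame are tracked into the current one, and the penalized matcher's output replaces the raw brute-force matches before pose estimation (e.g. essential-matrix RANSAC).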
