SIFT Based Graphical SLAM on a Packbot

We present an implementation of Simultaneous Localization and Mapping (SLAM) that uses infrared (IR) camera images collected at 10 Hz from a Packbot robot. The Packbot has several characteristics that make vision-based SLAM challenging. The robot travels on tracks, which makes the odometry poor, especially while turning. The IMU is also of relatively low quality, making the drift in the motion prediction greater than on conventional robots. In addition, the very low placement of the camera and its fixed forward-looking orientation are not ideal for estimating motion from the images. Several novel ideas are tested here. Harris corners are extracted from every 5th frame and used as image features for our SLAM system. A Scale Invariant Feature Transform (SIFT) descriptor is formed for each of these corners and used to match image features across the five-frame intervals. Lucas-Kanade tracking is used to find corresponding pixels in the frames between the SIFT frames, which yields a substantial computational saving over performing SIFT matching on every frame. The epipolar constraints implied by the dead reckoning are then applied to all of these matches to further validate them and eliminate poor features. Finally, the features are initialized in the map immediately using an inverse depth parameterization, which eliminates the delay in the initialization of 3D point features.
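As a concrete illustration of the keyframe pipeline described above, the following minimal sketch extracts Harris corners on every 5th frame, computes SIFT descriptors at those corners, matches descriptors between keyframes, and tracks the corners with Lucas-Kanade through the intermediate frames. It uses OpenCV in Python (SIFT is in the main module as of opencv-python 4.4); the parameter values, corner budget, and function names are illustrative assumptions, not taken from the paper.

    import cv2
    import numpy as np

    KEYFRAME_INTERVAL = 5          # SIFT is computed only on every 5th frame
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

    def extract_keyframe_features(gray):
        """Harris corners, with a SIFT descriptor computed at each corner."""
        corners = cv2.goodFeaturesToTrack(
            gray, maxCorners=300, qualityLevel=0.01, minDistance=10,
            useHarrisDetector=True, k=0.04)      # illustrative parameters
        if corners is None:
            return [], None, None
        kps = [cv2.KeyPoint(float(x), float(y), 7.0)
               for x, y in corners.reshape(-1, 2)]
        kps, des = sift.compute(gray, kps)       # describe only; no re-detection
        return kps, des, corners

    def match_keyframes(des1, des2):
        """SIFT descriptor matching across one five-frame keyframe interval."""
        return matcher.match(des1, des2)

    def track_lk(prev_gray, gray, prev_pts):
        """Lucas-Kanade tracking of the corners through intermediate frames."""
        next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None)
        good = status.ravel() == 1
        return next_pts[good], good

Computing descriptors only at keyframes, while LK carries the points through the four intermediate frames, is what provides the computational saving over per-frame SIFT matching.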
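The dead-reckoning-based epipolar test can be sketched as follows: the predicted relative pose (R, t) between two keyframes induces an essential matrix E = [t]_x R, and putative matches are scored with the Sampson (first-order epipolar) distance. Here K, R, and t are assumed inputs (camera intrinsics and the odometry/IMU motion prediction), and the inlier threshold is an illustrative value.

    import numpy as np

    def skew(t):
        """Skew-symmetric matrix [t]_x, so skew(t) @ v == np.cross(t, v)."""
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def sampson_distances(K, R, t, pts1, pts2):
        """Epipolar residuals of pixel matches pts1 <-> pts2 (Nx2 arrays)
        under the dead-reckoned pose (R, t) of camera 2 w.r.t. camera 1."""
        E = skew(t) @ R                       # essential matrix from odometry
        Kinv = np.linalg.inv(K)
        F = Kinv.T @ E @ Kinv                 # fundamental matrix, pixel coords
        x1 = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous points
        x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
        Fx1 = x1 @ F.T                        # epipolar lines in image 2
        Ftx2 = x2 @ F                         # epipolar lines in image 1
        num = np.sum(x2 * Fx1, axis=1) ** 2   # (x2^T F x1)^2
        den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
        return num / den                      # squared pixel residuals

    # keep matches consistent with the motion prediction (threshold illustrative)
    # inliers = sampson_distances(K, R, t, pts1, pts2) < 4.0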
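For the undelayed initialization, a feature can be stored in the standard six-parameter inverse depth representation: the camera position at first observation, the azimuth and elevation of the observation ray, and the inverse depth rho along it. The sketch below shows the back-projection to a Euclidean point; the broad initial prior on rho is a common choice in the inverse depth literature, not a value from this paper.

    import numpy as np

    def inverse_depth_to_point(x0, theta, phi, rho):
        """Euclidean 3D point from an inverse-depth feature.

        x0         -- camera optical-center position (3,) at first observation
        theta, phi -- azimuth and elevation of the observation ray, world frame
        rho        -- inverse depth along the ray
        """
        m = np.array([np.cos(phi) * np.sin(theta),   # unit ray direction
                      -np.sin(phi),
                      np.cos(phi) * np.cos(theta)])
        return np.asarray(x0) + m / rho

    # Because rho can represent points out to infinity (rho -> 0) with
    # near-Gaussian uncertainty, the feature enters the map at its first
    # observation, e.g. with rho = 0.1 and a large sigma (illustrative prior).
    # p = inverse_depth_to_point(np.zeros(3), 0.3, -0.1, 0.1)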
