Omni-vision mobile robot vSLAM based on spherical camera model

This paper presents a novel visual simultaneous localization and mapping (vSLAM) algorithm based on the spherical camera model. Because wide-angle images exhibit significant distortion, for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate, we map the wide-angle image onto a spherical image. The algorithm adopts omni-vision odometry based on the spherical camera model and enables low-cost navigation in cluttered and populated environments. No initial map is required; the algorithm handles dynamic changes in the environment and associates detected features with previously detected ones. The results of offline experiments indicate the feasibility of the proposed method.
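
To make the spherical re-mapping idea concrete, the following minimal sketch (not from the paper) shows how a wide-angle pixel can be back-projected onto the unit sphere under an assumed equidistant fisheye model and then mapped to an equirectangular (spherical) image, where scale-space detectors such as SIFT suffer far less from distortion. The intrinsic parameters fx, fy, cx, cy and the function names are hypothetical placeholders, not values or APIs given in the paper.

```python
import numpy as np

def fisheye_to_sphere(u, v, fx, fy, cx, cy):
    """Back-project a wide-angle pixel (u, v) onto the unit sphere.

    Assumes an equidistant fisheye model, where the radial distance from
    the principal point is proportional to the angle between the ray and
    the optical axis (r = f * theta).
    """
    # Normalised image-plane coordinates relative to the principal point
    x = (u - cx) / fx
    y = (v - cy) / fy
    r = np.hypot(x, y)        # radial distance in normalised units
    theta = r                 # equidistant model: angle off the optical axis
    phi = np.arctan2(y, x)    # azimuth around the optical axis

    # Unit ray on the sphere (z along the optical axis)
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def sphere_to_equirect(ray, width, height):
    """Map a unit ray to pixel coordinates in an equirectangular
    (spherical) image of the given size."""
    lon = np.arctan2(ray[0], ray[2])            # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(ray[1], -1.0, 1.0)) # latitude in [-pi/2, pi/2]
    u = (lon / (2.0 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return u, v

# Example: remap one pixel of a hypothetical 1280x960 fisheye image
ray = fisheye_to_sphere(900.0, 700.0, fx=320.0, fy=320.0, cx=640.0, cy=480.0)
print(sphere_to_equirect(ray, width=2048, height=1024))
```

Feature detection and the spherical-camera odometry described in the paper would then operate on rays or on the resulting spherical image rather than on the distorted wide-angle image.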
