Multi-camera VSLAM: from former information losses to self-calibration

Visual SLAM has become a very active research area in recent years, driven by the convergence of the robotics and computer vision communities. We present an overview of techniques, from classical filtering to bundle-adjustment solutions, for both monocular and stereo (or multi-camera) systems, and emphasize that classical SLAM solutions have been discarding precious sensory information. In particular, the ability of vision to sense objects at infinity should be exploited to the fullest, because it is precisely these remote objects that provide long-term, stable angular references (much as a compass would). Monocular SLAM systems have already solved this issue, but stereo and multi-camera systems have not. For these systems we propose to run monocular SLAM algorithms on each camera and to fuse their outputs so that all the information is incorporated. Numerous advantages then arise naturally: desynchronization of the sensors' firing, the possibility of using several unequal cameras, and self-calibration. To illustrate the proposed ideas, we develop a particular method for extrinsically decalibrated stereo systems. We evaluate the method in a real indoor experiment, and highlight and discuss both its assets and drawbacks.
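To make the "compass" argument concrete, here is a sketch of how monocular SLAM typically accommodates features at infinity, namely the inverse-depth parametrization of landmarks (Civera et al.); this is given only as an illustration of the standard device and is not necessarily the exact formulation used in this work. A landmark is encoded as $\mathbf{y} = (x_0, y_0, z_0, \theta, \phi, \rho)^\top$, and the corresponding Euclidean point is

\[
  \mathbf{p} =
  \begin{pmatrix} x_0 \\ y_0 \\ z_0 \end{pmatrix}
  + \frac{1}{\rho}\,\mathbf{m}(\theta,\phi),
  \qquad
  \mathbf{m}(\theta,\phi) =
  \begin{pmatrix}
    \cos\phi\,\sin\theta \\
    -\sin\phi \\
    \cos\phi\,\cos\theta
  \end{pmatrix},
\]

where $(x_0, y_0, z_0)$ is the camera position at the first observation, $(\theta,\phi)$ encode the direction of the observation ray, and $\rho = 1/d$ is the inverse of the landmark depth $d$. Because $\rho = 0$ is a regular, representable value, a landmark at infinity remains in the map and keeps constraining the camera orientation: exactly the long-term angular reference described above, and the kind of information a naive stereo triangulation discards.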
