Camera self-calibration for sequential Bayesian structure from motion

Computer vision researchers have demonstrated the feasibility of camera self-calibration, that is, the estimation of a camera's internal parameters from an image sequence without any known scene structure, and various self-calibration algorithms have been published. Nevertheless, the recent sequential approaches to 3D structure and motion estimation that have emerged from robotics and aim at real-time operation, often classed as visual SLAM or visual odometry, have all relied on pre-calibrated cameras and have not attempted online calibration.
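As a minimal sketch of the general idea, and not the algorithm of this paper, the toy filter below places a camera intrinsic (just the focal length) in the state vector of a sequential Bayesian estimator, so that ordinary image measurements refine the calibration together with unknown scene directions. The camera is assumed to rotate about its optical centre with known pan angles, since focal length is not observable under pure translation with unknown structure; all variable names, priors, and noise values are illustrative assumptions.

```python
import numpy as np

def rot_y(a):
    """Camera-to-world rotation for a pan of angle a about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def h(x, pan):
    """Measurement model: pixel coordinates of every scene direction.

    State layout: x = [f, theta_1, phi_1, theta_2, phi_2, ...], where
    (theta_i, phi_i) are azimuth/elevation of unit directions in the
    world frame (depth is irrelevant under pure rotation).
    """
    f = x[0]
    out = []
    for theta, phi in x[1:].reshape(-1, 2):
        d_w = np.array([np.cos(phi) * np.sin(theta),
                        np.sin(phi),
                        np.cos(phi) * np.cos(theta)])
        d_c = rot_y(pan).T @ d_w              # world -> camera frame
        out += [f * d_c[0] / d_c[2], f * d_c[1] / d_c[2]]
    return np.array(out)

def num_jac(x, pan, eps=1e-6):
    """Finite-difference Jacobian of h with respect to the state."""
    m = len(h(x, pan))
    H = np.zeros((m, len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        H[:, j] = (h(x + dx, pan) - h(x - dx, pan)) / (2 * eps)
    return H

def ekf_update(x, P, z, pan, R):
    """Standard EKF update; the focal length is corrected like any other state."""
    H = num_jac(x, pan)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x, pan))
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated ground truth: f = 500 px and three scene directions.
    true_x = np.array([500.0, 0.20, 0.10, -0.15, 0.05, 0.05, -0.20])
    x = true_x.copy()
    x[0] = 400.0                               # start with a wrong focal length
    P = np.diag([100.0**2] + [0.02**2] * 6)    # broad prior on f, modest on directions
    R = np.eye(6) * 1.0**2                     # ~1 pixel measurement noise
    for pan in np.linspace(-0.4, 0.4, 40):     # known pan angles, pure rotation
        z = h(true_x, pan) + rng.normal(0.0, 1.0, size=6)
        x, P = ekf_update(x, P, z, pan, R)
    print(f"estimated focal length: {x[0]:.1f} px (true 500.0)")
```

Because the pixel measurements are linear in the focal length, the EKF linearization is exact in that coordinate and the estimate converges close to the true value over the simulated pan sweep; a full system would of course also estimate camera pose and feature depths rather than assume pure rotation.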
