Visual odometry with an unsynchronized multi-camera setup for intelligent vehicle applications

This paper presents a visual odometry method with metric scale estimation for a multi-camera system in a challenging unsynchronized setup. The intended application is in the field of intelligent vehicles. We propose a new algorithm, named the “triangle-based” method, which exploits both the extrinsic and intrinsic parameters of the calibrated cameras. We assume that the trajectory of each camera between two consecutive frames is a linear segment (straight trajectory). The relative camera poses are estimated via classical Structure-from-Motion; the scale factors are then computed by imposing the known extrinsic parameters together with the linearity assumption. We verify the validity of our method in both simulated and real conditions. For the real-world evaluation, the motion trajectory estimated from a two-camera image sequence of the KITTI dataset is compared against the GPS/INS ground truth.
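
The scale-recovery step can be pictured as a triangle closure: the metric inter-camera baseline from the extrinsic calibration, the vehicle's straight-line motion segment, and the cross-camera translation form a triangle in which one side has a known metric length, so the remaining side lengths (and hence the monocular scale factors) follow from a small linear least-squares fit. The minimal NumPy sketch below illustrates this idea only; it is not the paper's exact formulation, and the function and variable names (solve_triangle_scales, baseline_b, motion_dir_d, cross_dir_u) are hypothetical.

```python
import numpy as np

def solve_triangle_scales(baseline_b, motion_dir_d, cross_dir_u):
    """Recover the metric lengths of two triangle sides whose directions are known.

    Triangle closure (all vectors expressed in the same reference frame):
        s_cross * cross_dir_u = m * motion_dir_d + baseline_b
    where
        baseline_b   : metric inter-camera baseline from the extrinsic calibration (known),
        motion_dir_d : unit direction of the straight-line vehicle motion (from monocular VO),
        cross_dir_u  : unit direction of the cross-camera translation (up to scale, e.g. 5-point),
        m, s_cross   : unknown metric lengths to solve for.

    Returns (s_cross, m) from a linear least-squares fit of the three closure equations.
    """
    # Rearranged: s_cross * u - m * d = b  ->  A @ [s_cross, m] = b
    A = np.column_stack([cross_dir_u, -motion_dir_d])   # 3x2 design matrix
    x, *_ = np.linalg.lstsq(A, baseline_b, rcond=None)
    return x[0], x[1]

if __name__ == "__main__":
    # Synthetic check: a vehicle translating 2.0 m along +x, two cameras 0.5 m apart along +y.
    b = np.array([0.0, 0.5, 0.0])          # known metric baseline (extrinsics)
    d = np.array([1.0, 0.0, 0.0])          # straight-line motion direction (unit)
    true_m = 2.0                            # metric motion between the two capture instants
    cross = true_m * d + b                  # ground-truth cross-camera displacement
    u = cross / np.linalg.norm(cross)       # only its direction is observable from images
    s_cross, m = solve_triangle_scales(b, d, u)
    print(f"recovered cross-camera length: {s_cross:.3f} m, motion length: {m:.3f} m")
```

In the synthetic check, only the directions of the motion and of the cross-camera displacement are treated as observable, mirroring what monocular Structure-from-Motion and a five-point relative-pose estimate provide; the 0.5 m baseline alone injects the metric scale.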
