Pose Estimation of an Autonomous Car by Visual Feature Correspondence and Tracking

This paper presents a pose estimation method based on a 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses depth data from a LIDAR and visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. The estimation framework continually updates the vehicle's 6D pose state together with temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards a feature's estimate from the extended state vector once the feature has not been observed for several consecutive steps. As a result, the extended state vector maintains a bounded size suitable for online computation. The fusion of laser and visual data is performed both in the feature initialization part of the EKF-SLAM process and in the motion prediction stage, and a RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford Campus vision and LIDAR dataset; compared against the ground truth, the estimation error is approximately 1.9% of the path length.
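
The pruning of stale features is what keeps the extended state vector bounded. The following is a minimal Python sketch of how such a sliding state might be managed; the class name SlidingStateEKF, the MAX_MISSED cutoff, and all other identifiers are illustrative assumptions, not the paper's implementation.

import numpy as np

POSE_DIM = 6        # vehicle pose: x, y, z, roll, pitch, yaw
MAX_MISSED = 5      # assumed cutoff: drop a feature unobserved for this many steps

class SlidingStateEKF:
    # Sketch of an EKF state carrying the 6D vehicle pose plus the 3D
    # positions of currently tracked features, pruned when features go stale.
    def __init__(self):
        self.x = np.zeros(POSE_DIM)        # extended state vector
        self.P = np.eye(POSE_DIM) * 1e-3   # state covariance
        self.missed = []                   # per-feature unobserved-step counters

    def add_feature(self, p_world, p_cov):
        # Augment the state with a newly initialized feature; in the paper,
        # the feature's depth is taken from the LIDAR scan (laser/vision fusion).
        self.x = np.concatenate([self.x, p_world])
        n = self.P.shape[0]
        P_new = np.zeros((n + 3, n + 3))
        P_new[:n, :n] = self.P
        P_new[n:, n:] = p_cov
        self.P = P_new
        self.missed.append(0)

    def prune(self, observed_ids):
        # Reset counters of re-observed features, bump the rest, and drop any
        # feature unobserved for MAX_MISSED consecutive steps.
        keep = list(range(POSE_DIM))
        survivors = []
        for i, count in enumerate(self.missed):
            count = 0 if i in observed_ids else count + 1
            if count < MAX_MISSED:
                keep.extend(range(POSE_DIM + 3 * i, POSE_DIM + 3 * i + 3))
                survivors.append(count)
        keep = np.asarray(keep)
        self.x = self.x[keep]
        self.P = self.P[np.ix_(keep, keep)]
        self.missed = survivors

A short usage example under the same assumptions: a feature that is never re-observed is discarded after MAX_MISSED pruning steps, returning the state to pose-only size.

ekf = SlidingStateEKF()
ekf.add_feature(np.array([10.0, 2.0, 0.5]), np.eye(3) * 0.1)
for _ in range(MAX_MISSED):
    ekf.prune(observed_ids=set())      # feature never re-observed
assert ekf.x.shape == (POSE_DIM,)      # feature has been discarded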
