Revisiting the PnP Problem with a GPS

This paper revisits the problem of pose estimation from point correspondences in order to properly exploit the location data provided by a GPS. In practice, the location given by the GPS is only a noisy estimate, and some point correspondences may be erroneous. Our method therefore starts from the GPS location estimate and progressively refines the full pose estimate by hypothesizing correct correspondences. We show how the GPS location estimate, combined with the choice of a first random correspondence, dramatically reduces the set of possible second correspondences, which in turn further constrains the remaining correspondences. This results in an efficient sampling of the solution space. Experimental results on a large 3D scene show that our method outperforms standard approaches and a recent related method [1] in terms of accuracy and robustness.
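
To make the guided-sampling idea concrete, below is a minimal Python sketch (not the authors' implementation) of the rotation-invariant check that a GPS position prior enables: once a first 2D-3D correspondence is hypothesized, a candidate second correspondence is kept only if the angle between the two image bearings agrees with the angle subtended by the two 3D points at the noisy GPS position. The function names and the distance-based tolerance are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def bearing_vectors(pts_2d, K):
    """Convert pixel coordinates (N, 2) into unit bearing vectors in the camera frame."""
    pts_h = np.column_stack([pts_2d, np.ones(len(pts_2d))])
    rays = pts_h @ np.linalg.inv(K).T
    return rays / np.linalg.norm(rays, axis=1, keepdims=True)


def consistent_second_matches(gps_pos, first_3d, first_bearing,
                              cand_3d, cand_bearings, gps_noise=5.0):
    """Return a boolean mask over candidate second correspondences.

    A candidate is kept when the angle between its image bearing and the first
    match's bearing agrees with the angle subtended by the two 3D points at the
    (noisy) GPS camera position. The image-side angle does not depend on the
    unknown camera rotation, so only an approximate camera centre is needed.
    """
    # Angle subtended at the GPS position by the first 3D point and each candidate.
    v1 = first_3d - gps_pos
    v1 = v1 / np.linalg.norm(v1)
    v2 = cand_3d - gps_pos
    v2 = v2 / np.linalg.norm(v2, axis=1, keepdims=True)
    scene_angles = np.arccos(np.clip(v2 @ v1, -1.0, 1.0))

    # Angle between the corresponding image bearings (rotation-invariant).
    image_angles = np.arccos(np.clip(cand_bearings @ first_bearing, -1.0, 1.0))

    # Heuristic tolerance (an assumption, not from the paper): a position error of
    # gps_noise metres perturbs each viewing direction by roughly gps_noise / distance
    # radians, so the allowed angular mismatch is bounded accordingly.
    d1 = np.linalg.norm(first_3d - gps_pos)
    d2 = np.linalg.norm(cand_3d - gps_pos, axis=1)
    tol = gps_noise * (1.0 / d1 + 1.0 / d2)

    return np.abs(scene_angles - image_angles) < tol
```

The check works because the angle between two image bearings depends only on the camera intrinsics, not on the unknown rotation, so an approximate camera centre from the GPS already prunes most inconsistent second correspondences before any pose is computed.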

[1] Xiaochun Cao et al. An Efficient Data Driven Algorithm for Multi-Sensor Alignment. 2008.

[2] David Nistér et al. Preemptive RANSAC for live structure and motion estimation. Proceedings of the Ninth IEEE International Conference on Computer Vision, 2003.

[3] Mads Nielsen et al. Computer Vision — ECCV 2002. Lecture Notes in Computer Science, 2002.

[4] Andrew Zisserman et al. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Computer Vision and Image Understanding, 2000.

[5] Andrew J. Davison et al. Active Matching. ECCV, 2008.

[6] Ankita Kumar et al. Structure from Motion with Known Camera Positions. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2006.

[7] Takeo Kanade et al. PALM: portable sensor-augmented vision system for large-scene modeling. Second International Conference on 3-D Digital Imaging and Modeling, 1999.

[8] Luc Van Gool et al. SURF: Speeded Up Robust Features. ECCV, 2006.

[9] Slawomir J. Nasuto et al. NAPSAC: High Noise, High Dimensional Robust Estimation - it's in the Bag. BMVC, 2002.

[10] Tom Drummond et al. Tightly integrated sensor fusion for robust visual tracking. Image and Vision Computing, 2004.

[11] Jiri Matas et al. Randomized RANSAC with Td,d test. Image and Vision Computing, 2004.

[12] Jan-Michael Frahm et al. Detailed Real-Time Urban 3D Reconstruction from Video. International Journal of Computer Vision, 2007.

[13] David W. Murray et al. Guided Sampling and Consensus for Motion Estimation. ECCV, 2002.

[14] Vincent Lepetit et al. Pose Priors for Simultaneously Solving Alignment and Correspondence. ECCV, 2008.

[15] Jiri Matas et al. Randomized RANSAC with T(d, d) test. BMVC, 2002.

[16] Luc Van Gool et al. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 2008.

[17] Bernd Girod et al. Outdoors augmented reality on mobile phone using loxel-based visual feature organization. MIR, 2008.

[18] Jiri Matas et al. Locally Optimized RANSAC. DAGM-Symposium, 2003.

[19] David W. Murray et al. Guided-MLESAC: faster image transform estimation by using matching priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005.

[20] Philip H. S. Torr et al. IMPSAC: Synthesis of Importance Sampling and Random Sample Consensus. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000.

[21] Jan-Michael Frahm et al. A Comparative Analysis of RANSAC Techniques Leading to Adaptive Real-Time Random Sample Consensus. ECCV, 2008.

[22] Suya You et al. Fusion of vision and gyro tracking for robust augmented reality registration. Proceedings of IEEE Virtual Reality, 2001.

[23] John Mark Bishop et al. NAPSAC: high noise, high dimensional model parameterisation - it's in the bag. 2002.

[24] Lixin Fan et al. Hill Climbing Algorithm for Random Sample Consensus Methods. ISVC, 2007.

[25] Jiri Matas et al. Matching with PROSAC - progressive sample consensus. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005.

[26] Walterio W. Mayol-Cuevas et al. 2nd International Symposium on Visual Computing, 2006.