Video-rate localization in multiple maps for wearable augmented reality

We show how a system for video-rate parallel camera tracking and 3D map-building can be readily extended to allow one or more cameras to work in several maps, separately or simultaneously. The ability to handle several thousand features per map at video rate, and to switch a camera automatically between maps, allows spatially localized AR workcells to be constructed and used with very little intervention from the user of a wearable vision system. The user can explore an environment in a natural way, acquiring local maps in real time. When revisiting those areas, the camera selects the correct local map from storage and continues tracking and structural acquisition, while the user views relevant AR constructs registered to that map.
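The automatic map selection described above can be illustrated with a minimal sketch: when the camera loses its current map or revisits an area, the incoming frame is compared against compact keyframe descriptors stored with each local map, and tracking resumes in the best-matching map. This is an assumption-laden illustration, not the paper's implementation; the function names (`select_map`, `ssd`), the thumbnail descriptor, and the acceptance threshold are all hypothetical.

```python
# Hypothetical sketch of choosing which stored local map to resume.
# Each map keeps tiny downsampled "thumbnail" descriptors of its keyframes;
# the current frame's thumbnail is matched against all of them.

def ssd(a, b):
    """Sum of squared differences between two equal-length thumbnails."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_map(frame_thumb, maps, threshold=50.0):
    """Return (map_id, score) for the best-matching stored map,
    or (None, score) if no keyframe matches well enough."""
    best_id, best_score = None, float("inf")
    for map_id, keyframes in maps.items():
        for kf in keyframes:
            s = ssd(frame_thumb, kf)
            if s < best_score:
                best_id, best_score = map_id, s
    if best_score > threshold:
        return None, best_score  # nothing recognized: start a new local map
    return best_id, best_score

# Toy example: two stored maps, each with one 4-pixel keyframe thumbnail.
maps = {
    "desk":  [[10, 10, 200, 200]],
    "shelf": [[200, 200, 10, 10]],
}
current = [12, 9, 198, 201]
map_id, score = select_map(current, maps)  # matches the "desk" map
```

In a real system the descriptor would be a subsampled, blurred image or a set of learned keypoint signatures, but the control flow (score every stored map, accept the best only below a threshold, otherwise spawn a new map) is the part the sketch is meant to show.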
