Simulation-Based Camera Localization under a Variable Lighting Environment

Localizing the user against a feature database of a scene is a basic and necessary step for presenting localized augmented reality (AR) content. Commonly, such a database depicts only a single appearance of the scene, owing to the time and effort required to prepare it. However, the scene's appearance depends on various factors, e.g., the position of the sun and the degree of cloudiness. Observing the scene under lighting conditions that differ from those in the database reduces both the success rate and the accuracy of localization. To address this, we propose generating the feature database from simulated appearances of the scene model under a number of different lighting conditions. We also propose extending the feature descriptors used for localization with a parametric representation of how they change under varying lighting. We compare our method with a standard representation and L2-norm-based matching in both simulation and real-world experiments. Our results show that the simulated environment is a satisfactory representation of the scene's appearance and improves feature matching over a single database. The proposed feature descriptor achieves a higher localization ratio with fewer feature points and a lower processing cost.
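The sketch below illustrates, under stated assumptions, the kind of lighting-aware matching the abstract describes: each 3D landmark stores descriptors extracted from renderings of the scene model under several simulated lighting conditions, and a simple parametric (linear subspace) model of the lighting-induced descriptor variation replaces plain L2 matching against a single stored descriptor. It is a minimal illustration, not the authors' implementation; the class name `LightingAwareLandmark`, the two-component subspace, and the ratio-test matcher are assumptions made for this example.

```python
# Minimal sketch (assumed, not the paper's implementation) of matching a
# query descriptor against a database built from renderings of the scene
# model under several simulated lighting conditions.
import numpy as np


class LightingAwareLandmark:
    """A 3D point with descriptors observed under K simulated lightings."""

    def __init__(self, position, descriptors):
        # descriptors: (K, D) array, one SIFT/SURF-style vector per
        # simulated lighting condition (e.g., sun position, cloudiness).
        self.position = np.asarray(position, dtype=np.float64)
        self.descriptors = np.asarray(descriptors, dtype=np.float32)
        # Parametric representation (assumed form): mean descriptor plus the
        # principal directions along which it drifts as lighting changes.
        self.mean = self.descriptors.mean(axis=0)
        centered = self.descriptors - self.mean
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        self.basis = vt[:2]  # (2, D) lighting-variation subspace

    def distance(self, query):
        """Distance from a query descriptor to the lighting-variation model.

        Instead of a plain L2 distance to one stored descriptor, measure the
        residual after removing the component the parametric model attributes
        to lighting change.
        """
        diff = query - self.mean
        coeff = self.basis @ diff            # projection onto the subspace
        residual = diff - self.basis.T @ coeff
        return float(np.linalg.norm(residual))


def match(query_descriptors, landmarks, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test (assumed)."""
    matches = []
    for qi, q in enumerate(query_descriptors):
        dists = sorted((lm.distance(q), li) for li, lm in enumerate(landmarks))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))  # (query index, landmark index)
    return matches
```

The resulting 2D-3D correspondences would then feed a standard pose solver (e.g., PnP with RANSAC) to localize the camera; the point of the sketch is only that the matching cost tolerates descriptor changes explained by lighting.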
