Illumination Invariant Camera Localization Using Synthetic Images

Accurate camera localization is an essential part of tracking systems. However, localization results are strongly affected by illumination. Including data collected under various lighting conditions can improve the robustness of the localization algorithm to lighting variation, but gathering such data is tedious and time-consuming. By using synthetic images, it is possible to easily accumulate a large variety of views under varying illumination and weather conditions. Despite continuously improving processing power and rendering algorithms, synthetic images do not perfectly match real images of the same scene; this gap between real and synthetic images also degrades the accuracy of camera localization. To reduce the impact of this gap, we introduce the REal-to-Synthetic Transform (REST), an autoencoder-like network that converts real features into their synthetic counterparts. The converted features can then be matched against the accumulated synthetic database for robust camera localization. Our results show that REST improves matching accuracy by approximately 30%.
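The abstract only describes REST at a high level; the sketch below illustrates one plausible reading of an autoencoder-like feature transform. It assumes 128-dimensional local descriptors (e.g., SIFT), PyTorch as the framework, and an L2 regression loss on paired real/synthetic descriptors of the same scene points. The layer sizes, training setup, and pairing procedure are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class REST(nn.Module):
    """Autoencoder-like network mapping real descriptors to
    synthetic-style descriptors (layer sizes are assumptions)."""
    def __init__(self, dim=128, bottleneck=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim, 96), nn.ReLU(),
            nn.Linear(96, bottleneck), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck, 96), nn.ReLU(),
            nn.Linear(96, dim),
        )

    def forward(self, real_feat):
        # Transform a real descriptor into its synthetic counterpart.
        return self.decoder(self.encoder(real_feat))

# Training sketch: paired real/synthetic descriptors of the same scene
# points are assumed to be available (the pairing method is hypothetical;
# random tensors below are placeholders for such pairs).
model = REST()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

real_desc = torch.randn(256, 128)       # placeholder real descriptors
synth_desc = torch.randn(256, 128)      # placeholder synthetic counterparts
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(real_desc), synth_desc)  # L2 regression loss
    loss.backward()
    opt.step()
```

At query time, real descriptors extracted from the camera image would be passed through the trained network and matched (e.g., by nearest-neighbor search) against descriptors from the synthetic image database.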
