Generating a 360° outdoor panorama dataset with reliable sun position estimation

A large dataset of outdoor panoramas with ground-truth sun position (SP) labels can serve as valuable training data for learning outdoor illumination. In general, the SP in an outdoor panorama (if present) corresponds to the pixel with the highest luminance and the highest contrast with respect to its neighboring pixels. However, neither image-based estimation nor manual annotation can obtain a reliable SP, due to the complex interplay between sunlight and sky appearance. Here, we present an efficient and reliable approach to estimate the SP of an outdoor panorama from its accessible metadata. Specifically, we focus on outdoor panoramas retrieved from Google Street View, and leverage their built-in metadata together with the well-established Solar Position Algorithm to propose a set of candidate SPs. Next, a custom-made luminance model ranks each candidate, and a confidence metric is computed to effectively filter out trivial cases (e.g., a cloudy day, or an occluded sun). We extensively evaluate the efficacy of our approach in an experimental study on a dataset of over 600 panoramas.
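To make the pipeline concrete, the following minimal Python sketch walks through two of the steps described above: mapping a sun elevation/azimuth computed by a Solar Position Algorithm onto equirectangular panorama pixels, and ranking candidate SPs by local luminance. It is illustrative only, not the paper's implementation: the pysolar library stands in for the Solar Position Algorithm, the panorama's center column is assumed to align with the Street View heading metadata, mean local luminance is a simple stand-in for the paper's custom luminance model, and the coordinates, timestamp, and image dimensions are hypothetical.

    # Minimal sketch of the candidate-SP pipeline (illustrative assumptions,
    # not the paper's implementation): pysolar stands in for the Solar
    # Position Algorithm, the equirectangular panorama's center column is
    # assumed to face the Street View heading, and mean local luminance is a
    # stand-in for the paper's custom luminance model.
    from datetime import datetime, timezone

    import numpy as np
    from pysolar.solar import get_altitude, get_azimuth


    def sun_angles(lat_deg, lon_deg, when_utc):
        """Sun elevation and azimuth in degrees at a given place and time."""
        elev = get_altitude(lat_deg, lon_deg, when_utc)
        azim = get_azimuth(lat_deg, lon_deg, when_utc)  # degrees east of north
        return elev, azim


    def angles_to_pixel(elev_deg, azim_deg, heading_deg, width, height):
        """Project sun angles onto an equirectangular panorama.

        Assumes the panorama's center column faces `heading_deg`
        (the camera heading from the Street View metadata).
        """
        x = (0.5 + (azim_deg - heading_deg) / 360.0) * width % width
        y = (0.5 - elev_deg / 180.0) * height
        return int(x), int(y)


    def rank_candidates(panorama, candidates, window=15):
        """Rank candidate SPs by mean local luminance (stand-in model)."""
        # Rec. 709 relative luminance from an RGB panorama scaled to [0, 1].
        lum = panorama @ np.array([0.2126, 0.7152, 0.0722])
        h, w = lum.shape
        scores = []
        for x, y in candidates:
            x0, x1 = max(x - window, 0), min(x + window, w)
            y0, y1 = max(y - window, 0), min(y + window, h)
            scores.append(lum[y0:y1, x0:x1].mean())
        order = np.argsort(scores)[::-1]
        return [candidates[i] for i in order], [scores[i] for i in order]


    # Hypothetical metadata for one Street View panorama.
    when = datetime(2017, 6, 21, 17, 30, tzinfo=timezone.utc)
    lat, lon, heading = 45.5017, -73.5673, 140.0
    elev, azim = sun_angles(lat, lon, when)
    if elev > 0:  # sun above the horizon; otherwise skip the panorama
        # One candidate shown; the full approach proposes a set of candidates.
        x, y = angles_to_pixel(elev, azim, heading, width=4096, height=2048)
        print(f"candidate SP at pixel ({x}, {y})")

Feeding several such pixel candidates into rank_candidates, together with a confidence threshold on the top score, mirrors the ranking-and-filtering stage the abstract describes.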
