Vegetation Greening for Winter Oblique Photography Using Cycle-Consistent Adversarial Networks

A 3D city model is critical to the construction of a digital city. One method of building such a model is oblique photogrammetry, in which oblique photography is crucial because the visual quality of the imagery directly determines the visual effect of the model. However, oblique photography sometimes lacks good visual quality due to the season of acquisition or defective photographic equipment. For example, in oblique photography taken in winter, vegetation is brown; a 3D model generated from such imagery looks visually poor. Common methods for vegetation greening in oblique photography rely on the infrared band, which is not always available. This paper therefore proposes a method for vegetation greening in winter oblique photography that does not require the infrared band. The method is inspired by work on CycleGAN (Cycle-Consistent Adversarial Networks): the problem of turning vegetation green in winter oblique photography is treated as a style-transfer problem. Since summer oblique photography generally shows green vegetation, CycleGAN can be applied to transfer winter oblique photography to the summer style, turning the vegetation green. However, because of "checkerboard artifacts", the original results cannot be used in real production. To reduce these artifacts, the generator of CycleGAN is modified. The final results suggest that the proposed method removes the bottleneck of vegetation greening when the infrared band is unavailable, and that the artifacts are reduced.
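The checkerboard artifacts mentioned above arise because a stride-2 transposed convolution (the upsampling layer in the original CycleGAN generator) paints its kernel onto overlapping output windows, so some output pixels receive more kernel contributions than their neighbors. A common fix, following the "Deconvolution and Checkerboard Artifacts" line of work, is to replace the transposed convolution with nearest-neighbor upsampling followed by a stride-1 convolution. The sketch below (an illustrative 1-D counting model, not the paper's actual code) makes the uneven-versus-uniform coverage visible:

```python
# Illustrative sketch (assumption: not the authors' code) of why a stride-2
# transposed convolution yields "checkerboard" overlap patterns, while
# nearest-neighbor resize + stride-1 convolution gives uniform coverage.

def transposed_conv_coverage(n_in, kernel=3, stride=2):
    """Count how many kernel taps touch each output pixel of a 1-D
    transposed convolution (all-ones input and kernel)."""
    n_out = (n_in - 1) * stride + kernel
    out = [0] * n_out
    for i in range(n_in):              # each input pixel "paints" one kernel
        for k in range(kernel):
            out[i * stride + k] += 1
    return out

def resize_conv_coverage(n_in, kernel=3, scale=2):
    """Nearest-neighbor upsample by `scale`, then a stride-1 'same'
    convolution: every interior output pixel gets equal coverage."""
    up = n_in * scale
    out = [0] * up
    pad = kernel // 2
    for i in range(up):
        for k in range(-pad, pad + 1):
            if 0 <= i + k < up:
                out[i] += 1
    return out

print(transposed_conv_coverage(4))  # -> [1, 1, 2, 1, 2, 1, 2, 1, 1]  (alternating)
print(resize_conv_coverage(4))      # -> [2, 3, 3, 3, 3, 3, 3, 2]     (flat interior)
```

The alternating 1/2 pattern in the first output is exactly the periodic intensity variation that shows up as a checkerboard in 2-D; the resize-then-convolve variant keeps interior coverage constant, which is the motivation for modifying the generator's upsampling layers.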

[1] Yann LeCun, et al. Energy-based Generative Adversarial Network, 2016, ICLR.

[2] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation, 2015.

[3] Li Fei-Fei, et al. Perceptual Losses for Real-Time Style Transfer and Super-Resolution, 2016, ECCV.

[4] David Salesin, et al. Image Analogies, 2001, SIGGRAPH.

[5] Thomas Brox, et al. Generating Images with Perceptual Similarity Metrics based on Deep Networks, 2016, NIPS.

[6] Abhinav Gupta, et al. Generative Image Modeling Using Style and Structure Adversarial Networks, 2016, ECCV.

[7] Leonidas J. Guibas, et al. Consistent Shape Maps via Semidefinite Programming, 2013, SGP '13.

[8] Luca Maria Gambardella, et al. Deep Neural Networks Segment Neuronal Membranes in Electron Microscopy Images, 2012, NIPS.

[9] Christian Ledig, et al. Is the deconvolution layer the same as a convolutional layer?, 2016, ArXiv.

[10] Alexei A. Efros, et al. Image-to-Image Translation with Conditional Adversarial Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[11] Oisin Mac Aodha, et al. Unsupervised Monocular Depth Estimation with Left-Right Consistency, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[12] Léon Bottou, et al. Towards Principled Methods for Training Generative Adversarial Networks, 2017, ICLR.

[13] Soumith Chintala, et al. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015, ICLR.

[14] R. Brislin. Back-Translation for Cross-Cultural Research, 1970.

[15] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.

[16] Xiaofeng Tao, et al. Transient attributes for high-level understanding and editing of outdoor scenes, 2014, ACM Trans. Graph.

[17] Hui Jiang, et al. Generating images with recurrent adversarial networks, 2016, ArXiv.

[18] Vincent Dumoulin, et al. Deconvolution and Checkerboard Artifacts, 2016.

[19] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[20] Takumi Sugiyama, et al. A study report on "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", 2017.

[21] Andrea Vedaldi, et al. Instance Normalization: The Missing Ingredient for Fast Stylization, 2016, ArXiv.

[22] Rob Fergus, et al. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, 2015, NIPS.

[23] Brendan J. Frey, et al. Unsupervised image translation, 2003, Proceedings Ninth IEEE International Conference on Computer Vision.

[24] Zhen Jiao. Pictometry Oblique Photography Technique and its Application in 3D City Modeling, 2011.

[25] Ming-Yu Liu, et al. Coupled Generative Adversarial Networks, 2016, NIPS.

[26] Yoshua Bengio, et al. Generative Adversarial Networks, 2014, ArXiv.

[27] Alexei A. Efros, et al. Colorful Image Colorization, 2016, ECCV.

[28] P. Gong, et al. Comparison of IKONOS and QuickBird images for mapping mangrove species on the Caribbean coast of Panama, 2004.

[29] Eero P. Simoncelli, et al. Image quality assessment: from error visibility to structural similarity, 2004, IEEE Transactions on Image Processing.

[30] Alexei A. Efros, et al. Learning Dense Correspondence via 3D-Guided Cycle Consistency, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

[31] Kurt Keutzer, et al. Dense Point Trajectories by GPU-Accelerated Large Displacement Optical Flow, 2010, ECCV.

[32] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.

[33] Jan Kautz, et al. Unsupervised Image-to-Image Translation Networks, 2017, NIPS.