Style Transfer for Light Field Photography

As light field images continue to grow in use and application, it becomes necessary to adapt existing image-processing methods to this unique form of photography. In this paper we explore methods for applying neural style transfer to light field images. Feed-forward style-transfer networks provide fast, high-quality results for monocular images, but no such networks exist for full light field images. Because of the size of these images, current light field datasets are small and insufficient for training a purely feed-forward style-transfer network from scratch. It is therefore necessary to adapt existing monocular style-transfer networks so that each view of the light field can be stylized while visual consistency between views is maintained. To do this, we first generate disparity maps for every view from a single depth image of the light field. Then, in a fashion similar to the neural stylization of stereo images, we use the disparity maps to enforce a consistency loss between views and to warp feature maps during feed-forward stylization. Unlike previous work, however, a light field has too many views to train a purely feed-forward network that stylizes the entire light field with angular consistency. Instead, the proposed method runs an iterative optimization for each view of a single light field image, backpropagating the consistency loss through the network. The resulting architecture can thus incorporate a pre-trained fast monocular stylization network while avoiding the need for a large light field training set.
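The disparity-guided consistency term described above can be illustrated with a minimal sketch. This is an assumption-laden toy version, not the paper's implementation: it uses single-channel NumPy images, nearest-neighbor backward warping along a purely horizontal disparity, and a simple masked mean-squared error; the function names `warp_view` and `consistency_loss` are hypothetical.

```python
import numpy as np

def warp_view(src, disparity):
    """Backward-warp a neighboring view toward the reference view using a
    per-pixel horizontal disparity map (nearest-neighbor sampling for
    simplicity). Returns the warped image and a validity mask marking
    pixels whose sampled source coordinate falls inside the image."""
    h, w = disparity.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.round(xs + disparity).astype(int)          # where to sample in src
    valid = (src_x >= 0) & (src_x < w)                    # in-bounds samples only
    warped = np.zeros_like(src)
    warped[ys[valid], xs[valid]] = src[ys[valid], src_x[valid]]
    return warped, valid

def consistency_loss(stylized_ref, stylized_neighbor, disparity):
    """Mean squared error between the stylized reference view and the
    stylized neighboring view warped into its frame, restricted to pixels
    visible in both views (out-of-bounds pixels are masked out)."""
    warped, valid = warp_view(stylized_neighbor, disparity)
    return ((stylized_ref - warped) ** 2)[valid].mean()
```

In the paper's iterative scheme this scalar would be computed between each view and its stylized neighbors and backpropagated through the stylization network, rather than evaluated once on pixel arrays as here.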
