SVBRDF Recovery from a Single Image with Highlights Using a Pre‐trained Generative Adversarial Network

Spatially-varying bi-directional reflectance distribution functions (SVBRDFs) are crucial for designers who incorporate new materials into virtual scenes, making them look more realistic. Reconstruction of SVBRDFs is a long-standing problem. Existing methods either rely on extensive acquisition systems or require large datasets that are nontrivial to acquire. We aim to recover SVBRDFs from a single image, without relying on any dataset. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. Without priors learned from a dataset, it is also difficult to separate color variations caused by the material from those caused by the illumination. In this paper, we use an unsupervised generative adversarial network (GAN) to recover SVBRDF maps from a single input image. To better separate the effects of illumination from the effects of the material, we assume the material is stationary and introduce a new loss function based on Fourier coefficients to enforce this stationarity. For efficiency, we train the network in two stages: we reuse a pre-trained model to initialize the SVBRDF maps and then fine-tune it on the input image. Our method generates high-quality SVBRDF maps from a single input photograph and provides more vivid rendering results than previous work. The two-stage training boosts runtime performance, making our method 8 times faster than previous work.
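The abstract does not spell out the Fourier-based stationarity loss, so the PyTorch sketch below is only an illustration of the idea under our own assumptions: a stationary material should have crop-independent second-order statistics, so we compare the Fourier amplitude spectra of random crops of a recovered map and penalize their disagreement. The function name, the crop-based formulation, and the hyperparameters are hypothetical, not the authors' exact definition.

```python
import torch

def stationarity_loss(svbrdf_map, num_crops=4, crop_size=64):
    """Illustrative stationarity penalty for one recovered SVBRDF map.

    Assumption (not the paper's exact loss): random crops of a stationary
    map (albedo, normals, roughness, specular) should have similar Fourier
    amplitude spectra, so we penalize the variance of those spectra.

    svbrdf_map: tensor of shape (C, H, W).
    """
    _, h, w = svbrdf_map.shape
    spectra = []
    for _ in range(num_crops):
        # Sample a random square crop of the map.
        top = torch.randint(0, h - crop_size + 1, (1,)).item()
        left = torch.randint(0, w - crop_size + 1, (1,)).item()
        crop = svbrdf_map[:, top:top + crop_size, left:left + crop_size]
        # Per-channel 2D FFT; compare amplitude spectra only, since phase
        # encodes spatial layout, which may legitimately differ between crops.
        spectra.append(torch.fft.fft2(crop).abs())
    spectra = torch.stack(spectra, dim=0)  # (num_crops, C, crop_size, crop_size)
    # Stationarity: the spectra of different crops should agree.
    return spectra.var(dim=0).mean()
```

In a full pipeline, a term like this would presumably be added with some weight to the rendering and adversarial losses while fine-tuning the pre-trained generator on the input photograph.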
