Hair-GAN: Recovering 3D hair structure from a single image using generative adversarial networks

Abstract We introduce Hair-GAN, a generative adversarial network architecture that recovers 3D hair structure from a single image. Our networks learn a parametric transformation from 2D hair maps to 3D hair structure, where the 3D structure is represented as a volumetric field encoding both the occupancy and the orientation of the hair strands. Given a single hair image, we first align it with a bust model and extract a set of 2D maps encoding the hair orientation, along with the bust depth map, to feed into our Hair-GAN. Our generator network then computes the 3D volumetric field, which serves as structure guidance for the final hair synthesis. The modeling results not only resemble the hair in the input image but also possess vivid details when seen from other views. We demonstrate the efficacy of our method on a variety of hairstyles and through comparisons with prior art.
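
The abstract describes a 2D-to-3D generator: stacked 2D orientation maps and a bust depth map go in, and a volumetric field of occupancy and orientation comes out. Below is a minimal PyTorch sketch of that idea, assuming a 2D convolutional encoder followed by a 3D transposed-convolution decoder; the input channel count, layer sizes, and the 32^3 output resolution are illustrative assumptions, not the paper's exact Hair-GAN architecture.

# Minimal sketch (illustrative, not the paper's exact architecture):
# map stacked 2D hair orientation maps plus a bust depth map to a 3D
# volumetric field storing occupancy and orientation per voxel.
import torch
import torch.nn as nn

class HairVolumeGenerator(nn.Module):
    def __init__(self, in_channels=3, feat=256):
        super().__init__()
        self.feat = feat
        # 2D encoder: compress the stacked input maps into a latent code.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # 3D decoder: expand the latent code into a 32^3 grid with
        # 4 channels per voxel (1 occupancy + 3 orientation components).
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 4, 4, stride=2, padding=1),
        )

    def forward(self, maps_2d):
        # maps_2d: (B, C, H, W) stack of 2D orientation maps and bust depth.
        z = self.encoder(maps_2d)                   # (B, feat, 1, 1)
        seed = z.view(-1, self.feat, 1, 1, 1)       # reshape into a tiny volume
        seed = seed.expand(-1, self.feat, 2, 2, 2)  # broadcast to a 2^3 seed grid
        vol = self.decoder(seed.contiguous())       # (B, 4, 32, 32, 32)
        occupancy = torch.sigmoid(vol[:, :1])       # per-voxel occupancy in [0, 1]
        orientation = vol[:, 1:]                    # unnormalized 3D orientation field
        return occupancy, orientation

if __name__ == "__main__":
    gen = HairVolumeGenerator(in_channels=3)        # e.g. 2 orientation channels + 1 depth
    maps = torch.randn(1, 3, 256, 256)
    occ, ori = gen(maps)
    print(occ.shape, ori.shape)                     # (1, 1, 32, 32, 32) (1, 3, 32, 32, 32)

In the paper's setting, the predicted occupancy and orientation volume would then serve as the structure guidance from which final hair strands are synthesized.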
