FashionGAN: Display your fashion design using Conditional Generative Adversarial Nets

Virtual garment display plays an important role in fashion design because it can show the visual effect of a design directly, without producing a physical sample garment as in the traditional clothing industry. In this paper, we propose an end-to-end virtual garment display method based on Conditional Generative Adversarial Networks. Unlike existing 3D virtual garment methods, which require complex interactions and domain-specific user knowledge, our method only requires users to provide a desired fashion sketch and a specified fabric image; an image of the virtual garment whose shape and texture are consistent with these inputs is then generated quickly and automatically. Moreover, the method can also be extended to contour images and garment images, which further improves the reuse rate of fashion designs. Compared with existing image-to-image methods, our method produces images of higher quality in terms of color and shape.
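
To make the conditioning scheme concrete, the sketch below shows how a generator can take both a fashion sketch and a fabric image as input by concatenating them along the channel dimension, in the spirit of pix2pix-style conditional GANs. This is a minimal illustrative sketch in PyTorch, not the paper's actual architecture; the class name, channel counts, and layer sizes are assumptions.

```python
# Minimal sketch (not the authors' code): a pix2pix-style conditional generator
# that conditions on a sketch image and a fabric image by concatenating them
# along the channel dimension before the first convolution.
import torch
import torch.nn as nn


class ConditionalGenerator(nn.Module):
    def __init__(self, sketch_channels=1, fabric_channels=3, out_channels=3, base=64):
        super().__init__()
        in_channels = sketch_channels + fabric_channels  # channel-wise conditioning
        self.net = nn.Sequential(
            # Downsample: 256 -> 128 -> 64
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # Upsample back: 64 -> 128 -> 256
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.BatchNorm2d(base),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_channels, 4, stride=2, padding=1),
            nn.Tanh(),  # garment image scaled to [-1, 1]
        )

    def forward(self, sketch, fabric):
        # Both inputs are assumed to share the same spatial size, e.g. 256x256.
        x = torch.cat([sketch, fabric], dim=1)
        return self.net(x)


# Usage: a 1-channel sketch and a 3-channel fabric patch produce a garment image.
if __name__ == "__main__":
    g = ConditionalGenerator()
    sketch = torch.randn(1, 1, 256, 256)
    fabric = torch.randn(1, 3, 256, 256)
    garment = g(sketch, fabric)
    print(garment.shape)  # torch.Size([1, 3, 256, 256])
```

Channel-wise concatenation is the common way image-to-image conditional GANs inject their conditioning images; whether FashionGAN uses exactly this layout is not specified in the abstract, but the input/output interface (sketch + fabric in, garment image out) matches the description above.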
