Deep Virtual Try-on with Clothes Transform

The goal of this work is to let users virtually try on clothes from photos. Given a photo of the user and a photo of the desired garment, we generate an image of the user wearing that garment. Whereas prior virtual try-on methods handle only front-facing views of the person and the clothes, our method also handles slightly turned views, and it preserves garment details more clearly. In a user study, respondents preferred our results over those of competing methods in about 90% of cases.
