TiedGAN: Multi-domain Image Transformation Networks

Domain transformation has recently become a popular problem in deep generative modeling. CycleGAN, a well-known domain transformation model, performs well when translating images from one domain to another. However, CycleGAN scales poorly to multi-domain transformation: a separate generator is required for every ordered pair of domains, so model complexity grows quadratically with the number of domains. In this paper, we propose TiedGAN to achieve multi-domain image transformation with reduced complexity. Our experimental results indicate that the proposed model performs comparably to CycleGAN while alleviating the complexity issue in the multi-domain transformation task.
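The complexity argument above can be made concrete with a small counting sketch. The quadratic figure for pairwise CycleGAN-style translation is standard; the linear figure for the tied approach is an assumption for illustration (one tied encoder/decoder per domain), not the paper's exact architecture.

```python
def pairwise_generators(n_domains: int) -> int:
    # One generator per ordered domain pair (i, j), i != j,
    # as in pairwise CycleGAN-style translation.
    return n_domains * (n_domains - 1)

def tied_generators(n_domains: int) -> int:
    # Assumption for illustration: weight tying leaves roughly one
    # encoder/decoder pair per domain, so growth is linear in n_domains.
    return 2 * n_domains

for n in (2, 3, 5, 10):
    print(f"{n} domains: pairwise={pairwise_generators(n)}, tied={tied_generators(n)}")
```

For 10 domains the pairwise scheme already needs 90 generators, while a tied scheme under the assumption above needs 20, which is the gap the reduced-complexity claim targets.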
