Coupled Learning for Image Generation and Latent Representation Inference Using MMD

Deep learning methods such as the variational autoencoder (VAE) and the generative adversarial network (GAN) have been proposed for modeling the data distribution or the latent representation distribution in the image domain. However, despite its ability to model both distributions, the VAE tends to learn less meaningful latent representations, while the GAN can model only the data distribution and relies on challenging, unstable adversarial training. To address these issues, we propose an unsupervised learning framework that couples the learning of these two distributions based on the kernel maximum mean discrepancy (MMD). Specifically, the proposed framework consists of (1) an inference network and a generation network that map between the data space and the latent space, and (2) a latent tester and a data tester that perform two-sample tests in these two spaces, respectively. On the one hand, we perform a two-sample test between stochastic representations drawn from the prior distribution and representations inferred by the inference network. On the other hand, we perform a two-sample test between the real data and the generated data. In addition, we impose a structural regularization that encourages the two networks to be inverses of each other, so that the learning of the two distributions is coupled. Experimental results on benchmark image datasets demonstrate that the proposed framework is competitive with representative approaches on image generation and latent representation inference.
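To make the two-sample tests and the coupling concrete, the following is a minimal sketch, not the authors' implementation: it uses a fixed Gaussian-kernel MMD estimator in place of the learned latent and data testers, and the networks `E` (inference), `G` (generation), the bandwidth `sigma`, and the weights `lam_latent` and `lam_cycle` are hypothetical placeholders.

```python
# Minimal sketch (assumed, not the paper's code) of a Gaussian-kernel MMD
# estimator and a coupled objective of the kind described in the abstract.
import torch


def gaussian_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between sample sets x and y (n x d tensors)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances, then Gaussian RBF kernel values.
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


def coupled_loss(x_real, E, G, z_dim, lam_latent=1.0, lam_cycle=1.0):
    """Data-space MMD + latent-space MMD + inverse (structural) regularization."""
    n = x_real.size(0)
    z_prior = torch.randn(n, z_dim)      # stochastic representations from the prior
    x_fake = G(z_prior)                  # generated data
    z_inferred = E(x_real)               # inferred representations

    # Two-sample test in the data space: real data vs. generated data.
    loss_data = gaussian_mmd2(x_real.flatten(1), x_fake.flatten(1))
    # Two-sample test in the latent space: prior samples vs. inferred representations.
    loss_latent = gaussian_mmd2(z_prior, z_inferred)
    # Structural regularization: E and G should approximately invert each other.
    loss_cycle = (G(z_inferred) - x_real).pow(2).mean() + \
                 (E(x_fake) - z_prior).pow(2).mean()

    return loss_data + lam_latent * loss_latent + lam_cycle * loss_cycle
```

In this sketch the cycle term is what couples the two tests: without it, the data-space and latent-space MMD objectives could be minimized by two unrelated mappings.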
