Martin Renqiang Min | Erran L. Li | Jun Han | Xuan Zhang | Ligong Han