FlexAE: Flexibly Learning Latent Priors for Wasserstein Auto-Encoders
Arnab Kumar Mondal | Himanshu Asnani | Parag Singla | A. P. Prathosh
[1] Arno Solin, et al. Pioneer Networks: Progressively Growing Generative Autoencoder, 2018, ACCV.
[2] Mohammad Havaei, et al. Learnable Explicit Density for Continuous Latent Space and Variational Inference, 2017, ArXiv.
[3] Ioannis Mitliagkas, et al. Manifold Mixup: Better Representations by Interpolating Hidden States, 2018, ICML.
[4] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2015, ICCV.
[5] Hiroshi Takahashi, et al. Variational Autoencoder with Implicit Optimal Priors, 2018, AAAI.
[6] Yann LeCun, et al. The MNIST database of handwritten digits, 2005.
[7] Guillaume Desjardins, et al. Understanding disentangling in β-VAE, 2018, ArXiv.
[8] Himanshu Asnani, et al. MaskAAE: Latent space optimization for Adversarial Auto-Encoders, 2020, UAI.
[9] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2016, ICLR.
[10] Oriol Vinyals, et al. Neural Discrete Representation Learning, 2017, NIPS.
[11] Navdeep Jaitly, et al. Adversarial Autoencoders, 2015, ArXiv.
[12] Andriy Mnih, et al. Disentangling by Factorising, 2018, ICML.
[13] Max Welling, et al. Improved Variational Inference with Inverse Autoregressive Flow, 2016, NIPS.
[14] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[15] Ole Winther, et al. Autoencoding beyond pixels using a learned similarity metric, 2015, ICML.
[16] Marco Cote. Stick-Breaking Variational Autoencoders, 2017.
[17] Bernhard Schölkopf, et al. Wasserstein Auto-Encoders, 2017, ICLR.
[18] Patrick van der Smagt, et al. Learning Hierarchical Priors in VAEs, 2019, NeurIPS.
[19] Sepp Hochreiter, et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium, 2017, NIPS.
[20] Mario Lucic, et al. Are GANs Created Equal? A Large-Scale Study, 2017, NeurIPS.
[21] Stefano Ermon, et al. Towards Deeper Understanding of Variational Autoencoding Models, 2017, ArXiv.
[22] David Berthelot, et al. Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer, 2018, ICLR.
[23] Abhishek Kumar, et al. Regularized Autoencoders via Relaxed Injective Probability Flow, 2020, AISTATS.
[24] Andriy Mnih, et al. Resampled Priors for Variational Autoencoders, 2018, AISTATS.
[25] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[26] Shakir Mohamed, et al. Distribution Matching in Variational Inference, 2018, ArXiv.
[27] Bernhard Schölkopf, et al. Wasserstein Auto-Encoders: Latent Dimensionality and Random Encoders, 2018, ICLR.
[28] Max Welling, et al. VAE with a VampPrior, 2017, AISTATS.
[29] David P. Wipf, et al. Diagnosing and Enhancing VAE Models, 2019, ICLR.
[30] Stanislav Pidhorskyi, et al. Adversarial Latent Autoencoders, 2020, CVPR.
[31] O. Bousquet, et al. From optimal transport to generative modeling: the VEGAN cookbook, 2017, ArXiv:1705.07642.
[32] Bernhard Schölkopf, et al. From Variational to Deterministic Autoencoders, 2019, ICLR.
[33] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[34] Bharath Hariharan, et al. Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation, 2020, ArXiv.
[35] Olivier Bachem, et al. Assessing Generative Models via Precision and Recall, 2018, NeurIPS.
[36] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[37] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[38] Léon Bottou, et al. Wasserstein Generative Adversarial Networks, 2017, ICML.
[39] Lars Hertel, et al. Approximate Inference for Deep Latent Gaussian Mixtures, 2016.
[40] Stefano Ermon, et al. InfoVAE: Balancing Learning and Inference in Variational Autoencoders, 2019, AAAI.