[1] Emilien Dupont, et al. Joint-VAE: Learning Disentangled Joint Continuous and Discrete Representations, 2018, NeurIPS.
[2] Masahiro Suzuki, et al. Joint Multimodal Learning with Deep Generative Models, 2016, ICLR.
[3] Yee Whye Teh, et al. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables, 2016, ICLR.
[4] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[5] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[6] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[7] Roger B. Grosse, et al. Isolating Sources of Disentanglement in Variational Autoencoders, 2018, NeurIPS.
[8] Max Welling, et al. Products of Experts, 2007.
[9] Philip H. S. Torr, et al. Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models, 2019, NeurIPS.
[10] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, ICCV.
[11] Andriy Mnih, et al. Disentangling by Factorising, 2018, ICML.
[12] Mike Wu, et al. Multimodal Generative Models for Scalable Weakly-Supervised Learning, 2018, NeurIPS.
[13] Kevin Murphy, et al. Generative Models of Visually Grounded Imagination, 2017, ICLR.
[14] Dana H. Brooks, et al. Structured Disentangled Representations, 2018, AISTATS.
[15] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2016, ICLR.