Shakir Mohamed | Balaji Lakshminarayanan | Mihaela Rosca
[1] Samy Bengio, et al. Density estimation using Real NVP, 2016, ICLR.
[2] Trevor Darrell, et al. Adversarial Feature Learning, 2016, ICLR.
[3] Alexander A. Alemi, et al. An Information-Theoretic Analysis of Deep Latent-Variable Models, 2017, arXiv.
[4] Kilian Q. Weinberger, et al. On Calibration of Modern Neural Networks, 2017, ICML.
[5] Peter Dayan, et al. Comparison of Maximum Likelihood and GAN-based training of Real NVPs, 2017, arXiv.
[6] Yann LeCun, et al. Energy-based Generative Adversarial Network, 2016, ICLR.
[7] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[8] Max Welling, et al. VAE with a VampPrior, 2017, AISTATS.
[9] Christopher Burgess, et al. DARLA: Improving Zero-Shot Transfer in Reinforcement Learning, 2017, ICML.
[10] Bernhard Schölkopf, et al. Wasserstein Auto-Encoders, 2017, ICLR.
[11] Takafumi Kanamori, et al. Density Ratio Estimation in Machine Learning, 2012.
[12] Aaron C. Courville, et al. Adversarially Learned Inference, 2016, ICLR.
[13] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2016, ICLR.
[14] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[15] Daan Wierstra, et al. One-Shot Generalization in Deep Generative Models, 2016, ICML.
[16] Navdeep Jaitly, et al. Adversarial Autoencoders, 2015, arXiv.
[17] David Pfau, et al. Unrolled Generative Adversarial Networks, 2016, ICLR.
[18] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.
[20] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[21] Daan Wierstra, et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models, 2014, ICML.
[22] Andrew M. Dai, et al. Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step, 2017, ICLR.
[23] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[24] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[25] Oriol Vinyals, et al. Neural Discrete Representation Learning, 2017, NIPS.
[26] Matthias Bethge, et al. A note on the evaluation of generative models, 2015, ICLR.
[27] Daan Wierstra, et al. Towards Conceptual Compression, 2016, NIPS.
[28] Iain Murray, et al. Masked Autoregressive Flow for Density Estimation, 2017, NIPS.
[29] Ferenc Huszár, et al. Variational Inference using Implicit Distributions, 2017, arXiv.
[30] Yoshua Bengio, et al. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[31] Hyunsoo Kim, et al. Learning to Discover Cross-Domain Relations with Generative Adversarial Networks, 2017, ICML.
[32] Zhou Wang, et al. Multiscale structural similarity for image quality assessment, 2003, The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers.
[33] Sebastian Nowozin, et al. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016, NIPS.
[34] Dawn Xiaodong Song, et al. Adversarial Examples for Generative Models, 2017, 2018 IEEE Security and Privacy Workshops (SPW).
[35] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[36] Sergey Ioffe, et al. Rethinking the Inception Architecture for Computer Vision, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[37] Shakir Mohamed, et al. Variational Inference with Normalizing Flows, 2015, ICML.
[38] Léon Bottou, et al. Wasserstein GAN, 2017, arXiv.
[39] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[40] Mohammad Havaei, et al. Learnable Explicit Density for Continuous Latent Space and Variational Inference, 2017, arXiv.
[41] Alex Graves, et al. Conditional Image Generation with PixelCNN Decoders, 2016, NIPS.
[42] He Ma, et al. Quantitatively Evaluating GANs With Divergences Proposed for Training, 2018, ICLR.
[43] Shakir Mohamed, et al. Variational Approaches for Auto-Encoding Generative Adversarial Networks, 2017, arXiv.
[44] Raymond Y. K. Lau, et al. Least Squares Generative Adversarial Networks, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[45] Charles A. Sutton, et al. VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning, 2017, NIPS.
[46] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[47] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[48] Lindsay I. Smith, et al. A tutorial on Principal Components Analysis, 2002.
[49] Lawrence Carin, et al. ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching, 2017, NIPS.
[50] Alex Graves, et al. DRAW: A Recurrent Neural Network For Image Generation, 2015, ICML.
[51] Ole Winther, et al. Autoencoding beyond pixels using a learned similarity metric, 2015, ICML.
[52] Masashi Sugiyama, et al. Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation, 2012.
[53] 拓海 杉山, et al. Study report on "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", 2017.
[54] Jukka Corander, et al. Likelihood-Free Inference by Ratio Estimation, 2016, Bayesian Analysis.
[55] Sebastian Nowozin, et al. Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks, 2017, ICML.
[56] Max Welling, et al. Improved Variational Inference with Inverse Autoregressive Flow, 2016, NIPS.
[57] Zhe Gan, et al. Adversarial Symmetric Variational Autoencoder, 2017, NIPS.
[58] Yann LeCun, et al. Disentangling factors of variation in deep representation using adversarial training, 2016, NIPS.
[59] Aaron C. Courville, et al. Improved Training of Wasserstein GANs, 2017, NIPS.
[60] M. Gutmann, et al. Likelihood-free inference by penalised logistic regression, 2016.
[61] Andrew L. Maas. Rectifier Nonlinearities Improve Neural Network Acoustic Models, 2013.
[62] Shakir Mohamed, et al. Learning in Implicit Generative Models, 2016, arXiv.
[63] S. Wood. Statistical inference for noisy nonlinear ecological dynamic systems, 2010, Nature.
[64] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[65] Ian J. Goodfellow, et al. On distinguishability criteria for estimating generative models, 2014, ICLR.
[66] Max Welling, et al. Causal Effect Inference with Deep Latent-Variable Models, 2017, NIPS.