Bernhard Schölkopf | Michael J. Black | Partha Ghosh | Antonio Vergari | Mehdi S. M. Sajjadi
[1] A. N. Tikhonov,et al. Solutions of ill-posed problems , 1977 .
[2] Jocelyn Sietsma,et al. Creating artificial neural networks that generalize , 1991, Neural Networks.
[3] Bradley P. Carlin,et al. Markov Chain Monte Carlo convergence diagnostics: a comparative review , 1996 .
[4] Guozhong An,et al. The Effects of Adding Noise During Backpropagation Training on a Generalization Performance , 1996, Neural Computation.
[5] Yoshua Bengio,et al. Gradient-based learning applied to document recognition , 1998, Proc. IEEE.
[6] Simon Haykin,et al. Gradient-Based Learning Applied to Document Recognition , 2001 .
[7] T. Chan,et al. Variational image inpainting , 2005 .
[8] Radford M. Neal. Pattern Recognition and Machine Learning , 2007, Technometrics.
[9] Yoshua Bengio,et al. Extracting and composing robust features with denoising autoencoders , 2008, ICML '08.
[10] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[11] Pascal Vincent,et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction , 2011, ICML.
[12] Hugo Larochelle,et al. The Neural Autoregressive Distribution Estimator , 2011, AISTATS.
[13] Yoshua Bengio,et al. A Generative Process for Sampling Contractive Auto-Encoders , 2012, ICML.
[14] Pascal Vincent,et al. Generalized Denoising Auto-Encoders as Generative Models , 2013, NIPS.
[15] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res..
[16] Daan Wierstra,et al. Stochastic Backpropagation and Approximate Inference in Deep Generative Models , 2014, ICML.
[17] Yoshua Bengio,et al. Generative Adversarial Nets , 2014, NIPS.
[18] Max Welling,et al. Auto-Encoding Variational Bayes , 2013, ICLR.
[19] Honglak Lee,et al. Learning Structured Output Representation using Deep Conditional Generative Models , 2015, NIPS.
[20] Sergey Ioffe,et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift , 2015, ICML.
[21] Xiaogang Wang,et al. Deep Learning Face Attributes in the Wild , 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[22] Hugo Larochelle,et al. MADE: Masked Autoencoder for Distribution Estimation , 2015, ICML.
[23] Navdeep Jaitly,et al. Adversarial Autoencoders , 2015, ArXiv.
[24] Alán Aspuru-Guzik,et al. What Is High-Throughput Virtual Screening? A Perspective from Organic Materials Discovery , 2015 .
[25] Nikos Komodakis,et al. Wide Residual Networks , 2016, BMVC.
[26] Matthias Bethge,et al. A note on the evaluation of generative models , 2015, ICLR.
[27] Ruslan Salakhutdinov,et al. Importance Weighted Autoencoders , 2015, ICLR.
[28] Samy Bengio,et al. Generating Sentences from a Continuous Space , 2015, CoNLL.
[29] Matt J. Kusner,et al. Grammar Variational Autoencoder , 2017, ICML.
[30] Sepp Hochreiter,et al. GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , 2017, NIPS.
[31] Valero Laparra,et al. End-to-end Optimized Image Compression , 2016, ICLR.
[32] Lucas Theis,et al. Amortised MAP Inference for Image Super-resolution , 2016, ICLR.
[33] Samy Bengio,et al. Understanding deep learning requires rethinking generalization , 2016, ICLR.
[34] Stefano Ermon,et al. Towards Deeper Understanding of Variational Autoencoding Models , 2017, ArXiv.
[35] Pieter Abbeel,et al. Variational Lossy Autoencoder , 2016, ICLR.
[36] Max Welling,et al. Improved Variational Inference with Inverse Autoregressive Flow , 2016, NIPS.
[37] Erhardt Barth,et al. A Hybrid Convolutional Variational Autoencoder for Text Generation , 2017, EMNLP.
[38] Jascha Sohl-Dickstein,et al. REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models , 2017, NIPS.
[39] Christopher Burgess,et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework , 2016, ICLR.
[40] Léon Bottou,et al. Wasserstein Generative Adversarial Networks , 2017, ICML.
[41] Ruslan Salakhutdinov,et al. Geometry of Optimization and Implicit Regularization in Deep Learning , 2017, ArXiv.
[42] Aaron C. Courville,et al. Improved Training of Wasserstein GANs , 2017, NIPS.
[43] Oriol Vinyals,et al. Neural Discrete Representation Learning , 2017, NIPS.
[44] Bernhard Schölkopf,et al. EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis , 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[45] Fabio Viola,et al. Taming VAEs , 2018, ArXiv.
[46] Olivier Bachem,et al. Assessing Generative Models via Precision and Recall , 2018, NeurIPS.
[47] Bernhard Schölkopf,et al. Tempered Adversarial Networks , 2018, ICML.
[48] Xiaohua Zhai,et al. The GAN Landscape: Losses, Architectures, Regularization, and Normalization , 2018, ArXiv.
[49] Max Welling,et al. VAE with a VampPrior , 2017, AISTATS.
[50] Bernhard Schölkopf,et al. Wasserstein Auto-Encoders , 2017, ICLR.
[51] Sebastian Nowozin,et al. Which Training Methods for GANs do actually Converge? , 2018, ICML.
[52] Yuichi Yoshida,et al. Spectral Normalization for Generative Adversarial Networks , 2018, ICLR.
[53] Alán Aspuru-Guzik,et al. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules , 2016, ACS central science.
[54] Alexander A. Alemi,et al. Fixing a Broken ELBO , 2017, ICML.
[55] Mario Lucic,et al. Are GANs Created Equal? A Large-Scale Study , 2017, NeurIPS.
[56] Tieniu Tan,et al. IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis , 2018, NeurIPS.
[57] David Lopez-Paz,et al. Optimizing the Latent Space of Generative Networks , 2017, ICML.
[58] Shakir Mohamed,et al. Distribution Matching in Variational Inference , 2018, ArXiv.
[59] Regina Barzilay,et al. Junction Tree Variational Autoencoder for Molecular Graph Generation , 2018, ICML.
[60] Jeff Donahue,et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis , 2018, ICLR.
[61] Andriy Mnih,et al. Resampled Priors for Variational Autoencoders , 2018, AISTATS.
[62] Ali Razavi,et al. Generating Diverse High-Fidelity Images with VQ-VAE-2 , 2019, NeurIPS.
[63] Michael J. Black,et al. Resisting Adversarial Attacks using Gaussian Mixture Variational Autoencoders , 2018, AAAI.
[64] David P. Wipf,et al. Diagnosing and Enhancing VAE Models , 2019, ICLR.