Anton van den Hengel | Anthony R. Dick | Qinfeng Shi | Ehsan Abbasnejad | Iman Abbasnejad
[1] Bernhard Schölkopf, et al. A Kernel Two-Sample Test, 2012, J. Mach. Learn. Res.
[2] Aaron C. Courville, et al. Adversarially Learned Inference, 2016, ICLR.
[3] Carl E. Rasmussen, et al. Gaussian Processes for Machine Learning, 2005, Adaptive Computation and Machine Learning.
[4] Jost Tobias Springenberg, et al. Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks, 2015, ICLR.
[5] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[6] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[7] Shin Ishii, et al. Distributional Smoothing with Virtual Adversarial Training, 2016, ICLR.
[8] Alexander J. Smola, et al. Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy, 2016, ICLR.
[9] Philip Bachman, et al. Learning with Pseudo-Ensembles, 2014, NIPS.
[10] David J. C. MacKay, et al. Information Theory, Inference, and Learning Algorithms, 2004, IEEE Transactions on Information Theory.
[11] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[12] Léon Bottou, et al. Wasserstein GAN, 2017, arXiv.
[13] Simon Osindero, et al. Conditional Generative Adversarial Nets, 2014, arXiv.
[14] Thomas Brox, et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[15] Ole Winther, et al. Autoencoding beyond Pixels Using a Learned Similarity Metric, 2015, ICML.
[16] Sebastian Nowozin, et al. f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization, 2016, NIPS.
[17] Guo-Jun Qi, et al. Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities, 2017, International Journal of Computer Vision.
[18] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[19] Nitish Srivastava, et al. Dropout: A Simple Way to Prevent Neural Networks from Overfitting, 2014, J. Mach. Learn. Res.
[20] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[21] Tim Salimans, et al. Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks, 2016, NIPS.
[22] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[23] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[24] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[25] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.