On Adversarial Mixup Resynthesis
Christopher Beckham | Sina Honari | Vikas Verma | Alex Lamb | Farnoosh Ghadiri | R. Devon Hjelm | Yoshua Bengio | Chris Pal
[1] L. Deng, et al. The MNIST Database of Handwritten Digit Images for Machine Learning Research [Best of the Web], 2012, IEEE Signal Processing Magazine.
[2] Kristen Grauman, et al. Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[3] Yoshua Bengio, et al. Mutual Information Neural Estimation, 2018, ICML.
[4] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[5] Yuichi Yoshida, et al. Spectral Normalization for Generative Adversarial Networks, 2018, ICLR.
[6] Yoshua Bengio, et al. Learning a synaptic learning rule, 1991, IJCNN-91-Seattle International Joint Conference on Neural Networks.
[7] Ioannis Mitliagkas, et al. Manifold Mixup: Better Representations by Interpolating Hidden States, 2018, ICML.
[8] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[9] Pieter Abbeel, et al. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016, NIPS.
[10] Christopher Burgess, et al. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework, 2016, ICLR.
[11] Andriy Mnih, et al. Disentangling by Factorising, 2018, ICML.
[12] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[13] Yoshua Bengio, et al. Learning deep representations by mutual information estimation and maximization, 2018, ICLR.
[14] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[15] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[16] Oriol Vinyals, et al. Neural Discrete Representation Learning, 2017, NIPS.
[17] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[18] Sergey Levine, et al. Unsupervised Learning via Meta-Learning, 2018, ICLR.
[19] Alex Lamb, et al. Deep Learning for Classical Japanese Literature, 2018, ArXiv.
[20] Max Welling, et al. Auto-Encoding Variational Bayes, 2013, ICLR.
[21] Xi Chen, et al. Evolution Strategies as a Scalable Alternative to Reinforcement Learning, 2017, ArXiv.
[22] Kristen Grauman, et al. Fine-Grained Visual Comparisons with Local Learning, 2014, 2014 IEEE Conference on Computer Vision and Pattern Recognition.
[23] Yoshua Bengio, et al. Interpolation Consistency Training for Semi-Supervised Learning, 2019, IJCAI.
[24] Ole Winther, et al. Autoencoding beyond pixels using a learned similarity metric, 2015, ICML.
[25] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2014, 2015 IEEE International Conference on Computer Vision (ICCV).
[26] Yoshua Bengio, et al. GraphMix: Regularized Training of Graph Neural Networks for Semi-Supervised Learning, 2019, ArXiv.
[27] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization, 2017, ICLR.
[28] Yoichi Yaguchi, et al. MixFeat: Mix Feature in Latent Space Learns Discriminative Space, 2018.
[29] Yoshua Bengio, et al. Towards Biologically Plausible Deep Learning, 2015, ArXiv.
[30] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[31] Kenneth O. Stanley, et al. Deep Neuroevolution: Genetic Algorithms Are a Competitive Alternative for Training Deep Neural Networks for Reinforcement Learning, 2017, ArXiv.
[32] Seong Joon Oh, et al. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features, 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[33] Tim Sainburg, et al. Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions, 2018, ArXiv.
[34] Navdeep Jaitly, et al. Adversarial Autoencoders, 2015, ArXiv.
[35] Yoshua Bengio, et al. Deep Learning of Representations for Unsupervised and Transfer Learning, 2011, ICML Unsupervised and Transfer Learning.
[36] Jan Kautz, et al. Unsupervised Image-to-Image Translation Networks, 2017, NIPS.
[37] Quoc V. Le, et al. DropBlock: A regularization method for convolutional networks, 2018, NeurIPS.
[38] Alexei A. Efros, et al. Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Jürgen Schmidhuber, et al. Recurrent World Models Facilitate Policy Evolution, 2018, NeurIPS.
[40] Raymond Y. K. Lau, et al. Least Squares Generative Adversarial Networks, 2016, 2017 IEEE International Conference on Computer Vision (ICCV).
[41] Ioannis Mitliagkas, et al. Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer, 2018, ArXiv.
[42] Ben Poole, et al. Categorical Reparameterization with Gumbel-Softmax, 2016, ICLR.
[43] Philippe Beaudoin, et al. Independently Controllable Factors, 2017, ArXiv.
[44] Pascal Vincent, et al. Contractive Auto-Encoders: Explicit Invariance During Feature Extraction, 2011, ICML.
[45] Jonathon Shlens, et al. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016, ICML.
[46] Yoshua Bengio, et al. Understanding intermediate layers using linear classifier probes, 2016, ICLR.
[47] David Berthelot, et al. Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer, 2018, ICLR.