[1] P. Hansen. The truncated SVD as a method for regularization, 1987.
[2] Zoubin Ghahramani, et al. Learning from labeled and unlabeled data with label propagation, 2002.
[3] Yoshua Bengio, et al. Semi-supervised Learning by Entropy Minimization, 2004, CAP.
[4] Alexander Zien, et al. Semi-Supervised Learning, 2006.
[5] Geoffrey E. Hinton, et al. Rectified Linear Units Improve Restricted Boltzmann Machines, 2010, ICML.
[6] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning, 2011.
[7] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks, 2013.
[8] Nitish Srivastava, et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, J. Mach. Learn. Res.
[9] Sergey Ioffe, et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[10] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[11] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[12] Nikos Komodakis, et al. Wide Residual Networks, 2016, BMVC.
[13] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Tolga Tasdizen, et al. Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning, 2016, NIPS.
[15] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[16] Daniel Cremers, et al. Learning by Association — A Versatile Semi-Supervised Training Method for Neural Networks, 2017, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[17] Harri Valpola, et al. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, ArXiv.
[18] Gabriel Goh, et al. Why Momentum Really Works, 2017.
[19] Xavier Gastaldi, et al. Shake-Shake regularization, 2017, ArXiv.
[20] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[21] Loïc Le Folgoc, et al. Semi-Supervised Learning via Compact Latent Space Clustering, 2018, ICML.
[22] Colin Raffel, et al. Realistic Evaluation of Deep Semi-Supervised Learning Algorithms, 2018, NeurIPS.
[23] Saurabh Shintre, et al. Gradient Similarity: An Explainable Approach to Detect Adversarial Attacks against Deep Learning, 2018, ArXiv.
[24] Ioannis Mitliagkas, et al. Manifold Mixup: Encouraging Meaningful On-Manifold Interpolation as a Regularizer, 2018, ArXiv.
[25] Razvan Pascanu, et al. Adapting Auxiliary Losses Using Gradient Similarity, 2018, ArXiv.
[26] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[27] Andrew Gordon Wilson, et al. There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average, 2018, ICLR.