Koh Takeuchi | Naonori Ueda | Tomoharu Iwata | Zoubin Ghahramani | Akisato Kimura
[1] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[2] Wojciech Zaremba, et al. Improved Techniques for Training GANs, 2016, NIPS.
[3] Ananthram Swami, et al. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks, 2016 IEEE Symposium on Security and Privacy (SP).
[4] Junmo Kim, et al. A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Kurt Hornik, et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[6] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[7] Rich Caruana, et al. Learning Many Related Tasks at the Same Time with Backpropagation, 1994, NIPS.
[8] Zachary Chase Lipton, et al. Born Again Neural Networks, 2018, ICML.
[9] James Hensman, et al. Scalable Variational Gaussian Process Classification, 2014, AISTATS.
[10] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[11] Bernhard Schölkopf, et al. Fidelity-Weighted Learning, 2017, ICLR.
[12] Luca Bertinetto, et al. Learning feed-forward one-shot learners, 2016, NIPS.
[13] Carl E. Rasmussen, et al. A Unifying View of Sparse Approximate Gaussian Process Regression, 2005, J. Mach. Learn. Res.
[14] Shin Ishii, et al. Distributional Smoothing by Virtual Adversarial Examples, 2015, ICLR.
[15] Mingyan Liu, et al. Spatially Transformed Adversarial Examples, 2018, ICLR.
[16] Yoshua Bengio, et al. Gradient-based learning applied to document recognition, 1998, Proc. IEEE.
[17] Zhihua Zhang, et al. Bayesian Multicategory Support Vector Machines, 2006, UAI.
[18] Zoubin Ghahramani, et al. Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks, 2017, ArXiv.
[19] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[20] Patrick D. McDaniel, et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 2016, ArXiv.
[21] Alexis Boukouvalas, et al. GPflow: A Gaussian Process Library using TensorFlow, 2016, J. Mach. Learn. Res.
[22] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, 2015, ICML.
[23] Manfred Opper, et al. The Variational Gaussian Approximation Revisited, 2009, Neural Computation.
[24] Zoubin Ghahramani, et al. Sparse Gaussian Processes using Pseudo-inputs, 2005, NIPS.
[25] Rich Caruana, et al. Model compression, 2006, KDD '06.
[26] Oriol Vinyals, et al. Matching Networks for One Shot Learning, 2016, NIPS.
[27] Sameer Singh, et al. Generating Natural Adversarial Examples, 2017, ICLR.
[28] Gregory R. Koch, et al. Siamese Neural Networks for One-Shot Image Recognition, 2015.
[29] Timothy Dozat. Incorporating Nesterov Momentum into Adam, 2016.
[30] Michalis K. Titsias, et al. Variational Learning of Inducing Variables in Sparse Gaussian Processes, 2009, AISTATS.
[31] Jorge Nocedal, et al. Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound-constrained optimization, 1997, TOMS.
[32] Jonathon Shlens, et al. Explaining and Harnessing Adversarial Examples, 2014, ICLR.
[33] Mandar Kulkarni, et al. Knowledge distillation using unlabeled mismatched images, 2017, ArXiv.
[34] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[35] Thad Starner, et al. Data-Free Knowledge Distillation for Deep Neural Networks, 2017, ArXiv.
[36] Roland Vollgraf, et al. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms, 2017, ArXiv.
[37] Dumitru Erhan, et al. Going deeper with convolutions, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[38] Harri Valpola, et al. Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, ArXiv.
[39] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[40] Neil D. Lawrence, et al. Gaussian Processes for Big Data, 2013, UAI.
[41] Dacheng Tao, et al. Learning from Multiple Teacher Networks, 2017, KDD.