Do Deep Convolutional Nets Really Need to Be Deep and Convolutional?
Samira Ebrahimi Kahou | Rich Caruana | Shengjie Wang | Matthai Philipose | Matthew Richardson | Krzysztof J. Geras | Gregor Urban | Abdelrahman Mohamed | Ozlem Aslan
[1] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[2] Razvan Pascanu, et al. Policy Distillation, 2015, ICLR.
[3] Ruslan Salakhutdinov, et al. Actor-Mimic: Deep Multitask and Transfer Reinforcement Learning, 2015, ICLR.
[4] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Jasper Snoek, et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[6] Yoshua Bengio, et al. Big Neural Networks Waste Capacity, 2013, ICLR.
[7] Dong Yu, et al. Conversational Speech Transcription Using Context-Dependent Deep Neural Networks, 2012, ICML.
[8] R. Srikant, et al. Why Deep Neural Networks?, 2016, ArXiv.
[9] Antonio Torralba, et al. 80 Million Tiny Images: A Large Dataset for Non-parametric Object and Scene Recognition, 2008, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[10] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[11] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[12] Geoffrey E. Hinton, et al. Regularizing Neural Networks by Penalizing Confident Output Distributions, 2017, ICLR.
[13] Charles A. Sutton, et al. Scheduled denoising autoencoders, 2015, ICLR.
[14] Amnon Shashua, et al. Convolutional Rectifier Networks as Generalized Tensor Decompositions, 2016, ICML.
[15] Razvan Pascanu, et al. Theano: new features and speed improvements, 2012, ArXiv.
[16] Alexander J. Smola, et al. Fastfood - Computing Hilbert Space Expansions in loglinear time, 2013, ICML.
[17] Yifan Gong, et al. Learning small-size DNN with output-distribution-based criteria, 2014, INTERSPEECH.
[18] Prabhat, et al. Scalable Bayesian Optimization Using Deep Neural Networks, 2015, ICML.
[19] Roland Memisevic, et al. How far can we go without convolution: Improving fully-connected networks, 2015, ArXiv.
[20] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[21] Jürgen Schmidhuber, et al. Training Very Deep Networks, 2015, NIPS.
[22] Roland Memisevic, et al. Zero-bias autoencoders and the benefits of co-adapting features, 2014, ICLR.
[23] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[24] Rich Caruana, et al. Do Deep Nets Really Need to be Deep?, 2013, NIPS.
[25] Yann LeCun, et al. Understanding Deep Architectures using a Recursive Convolutional Network, 2013, ICLR.
[26] G. Lewicki, et al. Approximation by Superpositions of a Sigmoidal Function, 2003.
[27] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[28] William Chan, et al. Transferring knowledge from a RNN to a DNN, 2015, INTERSPEECH.