SlimNets: An Exploration of Deep Model Compression and Acceleration