[1] Christopher D. Manning, et al. Compression of Neural Machine Translation Models via Pruning, 2016, CoNLL.
[2] Michael C. Mozer, et al. Skeletonization: A Technique for Trimming the Fat from a Network via Relevance Assessment, 1988, NIPS.
[3] Ehud D. Karnin, et al. A simple procedure for pruning back-propagation trained neural networks, 1990, IEEE Trans. Neural Networks.
[4] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[5] Miguel Á. Carreira-Perpiñán, et al. "Learning-Compression" Algorithms for Neural Net Pruning, 2018, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[6] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[7] Ameya Prabhu, et al. Deep Expander Networks: Efficient Deep Networks from Graph Theory, 2017, ECCV.
[8] Yves Chauvin, et al. A Back-Propagation Algorithm with Optimal Use of Hidden Units, 1988, NIPS.
[9] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[10] Yoshua Bengio, et al. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation, 2014, EMNLP.
[11] Klaus-Robert Müller, et al. Efficient BackProp, 2012, Neural Networks: Tricks of the Trade.
[12] Peter Stone, et al. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science, 2017, Nature Communications.
[13] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[14] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[15] Geoffrey E. Hinton, et al. A Simple Way to Initialize Recurrent Networks of Rectified Linear Units, 2015, arXiv.
[16] Yoshua Bengio, et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[17] Yurong Chen, et al. Dynamic Network Surgery for Efficient DNNs, 2016, NIPS.
[18] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[19] G. D. Magoulas, et al. Under review as a conference paper at ICLR 2017, 2017.
[20] Xin Dong, et al. Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon, 2017, NIPS.
[21] Russell Reed, et al. Pruning algorithms-a survey, 1993, IEEE Trans. Neural Networks.
[22] Jian Sun, et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[23] Grégoire Montavon, et al. Neural Networks: Tricks of the Trade, 2012, Lecture Notes in Computer Science.
[24] Masumi Ishikawa, et al. Structural learning with forgetting, 1996, Neural Networks.
[25] Dmitry P. Vetrov, et al. Variational Dropout Sparsifies Deep Neural Networks, 2017, ICML.
[26] Max Welling, et al. Soft Weight-Sharing for Neural Network Compression, 2017, ICLR.
[27] Wojciech Zaremba, et al. Recurrent Neural Network Regularization, 2014, arXiv.
[28] Andrew Zisserman, et al. Speeding up Convolutional Neural Networks with Low Rank Expansions, 2014, BMVC.
[29] Alexander Novikov, et al. Tensorizing Neural Networks, 2015, NIPS.
[30] Geoffrey E. Hinton, et al. Simplifying Neural Networks by Soft Weight-Sharing, 1992, Neural Computation.
[31] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[32] Samy Bengio, et al. Understanding deep learning requires rethinking generalization, 2016, ICLR.
[33] Erich Elsen, et al. Exploring Sparsity in Recurrent Neural Networks, 2017, ICLR.
[34] Gregory J. Wolff, et al. Optimal Brain Surgeon and general network pruning, 1993, IEEE International Conference on Neural Networks.
[35] Ming Yang, et al. Compressing Deep Convolutional Networks using Vector Quantization, 2014, arXiv.
[36] Percy Liang, et al. Understanding Black-box Predictions via Influence Functions, 2017, ICML.
[37] Yi Zhang, et al. Stronger generalization bounds for deep nets via a compression approach, 2018, ICML.
[38] L. Breiman. Better subset regression using the nonnegative garrote, 1995, Technometrics.
[39] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[40] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[41] Pritish Narayanan, et al. Deep Learning with Limited Numerical Precision, 2015, ICML.