[1] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, ArXiv.
[2] Dumitru Erhan, et al. Going deeper with convolutions, 2014, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[3] Babak Hassibi, et al. Second Order Derivatives for Network Pruning: Optimal Brain Surgeon, 1992, NIPS.
[4] Diana Marculescu, et al. Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks, 2018, ArXiv.
[5] Song Han, et al. Learning both Weights and Connections for Efficient Neural Network, 2015, NIPS.
[6] Mathieu Salzmann, et al. Learning the Number of Neurons in Deep Networks, 2016, NIPS.
[7] Jiaxiang Wu, et al. Collaborative Channel Pruning for Deep Networks, 2019, ICML.
[8] Mingjie Sun, et al. Rethinking the Value of Network Pruning, 2018, ICLR.
[9] Max Welling, et al. Learning Sparse Neural Networks through L0 Regularization, 2017, ICLR.
[10] Ping Liu, et al. Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration, 2018, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[11] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[12] Rui Peng, et al. Network Trimming: A Data-Driven Neuron Pruning Approach towards Efficient Deep Architectures, 2016, ArXiv.
[13] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[14] Larry S. Davis, et al. NISP: Pruning Networks Using Neuron Importance Score Propagation, 2017, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition.
[15] Jacek M. Zurada, et al. Smooth group L1/2 regularization for input layer of feedforward neural networks, 2018, Neurocomputing.
[16] R. Tibshirani. Regression Shrinkage and Selection via the Lasso, 1996.
[17] M. Yuan, et al. Model selection and estimation in regression with grouped variables, 2006.
[18] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[19] Suya You, et al. Learning to Prune Filters in Convolutional Neural Networks, 2018, 2018 IEEE Winter Conference on Applications of Computer Vision (WACV).
[20] Xin Wang, et al. SkipNet: Learning Dynamic Routing in Convolutional Networks, 2017, ECCV.
[21] Xiangyu Zhang, et al. Channel Pruning for Accelerating Very Deep Neural Networks, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[22] Feng Li, et al. Group $L_{1/2}$ Regularization for Pruning Hidden Layer Nodes of Feedforward Neural Networks, 2019, IEEE Access.
[23] Rong Jin, et al. Exclusive Lasso for Multi-task Feature Selection, 2010, AISTATS.
[24] Jianxin Wu, et al. ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[25] Takio Kurita, et al. Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks, 2020, 2020 International Joint Conference on Neural Networks (IJCNN).
[26] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition, 2014, ICLR.
[27] Yoshua Bengio, et al. FitNets: Hints for Thin Deep Nets, 2014, ICLR.
[28] Yiran Chen, et al. Learning Structured Sparsity in Deep Neural Networks, 2016, NIPS.
[29] Liujuan Cao, et al. Towards Optimal Structured CNN Pruning via Generative Adversarial Learning, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[30] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Inference, 2016, ICLR.
[31] Song Han, et al. Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding, 2015, ICLR.
[32] R. Venkatesh Babu, et al. Data-free Parameter Pruning for Deep Neural Networks, 2015, BMVC.
[33] James Zijun Wang, et al. Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers, 2018, ICLR.
[34] Rongrong Ji, et al. Accelerating Convolutional Networks via Global & Dynamic Filter Pruning, 2018, IJCAI.
[35] Kilian Q. Weinberger, et al. Densely Connected Convolutional Networks, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[36] Naiyan Wang, et al. Data-Driven Sparse Structure Selection for Deep Neural Networks, 2017, ECCV.
[37] Hao Zhou, et al. Less Is More: Towards Compact CNNs, 2016, ECCV.
[38] Pavlo Molchanov, et al. Importance Estimation for Neural Network Pruning, 2019, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[39] Feiping Nie, et al. Exclusive Feature Learning on Arbitrary Structures via the $\ell_{1,2}$-norm, 2014, NIPS.
[40] Zhiqiang Shen, et al. Learning Efficient Convolutional Networks through Network Slimming, 2017, 2017 IEEE International Conference on Computer Vision (ICCV).
[41] Yi Yang, et al. Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks, 2018, IJCAI.
[42] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[43] Luca Zappella, et al. Principal Filter Analysis for Guided Network Compression, 2018, ArXiv.