Pruning the Convolution Neural Network (SqueezeNet) based on L2 Normalization of Activation Maps
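The pruning criterion named in the title — ranking a convolutional layer's filters by the L2 norm of their activation maps and discarding the weakest — can be sketched as below. This is an illustrative reconstruction, not the paper's exact procedure: the function name, the `(batch, filters, H, W)` activation layout, and the `keep_ratio` parameter are assumptions for the sketch.

```python
import numpy as np

def prune_filters_by_activation_l2(activations, keep_ratio=0.5):
    """Rank one conv layer's filters by the L2 norm of their activation
    maps (averaged over a batch) and return the indices of filters to keep.

    activations: array of shape (batch, num_filters, H, W) -- assumed layout.
    keep_ratio: fraction of filters to retain (hypothetical parameter).
    """
    # Per-sample L2 norm of each filter's activation map -> (batch, F)
    norms = np.sqrt((activations ** 2).sum(axis=(2, 3)))
    # Average over the batch so the score is per filter -> (F,)
    mean_norms = norms.mean(axis=0)
    num_keep = max(1, int(round(keep_ratio * mean_norms.size)))
    # Keep the filters with the largest mean activation L2 norm
    keep = np.argsort(mean_norms)[::-1][:num_keep]
    return np.sort(keep)
```

With `keep_ratio=0.5` on a 4-filter layer, the two filters whose activations carry the largest L2 norm survive; the corresponding kernels (and their downstream input channels) would then be removed from the network.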
[1] Hanan Samet, et al. Pruning Filters for Efficient ConvNets, 2016, ICLR.
[2] Forrest N. Iandola, et al. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size, 2016, ArXiv.
[3] Jacek M. Zurada, et al. Building Efficient ConvNets using Redundant Feature Pruning, 2018, ArXiv.
[4] Andrew Lavin, et al. Fast Algorithms for Convolutional Neural Networks, 2015, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[5] Timo Aila, et al. Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning, 2016, ArXiv.
[6] Michael I. Jordan, et al. Advances in Neural Information Processing Systems 30, 2017.
[7] Suvrit Sra, et al. Diversity Networks: Neural Network Compression Using Determinantal Point Processes, 2015, ArXiv.
[8] Yann LeCun, et al. Optimal Brain Damage, 1989, NIPS.
[9] Wonyong Sung, et al. Structured Pruning of Deep Convolutional Neural Networks, 2015, ACM J. Emerg. Technol. Comput. Syst.
[10] Lior Wolf, et al. Channel-Level Acceleration of Deep Face Representations, 2015, IEEE Access.
[11] Yann LeCun, et al. Fast Training of Convolutional Networks through FFTs, 2013, ICLR.