Pruning filters with L1-norm and standard deviation for CNN compression

Convolutional Neural Networks (CNNs) have become the state-of-the-art technique for many machine learning tasks. However, CNNs bring a significant increase in computation and parameter storage costs, which makes them difficult to deploy on embedded devices with limited hardware resources and a tight power budget. In recent years, research has focused on reducing these overheads by compressing CNN models, for example by pruning weights or pruning filters. Compared with weight pruning, filter pruning does not produce sparse connectivity patterns and is therefore well suited to parallel acceleration on hardware platforms. In this paper, we propose a new method for judging the importance of filters. To make this judgement more accurate, we use the standard deviation to represent the amount of information extracted by a filter. During pruning, unimportant filters can be removed directly without loss in test accuracy. We also propose a multilayer pruning method that avoids setting the pruning rate layer by layer; this holistic approach improves pruning efficiency. To verify the effectiveness of our algorithm, we conduct experiments on the simpler network VGG16 and the more complex networks ResNet18/34. We retrain the pruned CNNs to compensate for the accuracy loss caused by pruning. The results show that our method can reduce inference cost by up to 50% for VGG16 and 35% for ResNet18/34 on CIFAR10 with little accuracy loss.
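As a rough sketch of the filter-importance idea described above, the PyTorch snippet below scores each convolutional filter and zeroes out the lowest-scoring ones. The way the L1-norm and the standard deviation are combined (here, a simple product), the helper names filter_importance and prune_conv_filters, and the masked rather than structural removal of filters are illustrative assumptions, not the paper's exact procedure.

    # Minimal sketch: score each filter of a Conv2d layer and mask out the
    # least important ones. The combination of L1-norm and standard deviation
    # (a product here) is an assumption for illustration only.
    import torch
    import torch.nn as nn

    def filter_importance(conv: nn.Conv2d) -> torch.Tensor:
        """Return one importance score per output filter."""
        w = conv.weight.detach()            # shape: (out_ch, in_ch, kH, kW)
        flat = w.view(w.size(0), -1)        # one row per filter
        l1 = flat.abs().sum(dim=1)          # L1-norm of each filter
        std = flat.std(dim=1)               # spread of each filter's weights
        return l1 * std                     # assumed way of combining the two

    def prune_conv_filters(conv: nn.Conv2d, prune_ratio: float) -> torch.Tensor:
        """Zero out the lowest-scoring filters; return the boolean keep-mask."""
        scores = filter_importance(conv)
        n_prune = int(prune_ratio * scores.numel())
        keep = torch.ones_like(scores, dtype=torch.bool)
        if n_prune > 0:
            order = scores.argsort()        # ascending: least important first
            keep[order[:n_prune]] = False
        with torch.no_grad():
            conv.weight[~keep] = 0.0        # masked pruning for illustration;
            if conv.bias is not None:       # true structural pruning would
                conv.bias[~keep] = 0.0      # rebuild the layer with fewer filters
        return keep

    if __name__ == "__main__":
        layer = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        kept = prune_conv_filters(layer, prune_ratio=0.5)
        print(f"kept {int(kept.sum())} of {kept.numel()} filters")

In an actual compression pipeline the keep-mask would be used to build a smaller Conv2d (and to slice the following layer's input channels), after which the pruned network is retrained to recover accuracy, as described in the abstract.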
