Pruning filters with L1-norm and capped L1-norm for CNN compression

The rapid progress of convolutional neural networks (CNNs) in numerous real-world applications is often hampered by a surge in model size and computational cost. Recently, researchers have concentrated on mitigating these issues by compressing CNN models, for example by pruning filters or weights. Compared with weight pruning, filter pruning does not result in sparse connectivity patterns. In this article, we propose a new technique to estimate the significance of filters. More precisely, we combine the L1-norm with the capped L1-norm to represent the amount of information extracted by each filter and to control regularization. During pruning, insignificant filters are removed directly without any loss in test accuracy, yielding much slimmer and more compact models of comparable accuracy; this process is iterated a few times. To validate the effectiveness of our algorithm, we experimentally evaluate it with several advanced CNN models on standard data sets. In particular, on CIFAR-10 with VGG-16, our method prunes 92.7% of the parameters and reduces floating-point operations (FLOPs) by 75.8% without loss of accuracy, advancing the state of the art.
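
To make the scoring rule concrete, below is a minimal PyTorch sketch of per-filter scoring and selection. It assumes the importance of a filter is a weighted combination of its L1-norm and its capped L1-norm (the sum of min(|w|, theta) over the filter's weights); the cap threshold theta, the balancing weight lam, the pruning ratio, and the exact combination rule are illustrative assumptions, since the abstract does not fix these details.

    import torch

    def filter_importance(conv_weight, theta=1.0, lam=0.5):
        # conv_weight: (out_channels, in_channels, kH, kW)
        # Flatten each filter and take absolute values of its weights.
        w = conv_weight.view(conv_weight.size(0), -1).abs()
        l1 = w.sum(dim=1)                                  # plain L1-norm per filter
        capped_l1 = torch.clamp(w, max=theta).sum(dim=1)   # capped L1-norm: sum of min(|w|, theta)
        # Assumed combination rule: a weighted sum of the two norms.
        return l1 + lam * capped_l1

    def select_filters(conv_weight, prune_ratio=0.3, theta=1.0, lam=0.5):
        # Return sorted indices of the filters to keep, dropping the lowest-scoring ones.
        scores = filter_importance(conv_weight, theta, lam)
        n_keep = conv_weight.size(0) - int(prune_ratio * conv_weight.size(0))
        keep = torch.argsort(scores, descending=True)[:n_keep]
        return torch.sort(keep).values

    # Usage: score one convolutional layer and keep the top-scoring filters.
    conv = torch.nn.Conv2d(3, 64, kernel_size=3)
    keep = select_filters(conv.weight.data, prune_ratio=0.3)
    pruned_weight = conv.weight.data[keep]                 # shape (45, 3, 3, 3)

In a full pipeline, the kept filters would be copied into a rebuilt, narrower layer (adjusting the next layer's input channels to match) and the network fine-tuned before the next pruning pass, matching the iterated prune-and-retrain procedure described above.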
