ConvNets Pruning by Feature Maps Selection

Convolutional neural networks (CNNs) have been a major focus of machine learning research in recent years. As CNNs continue to advance in vision and speech, however, their parameter counts keep growing: a model with millions of parameters has a large memory footprint, which hinders widespread deployment, especially on mobile devices. Motivated by this observation, we design a CNN pruning method that removes unimportant feature maps, and we further propose a separability-value-based method for determining an appropriate number of maps to prune. Experimental results on the CIFAR-10 dataset show that between 15.6% and 59.7% of the feature maps in each convolutional layer can be pruned without any loss of performance. We also verify the effectiveness of the number-confirmation method through extensive repeated experiments that gradually prune the feature maps of each convolutional layer.
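The abstract does not spell out how separability values are computed, so the following is only a minimal sketch of the general idea: score each feature map by a Fisher-style class-separability ratio (between-class variance over within-class variance of its mean activation) and keep the highest-scoring maps. The function names, the scoring formula, and the `keep_ratio` parameter are all assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def separability_scores(features, labels):
    """Fisher-style separability score per feature map (an assumed proxy
    for the paper's separability values).

    features: (n_samples, n_maps) mean activation of each map per sample
    labels:   (n_samples,) integer class labels
    """
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in np.unique(labels):
        cls = features[labels == c]
        cls_mean = cls.mean(axis=0)
        # Between-class scatter: class means spread around the overall mean.
        between += len(cls) * (cls_mean - overall_mean) ** 2
        # Within-class scatter: samples spread around their class mean.
        within += ((cls - cls_mean) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def prune_maps(features, labels, keep_ratio=0.6):
    """Return sorted indices of the feature maps to keep (highest separability)."""
    scores = separability_scores(features, labels)
    k = max(1, int(round(keep_ratio * features.shape[1])))
    return np.sort(np.argsort(scores)[::-1][:k])
```

In a real pipeline one would pool each convolutional layer's activations over the spatial dimensions to get the per-sample, per-map features, prune the lowest-scoring maps, and fine-tune; the paper's confirmation method would then select the pruning ratio rather than a fixed `keep_ratio`.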
