Accelerating the Classification of Very Deep Convolutional Networks by a Cascading Approach

Large convolutional networks have recently achieved impressive classification performance. To obtain better performance, convolutional networks tend to grow deeper. However, increasing network depth causes computational complexity to grow linearly without bringing an equivalent gain in classification accuracy. To alleviate this mismatch, we propose a cascading approach to accelerate classification with very deep convolutional neural networks. By using an entropy metric to analyze the statistical differences between the correctly and incorrectly classified images of the basic networks, we assign easily distinguished images to shallow networks to reduce computational complexity, and leave hard-to-classify images to deep networks to maintain overall performance. In addition, the proposed cascaded networks can exploit the complementarity between different networks, which may boost classification accuracy beyond that of the deepest network alone. We perform experiments with residual networks of different depths on the CIFAR-100 dataset. While matching the accuracy of the deepest network, our cascaded ResNet32-ResNet110 and ResNet32-ResNet164 reduce computation time by 48.6% and 44.3% compared to ResNet110 and ResNet164, respectively, and the cascaded ResNet32-ResNet110-ResNet164 reduces computation time by 85.4% compared to the very deep ResNet1001.
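The routing rule described above can be sketched as follows: the shallow network classifies every image first, and only images whose predictive entropy exceeds a confidence threshold are forwarded to the deeper network. This is a minimal illustration, not the paper's implementation; the network callables, the threshold value, and the toy logits are all assumptions for the example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def entropy(probs, eps=1e-12):
    # Shannon entropy of the predictive distribution; low entropy
    # indicates a confident (likely "easy") prediction.
    return float(-(probs * np.log(probs + eps)).sum())

def cascade_predict(x, shallow_net, deep_net, threshold):
    # Stage 1: always run the cheap, shallow network.
    probs = softmax(shallow_net(x))
    if entropy(probs) < threshold:
        # Easy image: accept the shallow prediction and stop here.
        return int(probs.argmax())
    # Hard image: defer to the expensive, deeper network.
    return int(softmax(deep_net(x)).argmax())
```

The threshold trades speed against accuracy: a lower threshold forwards more images to the deep network, approaching its accuracy and its cost, while a higher threshold keeps more images at the shallow stage.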
