Topology reduction in deep convolutional feature extraction networks

Deep convolutional neural networks (CNNs) used in practice employ potentially hundreds of layers and tens of thousands of nodes. Such network sizes entail significant computational complexity due to the large number of convolutions that need to be carried out; in addition, a large number of parameters needs to be learned and stored. Very deep and wide CNNs may therefore not be well suited to applications operating under severe resource constraints, as is the case, e.g., on low-power embedded and mobile platforms. This paper aims at understanding the impact of CNN topology, specifically depth and width, on the network's feature extraction capabilities. We address this question for the class of scattering networks that employ either Weyl-Heisenberg filters or wavelets, the modulus non-linearity, and no pooling. The exponential feature map energy decay results of Wiatowski et al., 2017, are generalized to O(a^{-N}), where an arbitrary decay factor a > 1 can be realized through a suitable choice of the Weyl-Heisenberg prototype function or the mother wavelet. We then show how networks of fixed (possibly small) depth N can be designed to guarantee that ((1 − ε) · 100)% of the input signal's energy is contained in the feature vector. Based on the notion of operationally significant nodes, we characterize, partly rigorously and partly heuristically, the topology-reducing effects of (effectively) band-limited input signals, band-limited filters, and feature map symmetries. Finally, for networks based on Weyl-Heisenberg filters, we determine the prototype function bandwidth that minimizes, for fixed network depth N, the average number of operationally significant nodes per layer.
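To make the depth-design rule concrete, the following minimal numpy sketch propagates a signal through a Weyl-Heisenberg-style scattering network with modulus non-linearity and no pooling, tracks the energy left in the feature maps at each layer, and computes the smallest depth N with a^{-N} ≤ ε under a constant-free reading of the O(a^{-N}) decay bound. All function names, the Gaussian prototype, and the filter-bank parameters are illustrative assumptions, not the paper's construction (whose frames satisfy specific frame bounds).

import numpy as np

def gabor_filter_bank(n, num_filters, bandwidth):
    # Weyl-Heisenberg-style bank: frequency shifts (modulations) of a
    # Gaussian prototype function; returns frequency responses of
    # shape (num_filters, n). Hypothetical illustration.
    freqs = np.fft.fftfreq(n)
    centers = np.linspace(0.05, 0.45, num_filters)
    return np.stack([np.exp(-(freqs - c) ** 2 / (2 * bandwidth ** 2))
                     for c in centers])

def propagate(x, bank, depth):
    # Push |x * g| through `depth` layers (modulus non-linearity, no
    # pooling) and record the total energy remaining in the feature
    # maps of each layer; this is the quantity whose decay is bounded.
    maps = [x.astype(complex)]
    energies = [sum(np.sum(np.abs(u) ** 2) for u in maps)]
    for _ in range(depth):
        maps = [np.abs(np.fft.ifft(np.fft.fft(u) * G))
                for u in maps for G in bank]
        energies.append(sum(np.sum(np.abs(u) ** 2) for u in maps))
    return energies

def min_depth(a, eps):
    # Smallest N with a**(-N) <= eps, i.e. N >= log(1/eps) / log(a);
    # under the assumed constant-free decay bound, at least
    # (1 - eps)*100% of the input energy is then captured by the
    # feature vector after N layers.
    return int(np.ceil(np.log(1.0 / eps) / np.log(a)))

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
bank = gabor_filter_bank(x.size, num_filters=4, bandwidth=0.05)
print(propagate(x, bank, depth=3))   # per-layer feature map energies
print(min_depth(a=2.0, eps=0.01))    # -> 7

Under this bound, a = 2 and ε = 0.01 give N = 7; larger decay factors a, realized through the choice of prototype function or mother wavelet, reduce the required depth logarithmically in 1/ε.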

[1] Luca Benini et al., Deep structured features for semantic segmentation, 25th European Signal Processing Conference (EUSIPCO), 2017.

[2] P. Grohs et al., Cartoon Approximation with α-Curvelets, 2014.

[3] M. Urner, Scattered Data Approximation, 2016.

[4] Helmut Bölcskei et al., Deep convolutional neural networks on cartoon functions, IEEE International Symposium on Information Theory (ISIT), 2016.

[5] Philipp Grohs et al., Energy Propagation in Deep Convolutional Neural Networks, IEEE Transactions on Information Theory, 2017.

[6] Yann LeCun et al., The MNIST database of handwritten digits, 2005.

[7] Thomas Wiatowski et al., A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction, IEEE Transactions on Information Theory, 2015.

[8] Nicholas D. Lane et al., Can Deep Learning Revolutionize Mobile Sensing?, HotMobile, 2015.

[9] Karlheinz Gröchenig et al., Note on B-splines, wavelet scaling functions, and Gabor frames, IEEE Transactions on Information Theory, 2003.

[10] G. Weiss et al., Littlewood-Paley Theory and the Study of Function Spaces, 1991.

[11] Gian Marti et al., Heart sound classification using deep structured features, Computing in Cardiology Conference (CinC), 2016.

[12] D. Slepian, On bandwidth, Proceedings of the IEEE, 1976.

[13] W. Czaja et al., Analysis of time-frequency scattering transforms, Applied and Computational Harmonic Analysis, 2016.

[14] Andrew Zisserman et al., Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR, 2014.

[15] Stéphane Mallat, Group Invariant Scattering, arXiv, 2011.

[16] Irène Waldspurger, Wavelet transform modulus: phase retrieval and scattering, 2017.

[17] Syed Twareque Ali et al., Continuous Frames in Hilbert Space, 1993.

[18] Helmut Bölcskei et al., Discrete Deep Feature Extraction: A Theory and New Architectures, ICML, 2016.

[19] A. Rahimi et al., Continuous Frames in Hilbert Spaces, 2006.

[20] D. Donoho, Sparse Components of Images and Optimal Atomic Decompositions, 2001.

[21] Jian Sun et al., Deep Residual Learning for Image Recognition, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.