Kronecker CP Decomposition With Fast Multiplication for Compressing RNNs

Recurrent neural networks (RNNs) are powerful for tasks on sequential data, such as natural language processing and video recognition. However, because modern RNNs have complex topologies and high space and computation costs, compressing them has become an active and promising research topic in recent years. Among the many compression methods, tensor decomposition, e.g., tensor train (TT), block term (BT), tensor ring (TR), and hierarchical Tucker (HT), is particularly attractive because it can achieve very high compression ratios. Nevertheless, none of these tensor decomposition formats provides both space and computation efficiency. In this article, we compress RNNs with a novel Kronecker CANDECOMP/PARAFAC (KCP) decomposition, derived from the Kronecker tensor (KT) decomposition, and propose two fast algorithms for multiplying the input by the tensor-decomposed weight. Experiments on the UCF11, YouTube Celebrities Face, UCF50, TIMIT, TED-LIUM, and Spiking Heidelberg Digits datasets verify that the proposed KCP-RNNs achieve accuracy comparable to RNNs in other tensor-decomposed formats, and a compression ratio as high as 278,219x can be obtained with low-rank KCP. More importantly, KCP-RNNs are efficient in both space and computation complexity compared with other tensor-decomposed RNNs. In addition, we find that KCP offers the best potential for parallel computing to accelerate the calculations in neural networks.
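The general idea behind such a fast multiplication can be illustrated with a small sketch. The following Python/NumPy snippet is not the authors' algorithm; it is a minimal, hypothetical example (the function name kcp_matvec and the restriction to two Kronecker factors per term are our own assumptions) of how a weight matrix stored as a sum of Kronecker products can be multiplied by an input vector without materializing the full weight, using the identity (A kron B) vec(X) = vec(A X B^T) for row-major vectorization.

import numpy as np

def kcp_matvec(A_factors, B_factors, x):
    # Compute y = W @ x for W = sum_r kron(A_r, B_r) without forming W.
    # A_r has shape (m, n), B_r has shape (p, q), x has length n * q.
    m, n = A_factors[0].shape
    p, q = B_factors[0].shape
    X = x.reshape(n, q)                  # row-major reshape of the input
    Y = sum(A @ X @ B.T for A, B in zip(A_factors, B_factors))
    return Y.reshape(m * p)              # output has length m * p

# Sanity check against the explicitly materialized weight.
rng = np.random.default_rng(0)
A_factors = [rng.standard_normal((4, 3)) for _ in range(2)]
B_factors = [rng.standard_normal((5, 6)) for _ in range(2)]
x = rng.standard_normal(3 * 6)
W = sum(np.kron(A, B) for A, B in zip(A_factors, B_factors))
assert np.allclose(kcp_matvec(A_factors, B_factors, x), W @ x)

Under these assumptions, each rank-one term costs roughly O(mq(n + p)) operations instead of the O(mnpq) needed by the dense product, and the terms in the sum are mutually independent, which hints at the parallelization potential mentioned in the abstract.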
