Reconstruction of sparse vectors in compressive sensing with multiple measurement vectors using bidirectional long short-term memory

In this paper we address the problem of compressive sensing with multiple measurement vectors. We propose a reconstruction algorithm that learns the sparse structure both within each sparse vector and across sparse vectors. The learning is based on a cross-entropy cost function, and the model is a Bidirectional Long Short-Term Memory (BLSTM) network that is deep in time. All modifications are made at the decoder, so the encoder remains the general compressive sensing encoder, i.e., a wide random matrix. Through numerical experiments on a real-world dataset, we show that the proposed method outperforms the traditional greedy algorithm SOMP as well as a number of model-based Bayesian methods, including Multitask Compressive Sensing and compressive sensing with temporally correlated sources. We emphasize that since the proposed method is learning based, its performance depends on the availability of training data. Nevertheless, in many applications a large dataset of offline training data is available.
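The encoder described above can be sketched in a few lines: a single wide random matrix measures several jointly sparse vectors at once, producing the multiple measurement vectors that the learned decoder would then reconstruct. This is a minimal illustration, not the authors' implementation; the dimensions (`n`, `m`, `L`, `k`) are arbitrary example values.

```python
import numpy as np

# Sketch of the standard MMV compressive-sensing encoder: a wide random
# matrix A (m < n) measures L jointly sparse source vectors in one shot.
rng = np.random.default_rng(0)
n, m, L, k = 64, 16, 4, 3        # ambient dim, measurements, channels, sparsity

# Wide random sensing matrix, column-normalized in expectation.
A = rng.standard_normal((m, n)) / np.sqrt(m)

# Jointly sparse source matrix X: all L columns share one support of size k,
# which is the structure the learned decoder is meant to exploit.
support = rng.choice(n, size=k, replace=False)
X = np.zeros((n, L))
X[support, :] = rng.standard_normal((k, L))

Y = A @ X                        # multiple measurement vectors, shape (m, L)
print(Y.shape)                   # (16, 4)
```

Reconstruction then amounts to recovering `X` (or its support) from `Y` and `A`; the paper's contribution is to replace greedy or Bayesian solvers with a BLSTM decoder trained offline on such data.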

[1] David L. Donoho, et al. Compressed sensing, 2006, IEEE Transactions on Information Theory.

[2] Geoffrey E. Hinton, et al. Reducing the Dimensionality of Data with Neural Networks, 2006, Science.

[3] Rabab Kreidieh Ward, et al. Convolutional Deep Stacking Networks for distributed compressive sensing, 2017, Signal Process.

[4] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.

[5] Rabab Kreidieh Ward, et al. Using deep stacking network to improve structured compressed sensing with Multiple Measurement Vectors, 2013, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[6] Dong Yu, et al. Scalable stacking and learning for building deep architectures, 2012, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[7] J. Tropp. Algorithms for simultaneous sparse approximation. Part II: Convex relaxation, 2006, Signal Process.

[9] Joel A. Tropp, et al. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit, 2006, Signal Process.

[10] Rabab Kreidieh Ward, et al. Deep Sentence Embedding Using Long Short-Term Memory Networks: Analysis and Application to Information Retrieval, 2015, IEEE/ACM Transactions on Audio, Speech, and Language Processing.

[11] Rabab Kreidieh Ward, et al. Recurrent Deep-Stacking Networks for sequence classification, 2014, IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP).

[12] David B. Dunson, et al. Multitask Compressive Sensing, 2009, IEEE Transactions on Signal Processing.

[13] Rabab Kreidieh Ward, et al. Exploiting correlations among channels in distributed compressive sensing with convolutional deep stacking networks, 2016, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).

[14] Andrew W. Senior, et al. Long short-term memory recurrent neural network architectures for large scale acoustic modeling, 2014, INTERSPEECH.

[15] Bhaskar D. Rao, et al. An Empirical Bayesian Strategy for Solving the Simultaneous Sparse Approximation Problem, 2007, IEEE Transactions on Signal Processing.

[16] Tara N. Sainath, et al. Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups, 2012, IEEE Signal Processing Magazine.

[17] Rabab Kreidieh Ward, et al. Distributed Compressive Sensing: A Deep Learning Approach, 2015, IEEE Transactions on Signal Processing.

[18] Bhaskar D. Rao, et al. Sparse Signal Recovery With Temporally Correlated Source Vectors Using Sparse Bayesian Learning, 2011, IEEE Journal of Selected Topics in Signal Processing.

[19] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.

[20] Dong Yu, et al. Deep Learning and Its Applications to Signal and Information Processing [Exploratory DSP], 2011, IEEE Signal Processing Magazine.

[21] Rabab Kreidieh Ward, et al. Learning Input and Recurrent Weight Matrices in Echo State Networks, 2013, ArXiv.

[22] Guigang Zhang, et al. Deep Learning, 2016, International Journal of Semantic Computing.

[23] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.