DeepMP for Non-Negative Sparse Decomposition

Non-negative signals form an important class of sparse signals. Many algorithms have been proposed to recover such non-negative representations, among which greedy and convex relaxation methods are the most popular. Greedy techniques have low computational cost and have been modified to incorporate the non-negativity of the representations. One such modification has been proposed for Matching Pursuit (MP) based algorithms: it first chooses positive coefficients and then applies a non-negative optimisation technique that guarantees the non-negativity of the coefficients. The performance of greedy algorithms, like that of all non-exhaustive search methods, suffers when the linear generative model, called the dictionary, is highly coherent. We here first reformulate the non-negative matching pursuit algorithm in the form of a deep neural network. We then show that, after training, the proposed model yields a significant improvement in exact recovery performance over other, non-trained greedy algorithms, while keeping the complexity low.
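
To make the two ingredients concrete, below is a minimal sketch, assuming a NumPy/PyTorch setting: first a plain non-negative matching pursuit (NNMP) baseline of the kind the abstract describes, then one unfolded MP iteration with learnable weights. The class name `DeepMPLayer`, the temperature `tau`, and the softmax-based soft atom selection are illustrative assumptions for differentiability, not the paper's exact architecture.

```python
# Minimal sketch, assuming NumPy/PyTorch; illustrative only, not the
# authors' exact DeepMP implementation.
import numpy as np
import torch
import torch.nn as nn

def nn_matching_pursuit(D, y, n_iter=10):
    """Greedy non-negative decomposition y ~ D @ x with x >= 0.
    D: (m, n) dictionary with unit-norm columns; y: (m,) signal."""
    x = np.zeros(D.shape[1])
    r = y.astype(float)
    for _ in range(n_iter):
        c = D.T @ r                    # correlations with the residual
        i = int(np.argmax(c))          # most positively correlated atom
        if c[i] <= 0:                  # no positive coefficient remains
            break
        x[i] += c[i]                   # additive update keeps x >= 0
        r = r - c[i] * D[:, i]         # update the residual
    return x

class DeepMPLayer(nn.Module):
    """One unfolded MP iteration with learnable weights (hypothetical
    layer; the soft argmax via softmax is an assumption)."""
    def __init__(self, m, n):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n, m) / m ** 0.5)  # learned correlator
        self.D = nn.Parameter(torch.randn(m, n) / m ** 0.5)  # learned atoms

    def forward(self, r, x, tau=10.0):
        c = torch.relu(self.W @ r)          # non-negative correlations
        p = torch.softmax(tau * c, dim=0)   # soft, differentiable atom selection
        a = (p * c).sum()                   # soft coefficient estimate
        return r - self.D @ (p * a), x + p * a
```

Stacking several such layers and training them end-to-end would recover the hard argmax of MP in the large-temperature limit, while the learned matrices can compensate for dictionary coherence; the training objective and selection rule used in the paper may differ from this sketch.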
