An Empirical-Bayes Approach to Recovering Linearly Constrained Non-Negative Sparse Signals

We consider the recovery of an (approximately) sparse signal from noisy linear measurements when the signal is known a priori to be non-negative and to obey certain linear equality constraints. To this end, we propose a novel empirical-Bayes approach that combines the Generalized Approximate Message Passing (GAMP) algorithm with the expectation-maximization (EM) algorithm. To enforce both sparsity and non-negativity, we employ an i.i.d. Bernoulli non-negative Gaussian mixture (NNGM) prior and perform approximate minimum mean-squared error (MMSE) recovery of the signal using sum-product GAMP. To learn the NNGM parameters, we use the EM algorithm with a suitable initialization. Meanwhile, the linear equality constraints are enforced by augmenting GAMP's linear observation model with noiseless pseudo-measurements. Numerical experiments demonstrate the state-of-the-art mean-squared error and runtime of our approach.
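To make the pseudo-measurement idea concrete, the following minimal sketch (Python/NumPy, our own illustration rather than the authors' code) shows how linear equality constraints Bx = c can be stacked under the measurement model y = Ax + w as extra, (nearly) noiseless rows. All names here (A, y, B, c, the noise variances, and the helper function) are illustrative assumptions; a GAMP solver with the Bernoulli-NNGM prior would then be run on the augmented problem.

```python
# Sketch of augmenting a linear observation model with noiseless
# pseudo-measurements. Hypothetical helper, not the paper's implementation.
import numpy as np

def augment_with_pseudo_measurements(A, y, noise_var, B, c, pseudo_var=0.0):
    """Stack the equality constraints B x = c under the measurement model
    as extra (nearly) noiseless rows, yielding an augmented problem
    y_aug = A_aug x + w_aug with per-row noise variances."""
    A_aug = np.vstack([A, B])
    y_aug = np.concatenate([y, c])
    # True measurements keep variance noise_var; pseudo-measurements get
    # pseudo_var (0 for exact constraints, or tiny for numerical robustness).
    var_aug = np.concatenate([np.full(len(y), noise_var),
                              np.full(len(c), pseudo_var)])
    return A_aug, y_aug, var_aug

# Example: a sparse non-negative signal on the simplex, with the
# sum-to-one constraint encoded as one pseudo-measurement row.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200)) / np.sqrt(50)
x = np.zeros(200)
x[:5] = rng.random(5)
x /= x.sum()                                  # non-negative, sums to one
y = A @ x + 1e-3 * rng.standard_normal(50)    # noisy linear measurements
B = np.ones((1, 200))                         # 1^T x = 1
c = np.array([1.0])
A_aug, y_aug, var_aug = augment_with_pseudo_measurements(A, y, 1e-6, B, c)
```

The design choice this illustrates is that the constraints are not handled by a separate projection step; they simply become additional rows of the linear mixing matrix, so the same message-passing machinery processes measurements and constraints uniformly.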
