Bayesian Compressive Sensing

The data of interest are assumed to be N-dimensional real vectors that are compressible in some linear basis B, meaning that each signal can be reconstructed accurately from only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing is a framework in which one does not measure such an N-dimensional signal directly, but instead measures a set of related quantities, each a linear projection of the underlying signal. The number of compressive-sensing measurements required is typically much smaller than N, offering the potential to simplify the sensing system. If f denotes the unknown underlying N-dimensional signal and g a vector of compressive-sensing measurements, then f may be approximated accurately by exploiting the (under-determined) linear relationship between f and g, together with the knowledge that f is compressible in B.

In this paper we employ a Bayesian formalism for estimating the underlying signal f from the compressive-sensing measurements g. The proposed framework has the following properties: (i) in addition to an estimate of the underlying signal f, "error bars" are provided, giving a measure of confidence in the inverted signal; (ii) using these error bars, a principled means is provided for determining when a sufficient number of compressive-sensing measurements has been performed; (iii) the setting lends itself naturally to a framework in which the compressive-sensing measurements are optimized adaptively rather than determined randomly; and (iv) the framework accounts for additive noise in the compressive-sensing measurements and provides an estimate of the noise variance. We present the underlying theory, an associated algorithm, example results, and comparisons to other compressive-sensing inversion algorithms in the literature.
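To make the formalism concrete: one standard way to realize Bayesian inversion of this kind is sparse Bayesian learning (the relevance-vector-machine model), in which each coefficient carries its own precision hyperparameter and the posterior over the signal is Gaussian. The sketch below is a minimal batch implementation of that style of inference; the function name, initializations, and stopping rule are illustrative assumptions, and a practical implementation would instead use a fast sequential marginal-likelihood algorithm.

```python
import numpy as np

def bcs_posterior(A, g, n_iters=200, tol=1e-6):
    """Sparse Bayesian inversion of g = A w + noise (minimal batch sketch).

    Model: noise ~ N(0, 1/beta), prior w_i ~ N(0, 1/alpha_i).
    Returns the posterior mean mu, covariance Sigma, and hyperparameters.
    """
    M, N = A.shape
    alpha = np.ones(N)                      # per-coefficient prior precisions
    beta = 1.0 / (0.1 * np.var(g) + 1e-12)  # crude initial noise precision
    AtA, Atg = A.T @ A, A.T @ g
    mu = np.zeros(N)
    for _ in range(n_iters):
        Sigma = np.linalg.inv(beta * AtA + np.diag(alpha))
        mu_new = beta * (Sigma @ Atg)          # posterior mean of coefficients
        gamma = 1.0 - alpha * np.diag(Sigma)   # how well each w_i is determined
        alpha = np.clip(gamma, 1e-12, None) / (mu_new**2 + 1e-12)
        resid = g - A @ mu_new
        beta = max(M - gamma.sum(), 1e-12) / (resid @ resid + 1e-12)
        if np.max(np.abs(mu_new - mu)) < tol:
            mu = mu_new
            break
        mu = mu_new
    return mu, Sigma, alpha, beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, M, K = 256, 80, 10                    # signal length, measurements, sparsity
    w = np.zeros(N)
    w[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    A = rng.standard_normal((M, N)) / np.sqrt(M)   # random linear projections
    g = A @ w + 0.01 * rng.standard_normal(M)
    mu, Sigma, alpha, beta = bcs_posterior(A, g)
    err_bars = np.sqrt(np.diag(Sigma))       # per-coefficient "error bars"
    print("relative error:", np.linalg.norm(mu - w) / np.linalg.norm(w))
```

In this sketch the diagonal of Sigma supplies the "error bars" of property (i), and 1/beta is the noise-variance estimate of property (iv); a stopping rule of the kind described in property (ii) can monitor how these error bars shrink as measurements accumulate.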

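Property (iii), adaptive design of the measurements, also follows naturally from the Gaussian posterior. If the current posterior is N(mu, Sigma) and a new unit-norm projection r is added at noise precision beta, the differential entropy of the posterior drops by (1/2) log(1 + beta rᵀ Sigma r), which is maximized by the principal eigenvector of Sigma. The helper below, continuing from the sketch above, illustrates that criterion; it is one reasonable design rule under these assumptions, not necessarily the paper's exact procedure.

```python
def next_projection(Sigma):
    """Choose the next measurement row adaptively (illustrative sketch).

    For unit-norm r, the entropy reduction 0.5 * log(1 + beta * r @ Sigma @ r)
    is maximized by the eigenvector of Sigma with the largest eigenvalue.
    """
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigenvalues in ascending order
    return eigvecs[:, -1]                     # principal eigenvector

# Usage (hypothetical): take the new measurement along r and re-run inference.
#   r = next_projection(Sigma)
#   A = np.vstack([A, r]); g = np.append(g, r @ f_true + noise)
```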