Trading approximation quality versus sparsity within incremental automatic relevance determination frameworks

This paper addresses the trade-off between sparsity and approximation quality of models learned with incremental automatic relevance determination (IARD). IARD algorithms form a class of sparse Bayesian learning (SBL) schemes. They permit a simple, intuitive adjustment of the estimation expressions, and the adjustment parameter has a direct interpretation in terms of the signal-to-noise ratio (SNR). This adjustment implements a trade-off between the sparsity of the estimated model and its accuracy in terms of residual mean-square error (MSE). The impact of the adjustment on IARD performance is found to depend on whether the measurement model coincides with the estimation model. Specifically, when the models coincide, setting the adjustment parameter to the true SNR yields optimal IARD performance, with both the smallest MSE and the smallest estimated signal sparsity; moreover, the estimated sparsity then coincides with the true signal sparsity. In contrast, under a model mismatch, a lower MSE can be achieved only at the expense of less sparse models. In this case the adjustment parameter simply trades the estimated signal sparsity against the accuracy of the model.
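To make the mechanism concrete, the following minimal Python sketch shows how such an adjustment could enter an incremental SBL scheme. It uses the standard fast marginal likelihood inclusion test of Tipping and Faul (a basis is retained when its quality factor q squared exceeds its sparsity factor s), with a hypothetical scalar tau scaling that test. Here tau is an illustrative stand-in for the paper's SNR-based adjustment, not the paper's actual estimation expressions.

import numpy as np

def iard_sketch(Phi, y, noise_var, tau=1.0, n_sweeps=20):
    # Illustrative incremental-ARD sweep; `tau` (assumed >= 1) is a
    # hypothetical adjustment that scales the basis-inclusion test.
    M, N = Phi.shape
    alpha = np.full(N, np.inf)  # prior precisions; np.inf marks a pruned basis

    for _ in range(n_sweeps):
        for i in range(N):
            # Marginal covariance of y with basis i excluded from the model.
            keep = np.isfinite(alpha)
            keep[i] = False
            C = noise_var * np.eye(M)
            for j in np.flatnonzero(keep):
                C += np.outer(Phi[:, j], Phi[:, j]) / alpha[j]
            Cinv_phi = np.linalg.solve(C, Phi[:, i])
            S = Phi[:, i] @ Cinv_phi  # sparsity factor s_i
            Q = y @ Cinv_phi          # quality factor q_i

            # Adjusted inclusion test: tau = 1 recovers the standard
            # q^2 > s rule; larger tau prunes more aggressively.
            if Q**2 > tau * S:
                alpha[i] = S**2 / (Q**2 - S)  # re-estimated precision
            else:
                alpha[i] = np.inf             # prune basis i

    # Posterior mean of the weights over the surviving bases.
    act = np.flatnonzero(np.isfinite(alpha))
    Phi_a = Phi[:, act]
    Sigma = np.linalg.inv(np.diag(alpha[act]) + Phi_a.T @ Phi_a / noise_var)
    w_mean = np.zeros(N)
    w_mean[act] = Sigma @ (Phi_a.T @ y) / noise_var
    return w_mean, act

With tau = 1 the sketch reduces to the standard inclusion test; choosing tau > 1 demands stronger evidence before a basis is retained and hence yields sparser estimates at the cost of a larger residual MSE, mirroring the trade-off discussed above.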
