SPICE and LIKES: Two hyperparameter-free methods for sparse-parameter estimation

SPICE (SParse Iterative Covariance-based Estimation) is a recently introduced method for sparse-parameter estimation in linear models using a robust covariance fitting criterion that does not depend on any hyperparameters. In this paper we revisit the derivation of SPICE to streamline it and to provide further insights into this method. LIKES (LIKelihood-based Estimation of Sparse parameters) is a new method obtained in a hyperparameter-free manner from the maximum-likelihood principle applied to the same estimation problem as considered by SPICE. Both SPICE and LIKES are shown to provide accurate parameter estimates even from scarce data samples, with LIKES being more accurate than SPICE at the cost of an increased computational burden.
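The covariance-fitting idea behind SPICE can be illustrated with a short numerical sketch. The toy problem, variable names, and iteration count below are illustrative assumptions rather than details taken from the paper: a single snapshot y = Ax + e with an overcomplete dictionary is fitted by a covariance model R = B diag(p) B^H (dictionary columns augmented with identity columns for the noise), and the nonnegative powers p are refined with a multiplicative update of the SPICE type that preserves the weighted-power constraint.

```python
# Hedged sketch of a single-snapshot SPICE-style iteration (covariance fitting,
# no hyperparameters). The toy dictionary, sizes, and iteration count are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

N, M = 32, 64                                  # N samples, M atoms (M > N: sparse setting)
n = np.arange(N)
A = np.exp(2j * np.pi * np.outer(n, np.arange(M) / M))  # overcomplete Fourier dictionary

# Ground truth: two active atoms on the grid, plus weak noise
x = np.zeros(M, complex)
x[10], x[40] = 2.0, 1.5
y = A @ x + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

# Augment with identity columns so R = B diag(p) B^H also models noise variances
B = np.hstack([A, np.eye(N)])
w = np.linalg.norm(B, axis=0) ** 2 / np.linalg.norm(y) ** 2   # covariance-fitting weights

p = np.ones(B.shape[1])                        # nonnegative powers, flat initialization
for _ in range(100):
    R = (B * p) @ B.conj().T                   # R = sum_k p_k b_k b_k^H
    s = np.linalg.solve(R, y)                  # R^{-1} y
    c = np.abs(B.conj().T @ s)                 # |b_k^H R^{-1} y|
    rho = np.sum(np.sqrt(w) * p * c)
    p = p * c / (np.sqrt(w) * rho)             # multiplicative update; keeps sum_k w_k p_k = 1

p_sig = p[:M]                                  # estimated sparse power spectrum
print(sorted(int(i) for i in np.argsort(p_sig)[-2:]))
```

With this setup the two dominant entries of `p_sig` land on the true support, illustrating the sparsifying effect of the criterion without any user-tuned hyperparameter; LIKES replaces this fitting criterion with a likelihood-based one at a higher cost per iteration.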
