Incremental Reformulated Automatic Relevance Determination

In this work, the relationship between the incremental version of sparse Bayesian learning (SBL) with automatic relevance determination (ARD), known as the fast marginal likelihood maximization (FMLM) algorithm, and a recently proposed reformulated ARD scheme is established. The FMLM algorithm is an incremental approach to SBL with ARD: the objective function, i.e., the marginal likelihood, is optimized with respect to the parameters of a single component while all other parameters are kept fixed, and the corresponding maximizer is computed in closed form, which enables a very efficient SBL realization. Wipf and Nagarajan have recently proposed a reformulated ARD (R-ARD) approach that optimizes the marginal likelihood using auxiliary upper-bounding functions; the resulting algorithm is shown to correspond to a series of reweighted ℓ1-constrained convex optimization problems. This work establishes and analyzes the relationship between the FMLM and R-ARD schemes. Specifically, it is demonstrated that the FMLM algorithm realizes an incremental approach to the optimization of the R-ARD objective function. This relationship also allows the derivation of R-ARD pruning conditions, similar to those used in the FMLM scheme, that analytically detect components to be removed from the model, thus regulating the estimated signal sparsity and accelerating the convergence of the algorithm.
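As a brief illustration of the closed-form single-component update mentioned above, here is a sketch in the notation of Tipping and Faul's fast algorithm (the sparsity factor s_i and quality factor q_i below are taken from that work, not from this abstract): the marginal likelihood, viewed as a function of the single hyperparameter \alpha_i with all other parameters fixed, has the unique finite maximizer

\alpha_i = \frac{s_i^2}{q_i^2 - s_i} \qquad \text{if } q_i^2 > s_i,

whereas for q_i^2 \le s_i it is maximized as \alpha_i \to \infty, i.e., the i-th component is pruned from the model. This analytic test is the FMLM pruning condition whose R-ARD counterpart is derived in the paper.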

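On the R-ARD side, a minimal sketch of the reweighted ℓ1 iteration, following Wipf and Nagarajan's construction (the dictionary \Phi with columns \phi_i, noise parameter \lambda, weights w_i^{(k)}, and variance estimates \hat{\Gamma} are assumptions carried over from that formulation): each iteration solves the weighted convex program

x^{(k+1)} = \arg\min_{x} \; \| y - \Phi x \|_2^2 + 2\lambda \sum_i w_i^{(k)} |x_i|,

with the weights then updated as

w_i^{(k+1)} = \left( \phi_i^{\top} \big( \lambda I + \Phi \hat{\Gamma}^{(k+1)} \Phi^{\top} \big)^{-1} \phi_i \right)^{1/2},

where \hat{\Gamma}^{(k+1)} is the diagonal matrix of current variance estimates. Components whose coefficients are driven exactly to zero can be removed, which is where pruning conditions of the kind discussed above come into play.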