Probability estimation for recoverability analysis of blind source separation based on sparse representation

An important application of sparse representation is underdetermined blind source separation (BSS), where the number of sources is greater than the number of observations. Within a stochastic framework, this paper discusses the recoverability of underdetermined BSS based on a two-stage sparse representation approach. The two-stage approach is effective when the source matrix is sufficiently sparse. The first stage is to estimate the mixing matrix, and the second is to estimate the source matrix by minimizing the ℓ1-norms of the source vectors subject to some constraints. After estimating the mixing matrix and fixing the number of nonzero entries of a source vector, we estimate the recoverability probability (i.e., the probability that the source vector can be recovered). A more general case is then considered in which the number of nonzero entries of the source vector is fixed and the mixing matrix is drawn from a specific probability distribution; the corresponding probability estimate of recoverability is also obtained. Based on this result, we further estimate the recoverability probability when the sources are likewise drawn from a distribution (e.g., a Laplacian distribution). These probability estimates not only reflect the relationship between recoverability and the sparseness of the sources, but also indicate the overall performance and confidence of the two-stage sparse representation approach for solving BSS problems. Simulation results demonstrate the validity of the probability estimation approach.
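
To make the link between sparseness and recoverability concrete, the sketch below casts the second stage as an ℓ1-minimization linear program and then estimates a recovery probability empirically by Monte Carlo simulation. It is a minimal illustration only: the function names (l1_recover, recoverability_probability), the Gaussian model for the mixing matrix, the Laplacian nonzero entries, and all dimensions are illustrative assumptions, not the paper's analytical estimates.

```python
# Minimal sketch (assumed setup, not the paper's method): recover a sparse
# source vector s from x = A s by minimizing ||s||_1, then estimate the
# recovery probability for a fixed number k of nonzero entries by simulation.
import numpy as np
from scipy.optimize import linprog


def l1_recover(A, x):
    """Solve min ||s||_1 subject to A s = x as a linear program.

    Split s = u - v with u, v >= 0, so that ||s||_1 = sum(u) + sum(v).
    """
    m, n = A.shape
    c = np.ones(2 * n)                # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])         # equality constraint: A u - A v = x
    res = linprog(c, A_eq=A_eq, b_eq=x, bounds=(0, None))
    u, v = res.x[:n], res.x[n:]
    return u - v


def recoverability_probability(m, n, k, trials=200, tol=1e-4, seed=0):
    """Monte Carlo estimate of the probability that a source vector with
    k nonzero entries is recovered, with the m x n mixing matrix drawn
    from a Gaussian distribution (an illustrative choice of distribution).
    """
    rng = np.random.default_rng(seed)
    successes = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        s = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        s[support] = rng.laplace(size=k)   # Laplacian-distributed nonzeros
        x = A @ s
        s_hat = l1_recover(A, x)
        if np.max(np.abs(s_hat - s)) < tol:
            successes += 1
    return successes / trials


if __name__ == "__main__":
    # e.g., 4 observations of 8 sources, varying the sparseness of the sources
    for k in range(1, 5):
        p = recoverability_probability(m=4, n=8, k=k)
        print(f"k = {k}: estimated recovery probability = {p:.2f}")
```

An empirical estimate of this kind only complements the analytical probability estimates discussed in the abstract; as the number of nonzero entries k grows relative to the number of observations, the estimated recovery probability drops, reflecting the stated relationship between sparseness and recoverability.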
