Blind Multilinear Identification

We discuss a technique that allows blind recovery of signals or blind identification of mixtures in instances where such recovery or identification was previously thought to be impossible. These instances include: 1) closely located or highly correlated sources in antenna array processing; 2) highly correlated spreading codes in code division multiple access (CDMA) radio communication; and 3) nearly dependent spectra in fluorescence spectroscopy. The technique has important implications in each setting. In antenna array processing, it allows joint localization and extraction of multiple sources from a noisy mixture recorded on multiple sensors in an entirely deterministic manner. In CDMA, it allows the number of users to exceed the spreading gain. In fluorescence spectroscopy, it allows detection of nearly identical chemical constituents. The proposed technique involves the solution of a bounded-coherence low-rank multilinear approximation problem. We show that bounded coherence allows us to establish existence and uniqueness of the recovered solution. We also provide statistical motivation for the approximation problem and discuss greedy approximation bounds. To provide the theoretical underpinnings for this technique, we develop a corresponding theory of sparse separable decompositions of functions, including notions of rank and nuclear norm that specialize to the usual ones for matrices and operators and also apply to hypermatrices and tensors.
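
To make the coherence quantities concrete, the sketch below (Python, illustrative only) computes the mutual coherence of the factor matrices of a rank-r CP/PARAFAC model and fits such a model by plain alternating least squares. The function names (coherence, khatri_rao, als_cp), the random initialization, and the fixed iteration count are assumptions made for illustration; this is not the algorithm analyzed in the paper, whose guarantees concern a bounded-coherence low-rank approximation problem.

```python
# Minimal sketch, assuming a third-order tensor T of shape (I, J, K) and a
# candidate rank r. Illustrative only; not the paper's method.
import numpy as np


def coherence(A):
    """Mutual coherence: largest |<a_i, a_j>| over distinct normalized columns of A."""
    Q = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(Q.T @ Q)
    np.fill_diagonal(G, 0.0)
    return G.max()


def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (m x r) and B (n x r)."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)


def als_cp(T, r, n_iter=200, seed=0):
    """Fit a rank-r CP model T[i,j,k] ~ sum_s A[i,s] B[j,s] C[k,s] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, r))
    B = rng.standard_normal((J, r))
    C = rng.standard_normal((K, r))
    # Mode unfoldings consistent with the Khatri-Rao ordering above.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C


if __name__ == "__main__":
    # Synthetic example: build an exactly rank-3 tensor and recover its factors.
    rng = np.random.default_rng(1)
    I, J, K, r = 8, 8, 8, 3
    A0, B0, C0 = (rng.standard_normal((n, r)) for n in (I, J, K))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = als_cp(T, r)
    # Small coherences correspond to the well-separated regime in which
    # coherence-based existence and uniqueness guarantees are meant to apply.
    print("factor coherences:", coherence(A), coherence(B), coherence(C))
```

In the applications listed above, the columns of the factor matrices play the role of steering vectors, spreading codes, or spectra; their mutual coherence quantifies how close to collinear they are, which is exactly the regime the bounded-coherence condition is designed to control.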
