Fast Algorithms for Demixing Sparse Signals From Nonlinear Observations

We study the problem of *demixing* a pair of sparse signals from noisy, nonlinear observations of their superposition. Mathematically, we consider the nonlinear observation model $y_i = g(a_i^T x) + e_i$, $i = 1, \ldots, m$, where $x = \Phi w + \Psi z$ denotes the superposition signal, $\Phi$ and $\Psi$ are orthonormal bases in $\mathbb{R}^n$, $w, z \in \mathbb{R}^n$ are the sparse coefficient vectors of the constituent signals, and $e_i$ represents noise. Here, $g$ is a nonlinear *link* function, and $a_i \in \mathbb{R}^n$ is the $i$th row of the measurement matrix $A \in \mathbb{R}^{m \times n}$. Problems of this nature arise in applications ranging from astronomy and computer vision to machine learning. In this paper, we make concrete algorithmic progress on this demixing problem. Specifically, we consider two scenarios: first, the case in which the demixing procedure has no knowledge of the link function, and second, the case in which it has perfect knowledge of the link function. In both cases, we provide fast algorithms for recovering the constituents $w$ and $z$ from the observations. We support these algorithms with a rigorous theoretical analysis and derive (nearly) tight upper bounds on the sample complexity required for stable recovery of the component signals. We also provide a range of numerical simulations illustrating the performance of the proposed algorithms on both real and synthetic signals and images.
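The observation model and the "unknown link" scenario can be illustrated with a short numerical sketch. The code below is a minimal, hypothetical demonstration, assuming a simple alternating hard-thresholding update in the spirit of the iterative methods the paper builds on; it is not the paper's exact algorithm, and the basis choices, step size, sparsity level, and link $g = \tanh$ are illustrative assumptions.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v; zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def demix_iht(y, A, Phi, Psi, s, step, n_iter=200):
    """Alternating hard-thresholding sketch for estimating (w, z).

    Treats the observations as if the link were linear (the unknown-link
    scenario); the paper's exact updates and step-size rules may differ.
    """
    n = A.shape[1]
    w = np.zeros(n)
    z = np.zeros(n)
    for _ in range(n_iter):
        x = Phi @ w + Psi @ z
        grad = A.T @ (A @ x - y)          # gradient of the squared loss w.r.t. x
        w = hard_threshold(w - step * (Phi.T @ grad), s)
        z = hard_threshold(z - step * (Psi.T @ grad), s)
    return w, z

# Synthetic example under the stated model (all parameters are illustrative).
rng = np.random.default_rng(0)
n, m, s = 128, 400, 5

Phi = np.eye(n)                                      # canonical basis
Psi, _ = np.linalg.qr(rng.standard_normal((n, n)))   # a random orthonormal basis

w_true = np.zeros(n); w_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z_true = np.zeros(n); z_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_true = Phi @ w_true + Psi @ z_true

A = rng.standard_normal((m, n)) / np.sqrt(m)         # Gaussian measurement matrix
g = np.tanh                                          # example link, unknown to the algorithm
y = g(A @ x_true) + 0.01 * rng.standard_normal(m)

w_hat, z_hat = demix_iht(y, A, Phi, Psi, s, step=1.0)
x_hat = Phi @ w_hat + Psi @ z_hat

# With an unknown monotone link, recovery is only up to a scale factor,
# so compare directions rather than magnitudes.
cosine = (x_hat @ x_true) / (np.linalg.norm(x_hat) * np.linalg.norm(x_true) + 1e-12)
print("cosine similarity between x_hat and x_true:", cosine)
```

Because the algorithm never evaluates $g$, the recovered superposition is expected to align with the true signal only in direction, which is why the example reports a cosine similarity rather than an absolute error.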
