Robust Sparse Analysis Regularization

This paper investigates the theoretical guarantees of ℓ1-analysis regularization when solving linear inverse problems. Most previous work in the literature has focused on the sparse synthesis prior, where sparsity is measured as the ℓ1 norm of the coefficients that synthesize the signal from a given dictionary. In contrast, the more general analysis regularization minimizes the ℓ1 norm of the correlations between the signal and the atoms of the dictionary; these correlations define the analysis support. The corresponding variational problem encompasses several well-known regularizations, such as the discrete total variation and the fused Lasso. Our main contributions consist in deriving sufficient conditions that guarantee exact or partial analysis support recovery of the true signal in the presence of noise. More precisely, we give a sufficient condition ensuring that a signal is the unique solution of the ℓ1-analysis regularization in the noiseless case. The same condition also guarantees exact analysis support recovery and ℓ2-robustness of the ℓ1-analysis minimizer with respect to a sufficiently small noise on the measurements. This condition turns out to be sharp for the robustness of the sign pattern. To establish partial support recovery and ℓ2-robustness to an arbitrary bounded noise, we introduce a stronger sufficient condition. When specialized to ℓ1-synthesis regularization, our results recover some corresponding recovery and robustness guarantees previously known in the literature; from this perspective, our work generalizes these results. We finally illustrate these theoretical findings on several examples to study the robustness of the 1-D total variation, shift-invariant Haar dictionary, and fused Lasso regularizations.
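To make the analysis-regularization setup concrete, the following is a minimal sketch (not from the paper) of the simplest instance mentioned above: 1-D total variation denoising, i.e. minimizing 0.5·‖y − x‖² + λ·‖Dx‖₁ where D is the forward finite-difference analysis operator. The ADMM solver, the parameter names (`lam`, `rho`, `n_iter`), and the stopping rule are illustrative assumptions, not the authors' method; the nonzero entries of Dx at the solution form the (co)sparse analysis support discussed in the abstract.

```python
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=300):
    """1-D total variation denoising via ADMM (illustrative sketch).

    Solves  min_x 0.5*||y - x||^2 + lam * ||D x||_1,
    where D is the (n-1) x n forward finite-difference operator,
    the simplest example of an analysis operator.
    """
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # analysis operator: (D x)_i = x[i+1] - x[i]
    A = np.eye(n) + rho * D.T @ D        # fixed system matrix for the x-update
    z = np.zeros(n - 1)                  # split variable z ~ D x
    u = np.zeros(n - 1)                  # scaled dual variable
    x = y.copy()
    for _ in range(n_iter):
        # x-update: quadratic problem with closed-form solution.
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        Dx = D @ x
        # z-update: soft-thresholding, the prox of the l1 analysis term.
        v = Dx + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update.
        u += Dx - z
    return x
```

A typical use is denoising a piecewise-constant signal: the ℓ1 penalty on Dx drives most finite differences exactly to zero, so the recovered signal is piecewise constant and its jump locations (the complement of the zero set of Dx) play the role of the analysis support.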