Adaptive multi-penalty regularization based on a generalized Lasso path

For many algorithms, parameter tuning remains a challenging and critical task, and it becomes tedious or even infeasible in a multi-parameter setting. Multi-penalty regularization, successfully used for solving underdetermined sparse regression problems of unmixing type, where signal and noise are additively mixed, is one such example. In this paper, we propose a novel algorithmic framework for an adaptive parameter choice in multi-penalty regularization, with a focus on correct support recovery. Building upon the theory of regularization paths and algorithms for single-penalty functionals, we extend these ideas to the multi-penalty setting by providing an efficient procedure for constructing regions of structurally similar solutions, i.e., solutions with the same sparsity and sign pattern, over the whole range of parameters. Combined with a model selection criterion, this allows the regularization parameters to be chosen in a data-adaptive manner. A further advantage of our algorithm is that it provides an overview of the solution stability over the whole parameter range, which can be exploited to gain additional insight into the problem of interest. We provide a numerical analysis of our method and compare it to state-of-the-art single-penalty algorithms for compressed sensing problems, demonstrating the robustness and power of the proposed algorithm.
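
To make the idea concrete, the sketch below assumes the l1-l2 multi-penalty functional J(u, v) = ||A(u + v) - y||_2^2 + alpha*||u||_1 + beta*||v||_2^2 commonly used in the unmixing literature. It is a minimal illustration, not the authors' exact path-following construction: since the minimization in v has a closed form, each fixed beta reduces the problem to an ordinary Lasso in u with a beta-dependent transform of (A, y); scanning a coarse (alpha, beta) grid and grouping parameter pairs whose solutions share a sign pattern then crudely approximates the regions of structurally similar solutions described above. All function and variable names are illustrative.

```python
import numpy as np

def transformed_lasso_data(A, y, beta):
    # Eliminating v (for fixed u) uses the ridge identity
    #   min_v ||A v - r||^2 + beta*||v||^2 = beta * r^T (A A^T + beta I)^{-1} r
    # with r = y - A u, so the reduced problem is an ordinary Lasso with
    # design C @ A and data C @ y, where C^T C = beta * (A A^T + beta I)^{-1}.
    m = A.shape[0]
    w, V = np.linalg.eigh(A @ A.T + beta * np.eye(m))  # w > 0 since beta > 0
    C = np.sqrt(beta / w)[:, None] * V.T
    return C @ A, C @ y

def ista(B, z, alpha, n_iter=2000):
    # Plain iterative soft thresholding for min_u ||B u - z||^2 + alpha*||u||_1.
    sigma2 = np.linalg.norm(B, 2) ** 2        # squared spectral norm of B
    t = 1.0 / (2.0 * sigma2)                  # step size = 1 / Lipschitz constant
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        w = u - t * 2.0 * (B.T @ (B @ u - z))            # gradient step
        u = np.sign(w) * np.maximum(np.abs(w) - t * alpha, 0.0)  # soft threshold
    return u

def sign_pattern(u, tol=1e-6):
    # Hashable sparsity-and-sign signature of a solution.
    s = np.sign(u)
    s[np.abs(u) <= tol] = 0
    return tuple(s.astype(int))

# Synthetic unmixing-type example: sparse signal plus additive noise.
rng = np.random.default_rng(0)
m, n, s = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
u_true = np.zeros(n)
u_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ u_true + 0.05 * rng.standard_normal(m)

# Scan an (alpha, beta) grid and group parameter pairs by sign pattern.
regions = {}                                   # sign pattern -> list of (alpha, beta)
for beta in np.logspace(-2, 2, 12):
    B, z = transformed_lasso_data(A, y, beta)
    for alpha in np.logspace(-3, 0, 12):
        pattern = sign_pattern(ista(B, z, alpha))
        regions.setdefault(pattern, []).append((alpha, beta))

# Illustrative selection rule: prefer the nontrivial pattern that is stable
# over the largest parameter region (a crude proxy for model selection).
nontrivial = {p: g for p, g in regions.items() if any(p)}
best = max(nontrivial, key=lambda p: len(nontrivial[p]))
print("true support:     ", sorted(np.flatnonzero(u_true)))
print("recovered support:", [i for i, si in enumerate(best) if si != 0])
```

The grid scan is only a stand-in: the point of the path-based approach is to delineate these sign-pattern regions exactly and efficiently, rather than by brute-force solves, and the region-size selection rule above is one simple proxy for a data-adaptive model selection criterion.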
