Fast iterative regularization by reusing data

Discrete inverse problems correspond to solving a system of equations in a stable way with respect to noise in the data. A typical approach to enforce uniqueness and select a meaningful solution is to introduce a regularizer. While for most applications the regularizer is convex, in many cases it is neither smooth nor strongly convex. In this paper, we propose and study two new iterative regularization methods, based on a primal-dual algorithm, to solve inverse problems efficiently. Our analysis, in the noise-free case, provides convergence rates for the Lagrangian and the feasibility gap. In the noisy case, it provides stability bounds and early-stopping rules with theoretical guarantees. The main novelty of our work is the exploitation of some a priori knowledge about the solution set, i.e., redundant information. More precisely, we show that the linear systems can be used more than once along the iterations. Despite the simplicity of the idea, we show that this procedure brings surprising advantages in numerical applications. We discuss various approaches to take advantage of the redundant information that are at the same time consistent with our assumptions and flexible in the implementation. Finally, we illustrate our theoretical findings with numerical simulations for robust sparse recovery and image reconstruction through total variation. We confirm the efficiency of the proposed procedures by comparing the results with state-of-the-art methods.
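To make the setting concrete, the sketch below instantiates the approach on the robust sparse recovery problem mentioned in the abstract, min ||x||_1 subject to Ax = b, solved with a Chambolle-Pock-style primal-dual iteration. This is a minimal illustration under stated assumptions, not the authors' exact scheme: the inner loop that repeats the dual update (`n_reuse`) is only one plausible reading of "using the linear systems more than once along the iterations", and the discrepancy-style stopping threshold `tol_factor * delta`, step sizes, and problem dimensions are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdhg_basis_pursuit(A, b, delta, n_reuse=1, max_iter=5000, tol_factor=1.5):
    """Primal-dual iteration for min ||x||_1 s.t. Ax = b, regularized
    implicitly by early stopping at the noise level `delta`.

    `n_reuse` repeats the dual update with the same data inside each outer
    iteration -- an illustrative stand-in for the paper's idea of reusing
    the linear equations, not its precise reuse strategy.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2)            # operator norm of A
    tau = sigma = 0.99 / L              # step sizes with tau * sigma * L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for k in range(max_iter):
        for _ in range(n_reuse):        # reuse the same data several times
            y = y + sigma * (A @ x_bar - b)        # dual ascent on <y, Ax - b>
        x_new = soft_threshold(x - tau * (A.T @ y), tau)  # primal prox step
        x_bar = 2.0 * x_new - x         # extrapolation
        x = x_new
        # Discrepancy-principle-style early stopping (assumed rule): halt
        # once the residual reaches the noise level, before semi-convergence
        # lets the noise contaminate the iterates.
        if np.linalg.norm(A @ x - b) <= tol_factor * delta:
            break
    return x, k + 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, s = 80, 200, 8                # underdetermined system, s-sparse signal
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    noise = rng.standard_normal(m)
    delta = 1e-2
    b = A @ x_true + delta * noise / np.linalg.norm(noise)
    x_hat, iters = pdhg_basis_pursuit(A, b, delta, n_reuse=3)
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"stopped after {iters} iterations, relative error {rel_err:.3f}")
```

In this sketch the stopping time plays the role of the regularization parameter: iterating past the noise level typically degrades the reconstruction, which is why the residual check halts the loop once ||Ax - b|| is of the order of delta.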
