The Learned Inexact Projected Gradient Descent Algorithm

Accelerating iterative algorithms for solving inverse problems using neural networks has become a very popular strategy in recent years. In this work, we propose a theoretical analysis that may explain this success. Our theory relies on the use of inexact projections within the projected gradient descent (PGD) method. We demonstrate it on various problems, including image super-resolution.
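To make the setup concrete, below is a minimal sketch of PGD for a linear inverse problem, where the projection step may be inexact (e.g., replaced by a trained network). The `hard_threshold` projection and all parameter choices are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def inexact_pgd(y, A, project, step=None, iters=50, x0=None):
    """PGD for min_x 0.5 * ||y - A x||^2 subject to a signal model.

    `project` is the projection onto the model; in the learned setting it
    may be an inexact projection (e.g., a neural network). A sketch only.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    if step is None:
        # Step size below 1 / L, where L = ||A||_2^2 is the Lipschitz
        # constant of the gradient of the data-fidelity term.
        step = 0.9 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = A.T @ (A @ x - y)       # gradient of 0.5 * ||y - A x||^2
        x = project(x - step * grad)   # exact or inexact projection
    return x

def hard_threshold(x, k):
    """Exact projection onto k-sparse vectors (keep k largest magnitudes);
    a learned operator could stand in for it as an inexact projection."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out
```

With `project = lambda v: hard_threshold(v, k)` this reduces to iterative hard thresholding; swapping in a learned projection gives the inexact variant studied here.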
