A Fast L1 Linear Estimator and Its Application on Predictive Deconvolution

The L1 linear estimator, which solves a linear problem by minimizing the L1 norm of the data-fitting term, outperforms traditional least-squares (LS) methods in applications where the residual vector is super-Gaussian or contains outliers. We propose a fast L1 linear estimation algorithm that first translates the L1-norm data-fitting problem into an equivalent L1-norm-regularized L2-norm data-fitting problem and then solves it with the fast iterative shrinkage-thresholding algorithm (FISTA). The equivalent formulation is carefully chosen so that each FISTA iteration has sufficiently low computational complexity. The commonly used iterative reweighted least-squares (IRLS) algorithm serves as the benchmark in this letter. Our numerical experiments show that, at the same estimation accuracy, the proposed algorithm is 5-11 times faster than IRLS. To demonstrate its performance, we apply the proposed algorithm to seismic predictive deconvolution. Both synthetic and real field-data examples show that our method outperforms the IRLS-based and the traditional LS-based seismic predictive deconvolution methods.
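
As a rough illustration of the two solver families named above, the sketch below pairs a generic FISTA iteration for an L1-regularized least-squares problem with a plain IRLS loop for L1 regression. It follows the standard Beck-Teboulle FISTA scheme rather than the paper's specific reformulation, and the function names, step size, and iteration counts are assumptions made only for this example.

```python
# Minimal sketch (assumed setup, not the paper's exact algorithm):
#   FISTA for   min_x  0.5 * ||A x - b||_2^2 + lam * ||x||_1
#   IRLS   for  min_x  ||A x - b||_1   (benchmark in the abstract)
import numpy as np


def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)


def fista_l1_ls(A, b, lam, n_iter=500):
    """Generic FISTA for the L1-regularized least-squares problem."""
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)              # gradient of the smooth L2 term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x


def irls_l1(A, b, n_iter=50, eps=1e-6):
    """IRLS benchmark: reweighted normal equations for L1 regression."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # least-squares starting point
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(b - A @ x), eps)    # L1 reweighting
        Aw = A * w[:, None]                    # rows of A scaled by the weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)
    return x
```

The per-iteration cost difference is the usual motivation for such comparisons: each IRLS pass solves a reweighted normal system, while each FISTA iteration needs only matrix-vector products and an elementwise soft-thresholding.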
