Computational Statistics and Data Analysis

Solutions of numerically ill-posed least squares problems Ax ≈ b for A ∈ R^(m×n) by Tikhonov regularization are considered. For D ∈ R^(p×n), the Tikhonov regularized least squares functional is given by J(σ) = ‖Ax − b‖²_W + (1/σ²)‖D(x − x₀)‖²_2, where W is a weighting matrix and x₀ is given. Given a priori estimates of the covariance structure of the errors in the measurement data b, the weighting matrix may be taken as W = W_b, the inverse covariance matrix of the mean-zero normally distributed measurement errors e in b. If, in addition, x₀ is an estimate of the mean value of x and σ is a suitably chosen statistical value, then J evaluated at its minimizer x(σ) approximately follows a χ² distribution with m̃ = m + p − n degrees of freedom. Using the generalized singular value decomposition of the matrix pair [W_b^(1/2) A; D], σ can then be found such that the resulting J follows this χ² distribution. But an algorithm that relies explicitly on the direct solution obtained from the generalized singular value decomposition is not practical for large-scale problems. Instead, an approach using the Golub-Kahan iterative bidiagonalization of the regularized problem is presented. The original algorithm is extended to cases in which x₀ is not available, but a set of measurement data instead provides an estimate of the mean value of b. The sensitivity of the Newton algorithm to the number of steps used in the Golub-Kahan iterative bidiagonalization, and the relation between the size of the projected subproblem and σ, are discussed. Experiments contrast the efficiency and robustness of the approach with other standard methods for finding the regularization parameter, both on a set of test problems and on the restoration of a relatively large real seismic signal. An application to image deblurring also validates the approach for large-scale problems. It is concluded that the presented approach is robust for both small- and large-scale discretely ill-posed least squares problems.
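
To make the parameter-selection principle concrete, the sketch below applies the χ² criterion to a small dense problem: for each trial σ it minimizes the stacked least squares system corresponding to the functional above and adjusts σ until J(x(σ)) matches the χ² mean m̃ = m + p − n. This is only an illustrative reconstruction under stated assumptions: the function names, the first-difference choice of D, the synthetic data, and the use of scipy.optimize.brentq as the scalar root finder are illustrative choices, not the paper's; the paper's large-scale algorithm instead runs a Newton iteration on a Golub-Kahan (LSQR-type) projection of the regularized problem rather than a dense direct solve.

```python
import numpy as np
from scipy.optimize import brentq

def solve_tikhonov(A, b, D, x0, Wb_half, sigma):
    """Minimizer of ||Wb^(1/2)(A x - b)||^2 + (1/sigma^2)||D(x - x0)||^2,
    obtained by stacking both terms into one ordinary least squares problem."""
    K = np.vstack([Wb_half @ A, D / sigma])
    rhs = np.concatenate([Wb_half @ b, D @ x0 / sigma])
    x, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    return x

def J(A, b, D, x0, Wb_half, sigma):
    """Value of the regularized functional at its minimizer x(sigma)."""
    x = solve_tikhonov(A, b, D, x0, Wb_half, sigma)
    r = Wb_half @ (A @ x - b)
    s = D @ (x - x0) / sigma
    return r @ r + s @ s

# Small synthetic problem (sizes and operators are illustrative).
rng = np.random.default_rng(0)
m, n = 120, 80
A = rng.standard_normal((m, n))
x_true = np.cumsum(rng.standard_normal(n))        # smooth-ish true signal
noise_std = 0.1
b = A @ x_true + noise_std * rng.standard_normal(m)

D = (np.eye(n) - np.eye(n, k=1))[:-1]             # (n-1) x n first differences
p = D.shape[0]
x0 = np.zeros(n)                                  # estimate of the mean of x
Wb_half = np.eye(m) / noise_std                   # Wb = inverse noise covariance

m_tilde = m + p - n                               # chi^2 degrees of freedom
# Choose sigma so that J(x(sigma)) equals the chi^2 mean m_tilde;
# J is monotone in sigma, so a bracketing root finder suffices here.
f = lambda log_sigma: J(A, b, D, x0, Wb_half, np.exp(log_sigma)) - m_tilde
log_sigma_opt = brentq(f, np.log(1e-6), np.log(1e6))
sigma_opt = np.exp(log_sigma_opt)
print(f"sigma = {sigma_opt:.4g}, J(x(sigma)) = {m_tilde + f(log_sigma_opt):.2f}, "
      f"target = {m_tilde}")
```

For genuinely large problems the dense lstsq solve would be replaced by a few Golub-Kahan bidiagonalization steps, so that J is evaluated on a small projected subproblem; the root-finding structure on σ stays the same.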
