Inverse imaging with mixed penalties
This paper proposes new iterative algorithms for solving linear inverse problems when the solution can be written as the sum of a smooth part and a part which is sparse in pixel space or in terms of the coefficients of its expansion on an arbitrary orthonormal basis.

SPARSITY VERSUS SMOOTHNESS CONSTRAINTS

Many linear or linearized inverse imaging or scattering problems can be cast in the following form: solve for the array f (which contains M values representing the unknown object, e.g. pixel values or characteristics of the probed sample) the linear equation

Af = g    (1)

where g is the image or measurement vector containing N data values and A is the N × M matrix modeling the imaging process (A is assumed to be known). For simplicity, we have used a single index to label the object and data arrays, but the formulation obviously applies to 2D or 3D imaging.

The usual approach to dealing with noisy data is to minimize the least-squares discrepancy (data misfit) or, in the case of ill-conditioned matrices (typical for inverse problems), to solve the penalized least-squares problem

f∗ = arg min_f Φ(f)  with  Φ(f) = ‖Af − g‖² + μ‖f‖²    (2)

where ‖f‖² = ∑_{m=1}^{M} |f_m|² denotes the squared l2-norm of f and μ is a small positive regularization parameter controlling the balance between stability and fidelity to the data. The corresponding minimizer f∗ = (A∗A + μI)⁻¹ A∗g is usually referred to as the Tikhonov regularized solution of (1) (I is the identity matrix and A∗ the adjoint of A).

An alternative to matrix inversion is provided by iterative schemes, such as the so-called damped Landweber iteration

f⁽⁰⁾ arbitrary ;  f⁽ᵏ⁺¹⁾ = T f⁽ᵏ⁾  for k = 0, 1, …    (3)

where the iteration mapping T is given by T = (1 + μ)⁻¹ L with Lf ≡ f + A∗(g − Af). Let us assume that the imaging matrix is renormalized so that ‖A‖ < 1. Then ‖Lf − Lh‖ ≤ ‖f − h‖ for all f, h (L is non-expansive); hence, for strictly positive μ, the mapping T is a contraction.
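The damped Landweber iteration (3) is easy to sketch numerically. The following minimal illustration (not from the paper; A, g and μ are made-up toy values) checks that the iterates converge to the closed-form Tikhonov solution of (2):

```python
import numpy as np

def damped_landweber(A, g, mu, n_iter=500):
    """Damped Landweber iteration (3) for the Tikhonov problem (2),
    min_f ||A f - g||^2 + mu ||f||^2, assuming ||A|| < 1 and mu > 0."""
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # f^(k+1) = T f^(k) with T = (1 + mu)^(-1) L, Lf = f + A*(g - A f)
        f = (f + A.T @ (g - A @ f)) / (1.0 + mu)
    return f

# Toy problem (hypothetical data, for illustration only)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
A /= 1.5 * np.linalg.norm(A, 2)   # renormalize so that ||A|| < 1
g = rng.standard_normal(20)
mu = 0.1

f_iter = damped_landweber(A, g, mu)
# Closed-form Tikhonov solution f* = (A*A + mu I)^(-1) A* g
f_direct = np.linalg.solve(A.T @ A + mu * np.eye(10), A.T @ g)
print(np.allclose(f_iter, f_direct, atol=1e-6))
```

Since T is a contraction with factor at most 1/(1 + μ), the iteration error shrinks geometrically, so a few hundred iterations suffice here.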
This ensures the convergence of the iteration (3) to the unique fixed point of T, which is the unique minimizer of (2).

Linear estimates of this sort, however, may not be optimal when the object to be restored is known a priori to be sparse, i.e. to have many zero entries. Indeed, even if the original object is sparse, the Tikhonov solution restored from a noisy image will not in general be so. It has therefore been advocated [1,2] that the l2-penalty in (2) be advantageously replaced by a penalty on the l1-norm of f, |||f||| = ∑_{m=1}^{M} |f_m|. This modification increases the penalty on components |f_m| < 1 and simultaneously decreases the penalty on larger components, thus favouring the restoration of objects with few but large components (as we shall see, components below some threshold value are even set to zero, a fact which promotes sparsity in the reconstructed object). This strategy leads to the penalized least-squares problem

f∗ = arg min_f Φ(f)  with  Φ(f) = ‖Af − g‖² + 2τ |||f|||  (τ > 0) .    (4)

Notice that, as for (2), this functional is convex. For A = I (and N = M), the minimizer f∗ is easily seen to be equal to the soft-thresholded data vector

(S_τ g)_n = g_n − τ sign(g_n)  if |g_n| ≥ τ ;  0  if |g_n| < τ .    (5)

(Note that, when implemented on wavelet coefficients, (5) is a simple denoising scheme as proposed in [3].) When A ≠ I, the operator couples all object components and therefore problem (4) becomes
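The soft-thresholding operator (5) can be sketched in a few lines of NumPy (an illustration, not from the paper; the vector g below is made-up toy data):

```python
import numpy as np

def soft_threshold(g, tau):
    """Component-wise soft-thresholding S_tau of (5): entries with
    |g_n| >= tau are shrunk toward zero by tau, smaller ones are zeroed."""
    return np.sign(g) * np.maximum(np.abs(g) - tau, 0.0)

# Toy data: components below tau = 1 are set exactly to zero
g = np.array([3.0, -0.5, 1.2, -2.0, 0.1])
print(soft_threshold(g, 1.0))
```

Setting small components exactly to zero is what promotes sparsity, in contrast to the Tikhonov minimizer, which merely shrinks every component by the factor 1/(1 + μ).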
[1] S. S. Chen, D. L. Donoho and M. A. Saunders, "Atomic Decomposition by Basis Pursuit", SIAM J. Sci. Comput., 1998.
[2] R. Tibshirani, "Regression Shrinkage and Selection via the Lasso", 1996.
[3] O. Scherzer et al., "Inverse Problems, Image Analysis, and Medical Imaging", 2002.
[4] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation via wavelet shrinkage", 1994.
[5] D. R. Hunter et al., "Optimization Transfer Using Surrogate Objective Functions", 2000.