Comparison of three sinogram restoration methods

The raw data acquired during a computed tomography (CT) scan carry the unwanted traces of several adverse effects connected with the measurement setup and the acquisition process. These include systematic errors such as detector crosstalk, afterglow, and fluctuations in tube power during the scan, as well as statistical effects such as photon noise. Most systematic effects can be cast into a linear model, providing a way to neutralize their influence through deconvolution. However, this deconvolution process inevitably increases the image noise content. For low-dose scans, some form of noise suppression algorithm is mandatory in order to keep the disturbing influence of noise on the reconstructed images in check. Since resolution and noise are antagonistic properties, noise suppression usually has the side effect of decreasing resolution. Finding an algorithm that handles this trade-off in an optimal way is therefore of obvious interest. This work compares three deconvolution/denoising methods and identifies the one that performs best on a set of simulated data. The tested methods of combined sinogram deconvolution/denoising are based on (1) regularized matrix inversion, (2) straight matrix inversion plus adaptive filtering, and (3) deconvolution by a penalized maximum likelihood approach. In-plane and axial noise/resolution measurements identified the penalized maximum-likelihood method as best suited for low-dose applications. The adaptive filter approach performed well, but did not retain as much resolution at higher smoothing levels. The analytic deconvolution, however, could not compete with the other two methods.
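
To make the three strategies concrete, the following is a minimal sketch in Python/NumPy of how each one could be applied to a toy one-dimensional detector signal degraded by a linear blur and Poisson noise. The blur kernel, the problem size, all parameter values, the crude adaptivity rule, and the one-step-late form of the penalized maximum-likelihood update are illustrative assumptions and are not taken from the study itself.

```python
# Toy comparison of (1) regularized matrix inversion, (2) straight inversion
# plus a simple adaptive filter, and (3) penalized maximum likelihood.
# All models and parameters below are illustrative, not those of the study.
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Assumed systematic effect (e.g. crosstalk/afterglow) as a linear blur A.
kernel = np.array([0.05, 0.15, 0.6, 0.15, 0.05])
A = np.zeros((n, n))
for i in range(n):
    for k, w in enumerate(kernel):
        j = i + k - len(kernel) // 2
        if 0 <= j < n:
            A[i, j] = w

# Ideal (unblurred, noise-free) signal and a low-dose measurement.
x_true = np.zeros(n)
x_true[40:60] = 100.0
x_true[70:90] = 40.0
y = rng.poisson(A @ x_true).astype(float)          # photon (Poisson) noise

# (1) Regularized matrix inversion (Tikhonov form).
lam = 5.0
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# (2) Straight matrix inversion followed by a crude adaptive filter:
#     smooth more where the local signal (and hence relative noise) is low.
x_inv = np.linalg.solve(A, y)
x_adapt = x_inv.copy()
for i in range(1, n - 1):
    w = 0.4 if x_inv[i] < 20.0 else 0.1            # ad-hoc adaptivity rule
    x_adapt[i] = (1 - 2 * w) * x_inv[i] + w * (x_inv[i - 1] + x_inv[i + 1])

# (3) Penalized maximum likelihood: Poisson likelihood with a quadratic
#     roughness penalty, iterated with a one-step-late style update.
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)      # first differences
D[-1, :] = 0.0                                     # drop wrap-around row
beta = 0.002
ones_proj = A.T @ np.ones(n)
x_ml = np.full(n, y.mean())
for _ in range(200):
    Ax = np.clip(A @ x_ml, 1e-8, None)
    denom = np.clip(ones_proj + beta * (D.T @ (D @ x_ml)), 0.1, None)
    x_ml = x_ml * (A.T @ (y / Ax)) / denom

for name, est in [("Tikhonov", x_reg), ("inv + adaptive", x_adapt),
                  ("penalized ML", x_ml)]:
    rmse = np.sqrt(np.mean((est - x_true) ** 2))
    print(f"{name:>14s}: RMSE = {rmse:.2f}")
```

The sketch only illustrates the structural difference between the three approaches: the first trades resolution for noise through a global regularization term, the second restores resolution fully and then smooths selectively, and the third folds the Poisson noise model and a roughness penalty into a single iterative estimate. The actual filters, penalties, and parameter choices of the compared methods are described in the body of the paper.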