CNN-based CT denoising with an accurate image domain noise insertion technique

Convolutional neural network (CNN)-based CT denoising methods have attracted great interest for improving the image quality of low-dose CT (LDCT) images. However, CNNs require a large amount of paired data consisting of normal-dose CT (NDCT) and LDCT images, which are generally not available. In this work, we aim to synthesize paired data from NDCT images with an accurate image domain noise insertion technique and to investigate its effect on the denoising performance of the CNN. Fan-beam CT images of extended cardiac-torso phantoms were reconstructed, with Poisson noise added to the projection data to simulate NDCT and LDCT. We estimated local noise power spectra and a variance map from an NDCT image using information on photon statistics and reconstruction parameters. We then synthesized image domain noise by filtering and scaling white Gaussian noise using the local noise power spectra and the variance map, respectively. The CNN architecture was U-Net, and the loss function was a weighted summation of mean squared error, perceptual loss, and adversarial loss. The CNN was trained with NDCT and LDCT pairs (CNN-Ideal) or with NDCT and synthesized LDCT pairs (CNN-Proposed). To evaluate denoising performance, we measured the root mean squared error (RMSE), structural similarity index (SSIM), noise power spectrum (NPS), and modulation transfer function (MTF). The MTF was estimated from the edge spread function of a circular object with a 12 mm diameter and 60 HU contrast. Denoising results from CNN-Ideal and CNN-Proposed showed no significant difference in any metric, with both achieving low RMSE and high SSIM relative to NDCT and NPS shapes similar to that of NDCT.
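As a rough illustration of the noise insertion step described above, the sketch below filters white Gaussian noise with a target noise power spectrum and scales it by a local variance map. The function name, the use of a single global NPS instead of local NPS estimates, and the NumPy-based implementation are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def synthesize_ldct_noise(nps, variance_map, rng=None):
    """Illustrative sketch of correlated image-domain noise synthesis.

    nps          : 2D noise power spectrum (DC at center) used as the spectral
                   filter; the paper uses locally estimated NPS rather than a
                   single global one as assumed here.
    variance_map : 2D map of the target noise variance at each pixel.
    """
    rng = np.random.default_rng() if rng is None else rng

    # 1. Start from zero-mean white Gaussian noise.
    white = rng.standard_normal(variance_map.shape)

    # 2. Filter the white noise in the frequency domain so that its
    #    power spectrum follows the target NPS shape.
    h = np.sqrt(np.fft.ifftshift(nps))
    colored = np.fft.ifft2(np.fft.fft2(white) * h).real

    # 3. Normalize to unit variance, then scale by the local standard
    #    deviation from the variance map to match the target noise level.
    colored /= colored.std()
    return colored * np.sqrt(variance_map)

# Hypothetical usage: ldct_synth = ndct + synthesize_ldct_noise(nps, var_map)
```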
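Similarly, the training objective can be sketched as a weighted sum of the three loss terms named above. The weights, the choice of feature extractor for the perceptual term, and the non-saturating adversarial term are assumptions for illustration; the paper's exact formulation is not given in this abstract.

```python
import torch
import torch.nn.functional as F

def combined_loss(denoised, target, feature_net, disc_score,
                  w_mse=1.0, w_perc=0.1, w_adv=1e-3):
    """Weighted sum of MSE, perceptual, and adversarial losses (sketch).

    feature_net : pretrained feature extractor for the perceptual term
                  (e.g., a VGG-style network; assumed, not from the paper).
    disc_score  : discriminator logits for the denoised image.
    w_*         : illustrative weights; the paper's values are not stated here.
    """
    mse = F.mse_loss(denoised, target)
    perc = F.mse_loss(feature_net(denoised), feature_net(target))
    # Non-saturating generator loss: push the discriminator to rate
    # the denoised image as real.
    adv = F.binary_cross_entropy_with_logits(disc_score,
                                             torch.ones_like(disc_score))
    return w_mse * mse + w_perc * perc + w_adv * adv
```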