A hybrid-domain approach to reduce streak artifacts in sparse-view CT images via convolutional neural networks

In this study, we propose a method to reduce streak artifacts in sparse-view CT images via a convolutional neural network (CNN). The main idea of the proposed method is to utilize both image-domain and sinogram-domain data for CNN training. To generate datasets, projection data were acquired from 512 (128) views using Siddon’s ray-driven algorithm, and full-view (sparse-view) CT images were reconstructed by filtered back projection with a Ram-Lak filter. We first trained CNN_img, a U-net-based network designed to reduce the streak artifacts of sparse-view CT in the image domain. The output images of CNN_img were then used as prior images to construct a pseudo full-view sinogram. Before upsampling, the sparse-view sinogram was normalized by the prior images, and linear interpolation was then applied to estimate the missing view data relative to the full-view sinogram. The upsampled data were denormalized using the prior images. To reduce the residual errors in the pseudo full-view sinogram data, we trained CNN_hybrid, a residual encoder-decoder CNN, which is known to be effective at reducing residual errors while preserving structural details. To increase the learning efficiency, the dynamic range of the pseudo full-view sinogram data was converted via an exponential function. The results show that CNN_hybrid reduces streak artifacts more effectively than CNN_img, which is also confirmed by quantitative assessment.
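
As a rough illustration of the prior-normalized upsampling step, the following Python sketch shows one possible realization, not the authors' implementation. It assumes the CNN_img prior images have already been forward-projected onto the full 512-view geometry (prior_sino) and that the 128 measured views form a uniform subset of the 512 full views; the function name upsample_sinogram and its arguments are hypothetical.

import numpy as np

def upsample_sinogram(sparse_sino, prior_sino, sparse_idx, n_full, eps=1e-6):
    """Illustrative prior-normalized sinogram upsampling (assumed interface).

    sparse_sino : (n_sparse, n_det) measured sparse-view sinogram
    prior_sino  : (n_full, n_det) sinogram forward-projected from the CNN_img prior image
    sparse_idx  : indices of the measured views within the full-view grid
    Returns a pseudo full-view sinogram of shape (n_full, n_det).
    """
    # 1) Normalize the measured views by the prior sinogram at the same
    #    view positions (eps avoids division by zero).
    ratio = sparse_sino / (prior_sino[sparse_idx] + eps)

    # 2) Linearly interpolate the normalized data over the missing views,
    #    independently for each detector channel.
    full_idx = np.arange(n_full)
    n_det = sparse_sino.shape[1]
    ratio_full = np.empty((n_full, n_det), dtype=sparse_sino.dtype)
    for d in range(n_det):
        ratio_full[:, d] = np.interp(full_idx, sparse_idx, ratio[:, d])

    # 3) Denormalize: multiply back by the prior sinogram to obtain the
    #    pseudo full-view sinogram.
    return ratio_full * prior_sino

With 512 full views and every fourth view measured, sparse_idx = np.arange(0, 512, 4) would give the assumed 128-view subset; the resulting pseudo full-view sinogram would then be passed to CNN_hybrid for residual-error reduction.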