PURPOSE
Involuntary patient movement causes data discontinuities during computed tomography (CT) scans, which lead to serious degradation of image quality. In this paper, we specifically address artifacts induced by patient motion during head scans.
METHOD
Instead of trying to solve an inverse problem, we developed a motion simulation algorithm to synthesize images with motion-induced artifacts. Artifacts induced by rotation, translation, oscillation, and any combination thereof are considered. Taking advantage of the powerful learning ability of neural networks, we designed a novel 3D network structure with both a large receptive field and a high image resolution to map artifact-contaminated images to artifact-free images. Quantitative results of the proposed method were evaluated against the results of U-Net and of the proposed network without the dilation structure. Thirty sets of motion-contaminated images from two hospitals were selected for clinical evaluation.
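The abstract does not specify the network architecture in detail; as an illustration of how dilated 3D convolutions can enlarge the receptive field while keeping the full image resolution, a minimal PyTorch sketch might look like the following. The class name, channel count, and dilation schedule are illustrative assumptions, not the authors' actual design.

```python
import torch
import torch.nn as nn

class DilatedDenoiser3D(nn.Module):
    """Minimal 3D CNN with dilated convolutions (illustrative sketch).

    Dilated kernels grow the receptive field without any downsampling,
    so the output keeps the resolution of the input volume.
    """
    def __init__(self, channels=32, dilations=(1, 2, 4, 8, 4, 2, 1)):
        super().__init__()
        layers = [nn.Conv3d(1, channels, kernel_size=3, padding=1),
                  nn.ReLU(inplace=True)]
        for d in dilations:
            # padding = dilation keeps the spatial shape for 3x3x3 kernels
            layers += [nn.Conv3d(channels, channels, kernel_size=3,
                                 padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv3d(channels, 1, kernel_size=3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Residual formulation: predict the artifact map and subtract it
        # from the motion-contaminated input volume.
        return x - self.net(x)

if __name__ == "__main__":
    model = DilatedDenoiser3D()
    contaminated = torch.randn(1, 1, 16, 128, 128)  # (batch, channel, D, H, W)
    corrected = model(contaminated)
    print(corrected.shape)  # torch.Size([1, 1, 16, 128, 128])
```

A residual (artifact-prediction) formulation is one common choice for artifact-reduction networks; whether the authors used it is not stated in this abstract.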
RESULT
By training the neural network on a dataset containing artifacts induced by variable motion patterns, the artifacts could be removed with good performance. On a validation dataset with simulated random motion patterns, the proposed method produced superior image correction, and quantitative results showed that the proposed network achieved the lowest normalized root-mean-square error (NRMSE) and the highest peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), indicating that our network gave the closest approximation of the gold standard. Clinical image processing results further confirmed the effectiveness of our method.
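The metric definitions are not given in the abstract; a minimal sketch of how NRMSE, PSNR, and SSIM might be computed for a corrected/reference volume pair is shown below, assuming NRMSE is normalized by the reference intensity range (the paper's exact normalization is not stated).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(corrected, reference):
    """Compare a corrected volume against the artifact-free reference
    using the three metrics reported in the paper (sketch only)."""
    corrected = corrected.astype(np.float64)
    reference = reference.astype(np.float64)
    data_range = reference.max() - reference.min()

    # Normalized root-mean-square error (lower is better); normalization
    # by the reference intensity range is an assumption here.
    nrmse = np.sqrt(np.mean((corrected - reference) ** 2)) / data_range
    # Peak signal-to-noise ratio (higher is better)
    psnr = peak_signal_noise_ratio(reference, corrected, data_range=data_range)
    # Structural similarity (higher is better)
    ssim = structural_similarity(reference, corrected, data_range=data_range)
    return nrmse, psnr, ssim
```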
CONCLUSION
We proposed a novel deep learning based algorithm to eliminate motion artifacts. The CNN trained with synthesized image pairs achieved promising results in artifact reduction. The corrected images increased diagnostic confidence compared with the artifact-contaminated images. We believe that the correction method can restore the ability to make a successful diagnosis and avoid repeated CT scans in certain clinical circumstances.