Neural networks that perform domain translation have been used successfully for image restoration in settings where paired datasets are unavailable for network training. Here we present an unsupervised domain translation technique for PET image denoising that restores noisy real PET images by learning latent-space information shared between a simulated clean-image domain and a real noisy-image domain. For training and validating the network in unsupervised mode, we used noisy human brain PET scans together with a set of noiseless simulated images based on the BrainWeb digital phantom. Using the peak signal-to-noise ratio (PSNR) as our evaluation metric, we show that this unsupervised domain translation technique yields quantitative improvements over Gaussian filtering. It also improves visual image quality relative to Gaussian filtering, as evidenced by greater enhancement of gray matter intensities in the denoised brain PET images.
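To make the evaluation metric concrete, the following is a minimal sketch of the standard PSNR formula, 10·log10(MAX²/MSE), used to compare a denoised image against a clean reference. This is a generic illustration in pure Python (the function name and flat-list representation are our own conventions, not from the paper's implementation):

```python
import math

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio between two images given as
    equal-length flat lists of intensities in [0, max_val]."""
    # Mean squared error between the reference and the test image.
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    # PSNR in decibels; higher values indicate a closer match.
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a uniform intensity error of 0.1 on a [0, 1] scale gives an MSE of 0.01 and hence a PSNR of 20 dB; a denoiser that halves that error raises the PSNR by about 6 dB.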