Distribution Matching Losses Can Hallucinate Features in Medical Image Translation

This paper discusses how distribution matching losses, such as those used in CycleGAN, can lead to misdiagnosis when used to synthesize medical images. Image synthesis methods appear attractive for translating images from a source to a target domain because they produce high-quality images, and some do not even require paired data. However, these translation models fundamentally work by matching the translated output to the distribution of the target domain. This becomes a problem when the target-domain data over- or under-represents some classes (e.g. healthy or sick): because the output is a transformed image, there is no guarantee that all known and unknown class labels have been preserved rather than changed. We therefore recommend that such translated images not be used for direct interpretation (e.g. by doctors), because an algorithm that matches a distribution may hallucinate image features and thereby mislead diagnosis. However, many recent papers appear to pursue exactly this goal.
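The failure mode described above can be made concrete with a toy sketch that is not from the paper: a one-dimensional "translator" whose only objective is exact distribution matching (here implemented as quantile/rank mapping, the degenerate optimum of such a loss). When the target training set over-represents sick cases, the translator must convert some healthy inputs into sick-looking outputs to make the histograms match. The domains, values, and class ratios are all illustrative assumptions.

```python
# Toy illustration (not the paper's method): exact distribution matching
# via quantile mapping on 1-D "images". 0.0 = healthy tissue, 1.0 = lesion.

def translate(source, target):
    """Map each source sample to the target value of equal rank, so the
    output histogram exactly matches the target histogram -- the degenerate
    optimum of a pure distribution matching loss."""
    target_sorted = sorted(target)
    order = sorted(range(len(source)), key=lambda i: source[i])
    translated = [0.0] * len(source)
    for rank, i in enumerate(order):
        translated[i] = target_sorted[rank * len(target) // len(source)]
    return translated

# Source domain: balanced, 50% of scans show a lesion.
source = [0.0] * 50 + [1.0] * 50
# Target domain training set: imbalanced, 80% show a lesion.
target = [0.0] * 20 + [1.0] * 80

out = translate(source, target)
hallucinated = sum(1 for s, t in zip(source, out) if s == 0.0 and t == 1.0)
print(f"healthy inputs translated to 'lesion': {hallucinated} of 50")
```

Even though the translator is "perfect" by its own loss (output distribution equals target distribution), 30 of the 50 healthy source samples acquire a hallucinated lesion, purely because the target data was imbalanced.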
