Realistic Lung Nodule Synthesis With Multi-Target Co-Guided Adversarial Mechanism

Important cues for realistic lung nodule synthesis include diversity in shape and background, controllability of semantic feature levels, and overall CT image quality. To incorporate these cues as multiple learning targets, we introduce the Multi-Target Co-Guided Adversarial Mechanism, which utilizes foreground and background masks to guide the nodule shape and lung tissues, and exploits the CT lung and mediastinal windows to guide spiculation and texture control, respectively. Further, we propose a Multi-Target Co-Guided Synthesizing Network with a joint loss function to realize the co-guidance of image generation and semantic feature learning. The proposed network contains a Mask-Guided Generative Adversarial Sub-Network (MGGAN) and a Window-Guided Semantic Learning Sub-Network (WGSLN). The MGGAN generates the initial synthesis from the combined foreground and background masks, which guide the generation of the nodule shape and background tissues. Meanwhile, the WGSLN controls the semantic features and refines the synthesis quality by transforming the initial synthesis into the CT lung and mediastinal windows and learning spiculation and texture simultaneously. We validated the authenticity of our method quantitatively using the Fréchet Inception Distance, and the results show state-of-the-art performance. We also evaluated our method as a data augmentation technique for malignancy-level prediction on the LIDC-IDRI database, where it improved the accuracy of VGG-16 by 5.6%. These experimental results confirm the effectiveness of the proposed method.
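The lung and mediastinal windows referred to above are standard CT intensity transforms that clip Hounsfield units to a window defined by a level and width, then rescale to a display range. A minimal sketch of such a windowing step follows; the specific level/width values are typical radiology defaults, assumed for illustration and not taken from this paper:

```python
import numpy as np

def apply_ct_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Clip Hounsfield units to [level - width/2, level + width/2],
    then linearly rescale to [0, 1]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

# Commonly used window settings (assumed defaults, not from the paper):
LUNG = dict(level=-600.0, width=1500.0)      # emphasizes parenchyma and spiculation
MEDIASTINAL = dict(level=40.0, width=400.0)  # emphasizes soft-tissue texture

hu = np.array([-1000.0, -600.0, 0.0, 60.0, 400.0])  # sample HU values
lung_view = apply_ct_window(hu, **LUNG)
medi_view = apply_ct_window(hu, **MEDIASTINAL)
```

Presenting the same synthesized slice under both windows exposes complementary semantic features: air-filled regions that are flat in the mediastinal window remain distinguishable in the lung window, and vice versa for soft tissue.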