Deep learning-based volumetric image generation from projection imaging for prostate radiotherapy

Due to inter-fraction anatomical variation, fast, low-dose volumetric imaging during prostate radiation therapy is highly desirable for patient setup and daily treatment dose estimation. In this study, we propose a novel generative adversarial network integrated with perceptual supervision to derive 3D volumetric images from two orthogonal 2D projections. Our proposed network, named TransNet, consists of three modules, i.e., encoding, transformation and decoding modules. Rather than supervising the network with only an image distance loss between the generated 3D images and the ground-truth 3D CT images, we add an adversarial loss to improve the realism of the generated 3D images. We conducted a study on 20 patients who had received prostate radiotherapy at our institution and evaluated the efficacy and consistency of our method for two orthogonal projection angles, i.e., 0° and 90°. For each 3D CT image, we simulated its 2D projections at these two angles; TransNet takes the two projections as input and outputs the 3D CT image. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) achieved by our method are 117.5±15.3 HU, 22.7±3.8 dB and 0.904±0.27, respectively. These results demonstrate the feasibility and efficacy of our 2D-to-3D method for prostate cancer patients and offer a potential solution for fast on-board volumetric imaging for patient setup and adaptive radiation therapy.
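
The abstract names the three TransNet modules but does not give their layer configuration. The following minimal PyTorch sketch shows one plausible encoding-transformation-decoding arrangement for mapping two orthogonal projections to a volume; the class name TransNetSketch and all channel counts and layer sizes are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TransNetSketch(nn.Module):
    """Hypothetical encoding-transformation-decoding network; layer
    sizes are assumptions, since the abstract does not specify them."""

    def __init__(self, in_views=2, feat=64, depth=32):
        super().__init__()
        self.feat, self.depth = feat, depth
        # Encoding module: 2D convolutions over the two stacked projections.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_views, feat, 4, stride=2, padding=1),        # 128 -> 64
            nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat * depth, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(inplace=True),
        )
        # Transformation module: 2D feature maps are reshaped into a 3D
        # feature volume and mixed with a 3D convolution.
        self.transform = nn.Sequential(
            nn.Conv3d(feat, feat, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoding module: 3D transposed convolutions upsample to the CT volume.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat, feat // 2, 4, stride=2, padding=1),   # 32 -> 64
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(feat // 2, 1, 4, stride=2, padding=1),      # 64 -> 128
        )

    def forward(self, projections):
        # projections: (batch, 2, H, W) -- the 0-degree and 90-degree views.
        f2d = self.encoder(projections)                 # (B, feat*depth, h, w)
        b, _, h, w = f2d.shape
        f3d = f2d.view(b, self.feat, self.depth, h, w)  # lift 2D features to 3D
        f3d = self.transform(f3d)
        return self.decoder(f3d)                        # (B, 1, D, H, W)


# Example: two 128x128 projections produce a 128x128x128 volume.
net = TransNetSketch()
vol = net(torch.randn(1, 2, 128, 128))
print(vol.shape)  # torch.Size([1, 1, 128, 128, 128])
```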
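A hedged sketch of how the three supervision signals named above (image distance, adversarial, and perceptual) might be combined during training. The loss weights w_img, w_adv and w_perc, and the callables disc (discriminator) and feat_net (fixed feature extractor for the perceptual term), are hypothetical placeholders; the paper's actual weighting and networks are not given in the abstract.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()

def generator_loss(fake_vol, real_vol, disc, feat_net,
                   w_img=1.0, w_adv=0.01, w_perc=0.1):
    # Image-distance loss between generated and ground-truth CT volumes.
    loss_img = l1(fake_vol, real_vol)
    # Adversarial loss: the generator tries to make the discriminator
    # label its generated volumes as real.
    logits_fake = disc(fake_vol)
    loss_adv = bce(logits_fake, torch.ones_like(logits_fake))
    # Perceptual loss: distance in the feature space of a fixed network.
    loss_perc = l1(feat_net(fake_vol), feat_net(real_vol))
    return w_img * loss_img + w_adv * loss_adv + w_perc * loss_perc

def discriminator_loss(fake_vol, real_vol, disc):
    # Standard GAN discriminator objective: real volumes labeled 1,
    # generated volumes (detached from the generator graph) labeled 0.
    logits_real = disc(real_vol)
    logits_fake = disc(fake_vol.detach())
    return 0.5 * (bce(logits_real, torch.ones_like(logits_real))
                  + bce(logits_fake, torch.zeros_like(logits_fake)))
```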
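For the 0° and 90° views, the simplest way to simulate a projection from a CT volume is a parallel-beam line integral along one in-plane axis, as in the sketch below. The abstract does not specify the projection geometry (a realistic on-board system would likely use cone-beam DRRs), so both the parallel-beam assumption and the mu_water value are illustrative.

```python
import numpy as np

def hu_to_mu(ct_hu, mu_water=0.02):
    # Convert HU to linear attenuation coefficients; mu_water (1/mm)
    # is an assumed value for the imaging energy.
    return mu_water * (1.0 + ct_hu / 1000.0)

def simulate_projections(ct_hu):
    # ct_hu: (D, H, W) CT volume in Hounsfield units, axis-aligned with
    # the patient, so each orthogonal view is a sum over one in-plane axis.
    mu = hu_to_mu(ct_hu)
    proj_0 = mu.sum(axis=2)    # 0-degree view: integrate along one axis
    proj_90 = mu.sum(axis=1)   # 90-degree view: integrate along the other
    return proj_0, proj_90
```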
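The reported metrics can be computed per patient from the predicted and ground-truth volumes roughly as follows, using their standard definitions; the HU data range used to normalize PSNR and SSIM is an assumption, since the abstract does not state it.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(pred_hu, gt_hu, data_range=2000.0):
    # MAE in HU, PSNR in dB, and SSIM between two 3D volumes.
    mae = np.mean(np.abs(pred_hu - gt_hu))
    mse = np.mean((pred_hu - gt_hu) ** 2)
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    ssim = structural_similarity(pred_hu, gt_hu, data_range=data_range)
    return mae, psnr, ssim

# Example with synthetic volumes standing in for predicted and ground-truth CTs.
gt = np.random.uniform(-1000.0, 1000.0, (64, 64, 64))
pred = gt + np.random.normal(0.0, 50.0, gt.shape)
print(evaluate(pred, gt))
```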