Deep Learning for Localization in the Lung

Lung cancer is the leading cause of cancer-related death worldwide, and early diagnosis is critical to improving patient outcomes. To diagnose cancer, a highly trained pulmonologist must navigate a flexible bronchoscope deep into the branched structure of the lung to perform a biopsy. The biopsy fails to sample the target tissue in 26-33% of cases, largely because of poor registration between the bronchoscope and the preoperative CT map. We developed two deep learning approaches that localize the bronchoscope in the preoperative CT map in real time and tested them on 13 trajectories in a lung phantom and 68 trajectories in 11 human cadaver lungs. In the lung phantom, the algorithms reached 95% precision and recall of visible airways and a 3 mm average position error. On a successful cadaver lung sequence, the algorithms, trained on simulation alone, achieved 77-94% precision and recall of visible airways and a 4-6 mm average position error. We also evaluate the effect of GAN-stylizing the images and report aggregate statistics over the entire set of trajectories.
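
The abstract does not define how the reported metrics are computed, so the following is a minimal illustrative sketch, not the paper's evaluation code. It assumes visible-airway precision and recall are accumulated per frame from sets of airway identifiers predicted visible versus actually visible, and that position error is the Euclidean distance (in mm, CT coordinates) between estimated and ground-truth bronchoscope positions; all names and inputs are hypothetical.

```python
# Illustrative sketch (assumed metric definitions, not the paper's code).
import numpy as np

def airway_precision_recall(pred_visible, true_visible):
    """Precision/recall of visible airways over a trajectory.

    pred_visible, true_visible: one set of airway IDs per frame.
    """
    tp = fp = fn = 0
    for pred, true in zip(pred_visible, true_visible):
        tp += len(pred & true)   # airways correctly predicted visible
        fp += len(pred - true)   # predicted visible but not actually visible
        fn += len(true - pred)   # visible but missed by the prediction
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def mean_position_error(est_positions, true_positions):
    """Average Euclidean distance (mm) between estimated and ground-truth
    bronchoscope positions, one 3-vector per frame."""
    est = np.asarray(est_positions, dtype=float)
    true = np.asarray(true_positions, dtype=float)
    return float(np.linalg.norm(est - true, axis=1).mean())

# Toy two-frame example of a hypothetical trajectory.
p, r = airway_precision_recall([{1, 2}, {2, 3}], [{1, 2}, {2}])
err = mean_position_error([[0, 0, 0], [10, 0, 0]], [[1, 0, 0], [10, 4, 0]])
print(f"precision={p:.2f} recall={r:.2f} mean_error={err:.1f} mm")
```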
