Magnetic-Visual Sensor Fusion-based Dense 3D Reconstruction and Localization for Endoscopic Capsule Robots

Reliable, real-time 3D reconstruction and localization is a crucial prerequisite for navigating actively controlled endoscopic capsule robots, an emerging minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robots that combines magnetic and vision-based localization with non-rigid, deformation-based frame-to-model map fusion. The performance of the proposed method is evaluated on four different ex-vivo porcine stomach models. Across trajectories of varying speed and complexity, and four different endoscopic cameras, the root-mean-square surface reconstruction error ranges from 1.58 to 2.17 cm.
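The abstract quantifies reconstruction quality as a root-mean-square (RMS) surface reconstruction error. As an illustration of how such a metric is typically computed, the following is a minimal sketch, not the paper's exact evaluation protocol: it assumes the reconstructed surface and a ground-truth scan are given as pre-aligned point clouds in the same metric units, and the file names in the usage comment are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree


def rms_surface_error(reconstructed: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMS of nearest-neighbor distances from reconstructed surface points
    to a ground-truth scan. Both inputs are (N, 3) arrays, assumed to be
    already registered (e.g. via ICP) and expressed in the same units."""
    tree = cKDTree(ground_truth)
    distances, _ = tree.query(reconstructed)  # closest ground-truth point per reconstructed point
    return float(np.sqrt(np.mean(distances ** 2)))


# Hypothetical usage with point clouds stored in centimeters:
# recon = np.loadtxt("reconstructed_stomach.xyz")
# gt = np.loadtxt("ground_truth_scan.xyz")
# print(f"RMS surface error: {rms_surface_error(recon, gt):.2f} cm")
```

A symmetric variant (averaging errors in both directions between the two clouds) is also common; the one-directional form above is the simpler and more frequently reported choice.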
