Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) was recovered by a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescent target (residual tumor) was determined by comparing the calculated camera pose to the camera pose measured with a micro-positioning stage. In these preliminary results, the computational efficiency of the MATLAB implementation approached real-time performance (2.5 s per pose estimate), which could be improved by reimplementation in C++. Error analysis yielded an average distance error of 3 mm and an average orientation error of 2.5 degrees. These errors arise from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
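The pose-recovery step described above can be sketched as a bound-constrained nonlinear least-squares problem: given 2D features observed in an endoscopic frame and their matched 3D points in the virtual model, minimize reprojection error over the six pose parameters subject to bounds. The sketch below is not the authors' implementation; the simple pinhole camera model, the variable names, and the choice of SciPy's trust-region reflective solver are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reprojection_residuals(pose, pts3d, pts2d, f, cx, cy):
    """Residuals between observed 2D features and projected 3D model points.

    pose = [rx, ry, rz, tx, ty, tz]: axis-angle rotation plus translation.
    f, cx, cy: assumed pinhole intrinsics (focal length, principal point).
    """
    R = rodrigues(pose[:3])
    cam = pts3d @ R.T + pose[3:]          # model points in the camera frame
    proj = np.column_stack((f * cam[:, 0] / cam[:, 2] + cx,
                            f * cam[:, 1] / cam[:, 2] + cy))
    return (proj - pts2d).ravel()

def estimate_pose(pts3d, pts2d, f, cx, cy, pose0, bounds):
    """Bound-constrained pose refinement (trust-region reflective solver).

    bounds constrains the pose, e.g. to the workspace reachable by the
    robot, standing in for the constraints of the bundle adjustment.
    """
    result = least_squares(reprojection_residuals, pose0,
                           bounds=bounds,
                           args=(pts3d, pts2d, f, cx, cy))
    return result.x
```

In practice the 2D-3D matches would come from feature matching between the video frame and the raster-scanned model, and the bounds would encode prior knowledge of the tool's position from the robot kinematics.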
