Belief Propagation for Depth Cue Fusion in Minimally Invasive Surgery

In minimally invasive surgery, dense 3D surface reconstruction is important for surgical navigation and for integrating pre- and intra-operative data. Despite recent developments in 3D tissue deformation recovery, the general applicability of existing techniques is limited by specific constraints and underlying assumptions. The need for accurate and robust tissue deformation recovery has motivated research into fusing multiple visual cues for depth estimation. In this paper, a Markov Random Field (MRF) based Bayesian belief propagation framework is proposed for fusing different depth cues. By exploiting the underlying MRF structure to enforce spatial continuity across the image, the proposed method infers surface depth by combining the posterior node probabilities within each node's Markov blanket with the monocular and stereo depth maps. Detailed phantom validation and in vivo results demonstrate the accuracy, robustness, and practical value of the technique.
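The general idea of fusing monocular and stereo depth cues on a grid MRF can be sketched with min-sum loopy belief propagation. This is a minimal illustration, not the paper's exact formulation: the unary weights (`w_mono`, `w_stereo`), the truncated-linear smoothness prior, and the discrete depth labelling are all assumptions chosen for clarity.

```python
import numpy as np

def fuse_depth_bp(mono, stereo, n_labels=8, w_mono=0.5, w_stereo=1.0,
                  smooth=1.0, trunc=2.0, n_iters=5):
    """Min-sum loopy belief propagation on a 4-connected grid MRF.

    mono, stereo: HxW depth maps scaled to the label range [0, n_labels-1].
    Returns a fused HxW map of depth labels. All parameters here are
    illustrative assumptions, not values from the paper.
    """
    H, W = mono.shape
    labels = np.arange(n_labels, dtype=float)

    # Unary (data) cost: weighted squared deviation from each depth cue.
    data = (w_mono * (labels - mono[..., None]) ** 2
            + w_stereo * (labels - stereo[..., None]) ** 2)

    # Pairwise cost: truncated linear smoothness prior between neighbours.
    pair = smooth * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)

    # msgs[d]: message arriving at each pixel from its neighbour in direction d
    # (0 = from above, 1 = from below, 2 = from left, 3 = from right).
    msgs = np.zeros((4, H, W, n_labels))

    def send(h):
        # For each recipient label, minimise over the sender's labels.
        return (h[..., :, None] + pair).min(axis=-2)

    for _ in range(n_iters):
        belief = data + msgs.sum(axis=0)
        new = np.zeros_like(msgs)
        m = send(belief - msgs[1])      # sent downward, arrives "from above"
        new[0, 1:] = m[:-1]
        m = send(belief - msgs[0])      # sent upward, arrives "from below"
        new[1, :-1] = m[1:]
        m = send(belief - msgs[3])      # sent rightward, arrives "from left"
        new[2, :, 1:] = m[:, :-1]
        m = send(belief - msgs[2])      # sent leftward, arrives "from right"
        new[3, :, :-1] = m[:, 1:]
        # Normalise messages to keep costs from drifting upward.
        msgs = new - new.min(axis=-1, keepdims=True)

    # MAP estimate: label minimising the final belief at each pixel.
    return (data + msgs.sum(axis=0)).argmin(axis=-1)
```

When both cues agree, the fused map reproduces them; when they disagree, the relative unary weights and the smoothness prior arbitrate, which is the basic mechanism any MRF-based cue-fusion scheme relies on.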
