Building robust confocal endomicroscopy mosaics despite image losses

Probe-based confocal laser endomicroscopy (pCLE) is a promising imaging modality for early cancer screening in various clinical applications. A typical limitation of probe-based systems, however, is the limited field-of-view (FOV) achievable with miniature optics. This is especially true at the high magnifications required for clinical assessment, namely for optical-biopsy-based investigations. A widely accepted solution is to opt for high-resolution optics and to enlarge the FOV algorithmically, by sweeping the probe along the tissue and reconstructing a mosaic. While mosaicing effectively enlarges the FOV, its accuracy is limited by the fact that the microscale movements required to sweep the probe are difficult to generate manually, especially in minimally invasive settings. For this reason, various approaches using a robotic micromanipulator and visual feedback control have been developed [1, 2, 3]. These methods have in common that they rely on an accurate image-based motion estimation, which must be computed in real time. Typically, pCLE-based visual servoing methods use a normalized cross-correlation (NCC) computation between successive overlapping frames [4]. While this may be sufficient in ex vivo conditions, the constraints imposed by the in vivo environment make the estimation more challenging. Image quality may be degraded by a partial loss of contact with, or excessive force applied on, the tissue (e.g., due to non-planar tissue geometry), or simply by surgical debris present on the surface. Moreover, accelerations in the probe/tissue movement (due to various effects such as stick/slip or robot manufacturing inaccuracies) are detrimental to image quality and real-time matching. In summary, there may be parts of the trajectory where the image quality is insufficient for visual servo control, leading to poor mosaics.
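As an illustration of the NCC-based inter-frame motion estimation mentioned above, the following sketch performs an exhaustive search for the integer translation maximizing the NCC between two successive overlapping frames. The function names, search window, and brute-force search strategy are illustrative assumptions, not the implementation used in [4]:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_shift(prev, curr, max_shift=5):
    """Exhaustive search for the integer (dy, dx) translation that
    maximizes the NCC between two successive overlapping frames.
    Returns the best shift and its NCC score (a confidence measure)."""
    best, best_score = (0, 0), -np.inf
    h, w = prev.shape
    m = max_shift
    a = prev[m:h - m, m:w - m]          # interior reference patch
    for dy in range(-m, m + 1):
        for dx in range(-m, m + 1):
            b = curr[m + dy:h - m + dy, m + dx:w - m + dx]
            s = ncc(a, b)
            if s > best_score:
                best_score, best = s, (dy, dx)
    return best, best_score
```

In practice, the NCC score itself can serve as a quality indicator: a low peak score signals degraded images (lost contact, debris) for which the measurement should be distrusted.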
This abstract presents a Kalman filter-based approach in which the image-based motion estimation and the (possibly inaccurate) robot trajectory are fused. We validate the proposed approach in controlled benchtop experiments in which the loss of contact with the tissue is simulated. We show that it enables computing mosaics online with a coherent topology, despite a significant loss of probe-tissue contact at several points along the trajectory. The method could be used to robustly estimate the probe-tissue movement online in a visual servo control loop.
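A minimal sketch of the fusion idea, under simplifying assumptions not stated in the abstract: a constant-position Kalman filter whose prediction step integrates the commanded robot displacement and whose update step incorporates the image-based position measurement, skipped whenever the measurement is flagged invalid (e.g., probe-tissue contact lost). The state model and the noise parameters `q` and `r` are hypothetical:

```python
import numpy as np

def fuse_trajectory(robot_steps, image_meas, valid, q=0.05, r=0.01):
    """Kalman fusion of robot odometry (prediction) with image-based
    position measurements (update). When a measurement is flagged
    invalid, the update is skipped and the estimate coasts on the
    robot trajectory alone."""
    x = np.zeros(2)            # estimated probe position (x, y)
    P = np.eye(2) * 1e-3       # estimate covariance
    Q = np.eye(2) * q          # process noise (robot inaccuracy)
    R = np.eye(2) * r          # measurement noise (image estimation)
    out = []
    for step, z, ok in zip(robot_steps, image_meas, valid):
        # Predict: integrate the commanded robot displacement.
        x = x + step
        P = P + Q
        if ok:
            # Update: fuse the image-based position (H = I).
            K = P @ np.linalg.inv(P + R)      # Kalman gain
            x = x + K @ (z - x)
            P = (np.eye(2) - K) @ P
        out.append(x.copy())
    return np.array(out)
```

During image dropouts the covariance grows with each prediction, so once valid measurements return they are weighted more strongly, pulling the estimate back toward the image evidence.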