Optical coherence tomography noise modeling and fundamental bounds on human retinal layer segmentation accuracy (Conference Presentation)

The human retina is composed of several layers, visible by in vivo optical coherence tomography (OCT) imaging. To enhance diagnostics of retinal diseases, several algorithms have been developed to automatically segment one or more of the boundaries of these layers. OCT images are corrupted by noise, which arises largely from detector noise and from speckle, a form of coherent noise produced by the presence of multiple scatterers within each voxel. However, it is unknown what the empirical distribution of noise in each retinal layer is, and how the magnitude and distribution of that noise affect the lower bounds on segmentation accuracy. Five healthy volunteers were imaged using a spectral domain OCT probe from Bioptigen, Inc., centered at 850 nm with 4.6 µm full-width-at-half-maximum axial resolution. Each volume was segmented into nine layers by expert manual graders. The histograms of intensities in each layer were then fit to seven candidate noise distributions drawn from the speckle and image-processing literature. Using these empirical noise distributions and empirical estimates of the intensity of each layer, the Cramér-Rao lower bound (CRLB), a lower bound on the variance of any unbiased estimator, was calculated for each layer boundary. Additionally, the optimal bias of a segmentation algorithm was derived, along with the corresponding biased CRLB, which represents the improved performance an algorithm can achieve by using prior knowledge, such as the smoothness and continuity of layer boundaries. Our general mathematical model can be easily adapted for virtually any OCT modality.
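The distribution-fitting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer intensities here are synthetic stand-ins, the six candidate families are an assumed subset of typical speckle models (the paper fits seven), and maximum-likelihood fits are compared by AIC.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-in for the intensity samples of one manually
# segmented retinal layer (the paper uses measured OCT intensities).
samples = rng.gamma(shape=2.0, scale=1.5, size=5000)

# Candidate noise models commonly proposed in the speckle and
# image-processing literature (an assumed list, not the paper's seven).
candidates = {
    "rayleigh": stats.rayleigh,
    "gamma": stats.gamma,
    "lognormal": stats.lognorm,
    "weibull": stats.weibull_min,
    "nakagami": stats.nakagami,
    "normal": stats.norm,
}

results = {}
for name, dist in candidates.items():
    # Fix the location at zero for the positive-support models.
    kwargs = {} if name == "normal" else {"floc": 0.0}
    params = dist.fit(samples, **kwargs)        # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(samples, *params))
    # AIC = 2k - 2 ln L (parameter count approximated by len(params)).
    results[name] = 2 * len(params) - 2 * loglik

best = min(results, key=results.get)
print(best)  # candidate family with the lowest AIC
```

A goodness-of-fit comparison of this kind, run per layer, yields the empirical per-layer noise model that the CRLB analysis then builds on.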
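The CRLB calculation for a layer boundary can likewise be sketched in a simplified setting. Everything below is assumed for illustration: a single A-scan modeled as a step between two layer mean intensities, blurred by a Gaussian axial PSF, with additive i.i.d. Gaussian noise (the paper uses the empirical, generally non-Gaussian, per-layer distributions; Gaussian noise is chosen here only because it makes the Fisher information closed-form). All parameter values are synthetic.

```python
import numpy as np
from scipy.special import erf

def mean_profile(z, z0, mu1, mu2, sigma_psf):
    # Step edge between layer means mu1 and mu2 at boundary depth z0,
    # convolved with a Gaussian axial PSF -> error-function transition.
    return mu1 + (mu2 - mu1) * 0.5 * (1 + erf((z - z0) / (np.sqrt(2) * sigma_psf)))

def crlb_boundary(z, z0, mu1, mu2, sigma_psf, sigma_n):
    # Under i.i.d. Gaussian noise of std sigma_n, the Fisher information
    # for z0 is I(z0) = sum_z (d mu(z)/d z0)^2 / sigma_n^2, and the
    # (unbiased) CRLB is 1 / I(z0).
    dmu = -(mu2 - mu1) * np.exp(-((z - z0) ** 2) / (2 * sigma_psf ** 2)) \
          / (np.sqrt(2 * np.pi) * sigma_psf)
    fisher = np.sum(dmu ** 2) / sigma_n ** 2
    return 1.0 / fisher

z = np.arange(0.0, 100.0, 1.0)  # depth samples along one A-scan (pixels)
var_bound = crlb_boundary(z, z0=50.0, mu1=10.0, mu2=30.0,
                          sigma_psf=2.0, sigma_n=4.0)
print(np.sqrt(var_bound))  # lower bound on the boundary-estimate std (pixels)
```

The biased CRLB mentioned in the abstract tightens this bound further by allowing a nonzero estimator bias, which is how prior knowledge such as boundary smoothness enters the analysis.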