Cervigram image segmentation based on reconstructive sparse representations

We propose an approach based on reconstructive sparse representations to segment tissues in optical images of the uterine cervix. Because of large variations in image appearance caused by changes in illumination and specular reflection, the color and texture features in optical images often overlap and are not linearly separable. By leveraging sparse representations, the data can be transformed into a higher-dimensional space under sparsity constraints, where they become more separable. The K-SVD algorithm is employed to learn the sparse representations and their corresponding dictionaries. The data can then be reconstructed from the sparse representations using positive and/or negative class dictionaries, and classification is achieved by comparing the reconstruction errors. In our experiments, we applied the method to automatically segment the biomarker AcetoWhite (AW) regions in an archive of 60,000 images of the uterine cervix. Compared with other general-purpose methods, our approach showed lower space and time complexity and higher sensitivity.

[1]  Antonio Criminisi,et al.  Single-Histogram Class Models for Image Segmentation , 2006, ICVGIP.

[2]  Sameer Antani,et al.  Digital Tools for Collecting Data from Cervigrams for Research and Training in Colposcopy , 2006, Journal of lower genital tract disease.

[3]  Shiri Gordon,et al.  Content analysis of uterine cervix images: initial steps toward content based indexing and retrieval of cervigrams , 2006, SPIE Medical Imaging.

[4]  Jose Jeronimo,et al.  A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features , 2005, SPIE Medical Imaging.

[5]  Jose Jeronimo,et al.  Tissue classification using cluster features for lesion detection in digital cervigrams , 2008, SPIE Medical Imaging.

[6]  Kjersti Engan,et al.  Method of optimal directions for frame design , 1999, 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings. ICASSP99 (Cat. No.99CH36258).

[7]  Xiaolei Huang,et al.  Distance guided selection of the best base classifier in an ensemble with application to cervigram image segmentation , 2009, 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops.

[8]  William M. Wells,et al.  Simultaneous truth and performance level estimation (STAPLE): an algorithm for the validation of image segmentation , 2004, IEEE Transactions on Medical Imaging.

[9]  Junzhou Huang,et al.  The Benefit of Group Sparsity , 2009 .

[10]  Joel A. Tropp,et al.  Greed is good: algorithmic results for sparse approximation , 2004, IEEE Transactions on Information Theory.

[11]  Shelly Lotenberg,et al.  Shape Priors for Segmentation of the Cervix Region Within Uterine Cervix Images , 2008, Journal of Digital Imaging.

[12]  Guillermo Sapiro,et al.  Discriminative learned dictionaries for local image analysis , 2008, 2008 IEEE Conference on Computer Vision and Pattern Recognition.

[13]  Stéphane Mallat,et al.  Matching pursuits with time-frequency dictionaries , 1993, IEEE Transactions on Signal Processing.

[14]  Xiaolei Huang,et al.  Combining multiple 2ν-SVM classifiers for tissue segmentation , 2008, 2008 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro.

[15]  Sheng Chen,et al.  Orthogonal least squares methods and their application to non-linear system identification , 1989 .

[16]  M. Elad,et al.  K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation , 2006, IEEE Transactions on Signal Processing.

[17]  Junzhou Huang,et al.  Learning with dynamic group sparsity , 2009, 2009 IEEE 12th International Conference on Computer Vision.

[18]  Antonio Criminisi,et al.  TextonBoost: Joint Appearance, Shape and Context Modeling for Multi-class Object Recognition and Segmentation , 2006, ECCV.

[19]  Michael Elad,et al.  Image Decomposition via the Combination of Sparse Representations and a Variational Approach , 2005, IEEE Transactions on Image Processing.