Consensus of Ambiguity: Theory and Application of Active Learning for Biomedical Image Analysis

Supervised classifiers require manually labeled training samples to classify unlabeled objects. Active Learning (AL) can be used to selectively label only "ambiguous" samples, ensuring that each labeled sample is maximally informative. This is invaluable in applications where manual labeling is expensive, as in medical images, where annotation of specific pathologies or anatomical structures is usually only possible by an expert physician. Existing AL methods each use a single definition of ambiguity, but there can be significant variation among individual methods. In this paper we present a consensus of ambiguity (CoA) approach to AL, in which only samples that are consistently labeled as ambiguous across multiple AL schemes are selected for annotation. CoA-based AL uses fewer samples than Random Learning (RL) while exploiting the variance between individual AL schemes to efficiently label training sets for classifier training. We use a consensus ratio to quantify the variance between AL methods, and the CoA approach is used to train classifiers for three different medical image datasets: 100 prostate histopathology images, 18 prostate DCE-MRI patient studies, and 9,000 breast histopathology regions of interest from 2 patients. We use a Probabilistic Boosting Tree (PBT) to classify each dataset as either cancer or non-cancer (prostate), or high- or low-grade cancer (breast). Training is done using CoA-based AL, and performance is evaluated in terms of accuracy and area under the receiver operating characteristic curve (AUC). CoA training yielded between 0.01% and 0.05% greater performance than RL for the same training set size; approximately 5-10 more samples were required for RL to match the performance of CoA, suggesting that CoA is a more efficient training strategy.
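To make the selection step concrete, the sketch below illustrates one way a consensus-of-ambiguity query could be implemented: each AL scheme flags its most ambiguous unlabeled samples, and only samples flagged by a sufficient fraction of schemes (the consensus ratio) are sent to the expert for annotation. The two ambiguity measures shown (probability margin and predictive entropy), the threshold `tau`, and the pool size `top_k` are illustrative assumptions rather than the specific AL schemes or consensus criterion used in the paper.

```python
# Minimal sketch of consensus-of-ambiguity (CoA) sample selection.
# Assumes `probs` is an (n_samples, 2) array of class probabilities from a
# trained classifier (e.g. classifier.predict_proba(unlabeled_pool)).
import numpy as np

def margin_ambiguity(probs, top_k):
    """Flag the top_k samples whose class-probability margin is smallest."""
    margin = np.abs(probs[:, 1] - probs[:, 0])
    return set(np.argsort(margin)[:top_k])

def entropy_ambiguity(probs, top_k):
    """Flag the top_k samples with the highest predictive entropy."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return set(np.argsort(entropy)[::-1][:top_k])

def coa_select(probs, schemes, tau=1.0, top_k=20):
    """Return indices flagged as ambiguous by at least a fraction `tau` of schemes."""
    votes = np.zeros(len(probs))
    for scheme in schemes:
        for idx in scheme(probs, top_k):
            votes[idx] += 1
    consensus_ratio = votes / len(schemes)
    return np.where(consensus_ratio >= tau)[0]

# Usage (hypothetical): only the returned samples are annotated by the expert,
# then added to the training set before the classifier is retrained.
# ambiguous = coa_select(probs, [margin_ambiguity, entropy_ambiguity], tau=1.0)
```

With `tau=1.0` the scheme requires unanimous agreement, so only samples that every AL method considers ambiguous are labeled; lowering `tau` trades annotation cost against stricter consensus.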
