Probabilistic Active Learning: Towards Combining Versatility, Optimality and Efficiency

Mining data with minimal annotation costs requires efficient active learning approaches that ideally select the optimal candidate for labelling under a user-specified classification performance measure. Common generic approaches, which are usable with any classifier and any performance measure, are either slow, like error reduction, or heuristic, like uncertainty sampling. In contrast, our Probabilistic Active Learning (PAL) approach offers versatility, direct optimisation of a performance measure, and computational efficiency. Given a labelling candidate from a pool, PAL models both the candidate's label and the true posterior in its neighbourhood as random variables. By computing the expectation of the gain in classification performance over both random variables, PAL selects the candidate that is expected to improve the classification performance the most. Extending our recent poster, we discuss the properties of PAL and perform a thorough experimental evaluation on several synthetic and real-world data sets of different sizes. The results show classification performance comparable to or better than error reduction and uncertainty sampling, while PAL has the same asymptotic time complexity as uncertainty sampling and is faster than error reduction.
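
To make the selection criterion concrete, the following is a minimal sketch of the expected-gain computation for a single candidate, assuming binary classification with accuracy as the performance measure and a uniform prior over the true posterior in the candidate's neighbourhood; the names (expected_gain, accuracy, candidates) are illustrative and not taken from the paper's implementation.

```python
# Sketch: expected probabilistic gain for one labelling candidate (binary case,
# accuracy as performance measure), integrating over the true posterior p and
# the candidate's label y. Assumed setup, not the authors' reference code.
import numpy as np

def accuracy(p_true, p_hat):
    """Expected accuracy in the candidate's neighbourhood when predicting the
    majority label suggested by the observed estimate p_hat, while the true
    posterior of the positive class is p_true."""
    return p_true if p_hat >= 0.5 else 1.0 - p_true

def expected_gain(n, p_hat, grid=1000):
    """Expectation of the accuracy gain over the true posterior p (Beta-distributed
    given the n labels observed near the candidate, positive share p_hat) and over
    the candidate's label y ~ Bernoulli(p)."""
    alpha, beta = n * p_hat + 1.0, n * (1.0 - p_hat) + 1.0   # uniform prior
    p = np.linspace(0.0, 1.0, grid)
    w = p ** (alpha - 1) * (1.0 - p) ** (beta - 1)
    w /= w.sum()                                  # normalised Beta density on the grid

    gains = np.empty_like(p)
    for i, p_true in enumerate(p):
        perf_now = accuracy(p_true, p_hat)
        # Label outcomes: update the observed estimate with one more label.
        perf_pos = accuracy(p_true, (n * p_hat + 1.0) / (n + 1.0))   # y = 1
        perf_neg = accuracy(p_true, (n * p_hat) / (n + 1.0))         # y = 0
        gains[i] = p_true * perf_pos + (1.0 - p_true) * perf_neg - perf_now
    return float(np.dot(w, gains))

# Rank pool candidates by density-weighted expected gain:
# candidates = [(density, n_labels_nearby, observed_positive_share), ...]
candidates = [(0.8, 3, 2 / 3), (0.5, 0, 0.5), (0.2, 10, 0.9)]
scores = [d * expected_gain(n, ph) for d, n, ph in candidates]
print(scores, "-> select candidate", int(np.argmax(scores)))
```

In this sketch the gain can be evaluated in closed form or, as here, by cheap numerical integration over a one-dimensional Beta posterior, which is what keeps the selection step in the same asymptotic complexity class as uncertainty sampling.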
