Understanding Uncertainty Sampling

Uncertainty sampling is a prevalent active learning algorithm that sequentially queries annotations for the data samples about which the current prediction model is most uncertain. However, its usage has been largely heuristic: (i) there is no consensus on the proper definition of "uncertainty" for a specific task under a specific loss; (ii) there is no theoretical guarantee that prescribes a standard protocol for implementing the algorithm, for example, for how to handle sequentially arriving annotated data within an optimization framework such as stochastic gradient descent. In this work, we systematically examine uncertainty sampling algorithms under both stream-based and pool-based active learning. We propose a notion of equivalent loss, which depends on the uncertainty measure in use and the original loss function, and establish that an uncertainty sampling algorithm essentially optimizes this equivalent loss. This perspective verifies the properness of existing uncertainty measures from two aspects: surrogate property and loss convexity. Furthermore, we propose a new notion for designing uncertainty measures, called "loss as uncertainty": the idea is to use the conditional expected loss given the features as the uncertainty measure. Such a measure has nice analytical properties and enough generality to cover both classification and regression problems, which enables us to provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings, in full generality of the underlying model and problem. Lastly, we establish connections between certain variants of uncertainty sampling algorithms and risk-sensitive objectives and distributional robustness, which can partly explain the advantage of uncertainty sampling when the sample size is small.
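To make the setting concrete, the following is a minimal sketch (not the paper's exact algorithm) of stream-based uncertainty sampling for logistic regression. The uncertainty measure here is a plug-in estimate of the conditional expected zero-one loss, min(p, 1-p) under the model's own predicted probability, in the spirit of "loss as uncertainty"; the threshold, learning rate, and synthetic data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def expected_loss_uncertainty(w, x):
    """Plug-in estimate of the conditional expected zero-one loss:
    under the model's predicted probability p, predicting the more likely
    label errs with probability min(p, 1 - p). Maximal (0.5) at p = 0.5."""
    p = sigmoid(x @ w)
    return min(p, 1.0 - p)

def stream_uncertainty_sampling(stream, w, threshold=0.25, lr=0.1):
    """Stream-based uncertainty sampling: for each arriving sample, query its
    label only if the current model is sufficiently uncertain, then take one
    SGD step on the logistic loss with the revealed label."""
    queries = 0
    for x, y in stream:
        if expected_loss_uncertainty(w, x) >= threshold:
            queries += 1
            grad = (sigmoid(x @ w) - y) * x  # gradient of logistic loss
            w = w - lr * grad
    return w, queries

# Synthetic stream with labels from a fixed linear model (illustrative)
n, d = 500, 3
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0, 0.5])
y = (X @ w_true > 0).astype(float)

w_hat, n_queries = stream_uncertainty_sampling(zip(X, y), np.zeros(d))
```

Because confidently classified samples are skipped, the number of queried labels is typically well below the stream length once the model stabilizes, which is the label-efficiency motivation behind the algorithm.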
