Test Distribution-Aware Active Learning: A Principled Approach Against Distribution Shift and Outliers

Expanding on MacKay (1992), we argue that conventional model-based methods for active learning, such as BALD, have a fundamental shortcoming: they fail to directly account for the test-time distribution of the input variables. This can lead to pathologies in the acquisition strategy, because what is maximally informative about the model parameters may not be maximally informative for prediction: for example, when the data in the pool set are more dispersed than those of the final prediction task, or when the pool and test distributions differ more generally. To correct this, we revisit an acquisition strategy based on maximizing the expected information gained about possible future predictions, which we refer to as the Expected Predictive Information Gain (EPIG). As EPIG does not scale well to batch acquisition, we further examine an alternative strategy, a hybrid between BALD and EPIG, which we call the Joint Expected Predictive Information Gain (JEPIG). We apply both to active learning with Bayesian neural networks on a variety of datasets, examining their behavior under distribution shift in the pool set.
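For concreteness, here is a minimal sketch of the two pointwise acquisition objectives in standard information-theoretic notation, written from the abstract's description rather than the paper itself (the exact form used in the paper may differ). It assumes $\theta$ denotes the model parameters, $\mathcal{D}_{\text{train}}$ the data acquired so far, $x$ a candidate pool input with label $y$, and $x_*$ a test input with prediction $y_*$ drawn from an assumed test-time input distribution $p_*(x_*)$:

% Hedged sketch: BALD in its standard form, and an EPIG-style objective
% matching the description "expected information gained about future predictions".
\[
\mathrm{BALD}(x) \;=\; \mathrm{I}[Y; \Theta \mid x, \mathcal{D}_{\text{train}}]
\;=\; \mathrm{H}\big[p(y \mid x, \mathcal{D}_{\text{train}})\big]
\;-\; \mathbb{E}_{p(\theta \mid \mathcal{D}_{\text{train}})}\Big[\mathrm{H}\big[p(y \mid x, \theta)\big]\Big],
\]
\[
\mathrm{EPIG}(x) \;=\; \mathbb{E}_{p_*(x_*)}\Big[\mathrm{I}[Y; Y_* \mid x, x_*, \mathcal{D}_{\text{train}}]\Big].
\]

BALD scores a candidate by its expected information gain about the parameters, whereas EPIG scores it by the expected reduction in uncertainty about predictions at test-time inputs, which is what ties the acquisition directly to the test distribution.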

[1] Zoubin Ghahramani, et al. Deep Bayesian Active Learning with Image Data. ICML, 2017.

[2] Shengyang Sun, et al. Beyond Marginal Uncertainty: How Accurately can Bayesian Regression Models Estimate Posterior Predictive Correlations? AISTATS, 2020.

[3] D. Lindley. On a Measure of the Information Provided by an Experiment. 1956.

[4] Ryan P. Adams, et al. On Warm-Starting Neural Network Training. NeurIPS, 2020.

[5] Thomas G. Dietterich, et al. Deep Anomaly Detection with Outlier Exposure. ICLR, 2018.

[6] Li Fei-Fei, et al. Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering. ACL, 2021.

[7] Hongseok Yang, et al. On Nesting Monte Carlo Estimators. ICML, 2017.

[8] Andrew McCallum, et al. Toward Optimal Active Learning through Sampling Estimation of Error Reduction. ICML, 2001.

[9] Akshay Krishnamurthy, et al. Gone Fishing: Neural Active Learning with Fisher Embeddings. NeurIPS, 2021.

[10] Jinbo Bi, et al. Active Learning via Transductive Experimental Design. ICML, 2006.

[11] Dong-Hyun Lee, et al. Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks. 2013.

[12] Yarin Gal, et al. Prioritized Training on Points that are Learnable, Worth Learning, and Not Yet Learned. arXiv, 2021.

[13] Andreas Krause, et al. Adaptive Submodularity: Theory and Applications in Active Learning and Stochastic Optimization. J. Artif. Intell. Res., 2010.

[14] Geoffrey E. Hinton, et al. Bayesian Learning for Neural Networks. 1995.

[15] Andrew Y. Ng, et al. Reading Digits in Natural Images with Unsupervised Feature Learning. 2011.

[16] Suraj Kothawade, et al. SIMILAR: Submodular Information Measures Based Active Learning In Realistic Scenarios. NeurIPS, 2021.

[17] David A. Cohn, et al. Training Connectionist Networks with Queries and Selective Sampling. NIPS, 1989.

[18] Yarin Gal, et al. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. NeurIPS, 2019.

[19] K. Chaloner, et al. Bayesian Experimental Design: A Review. 1995.

[20] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network. arXiv, 2015.

[21] Zoubin Ghahramani, et al. Bayesian Active Learning for Classification and Preference Learning. arXiv, 2011.

[22] Honglak Lee, et al. Predictive Information Accelerates Learning in RL. NeurIPS, 2020.

[23] Tom Rainforth, et al. On Statistical Bias In Active Learning: How and When To Fix It. ICLR, 2021.

[24] John Langford, et al. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. ICLR, 2019.

[25] David J. C. MacKay. Information-Based Objective Functions for Active Data Selection. Neural Computation, 1992.

[26] Yarin Gal, et al. Understanding Measures of Uncertainty for Adversarial Example Detection. UAI, 2018.

[27] Naftali Tishby, et al. Predictive Information. arXiv:cond-mat/9902341, 1999.

[28] Yee Whye Teh, et al. Variational Bayesian Optimal Experimental Design. NeurIPS, 2019.

[29] Jian Sun, et al. Deep Residual Learning for Image Recognition. CVPR, 2016.

[30] Yarin Gal, et al. A Practical & Unified Notation for Information-Theoretic Quantities in ML. arXiv, 2021.

[31] David Yarowsky, et al. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. ACL, 1995.

[32] Kaisheng Ma, et al. Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. ICCV, 2019.

[33] Burr Settles, et al. Active Learning Literature Survey. 2009.

[34] Yoshua Bengio, et al. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE, 1998.

[35] Yarin Gal, et al. A Simple Baseline for Batch Active Learning with Stochastic Acquisition Functions. arXiv, 2021.

[36] Zoubin Ghahramani, et al. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. ICML, 2015.

[37] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images. 2009.

[38] Andreas Krause, et al. Submodular Function Maximization. Tractability, 2014.