A Survey on Epistemic (Model) Uncertainty in Supervised Learning: Recent Advances and Applications

Quantifying the uncertainty of supervised learning models plays an important role in making more reliable predictions. Epistemic uncertainty, which is usually caused by insufficient knowledge about the model, can be reduced by collecting more data or refining the learning model. Over the last few years, many techniques for handling epistemic uncertainty have been proposed; they can be roughly grouped into two categories, i.e., Bayesian and ensemble methods. This paper provides a comprehensive review of epistemic uncertainty learning techniques in supervised learning over the last five years. To this end, we first decompose epistemic uncertainty into bias and variance terms. We then introduce a hierarchical categorization of epistemic uncertainty learning techniques along with their representative models. In addition, applications in areas such as computer vision (CV) and natural language processing (NLP) are presented, followed by a discussion of research gaps and possible future research directions.
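To make the two notions above concrete, the following minimal sketch (not taken from the survey; the data, model choice, and hyperparameters are illustrative assumptions) builds a bootstrap ensemble of regressors in the spirit of the ensemble category: the disagreement (variance) among member predictions serves as an epistemic-uncertainty estimate, and the squared error of the mean prediction against the noise-free target approximates the bias term of the decomposition.

```python
# Minimal illustrative sketch: ensemble-based epistemic uncertainty and an
# approximate bias/variance decomposition on toy data (assumptions, not the
# survey's method).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy regression problem: y = sin(x) + observation noise.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.1, size=300)
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
y_true = np.sin(X_test[:, 0])  # noise-free target, known only in this toy setup

# Bagging-style ensemble: each member is fit on a bootstrap resample.
n_members = 20
preds = np.empty((n_members, len(X_test)))
for m in range(n_members):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap indices
    model = DecisionTreeRegressor(max_depth=5, random_state=m)
    model.fit(X[idx], y[idx])
    preds[m] = model.predict(X_test)

mean_pred = preds.mean(axis=0)

# Epistemic (model) uncertainty: per-point variance across ensemble members.
epistemic_var = preds.var(axis=0)

# Rough decomposition of the expected squared error into bias^2 + variance
# (the irreducible noise term is omitted here).
bias_sq = np.mean((mean_pred - y_true) ** 2)
variance = np.mean(epistemic_var)

print(f"avg squared bias: {bias_sq:.4f}, avg variance: {variance:.4f}")
print(f"max per-point epistemic std: {np.sqrt(epistemic_var).max():.4f}")
```

A Bayesian counterpart would replace the bootstrap ensemble with samples from an (approximate) posterior over model parameters, e.g., Monte Carlo dropout or a Laplace approximation, and compute the same predictive variance across posterior samples.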
