Classification in the Presence of Label Noise: A Survey

Label noise is an important issue in classification, with many potential negative consequences. For example, the accuracy of predictions may decrease, whereas the complexity of inferred models and the number of necessary training samples may increase. Many works in the literature have been devoted to the study of label noise and to the development of techniques to deal with it. However, the field lacks a comprehensive survey of the different types of label noise, their consequences, and the algorithms that consider label noise. This paper proposes to fill this gap. First, the definitions and sources of label noise are considered, and a taxonomy of the types of label noise is proposed. Second, the potential consequences of label noise are discussed. Third, label noise-robust, label noise cleansing, and label noise-tolerant algorithms are reviewed. For each category of approaches, a short discussion is provided to help practitioners choose the most suitable technique for their own field of application. Finally, the design of experiments is discussed, which may interest researchers who would like to test their own algorithms. In this paper, label noise consists of mislabeled instances: no additional information, such as confidences on labels, is assumed to be available.
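As a purely illustrative sketch (not taken from the survey itself), the snippet below combines two of the themes mentioned above: it injects artificial label noise completely at random, as is often done when designing experiments, and then applies a simple cross-validated classification filter, one common style of label noise cleansing in which instances whose out-of-fold prediction disagrees with their observed label are discarded. The dataset, classifier, noise rate, and fold count are assumptions chosen only for the example.

```python
# Minimal sketch: uniform label-noise injection followed by a
# cross-validated classification filter. Illustrative choices only.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)

# Inject label noise completely at random: flip each label with probability p
# to a uniformly drawn class (the draw may occasionally re-pick the original
# label; this is a sketch, not a careful noise model).
p = 0.2
noisy_y = y.copy()
flip = rng.random(len(y)) < p
noisy_y[flip] = rng.integers(0, len(np.unique(y)), size=flip.sum())

# Classification filter: keep only the instances whose out-of-fold prediction
# agrees with the (possibly noisy) observed label.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
pred = cross_val_predict(clf, X, noisy_y, cv=5)
keep = pred == noisy_y
X_clean, y_clean = X[keep], noisy_y[keep]

print(f"kept {keep.sum()} of {len(y)} instances after filtering")
```

The filtered set (X_clean, y_clean) would then be passed to whatever final classifier is being trained; stricter variants use an ensemble of filters with majority or consensus voting instead of a single cross-validated model.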
