If You Can't Beat Them, Join Them: Learning with Noisy Data
[1] Oded Maron,et al. Multiple-Instance Learning for Natural Scene Classification , 1998, ICML.
[2] Jianguo Zhang,et al. The PASCAL Visual Object Classes Challenge , 2006 .
[3] Matti Pietikäinen,et al. Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns , 2002, IEEE Trans. Pattern Anal. Mach. Intell..
[4] Kristen Grauman,et al. Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization , 2008, 2008 IEEE Conference on Computer Vision and Pattern Recognition.
[5] Michael S. Bernstein,et al. ImageNet Large Scale Visual Recognition Challenge , 2014, International Journal of Computer Vision.
[6] Ata Kabán,et al. Multi-class classification in the presence of labelling errors , 2011, ESANN.
[7] Nitish Srivastava,et al. Improving neural networks by preventing co-adaptation of feature detectors , 2012, ArXiv.
[8] Xinlei Chen,et al. Webly Supervised Learning of Convolutional Networks , 2015, 2015 IEEE International Conference on Computer Vision (ICCV).
[9] Chuan Long,et al. Boosting Noisy Data , 2001, ICML.
[10] Zhi-Hua Zhou,et al. Neural Networks for Multi-Instance Learning , 2002 .
[11] Yi Li,et al. The Relaxed Online Maximum Margin Algorithm , 1999, Machine Learning.
[12] Geoffrey E. Hinton,et al. Learning to Label Aerial Images from Noisy Data , 2012, ICML.
[13] Pietro Perona,et al. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories , 2004, 2004 Conference on Computer Vision and Pattern Recognition Workshop.
[14] Peter Auer,et al. Generic object recognition with boosting , 2006, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[15] Felix X. Yu,et al. SVM for learning with label proportions , 2013, ICML.
[16] Trevor Darrell,et al. Caffe: Convolutional Architecture for Fast Feature Embedding , 2014, ACM Multimedia.
[17] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009 .
[18] Antonio Torralba,et al. 80 Million Tiny Images: A Large Dataset for Non-parametric Object and Scene Recognition , 2008, IEEE Trans. Pattern Anal. Mach. Intell..
[19] Nagarajan Natarajan,et al. Learning with Noisy Labels , 2013, NIPS.
[20] Thomas G. Dietterich. An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization , 2000, Machine Learning.
[21] Ivor W. Tsang,et al. Text-based image retrieval using progressive multi-instance learning , 2011, 2011 International Conference on Computer Vision.
[22] Stefanie N. Lindstaedt,et al. On the Feasibility of a Tag-Based Approach for Deciding Which Objects a Picture Shows: An Empirical Study , 2009, SAMT.
[23] Gideon S. Mann,et al. Putting Semantic Information Extraction on the Map : Noisy Label Models for Fact Extraction , 2007 .
[24] Gary Doran,et al. A theoretical and empirical analysis of support vector machine methods for multiple-instance classification , 2014, Machine Learning.
[25] David G. Lowe. Distinctive Image Features from Scale-Invariant Keypoints , 2004, International Journal of Computer Vision.
[26] Zhi-Hua Zhou,et al. Multi-Instance Multi-Label Learning with Application to Scene Classification , 2006, NIPS.
[27] C. V. Jawahar,et al. Cats and dogs , 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[28] Fei-Fei Li,et al. Detecting Avocados to Zucchinis: What Have We Done, and Where Are We Going? , 2013, 2013 IEEE International Conference on Computer Vision.
[29] Yoav Freund,et al. Experiments with a New Boosting Algorithm , 1996, ICML.
[30] Luc Van Gool,et al. The 2005 PASCAL Visual Object Classes Challenge , 2005, MLCW.
[31] Roni Khardon,et al. Noise Tolerant Variants of the Perceptron Algorithm , 2007, J. Mach. Learn. Res..
[32] Chih-Jen Lin,et al. LIBSVM: A library for support vector machines , 2011, TIST.
[33] Thomas Hofmann,et al. Support Vector Machines for Multiple-Instance Learning , 2002, NIPS.
[34] M. Verleysen,et al. Classification in the Presence of Label Noise: A Survey , 2014, IEEE Transactions on Neural Networks and Learning Systems.
[35] David Cohn,et al. Active Learning , 2010, Encyclopedia of Machine Learning.
[36] Nello Cristianini,et al. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods , 2000 .
[37] Ata Kabán,et al. Label-Noise Robust Logistic Regression and Its Applications , 2012, ECML/PKDD.
[38] Andrew Zisserman,et al. The Pascal Visual Object Classes Challenge , 2015, International Journal of Computer Vision.
[39] Rob Fergus,et al. Learning from Noisy Labels with Deep Neural Networks , 2014, ICLR.
[40] Thomas G. Dietterich. An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees , 2000, Machine Learning.
[41] Alexander J. Smola,et al. Kernel Machines and Boolean Functions , 2001, NIPS.
[42] Ata Kabán,et al. Boosting in the presence of label noise , 2013, UAI.
[43] W. Krauth,et al. Learning algorithms with optimal stability in neural networks , 1987 .
[44] Marcel J. T. Reinders,et al. Classification in the presence of class noise using a probabilistic Kernel Fisher method , 2007, Pattern Recognit..
[45] Thomas G. Dietterich,et al. Solving the Multiple Instance Problem with Axis-Parallel Rectangles , 1997, Artif. Intell..
[46] Bernhard Schölkopf,et al. Estimating a Kernel Fisher Discriminant in the Presence of Label Noise , 2001, ICML.