Label Noise-Robust Learning using a Confidence-Based Sieving Strategy
[1] Florian Tramèr,et al. Quantifying Memorization Across Neural Language Models , 2022, ICLR.
[2] Tongliang Liu,et al. Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations , 2021, ICLR.
[3] Mingming Gong,et al. Instance-dependent Label-noise Learning under a Structural Causal Model , 2021, NeurIPS.
[4] Daniel Coelho de Castro,et al. Active label cleaning for improved dataset quality under resource constraints , 2021, Nature Communications.
[5] Erkun Yang,et al. Understanding and Improving Early Stopping for Learning with Noisy Labels , 2021, NeurIPS.
[6] Hanlin Tang,et al. On the geometry of generalization and memorization in deep neural networks , 2021, ICLR.
[7] Qi Wu,et al. Jo-SRC: A Contrastive Approach for Combating Noisy Labels , 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[8] Se-Young Yun,et al. FINE Samples for Learning with Noisy Labels , 2021, NeurIPS.
[9] Samy Bengio,et al. Understanding deep learning (still) requires rethinking generalization , 2021, Commun. ACM.
[10] Gang Niu,et al. Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization , 2021, ICML.
[11] Pheng-Ann Heng,et al. Robustness of Accuracy Metric and its Inspirations in Learning with Noisy Labels , 2020, AAAI.
[12] Yang Liu,et al. Learning with Instance-Dependent Label Noise: A Sample Sieve Approach , 2020, ICLR.
[13] Fan Zhang,et al. P-DIFF: Learning Classifier with Noisy Labels based on Probability Difference Distributions , 2020, 2020 25th International Conference on Pattern Recognition (ICPR).
[14] Hwanjun Song,et al. Learning From Noisy Labels With Deep Neural Networks: A Survey , 2020, IEEE Transactions on Neural Networks and Learning Systems.
[15] Dimitris N. Metaxas,et al. Error-Bounded Correction of Noisy Labels , 2020, ICML.
[16] Sheng Liu,et al. Early-Learning Regularization Prevents Memorization of Noisy Labels , 2020, NeurIPS.
[17] Gang Niu,et al. Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning , 2020, NeurIPS.
[18] Gang Niu,et al. Parts-dependent Label Noise: Towards Instance-dependent Label Noise , 2020, ArXiv.
[19] Xiaohua Zhai,et al. Are we done with ImageNet? , 2020, ArXiv.
[20] Lei Feng,et al. Combating Noisy Labels by Agreement: A Joint Training Method with Co-Regularization , 2020, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[21] Junnan Li,et al. DivideMix: Learning with Noisy Labels as Semi-supervised Learning , 2020, ICLR.
[22] Kilian Q. Weinberger,et al. Identifying Mislabeled Data using the Area Under the Margin Ranking , 2020, NeurIPS.
[23] David F. Steiner,et al. Chest Radiograph Interpretation with Deep Learning Models: Assessment with Radiologist-adjudicated Reference Standards and Population-adjusted Evaluation , 2019, Radiology.
[24] Weilong Yang,et al. Beyond Synthetic Noise: Deep Learning on Controlled Noisy Labels , 2019, ICML.
[25] Tiberiu T. Cocias,et al. A survey of deep learning techniques for autonomous driving , 2019, J. Field Robotics.
[26] E. Topol,et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis , 2019, The Lancet. Digital health.
[27] Gang Niu,et al. Confidence Scores Make Instance-dependent Label-noise Learning Possible , 2019, ICML.
[28] Jae-Gil Lee,et al. Prestopping: How Does Early Stopping Help Generalization against Label Noise? , 2019, ArXiv.
[29] Thomas L. Griffiths,et al. Human Uncertainty Makes Classification More Robust , 2019, 2019 IEEE/CVF International Conference on Computer Vision (ICCV).
[30] Gang Niu,et al. Are Anchor Points Really Indispensable in Label-Noise Learning? , 2019, NeurIPS.
[31] Jae-Gil Lee,et al. SELFIE: Refurbishing Unclean Samples for Robust Deep Learning , 2019, ICML.
[32] Xingrui Yu,et al. How does Disagreement Help Generalization against Label Corruption? , 2019, ICML.
[33] Geraint Rees,et al. Clinically applicable deep learning for diagnosis and referral in retinal disease , 2018, Nature Medicine.
[34] Masashi Sugiyama,et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels , 2018, NeurIPS.
[35] Gary Marcus,et al. Deep Learning: A Critical Appraisal , 2018, ArXiv.
[36] Li Fei-Fei,et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels , 2017, ICML.
[37] Geoffrey E. Hinton,et al. Dynamic Routing Between Capsules , 2017, NIPS.
[38] Hongyi Zhang,et al. mixup: Beyond Empirical Risk Minimization , 2017, ICLR.
[39] Yoshua Bengio,et al. A Closer Look at Memorization in Deep Networks , 2017, ICML.
[40] Shai Shalev-Shwartz,et al. Decoupling "when to update" from "how to update" , 2017, NIPS.
[41] Richard Nock,et al. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach , 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[42] Frank Hutter,et al. SGDR: Stochastic Gradient Descent with Warm Restarts , 2016, ICLR.
[43] Jian Sun,et al. Identity Mappings in Deep Residual Networks , 2016, ECCV.
[44] Xiaogang Wang,et al. Learning from massive noisy labeled data for image classification , 2015, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[45] Nagarajan Natarajan,et al. Learning with Noisy Labels , 2013, NIPS.
[46] Geoffrey E. Hinton,et al. ImageNet classification with deep convolutional neural networks , 2012, Commun. ACM.
[47] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[48] Chen Gong,et al. Robust early-learning: Hindering the memorization of noisy labels , 2021, ICLR.
[49] Pheng-Ann Heng,et al. Noise against noise: stochastic label noise helps combat inherent label noise , 2021, ICLR.
[50] Yang Liu,et al. The importance of understanding instance-level noisy labels , 2021, ArXiv.
[51] Nitish Srivastava,et al. Dropout: a simple way to prevent neural networks from overfitting , 2014, J. Mach. Learn. Res.
[52] Alex Krizhevsky,et al. Learning Multiple Layers of Features from Tiny Images , 2009.