Learning with Noisy Labels over Imbalanced Subpopulations
Bingzhe Wu, Yu Zhao, Zongbo Han, Bing He, Jianhua Yao, Mingcai Chen
[1] Xiansheng Hua, et al. Identifying Hard Noise in Long-Tailed Sample Distribution, 2022, ECCV.
[2] Tongliang Liu, et al. Selective-Supervised Contrastive Learning with Noisy Labels, 2022, CVPR.
[3] Michael Zhang, et al. Correct-N-Contrast: A Contrastive Approach for Improving Robustness to Spurious Correlations, 2022, ICML.
[4] C. Schmid, et al. Learning with Neighbor Consistency for Noisy Labels, 2022, CVPR.
[5] James Y. Zou, et al. Improving Out-of-Distribution Robustness via Selective Augmentation, 2022, ICML.
[6] Zhi-Fan Wu, et al. NGC: A Unified Framework for Learning with Open-World Noisy Data, 2021, ICCV.
[7] Chelsea Finn, et al. Just Train Twice: Improving Group Robustness without Training Group Information, 2021, ICML.
[8] Paul Michel, et al. Examining and Combating Spurious Features under Distribution Shift, 2021, ICML.
[9] Pradeep Ravikumar, et al. DORO: Distributional and Outlier Robust Optimization, 2021, ICML.
[10] Johan A. K. Suykens, et al. Boosting Co-teaching with Compression Regularization for Label Noise, 2021, CVPRW.
[11] Se-Young Yun, et al. FINE Samples for Learning with Noisy Labels, 2021, NeurIPS.
[12] Dimitris N. Metaxas, et al. A Topological Filter for Learning with Label Noise, 2020, NeurIPS.
[13] N. O'Connor, et al. Multi-Objective Interpolation Training for Robustness to Label Noise, 2021, CVPR.
[14] Christopher Ré, et al. No Subclass Left Behind: Fine-Grained Robustness in Coarse-Grained Classification Problems, 2020, NeurIPS.
[15] Ivor W. Tsang, et al. A Survey of Label-noise Representation Learning: Past, Present and Future, 2020, arXiv.
[16] Suvrit Sra, et al. Coping with Label Shift via Distributionally Robust Optimisation, 2020, ICLR.
[17] R. Zemel, et al. Environment Inference for Invariant Learning, 2020, ICML.
[18] Jinwoo Shin, et al. Learning from Failure: Training Debiased Classifier from Biased Classifier, 2020, arXiv.
[19] Tatsunori B. Hashimoto, et al. Distributionally Robust Neural Networks, 2020, ICLR.
[20] Junnan Li, et al. DivideMix: Learning with Noisy Labels as Semi-supervised Learning, 2020, ICLR.
[21] David Berthelot, et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence, 2020, NeurIPS.
[22] Tatsunori B. Hashimoto, et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization, 2019, arXiv.
[23] Quoc V. Le, et al. RandAugment: Practical automated data augmentation with a reduced search space, 2020, CVPRW.
[24] David Lopez-Paz, et al. Invariant Risk Minimization, 2019, arXiv.
[25] Jae-Gil Lee, et al. SELFIE: Refurbishing Unclean Samples for Robust Deep Learning, 2019, ICML.
[26] Pengfei Chen, et al. Understanding and Utilizing Deep Neural Networks Trained with Noisy Labels, 2019, ICML.
[27] Xingrui Yu, et al. How does Disagreement Help Generalization against Label Corruption?, 2019, ICML.
[28] Mohan S. Kankanhalli, et al. Learning to Learn From Noisy Labeled Data, 2019, CVPR.
[29] Fei Wang, et al. Deep learning for healthcare: review, opportunities and challenges, 2018, Briefings in Bioinformatics.
[30] Percy Liang, et al. Fairness Without Demographics in Repeated Loss Minimization, 2018, ICML.
[31] James Bailey, et al. Dimensionality-Driven Learning with Noisy Labels, 2018, ICML.
[32] Masashi Sugiyama, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels, 2018, NeurIPS.
[33] Li Fei-Fei, et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels, 2017, ICML.
[34] Michael E. Houle, et al. Local Intrinsic Dimensionality I: An Extreme-Value-Theoretic Foundation for Similarity Applications, 2017, SISAP.
[35] Wei Li, et al. WebVision Database: Visual Learning and Understanding from Web Data, 2017, arXiv.
[36] Yoshua Bengio, et al. A Closer Look at Memorization in Deep Networks, 2017, ICML.
[37] Lukasz Kaiser, et al. Attention Is All You Need, 2017, NeurIPS.
[38] Masashi Sugiyama, et al. Revisiting Distributionally Robust Supervised Learning in Classification, 2016, arXiv:1611.02041.
[39] Xiaogang Wang, et al. Learning from massive noisy labeled data for image classification, 2015, CVPR.
[40] Dumitru Erhan, et al. Training Deep Neural Networks on Noisy Labels with Bootstrapping, 2014, ICLR.
[41] Xiaogang Wang, et al. Deep Learning Face Attributes in the Wild, 2015, ICCV.
[42] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks, 2012, Commun. ACM.
[43] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[44] Alexander Zien, et al. A continuation method for semi-supervised SVMs, 2006, ICML.
[45] D. Angluin, et al. Learning From Noisy Examples, 1988, Machine Learning.
[46] Jeff A. Bilmes, et al. Robust Curriculum Learning: from clean label detection to noisy label self-correction, 2021, ICLR.
[47] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images, 2009, Technical Report, University of Toronto.