Dynamics-Aware Loss for Learning with Label Noise

Label noise poses a serious threat to deep neural networks (DNNs). Employing robust loss functions that reconcile fitting ability with robustness is a simple yet effective strategy for handling this problem. However, the widely used static trade-off between these two factors contradicts the dynamics of DNNs learning with label noise, leading to inferior performance. Therefore, we propose a dynamics-aware loss (DAL) to solve this problem. Since DNNs tend to first learn beneficial patterns and only gradually overfit harmful label noise, DAL strengthens the fitting ability at the early stage and then gradually improves robustness. Moreover, at the later stage, we let DNNs put more emphasis on easy examples than hard ones and introduce a bootstrapping term, which further reduces the negative impact of label noise while combating underfitting. Both detailed theoretical analyses and extensive experimental results demonstrate the superiority of our method. Our source code is available at https://github.com/XiuchuanLi/DAL.
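
Since the abstract does not give the loss in closed form, the following is a minimal PyTorch sketch of the idea under stated assumptions, not the authors' exact formulation: a generalized cross-entropy (GCE)-style loss whose robustness parameter q is annealed from 0 (CE-like, strong fitting, emphasizes hard examples) toward 1 (MAE-like, robust, emphasizes easy examples), plus a hypothetical bootstrapping term switched on in the later stage. The schedule and the names q_max, boot_start, and lambda_boot are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def dynamics_aware_loss(logits, targets, epoch, total_epochs,
                            q_max=0.9, boot_start=0.5, lambda_boot=0.1):
        """Illustrative dynamics-aware loss (a sketch, not the paper's exact DAL).

        logits:  (N, C) raw model outputs
        targets: (N,)   integer class labels (possibly noisy)
        """
        probs = F.softmax(logits, dim=1)
        # Probability assigned to the (possibly noisy) given label.
        p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp_min(1e-7)

        # Anneal q from ~0 (CE-like: fits fast, noise-sensitive) toward q_max
        # (MAE-like: robust, puts relatively more weight on easy examples).
        t = epoch / max(total_epochs, 1)
        q = q_max * t
        if q < 1e-3:
            loss = -torch.log(p_y)          # limit of (1 - p^q)/q as q -> 0 is CE
        else:
            loss = (1.0 - p_y.pow(q)) / q   # GCE-style robust term

        # Hypothetical bootstrapping term at the later stage: trust the model's
        # own predictions to combat underfitting once noise is being resisted.
        if t >= boot_start:
            pseudo = probs.argmax(dim=1)
            p_pseudo = probs.gather(1, pseudo.unsqueeze(1)).squeeze(1).clamp_min(1e-7)
            loss = loss - lambda_boot * torch.log(p_pseudo)

        return loss.mean()

In a standard training loop one would compute loss = dynamics_aware_loss(model(x), y, epoch, num_epochs) and backpropagate as usual; the annealing of q is what encodes the dynamics-aware trade-off, replacing the static trade-off the abstract criticizes.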
