Multi-Label Loss Correction against Missing and Corrupted Labels

Missing and corrupted labels can significantly degrade the learning process and, consequently, classifier performance. Multi-label learning, where each instance is tagged with a variable number of labels, is particularly affected. Although missing labels (false negatives) are a well-studied problem in multi-label learning, handling both false negatives (missing labels) and false positives (corrupted labels) simultaneously in multi-label datasets is considerably more challenging. In this paper, we propose Multi-Label Loss with Self Correction (MLLSC), a loss that is robust against coincident missing and corrupted labels. MLLSC computes the loss based on the true-positive (true-negative) or false-positive (false-negative) labels and the expertise of the deep neural network. To distinguish false-positive (false-negative) from true-positive (true-negative) labels, we use the output probability of the deep neural network during the learning process. As MLLSC can be combined with different types of multi-label loss functions, we also address the label imbalance problem of multi-label datasets. Empirical evaluation on real-world vision datasets, i.e., MS-COCO and MIR-FLICKR, shows that under medium (0.3) and high (0.6) corrupted- and missing-label probabilities, our method outperforms the state-of-the-art methods by, on average, 23.97 and 9.31 mean average precision (mAP) points, respectively.
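The abstract describes the correction mechanism only at a high level. The following is a minimal sketch of that idea in PyTorch, not the paper's exact MLLSC formulation: the function name `self_correcting_bce` and the thresholds `tau_pos`/`tau_neg` are illustrative assumptions. Whenever the network's output probability strongly contradicts an observed annotation, the annotation is treated as a missing (false-negative) or corrupted (false-positive) label and flipped before a standard binary cross-entropy is computed.

```python
# Illustrative sketch of probability-based label self-correction for
# multi-label learning. Thresholds and the correction rule are
# assumptions for demonstration, not the paper's MLLSC definition.
import torch
import torch.nn.functional as F

def self_correcting_bce(logits, labels, tau_pos=0.9, tau_neg=0.1):
    """Per-label BCE with suspected annotation errors flipped.

    logits: (batch, num_labels) raw network outputs
    labels: (batch, num_labels) observed 0/1 annotations (possibly noisy)
    tau_pos / tau_neg: hypothetical confidence thresholds beyond which
        the network's own prediction overrides the annotation.
    """
    probs = torch.sigmoid(logits)
    # Annotated 0 but predicted with very high confidence:
    # treat as a missing label (false negative) and flip to 1.
    flip_to_pos = (labels == 0) & (probs > tau_pos)
    # Annotated 1 but predicted with very low confidence:
    # treat as a corrupted label (false positive) and flip to 0.
    flip_to_neg = (labels == 1) & (probs < tau_neg)
    corrected = labels.clone().float()
    corrected[flip_to_pos] = 1.0
    corrected[flip_to_neg] = 0.0
    # Standard BCE on the corrected targets; the correction decision
    # itself is discrete, so no gradient flows through it.
    return F.binary_cross_entropy_with_logits(logits, corrected)
```

In practice such a correction is typically enabled only after a warm-up phase, once the network's predictions are reliable enough to overrule the annotations. The abstract also notes that MLLSC can be combined with imbalance-aware multi-label losses, which this sketch omits.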
