L_DMI: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
Yilun Xu | Peng Cao | Yuqing Kong | Yizhou Wang
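For readers who want the core idea behind the title: the paper's loss is the negative log of the absolute determinant of the empirical joint-distribution matrix between the classifier's predicted class probabilities and the observed (possibly noisy) labels, L_DMI = -log |det(U)| with U = O^T L / N, which the paper shows is invariant under class-conditional label noise. Below is a minimal NumPy sketch under that reading; the function name dmi_loss, the epsilon guard, and the toy batch are illustrative assumptions, not the authors' released implementation.

    import numpy as np

    def dmi_loss(probs, labels, num_classes):
        """DMI-style loss: -log |det(U)|, where U is the empirical joint
        distribution matrix between predictions and observed labels.

        probs:  (N, C) array, rows are predicted class probabilities
        labels: (N,) integer array of observed (possibly noisy) labels
        """
        # One-hot encode the integer labels: shape (N, C).
        one_hot = np.eye(num_classes)[labels]
        # Empirical joint matrix U = probs^T @ one_hot / N, shape (C, C).
        joint = probs.T @ one_hot / probs.shape[0]
        # Loss is -log |det(U)|; a small epsilon guards against det == 0
        # (an assumed numerical safeguard, not from the paper).
        return -np.log(np.abs(np.linalg.det(joint)) + 1e-8)

    # Toy usage on a random batch of 32 examples and 3 classes.
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(32, 3))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    labels = rng.integers(0, 3, size=32)
    print(dmi_loss(probs, labels, num_classes=3))

Because any column-stochastic noise-transition matrix multiplies det(U) by a constant factor, minimizing this loss on noisy labels is, up to an additive constant, the same objective as on clean labels; that invariance is the paper's central argument.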