Wasserstein Adversarial Regularization (WAR) on label noise
Nicolas Courty | Devis Tuia | Rémi Flamary | Bharath Bhushan Damodaran | Sylvain Lobry | Kilian Fatras