ChiMera: Learning with noisy labels by contrasting mixed-up augmentations
Zixuan Liu | Xin Zhang | Junjun He | Dan Fu | Dimitris Samaras | Robby T. Tan | Xiao Wang | Sheng Wang
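The title describes learning with noisy labels by contrasting mixed-up augmentations, i.e., pairing mixup [63] with a contrastive objective such as InfoNCE [59]. As a rough, non-authoritative sketch of that general idea only (not the authors' actual ChiMera objective; the encoder `f`, the `augment` transform, and all hyperparameters below are illustrative assumptions):

```python
# Minimal illustrative sketch, NOT the paper's actual ChiMera method:
# mixup [63] applied to two augmented views, contrasted with an
# InfoNCE-style loss [59]. Names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def mixup(x1: torch.Tensor, x2: torch.Tensor, alpha: float = 1.0):
    """Convex combination of two batches of views (mixup, reference [63])."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x1 + (1.0 - lam) * x2, lam

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1):
    """InfoNCE [59]: row i of z_a and row i of z_b form the positive pair."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)           # positives sit on the diagonal

# Hypothetical usage with a generic encoder `f` and an `augment` transform:
#   view1, view2 = augment(x), augment(x)    # two augmentations per image
#   mixed, lam = mixup(view1, view2)         # the mixed-up augmentation
#   loss = info_nce(f(mixed), f(view1))      # contrast mixed view vs. plain view
```

Mixing two views before encoding yields softened positives, the common motivation in mixup-based contrastive methods such as MixCo [36] and i-Mix [77].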
[1] Xike Xie, et al. OT-Filter: An Optimal Transport Filter for Learning with Noisy Labels. CVPR, 2023.
[2] S. Shan, et al. DISC: Learning from Noisy Labels via Dynamic Instance-Specific Selection and Correction. CVPR, 2023.
[3] Junping Zhang, et al. Twin Contrastive Learning with Noisy Labels. CVPR, 2023.
[4] Yuxi Li, et al. Learning from Noisy Labels with Decoupled Meta Label Purifier. CVPR, 2023.
[5] W. Wang, et al. Learning from Long-Tailed Noisy Data with Sample Selection and Balanced Loss. arXiv preprint, 2022.
[6] Yuxi Li, et al. Learning from Noisy Labels with Coarse-to-Fine Sample Credibility Modeling. ECCV Workshops, 2022.
[7] Junchi Yan, et al. M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning. KDD, 2022.
[8] Yizhou Yu, et al. Centrality and Consistency: Two-Stage Clean Samples Identification for Learning with Instance-Dependent Noisy Labels. ECCV, 2022.
[9] J. Zhao, et al. ProMix: Combating Label Noise via Maximizing Clean Sample Utility. IJCAI, 2022.
[10] Heng Huang, et al. Noise Is Also Useful: Negative Correlation-Steered Latent Contrastive Learning. CVPR, 2022.
[11] Fumin Shen, et al. PNP: Robust Learning from Noisy Labels by Probabilistic Noise Prediction. CVPR, 2022.
[12] Masashi Sugiyama, et al. Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation. CVPR, 2022.
[13] Il-Chul Moon, et al. From Noisy Prediction to True Label: Noisy Prediction Calibration via Generative Model. ICML, 2022.
[14] Yanwei Fu, et al. Scalable Penalized Regression for Noise Detection in Learning with Noisy Labels. CVPR, 2022.
[15] Tongliang Liu, et al. Selective-Supervised Contrastive Learning with Noisy Labels. CVPR, 2022.
[16] Boyu Wang, et al. On Learning Contrastive Representations for Learning with Noisy Labels. CVPR, 2022.
[17] Zhihui Zhu, et al. Robust Training under Label Noise by Over-parameterization. ICML, 2022.
[18] C. Schmid, et al. Learning with Neighbor Consistency for Noisy Labels. CVPR, 2022.
[19] Baharan Mirzasoleiman, et al. Investigating Why Contrastive Learning Benefits Robustness Against Label Noise. ICML, 2022.
[20] Tongliang Liu, et al. Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations. ICLR, 2021.
[21] Tongliang Liu, et al. Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data. ICCV, 2021.
[22] Caiming Xiong, et al. Learning from Noisy Data with Robust Representation Learning. ICCV, 2021.
[23] Zhi-Fan Wu, et al. NGC: A Unified Framework for Learning with Open-World Noisy Data. ICCV, 2021.
[24] Erkun Yang, et al. Understanding and Improving Early Stopping for Learning with Noisy Labels. NeurIPS, 2021.
[25] Mingming Gong, et al. Sample Selection with Uncertainty of Losses for Learning with Noisy Labels. ICLR, 2021.
[26] Tongliang Liu, et al. Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network. ICML, 2021.
[27] Hossein Azizpour, et al. Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels. NeurIPS, 2021.
[28] I. Reid, et al. LongReMix: Robust Learning with High Confidence Samples in a Noisy Label Environment. Pattern Recognition, 2021.
[29] Yi Ding, et al. Augmentation Strategies for Learning with Noisy Labels. CVPR, 2021.
[30] Roland S. Zimmermann, et al. Contrastive Learning Inverts the Data Generating Process. ICML, 2021.
[31] Yang Liu, et al. A Second-Order Approach to Learning with Instance-Dependent Label Noise. CVPR, 2021.
[32] Xinlei Chen, et al. Exploring Simple Siamese Representation Learning. CVPR, 2021.
[33] Quanming Yao, et al. Decoupling Representation and Classifier for Noisy Label Learning. arXiv preprint, 2020.
[34] Quoc V. Le, et al. Towards Domain-Agnostic Contrastive Learning. ICML, 2020.
[35] Zhangyang Wang, et al. Graph Contrastive Learning with Augmentations. NeurIPS, 2020.
[36] Gihun Lee, et al. MixCo: Mix-up Contrastive Learning for Visual Representation. arXiv preprint, 2020.
[37] James Y. Zou, et al. How Does Mixup Help With Robustness and Generalization? ICLR, 2020.
[38] Yang Liu, et al. Learning with Instance-Dependent Label Noise: A Sample Sieve Approach. ICLR, 2020.
[39] Avi Mendelson, et al. Contrast to Divide: Self-Supervised Pre-Training for Learning with Noisy Labels. WACV, 2022.
[40] Junnan Li, et al. MoPro: Webly Supervised Learning with Momentum Prototypes. ICLR, 2020.
[41] Sheng Liu, et al. Early-Learning Regularization Prevents Memorization of Noisy Labels. NeurIPS, 2020.
[42] Geoffrey E. Hinton, et al. Big Self-Supervised Models are Strong Semi-Supervised Learners. NeurIPS, 2020.
[43] Gang Niu, et al. Parts-dependent Label Noise: Towards Instance-dependent Label Noise. arXiv preprint, 2020.
[44] Pierre H. Richemond, et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. NeurIPS, 2020.
[45] Phillip Isola, et al. Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere. ICML, 2020.
[46] David Berthelot, et al. ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring. ICLR, 2020.
[47] Ce Liu, et al. Supervised Contrastive Learning. NeurIPS, 2020.
[48] Eric P. Xing, et al. Un-mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning. AAAI, 2020.
[49] Kaiming He, et al. Improved Baselines with Momentum Contrastive Learning. arXiv preprint, 2020.
[50] Junnan Li, et al. DivideMix: Learning with Noisy Labels as Semi-supervised Learning. ICLR, 2020.
[51] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations. ICML, 2020.
[52] David Berthelot, et al. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. NeurIPS, 2020.
[53] Ross B. Girshick, et al. Momentum Contrast for Unsupervised Visual Representation Learning. CVPR, 2020.
[54] Gang Niu, et al. Confidence Scores Make Instance-dependent Label-noise Learning Possible. ICML, 2019.
[55] Xiaogang Wang, et al. Deep Self-Learning From Noisy Labels. ICCV, 2019.
[56] Jae-Gil Lee, et al. SELFIE: Refurbishing Unclean Samples for Robust Deep Learning. ICML, 2019.
[57] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning. NeurIPS, 2019.
[58] Kun Yi, et al. Probabilistic End-To-End Noise Correction for Learning With Noisy Labels. CVPR, 2019.
[59] Oriol Vinyals, et al. Representation Learning with Contrastive Predictive Coding. arXiv preprint, 2018.
[60] Masashi Sugiyama, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels. NeurIPS, 2018.
[61] Kiyoharu Aizawa, et al. Joint Optimization Framework for Learning with Noisy Labels. CVPR, 2018.
[62] Li Fei-Fei, et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels. ICML, 2018.
[63] Hongyi Zhang, et al. mixup: Beyond Empirical Risk Minimization. ICLR, 2018.
[64] Wei Li, et al. WebVision Database: Visual Learning and Understanding from Web Data. arXiv preprint, 2017.
[65] Shai Shalev-Shwartz, et al. Decoupling "when to update" from "how to update". NeurIPS, 2017.
[66] Samy Bengio, et al. Understanding deep learning requires rethinking generalization. ICLR, 2017.
[67] Jacob Goldberger, et al. Training deep neural-networks using a noise adaptation layer. ICLR, 2017.
[68] Richard Nock, et al. Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach. CVPR, 2017.
[69] Jian Sun, et al. Identity Mappings in Deep Residual Networks. ECCV, 2016.
[70] Sergey Ioffe, et al. Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning. AAAI, 2017.
[71] Xiaogang Wang, et al. Learning from massive noisy labeled data for image classification. CVPR, 2015.
[72] Andrew Zisserman, et al. Very Deep Convolutional Networks for Large-Scale Image Recognition. ICLR, 2015.
[73] Geoffrey E. Hinton, et al. ImageNet classification with deep convolutional neural networks. Communications of the ACM, 2012.
[74] Rui Chen, et al. An Information Fusion Approach to Learning with Instance-Dependent Label Noise. ICLR, 2022.
[75] Tongliang Liu, et al. Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization. NeurIPS, 2022.
[76] L. Aitchison. InfoNCE is a variational autoencoder. arXiv preprint, 2021.
[77] Jinwoo Shin, et al. i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning. ICLR, 2021.
[78] Alex Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009.
[79] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE. Journal of Machine Learning Research, 2008.