Meng Cao | Jiulong Shan | Ping Huang | Haoping Bai
[1] Xinlei Chen, et al. Exploring Simple Siamese Representation Learning, 2020, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Ilya Sutskever, et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[3] Shin Ishii, et al. Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, 2017, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[4] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[5] Timo Aila, et al. Temporal Ensembling for Semi-Supervised Learning, 2016, ICLR.
[6] Augustus Odena, et al. Semi-Supervised Learning with Generative Adversarial Networks, 2016, arXiv.
[7] Yi Yang, et al. Multi-Class Active Learning by Uncertainty Sampling with Diversity Maximization, 2015, International Journal of Computer Vision.
[8] Bogdan Raducanu, et al. Reducing Label Effort: Self-Supervised meets Active Learning, 2021, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW).
[9] Furu Wei, et al. BEiT: BERT Pre-Training of Image Transformers, 2021, arXiv.
[10] Stephen Lin, et al. Deep Metric Transfer for Label Propagation with Limited Annotated Data, 2018, 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW).
[11] Michal Valko, et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning, 2020, NeurIPS.
[12] Max Welling, et al. Semi-supervised Learning with Deep Generative Models, 2014, NIPS.
[13] Tapani Raiko, et al. Semi-supervised Learning with Ladder Networks, 2015, NIPS.
[14] Silvio Savarese, et al. Active Learning for Convolutional Neural Networks: A Core-Set Approach, 2017, ICLR.
[15] Chris H. Q. Ding, et al. Active Learning for Support Vector Machines with Maximum Model Change, 2014, ECML/PKDD.
[16] Ali Razavi, et al. Data-Efficient Image Recognition with Contrastive Predictive Coding, 2019, ICML.
[17] Yi Yang, et al. Multi-Class Active Learning by Uncertainty Sampling with Diversity Maximization, 2015.
[18] Kristen Grauman, et al. Active Image Segmentation Propagation, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[19] Sanja Fidler, et al. Towards Good Practices for Efficiently Annotating Large-Scale Image Classification Datasets, 2021, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Yuhong Guo, et al. Active Instance Sampling via Matrix Partition, 2010, NIPS.
[21] Bernhard Schölkopf, et al. Learning with Local and Global Consistency, 2003, NIPS.
[22] Georg Heigold, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2021, ICLR.
[23] Kaiming He, et al. Improved Baselines with Momentum Contrastive Learning, 2020, arXiv.
[24] Julien Mairal, et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020, NeurIPS.
[25] Harri Valpola, et al. Weight-averaged consistency targets improve semi-supervised deep learning results, 2017, arXiv.
[26] Yoshua Bengio, et al. Interpolation Consistency Training for Semi-Supervised Learning, 2019, IJCAI.
[27] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.
[28] Kaiming He, et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2019, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[29] Jeff Donahue, et al. Large Scale GAN Training for High Fidelity Natural Image Synthesis, 2018, ICLR.
[30] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[31] David Berthelot, et al. MixMatch: A Holistic Approach to Semi-Supervised Learning, 2019, NeurIPS.
[32] Andrew Gordon Wilson, et al. There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average, 2018, ICLR.
[33] Neoklis Polyzotis, et al. Data Management Challenges in Production Machine Learning, 2017, SIGMOD Conference.