Xinggang Wang | Qi Tian | Hongkai Xiong | Wenrui Dai | Jiemin Fang | Haohang Xu | Xiaopeng Zhang | Lingxi Xie
[1] Geoffrey E. Hinton et al. Big Self-Supervised Models are Strong Semi-Supervised Learners. NeurIPS, 2020.
[2] Ross B. Girshick et al. Mask R-CNN. arXiv:1703.06870, 2017.
[3] Li Fei-Fei et al. ImageNet: A Large-Scale Hierarchical Image Database. CVPR, 2009.
[4] Paolo Favaro et al. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. ECCV, 2016.
[5] Kaiming He et al. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE TPAMI, 2015.
[6] Yonglong Tian et al. Contrastive Representation Distillation. ICLR, 2019.
[7] Bing Li et al. Knowledge Distillation via Instance Relationship Graph. CVPR, 2019.
[8] Jian Sun et al. Deep Residual Learning for Image Recognition. CVPR, 2016.
[9] Iasonas Kokkinos et al. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE TPAMI, 2016.
[10] Hamed Pirsiavash et al. CompRess: Self-Supervised Learning by Compressing Representations. NeurIPS, 2020.
[11] Feiyue Huang et al. DisCo: Remedy Self-supervised Learning on Lightweight Models with Distilled Contrastive Learning. arXiv preprint, 2021.
[12] Nikos Komodakis et al. Unsupervised Representation Learning by Predicting Image Rotations. ICLR, 2018.
[13] Nikos Komodakis et al. Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer. ICLR, 2016.
[14] Julien Mairal et al. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. NeurIPS, 2020.
[15] Alexei A. Efros et al. Colorful Image Colorization. ECCV, 2016.
[16] Michal Valko et al. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. NeurIPS, 2020.
[17] Lei Zhang et al. SEED: Self-supervised Distillation for Visual Representation. arXiv preprint, 2021.
[18] Geoffrey E. Hinton et al. A Simple Framework for Contrastive Learning of Visual Representations. ICML, 2020.
[19] Kaiming He et al. Improved Baselines with Momentum Contrastive Learning. arXiv preprint, 2020.
[20] Geoffrey E. Hinton et al. Distilling the Knowledge in a Neural Network. arXiv preprint, 2015.
[21] Alexei A. Efros et al. Context Encoders: Feature Learning by Inpainting. CVPR, 2016.
[22] Junsong Yuan et al. Rethinking Soft Labels for Knowledge Distillation: A Bias-Variance Tradeoff Perspective. ICLR, 2021.
[23] Yoshua Bengio et al. FitNets: Hints for Thin Deep Nets. ICLR, 2014.
[24] Yan Lu et al. Relational Knowledge Distillation. CVPR, 2019.
[25] Kaiming He et al. Momentum Contrast for Unsupervised Visual Representation Learning. CVPR, 2020.