Tuning Pre-trained Model via Moment Probing
[1] Xinchao Wang, et al. Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning, 2022, NeurIPS.
[2] Jiangliu Wang, et al. AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition, 2022, NeurIPS.
[3] Serge J. Belongie, et al. Visual Prompt Tuning, 2022, ECCV.
[4] Trevor Darrell, et al. A ConvNet for the 2020s, 2022, CVPR.
[5] Ross B. Girshick, et al. Masked Autoencoders Are Scalable Vision Learners, 2022, CVPR.
[6] Peng Gao, et al. Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling, 2021, arXiv.
[7] Jenia Jitsev, et al. LAION-400M: Open Dataset of CLIP-Filtered 400 Million Image-Text Pairs, 2021, arXiv.
[8] Chen Change Loy, et al. Learning to Prompt for Vision-Language Models, 2021, International Journal of Computer Vision.
[9] Shenghua Gao, et al. AS-MLP: An Axial Shifted MLP Architecture for Vision, 2021, ICLR.
[10] Yoav Goldberg, et al. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models, 2021, ACL.
[11] Yelong Shen, et al. LoRA: Low-Rank Adaptation of Large Language Models, 2021, ICLR.
[12] Joe Davison, et al. Compacter: Efficient Low-Rank Hypercomplex Adapter Layers, 2021, NeurIPS.
[13] A. Dosovitskiy, et al. MLP-Mixer: An all-MLP Architecture for Vision, 2021, NeurIPS.
[14] Julien Mairal, et al. Emerging Properties in Self-Supervised Vision Transformers, 2021, ICCV.
[15] Saining Xie, et al. An Empirical Study of Training Self-Supervised Vision Transformers, 2021, ICCV.
[16] Ilya Sutskever, et al. Learning Transferable Visual Models From Natural Language Supervision, 2021, ICML.
[17] S. Gelly, et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, 2020, ICLR.
[18] D. Song, et al. The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization, 2021, ICCV.
[19] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[20] Ross B. Girshick, et al. Momentum Contrast for Unsupervised Visual Representation Learning, 2020, CVPR.
[21] Dawn Song, et al. Natural Adversarial Examples, 2021, CVPR.
[22] Thomas G. Dietterich, et al. Benchmarking Neural Network Robustness to Common Corruptions and Perturbations, 2019, ICLR.
[23] Mona Attariyan, et al. Parameter-Efficient Transfer Learning for NLP, 2019, ICML.
[25] Rogério Schmidt Feris, et al. SpotTune: Transfer Learning Through Adaptive Fine-Tuning, 2019, CVPR.
[26] Qilong Wang, et al. Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization, 2018, CVPR.
[27] Frank Hutter, et al. Decoupled Weight Decay Regularization, 2017, ICLR.
[28] Subhransu Maji, et al. Improved Bilinear Pooling with CNNs, 2017, BMVC.
[29] Chen Sun, et al. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era, 2017, ICCV.
[30] Kaiming He, et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, 2017, arXiv.
[31] Qilong Wang, et al. Is Second-Order Information Helpful for Large-Scale Visual Recognition?, 2017, ICCV.
[32] Jonathan Krause, et al. Fine-Grained Car Detection for Visual Census Estimation, 2017, AAAI.
[33] Shu Kong, et al. Low-Rank Bilinear Pooling for Fine-Grained Classification, 2017, CVPR.
[34] Geoffrey E. Hinton, et al. Layer Normalization, 2016, arXiv.
[35] Jian Sun, et al. Deep Residual Learning for Image Recognition, 2016, CVPR.
[36] Yang Gao, et al. Compact Bilinear Pooling, 2016, CVPR.
[37] Pietro Perona, et al. Building a bird recognition app and large scale dataset with citizen scientists: The fine print in fine-grained dataset collection, 2015, CVPR.
[38] Subhransu Maji, et al. Bilinear CNN Models for Fine-Grained Visual Recognition, 2015, ICCV.
[39] Pietro Perona, et al. The Caltech-UCSD Birds-200-2011 Dataset, 2011.
[40] Fei-Fei Li, et al. ImageNet: A large-scale hierarchical image database, 2009, CVPR.
[41] Andrew Zisserman, et al. Automated Flower Classification over a Large Number of Classes, 2008, Indian Conference on Computer Vision, Graphics & Image Processing.
[42] Radford M. Neal. Pattern Recognition and Machine Learning, 2007, Technometrics.
[43] Simon Haykin, et al. Modern signal processing, 1988.
[44] Qilong Wang, et al. DropCov: A Simple yet Effective Method for Improving Deep Architectures, 2022, NeurIPS.
[45] Percy Liang, et al. Prefix-Tuning: Optimizing Continuous Prompts for Generation, 2021, ACL.
[46] Stephen Lin, et al. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows, 2021, ICCV.
[47] Zhuang Liu, et al. A Robustly Optimized BERT Pre-training Approach with Post-training, 2021, CCL.
[48] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[49] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[50] Fei-Fei Li, et al. Novel Dataset for Fine-Grained Image Categorization: Stanford Dogs, 2012.
[51] Alex Krizhevsky, et al. Learning Multiple Layers of Features from Tiny Images, 2009.