MEAL: Stable and Active Learning for Few-Shot Prompting