KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering
Minghui Qiu, Jun Huang, J. Wang, Ming Gao, Chengyu Wang, Qiuhui Shi, Hongbin Wang
[1] Minghui Qiu, et al. EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing, 2022, EMNLP.
[2] Zhiyuan Liu, et al. PTR: Prompt Tuning with Rules for Text Classification, 2021, AI Open.
[3] Haytham Assem, et al. Qasar: Self-Supervised Learning Framework for Extractive Question Answering, 2021, IEEE International Conference on Big Data (Big Data).
[4] Zhilin Yang, et al. P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks, 2021, arXiv.
[5] P. Natarajan, et al. FewshotQA: A Simple Framework for Few-shot Learning of Question Answering Tasks Using Pre-trained Text-to-text Models, 2021, EMNLP.
[6] Huajun Chen, et al. Drop Redundant, Shrink Irrelevant: Selective Knowledge Injection for Language Pretraining, 2021, IJCAI.
[7] Guanghui Qin, et al. Learning How to Ask: Querying LMs with Mixtures of Soft Prompts, 2021, NAACL.
[8] Zhifang Sui, et al. Incorporating Connections Beyond Knowledge Embeddings: A Plug-and-Play Module to Enhance Commonsense Reasoning in Machine Reading Comprehension, 2021, arXiv.
[9] Zhengxiao Du, et al. GPT Understands, Too, 2021, AI Open.
[10] Omer Levy, et al. Few-Shot Question Answering by Pretraining Span Selection, 2021, ACL.
[11] Danqi Chen, et al. Making Pre-trained Language Models Better Few-shot Learners, 2021, ACL.
[12] Timo Schick, et al. Exploiting Cloze-Questions for Few-Shot Text Classification and Natural Language Inference, 2020, EACL.
[13] Zhiyuan Liu, et al. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, 2019, TACL.
[14] Percy Liang, et al. Prefix-Tuning: Optimizing Continuous Prompts for Generation, 2021, ACL.
[15] Sameer Singh, et al. Eliciting Knowledge from Language Models Using Automatically Generated Prompts, 2020, EMNLP.
[16] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[17] Geoffrey E. Hinton, et al. A Simple Framework for Contrastive Learning of Visual Representations, 2020, ICML.
[18] Wenhan Xiong, et al. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model, 2019, ICLR.
[19] Omer Levy, et al. SpanBERT: Improving Pre-training by Representing and Predicting Spans, 2019, TACL.
[20] Danqi Chen, et al. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension, 2019, EMNLP.
[21] Ming-Wei Chang, et al. Natural Questions: A Benchmark for Question Answering Research, 2019, TACL.
[22] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[23] An Yang, et al. Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension, 2019, ACL.
[24] Chao Wang, et al. Explicit Utilization of General Knowledge in Machine Reading Comprehension, 2018, ACL.
[25] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[26] Yoshua Bengio, et al. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering, 2018, EMNLP.
[27] Percy Liang, et al. Know What You Don't Know: Unanswerable Questions for SQuAD, 2018, ACL.
[28] Pasquale Minervini, et al. Convolutional 2D Knowledge Graph Embeddings, 2017, AAAI.
[29] Ming Zhou, et al. Gated Self-Matching Networks for Reading Comprehension and Question Answering, 2017, ACL.
[30] Omer Levy, et al. Zero-Shot Relation Extraction via Reading Comprehension, 2017, CoNLL.
[31] Eunsol Choi, et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, 2017, ACL.
[32] Kyunghyun Cho, et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine, 2017, arXiv.
[33] Guokun Lai, et al. RACE: Large-scale ReAding Comprehension Dataset From Examinations, 2017, EMNLP.
[34] Philip Bachman, et al. NewsQA: A Machine Comprehension Dataset, 2016, Rep4NLP@ACL.
[35] Shuohang Wang, et al. Machine Comprehension Using Match-LSTM and Answer Pointer, 2016, ICLR.
[36] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[37] Navdeep Jaitly, et al. Pointer Networks, 2015, NIPS.
[38] Geoffrey E. Hinton, et al. Visualizing Data using t-SNE, 2008.