Morteza Ziyadi | Yuting Sun | Abhishek Goswami | Jade Huang | Weizhu Chen
[1] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[2] Jiwei Li, et al. A Unified MRC Framework for Named Entity Recognition, 2019, ACL.
[3] Yaojie Lu, et al. A Rigourous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?, 2020, EMNLP.
[4] Iryna Gurevych, et al. Low Resource Sequence Tagging with Weak Labels, 2020, AAAI.
[5] Ani Nenkova, et al. Interpretability Analysis for Named Entity Recognition to Understand System Predictions and How They Can Improve, 2020, arXiv.
[6] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[7] Philip Yu, et al. MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing, 2020, COLING.
[8] Huajun Chen, et al. Improving Few-shot Text Classification via Pretrained Language Representations, 2019, arXiv.
[9] Bing Li, et al. Fine-Grained Named Entity Typing over Distantly Supervised Data Based on Refined Representations, 2020, AAAI.
[10] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[11] Pierre Lison, et al. Named Entity Recognition without Labelled Data: A Weak Supervision Approach, 2020, ACL.
[12] Fei Wang, et al. Coreference Resolution as Query-based Span Prediction, 2019, arXiv.
[13] Xin Li, et al. A Chinese Corpus for Fine-grained Entity Typing, 2020, LREC.
[14] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[15] Varvara Logacheva, et al. Few-shot classification in named entity recognition task, 2018, SAC.
[16] Andrew McCallum, et al. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data, 2001, ICML.
[17] Chao Zhang, et al. Partially-Typed NER Datasets Integration: Connecting Practice to Theory, 2020, arXiv.
[18] Richard S. Zemel, et al. Prototypical Networks for Few-shot Learning, 2017, NIPS.
[19] Ani Nenkova, et al. Entity-Switched Datasets: An Approach to Auditing the In-Domain Robustness of Named Entity Recognition Models, 2020, arXiv.
[20] Karl Stratos, et al. Label-Agnostic Sequence Labeling by Copying Nearest Neighbors, 2019, ACL.
[21] Erik F. Tjong Kim Sang, et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, 2003, CoNLL.
[22] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[23] Percy Liang, et al. Know What You Don't Know: Unanswerable Questions for SQuAD, 2018, ACL.
[24] Mark Chen, et al. Language Models are Few-Shot Learners, 2020, NeurIPS.
[25] Wei Qiu, et al. Boundary Enhanced Neural Span Classification for Nested Named Entity Recognition, 2020, AAAI.