Yaser Al-Onaizan | Smaranda Muresan | Jie Ma | Miguel Ballesteros | Faisal Ladhak | Rishita Anubhai | Kasturi Bhattacharjee
[1] Philip S. Yu, et al. BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis, 2019, NAACL.
[2] Yuchen Zhang, et al. CoNLL-2012 Shared Task: Modeling Multilingual Unrestricted Coreference in OntoNotes, 2012, EMNLP-CoNLL Shared Task.
[3] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[4] Quoc V. Le, et al. Semi-Supervised Sequence Modeling with Cross-View Training, 2018, EMNLP.
[5] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[6] Andrew McCallum, et al. Energy and Policy Considerations for Deep Learning in NLP, 2019, ACL.
[7] Suresh Manandhar, et al. SemEval-2014 Task 4: Aspect Based Sentiment Analysis, 2014, SemEval.
[8] Eduard H. Hovy, et al. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF, 2016, ACL.
[9] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[10] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[11] Philip S. Yu, et al. Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction, 2018, ACL.
[12] Roland Vollgraf, et al. Contextual String Embeddings for Sequence Labeling, 2018, COLING.
[13] Sebastian Stabinger, et al. Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification, 2020, LREC.
[14] Thomas Wolf, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2019, arXiv.
[15] Doug Downey, et al. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks, 2020, ACL.
[16] Guillaume Lample, et al. Neural Architectures for Named Entity Recognition, 2016, NAACL.
[17] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[18] Bernardo Magnini, et al. Exploring Named Entity Recognition As an Auxiliary Task for Slot Filling in Conversational Language Understanding, 2018, SCAI@EMNLP.
[19] Erik F. Tjong Kim Sang, et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition, 2003, CoNLL.
[20] Chin-Yew Lin, et al. Towards Improving Neural Named Entity Recognition with Gazetteers, 2019, ACL.
[21] Huaiyu Zhu. On Information and Sufficiency, 1997.
[22] Dacheng Tao, et al. A Survey on Multi-view Learning, 2013, arXiv.
[23] Luke S. Zettlemoyer, et al. Cloze-driven Pretraining of Self-attention Networks, 2019, EMNLP.
[24] Mitchell P. Marcus, et al. Text Chunking using Transformation-Based Learning, 1995, VLC@ACL.
[25] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[26] Avrim Blum, et al. Combining Labeled and Unlabeled Data with Co-Training, 1998, COLT.
[27] David Yarowsky, et al. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods, 1995, ACL.
[28] Christopher Potts, et al. Learning Word Vectors for Sentiment Analysis, 2011, ACL.
[29] Eugene Charniak, et al. Effective Self-Training for Parsing, 2006, NAACL.