Sonal Kumari, Vibhav Agarwal, Bharath Challa, Kranti Chalamalasetti, Sourav Ghosh, Harshavardhana, Barath Raj Kandur Raja
[1] Eduard H. Hovy, et al. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF, 2016, ACL.
[2] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[3] Weiran Xu, et al. Combining Word-Level and Character-Level Representations for Relation Classification of Informal Text, 2017, Rep4NLP@ACL.
[4] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[5] Chengzhi Zhang, et al. Automatic Keyword Extraction from Documents Using Conditional Random Fields, 2008.
[6] Hyeon Gyu Kim. Efficient Keyword Extraction from Social Big Data Based on Cohesion Scoring, 2020.
[7] Martín Abadi, et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, 2016, ArXiv.
[8] Franck Dernoncourt, et al. SemEval-2020 Task 10: Emphasis Selection for Written Text in Visual Media, 2020, SemEval.
[9] Juan-Zi Li, et al. Keyword Extraction Using Support Vector Machine, 2006, WAIM.
[10] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[11] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[12] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[13] Mitsuru Ishizuka, et al. Keyword extraction from a single document using word co-occurrence statistical information, 2004, Int. J. Artif. Intell. Tools.
[14] Franck Dernoncourt, et al. Learning Emphasis Selection for Written Text in Visual Media from Crowd-Sourced Label Distributions, 2019, ACL.
[15] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[16] Hao Tian, et al. ERNIE 2.0: A Continual Pre-training Framework for Language Understanding, 2019, AAAI.
[17] Rajiv Ratn Shah, et al. MIDAS at SemEval-2020 Task 10: Emphasis Selection using Label Distribution Learning and Contextual Embeddings, 2020, SemEval@COLING.
[18] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[19] Mark Last, et al. Graph-Based Keyword Extraction for Single-Document Summarization, 2008, COLING.
[20] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[21] Rishabh Agarwal, et al. IITK at SemEval-2020 Task 10: Transformers for Emphasis Selection, 2020, SemEval@COLING.
[22] Ann Bies, et al. The Penn Treebank: Annotating Predicate Argument Structure, 1994, HLT.
[23] Qun Liu, et al. TinyBERT: Distilling BERT for Natural Language Understanding, 2020, EMNLP.
[24] Liliana Heer. Neon, 2007.
[25] Yu Sun, et al. ERNIE at SemEval-2020 Task 10: Learning Word Emphasis Selection by Pre-trained Language Model, 2020, SemEval.
[26] Nick Cramer, et al. Automatic Keyword Extraction from Individual Documents, 2010.
[27] Yiming Yang, et al. MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices, 2020, ACL.
[28] Thomas Wolf, et al. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2019, ArXiv.