A Survey on Knowledge-Enhanced Pre-trained Language Models
Xiangyu Liu | Yong Chen | Yifei Li | Yanlei Shang | Dell Zhang | Chaoqi Zhen
[1] H. Xu,et al. Med-BERT: A Pretraining Framework for Medical Records Named Entity Recognition , 2022, IEEE Transactions on Industrial Informatics.
[2] Li Dong,et al. Visually-Augmented Language Modeling , 2022, ICLR.
[3] Julian McAuley,et al. Instilling Type Knowledge in Language Models via Multi-Task QA , 2022, NAACL-HLT.
[4] Dawei Yin,et al. Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking , 2022, SIGIR.
[5] Sung Ju Hwang,et al. KALA: Knowledge-Augmented Language Model Adaptation , 2022, NAACL.
[6] Mona T. Diab,et al. A Review on Language Models as Knowledge Bases , 2022, ArXiv.
[7] Md. Faisal Mahbub Chowdhury,et al. KGI: An Integrated Framework for Knowledge Intensive Language Tasks , 2022, EMNLP.
[8] Ryan J. Lowe,et al. Training language models to follow instructions with human feedback , 2022, NeurIPS.
[9] Li Dong,et al. A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models , 2022, ArXiv.
[10] David Bau,et al. Locating and Editing Factual Associations in GPT , 2022, NeurIPS.
[11] Huajun Chen,et al. Ontology-enhanced Prompt-tuning for Few-shot Learning , 2022, WWW.
[12] Phil Blunsom,et al. Relational Memory-Augmented Language Models , 2022, TACL.
[13] Mona T. Diab,et al. Knowledge-Augmented Language Models for Cause-Effect Relation Classification , 2021, CSRR.
[14] Yueqing Sun,et al. JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering , 2021, NAACL.
[15] Xiaofeng He,et al. DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding , 2021, AAAI.
[16] Parminder Bhatia,et al. Knowledge Enhanced Pretrained Language Models: A Comprehensive Survey , 2021, ArXiv.
[17] Zaiqiao Meng,et al. Rewire-then-Probe: A Contrastive Recipe for Probing Biomedical Knowledge of Pre-trained Language Models , 2021, ACL.
[18] Shuohang Wang,et al. Dict-BERT: Enhancing Language Model Pre-training with Dictionary , 2021, Findings.
[19] Jian Yang,et al. A Survey of Knowledge Enhanced Pre-trained Models , 2021, ArXiv.
[20] Weizhu Chen,et al. XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge , 2021, AAAI.
[21] Xingyi Cheng,et al. K-AID: Enhancing Pre-trained Language Models with Domain Knowledge for Question Answering , 2021, CIKM.
[22] Ngoc Thang Vu,et al. Does External Knowledge Help Explainable Natural Language Inference? Automatic Evaluation vs. Human Ratings , 2021, BlackboxNLP.
[23] Frank Keller,et al. Memory and Knowledge Augmented Language Models for Inferring Salience in Long-Form Stories , 2021, EMNLP.
[24] Zhan Shi,et al. Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference , 2021, ArXiv.
[25] Alfio Gliozzo,et al. Robust Retrieval Augmented Generation for Zero-shot Slot Filling , 2021, EMNLP.
[26] Chengyu Wang,et al. SMedBERT: A Knowledge-Enhanced Pre-trained Language Model with Structured Semantics for Medical Text Mining , 2021, ACL.
[27] Maosong Sun,et al. Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification , 2021, ACL.
[28] Thomas Hofmann,et al. How to Query Language Models? , 2021, ArXiv.
[29] Noura Al Moubayed,et al. ExBERT: An External Knowledge Enhanced BERT for Natural Language Inference , 2021, ICANN.
[30] Hiroaki Hayashi,et al. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing , 2021, ACM Comput. Surv.
[31] Hao Tian,et al. ERNIE 3.0: Large-scale Knowledge Enhanced Pre-training for Language Understanding and Generation , 2021, ArXiv.
[32] Yue Zhang,et al. Can Generative Pre-trained Language Models Serve As Knowledge Bases for Closed-book QA? , 2021, ACL.
[33] Zhiyuan Liu,et al. Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents , 2021, AI Open.
[34] Fei Huang,et al. Improving Biomedical Pretrained Language Models with Knowledge , 2021, BIONLP.
[35] Daniel E. Ho,et al. When does pretraining help?: assessing self-supervised learning for law and the CaseHOLD dataset of 53,000+ legal holdings , 2021, ICAIL.
[36] Nicola De Cao,et al. Editing Factual Knowledge in Language Models , 2021, EMNLP.
[37] Chuanqi Tan,et al. KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction , 2021, WWW.
[38] Song Xu,et al. K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce , 2021, EMNLP.
[39] J. Leskovec,et al. QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering , 2021, NAACL.
[40] Danai Koutra,et al. Relational World Knowledge Representation in Contextual Language Models: A Review , 2021, EMNLP.
[41] Xiang Ren,et al. Refining Language Models with Compositional Explanations , 2021, NeurIPS.
[42] Yunhai Tong,et al. Syntax-BERT: Improving Pre-trained Transformers with Syntax Trees , 2021, EACL.
[43] Catherine Havasi,et al. Combining pre-trained language models and structured knowledge , 2021, ArXiv.
[44] Kinjal Basu,et al. Knowledge-driven Natural Language Understanding of English Text and its Applications , 2021, AAAI.
[45] Bo Chen,et al. Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation , 2021, AAAI.
[46] Yangqiu Song,et al. CoCoLM: Complex Commonsense Enhanced Language Model with Discourse Relations , 2020, Findings.
[47] Zhiyuan Liu,et al. ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning , 2020, ACL.
[48] Xin Jiang,et al. KgPLM: Knowledge-guided Language Model Pre-training via Generative and Discriminative Learning , 2020, ArXiv.
[49] Dilek Z. Hakkani-Tür,et al. Incorporating Commonsense Knowledge Graph in Pretrained Models for Social Commonsense Tasks , 2020, DEELIO.
[50] Dawn Song,et al. Language Models are Open Knowledge Graphs , 2020, ArXiv.
[51] H. Kaka,et al. UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual Embeddings Using the Unified Medical Language System Metathesaurus , 2020, NAACL.
[52] Wei Wu,et al. Knowledge-Grounded Dialogue Generation with Pre-trained Language Models , 2020, EMNLP.
[53] Mohit Bansal,et al. Vokenization: Improving Language Understanding via Contextualized, Visually-Grounded Supervision , 2020, EMNLP.
[54] Yejin Choi,et al. COMET-ATOMIC 2020: On Symbolic and Neural Commonsense Knowledge Graphs , 2020, AAAI.
[55] Donghan Yu,et al. JAKET: Joint Pre-training of Knowledge Graph and Language Understanding , 2020, AAAI.
[56] Hiroyuki Shindo,et al. LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention , 2020, EMNLP.
[57] Zheng Zhang,et al. CoLAKE: Contextualized Language and Knowledge Embedding , 2020, COLING.
[58] Philip S. Yu,et al. KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning , 2020, AAAI.
[59] Furu Wei,et al. Language Generation with Multi-hop Reasoning on Commonsense Knowledge Graph , 2020, EMNLP.
[60] Ming Zhou,et al. GraphCodeBERT: Pre-training Code Representations with Data Flow , 2020, ICLR.
[61] Yue Wang,et al. Multimodal Joint Attribute Prediction and Value Extraction for E-commerce Product , 2020, EMNLP.
[62] Fuzhen Zhuang,et al. E-BERT: A Phrase and Product Knowledge Enhanced Language Model for E-commerce , 2020, ArXiv.
[64] Nicola De Cao,et al. KILT: a Benchmark for Knowledge Intensive Language Tasks , 2020, NAACL.
[65] Kentaro Inui,et al. Language Models as Knowledge Bases: On Entity Representations, Storage Capacity, and Paraphrased Queries , 2020, EACL.
[66] William W. Cohen,et al. Facts as Experts: Adaptable and Interpretable Neural Memory over Symbolic Knowledge , 2020, ArXiv.
[67] Paul N. Bennett,et al. Knowledge-Aware Language Model Pretraining , 2020, ArXiv.
[68] Iryna Gurevych,et al. Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers , 2020, DEELIO.
[69] Fabio Petroni,et al. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks , 2020, NeurIPS.
[70] Xinyan Xiao,et al. SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis , 2020, ACL.
[71] Tao Shen,et al. Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning , 2020, EMNLP.
[72] Eunsol Choi,et al. Entities as Experts: Sparse Memory Access with Entity Supervision , 2020, EMNLP.
[73] Xipeng Qiu,et al. Pre-trained models for natural language processing: A survey , 2020, Science China Technological Sciences.
[74] Bill Yuchen Lin,et al. CommonGen: A Constrained Text Generation Challenge for Generative Commonsense Reasoning , 2020, Findings.
[75] Ming-Wei Chang,et al. REALM: Retrieval-Augmented Language Model Pre-Training , 2020, ICML.
[76] Colin Raffel,et al. How Much Knowledge Can You Pack into the Parameters of a Language Model? , 2020, EMNLP.
[77] Xuanjing Huang,et al. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters , 2020, Findings.
[78] Minlie Huang,et al. A Knowledge-Enhanced Pretraining Model for Commonsense Story Generation , 2020, TACL.
[79] Wenhan Xiong,et al. Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language Model , 2019, ICLR.
[80] Frank F. Xu,et al. How Can We Know What Language Models Know? , 2019, Transactions of the Association for Computational Linguistics.
[81] Zhiyuan Liu,et al. KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation , 2019, Transactions of the Association for Computational Linguistics.
[82] Hinrich Schütze,et al. E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT , 2019, Findings.
[83] Jiyeon Han,et al. Why Do Masked Neural Language Models Still Need Common Sense Knowledge? , 2019, ArXiv.
[84] Minlie Huang,et al. SentiLARE: Linguistic Knowledge Enhanced Language Representation for Sentiment Analysis , 2019, EMNLP.
[85] Omer Levy,et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension , 2019, ACL.
[86] Richard Socher,et al. Evaluating the Factual Consistency of Abstractive Text Summarization , 2019, EMNLP.
[87] Danqi Chen,et al. MRQA 2019 Shared Task: Evaluating Generalization in Reading Comprehension , 2019, EMNLP.
[88] Kevin Gimpel,et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations , 2019, ICLR.
[89] Chunyan Miao,et al. Knowledge-Enriched Transformer for Emotion Detection in Textual Conversations , 2019, EMNLP.
[90] Kuntal Kumar Pal,et al. How Additional Knowledge can Improve Natural Language Commonsense Question Answering , 2019, ArXiv.
[91] Zhe Zhao,et al. K-BERT: Enabling Language Representation with Knowledge Graph , 2019, AAAI.
[92] Michael W. Mahoney,et al. Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT , 2019, AAAI.
[93] Noah A. Smith,et al. Knowledge Enhanced Contextual Word Representations , 2019, EMNLP.
[94] Chengsheng Mao,et al. KG-BERT: BERT for Knowledge Graph Completion , 2019, ArXiv.
[95] Anna Korhonen,et al. Specializing Unsupervised Pretraining Models for Word-Level Semantic Similarity , 2019, COLING.
[96] Hai Zhao,et al. Semantics-aware BERT for Language Understanding , 2019, AAAI.
[97] Xiang Ren,et al. KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning , 2019, EMNLP.
[98] Sebastian Riedel,et al. Language Models as Knowledge Bases? , 2019, EMNLP.
[99] Yejin Choi,et al. Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning , 2019, EMNLP.
[100] Abhinav Sethy,et al. Knowledge Enhanced Attention for Robust Natural Language Inference , 2019, ArXiv.
[101] Kenton Lee,et al. Giving BERT a Calculator: Finding Operations and Arguments with Reading Comprehension , 2019, EMNLP.
[102] Zhen-Hua Ling,et al. Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models , 2019, ArXiv.
[103] Yoav Shoham,et al. SenseBERT: Driving Some Sense into BERT , 2019, ACL.
[104] Ming-Wei Chang,et al. Natural Questions: A Benchmark for Question Answering Research , 2019, TACL.
[105] Hao Tian,et al. ERNIE 2.0: A Continual Pre-training Framework for Language Understanding , 2019, AAAI.
[106] Omer Levy,et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach , 2019, ArXiv.
[107] An Yang,et al. Enhancing Pre-Trained Language Representations with Rich Knowledge for Machine Reading Comprehension , 2019, ACL.
[108] Nelson F. Liu,et al. Barack’s Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling , 2019, ACL.
[109] Yejin Choi,et al. COMET: Commonsense Transformers for Automatic Knowledge Graph Construction , 2019, ACL.
[111] Maosong Sun,et al. ERNIE: Enhanced Language Representation with Informative Entities , 2019, ACL.
[112] Xin Liu,et al. ASER: A Large-scale Eventuality Knowledge Graph , 2019, WWW.
[113] Yu Sun,et al. ERNIE: Enhanced Representation through Knowledge Integration , 2019, ArXiv.
[114] Iz Beltagy,et al. SciBERT: A Pretrained Language Model for Scientific Text , 2019, EMNLP.
[115] Jaewoo Kang,et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining , 2019, Bioinform.
[116] Zhiyuan Liu,et al. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation , 2018, EMNLP.
[117] J. Weston,et al. Wizard of Wikipedia: Knowledge-Powered Conversational agents , 2018, ICLR.
[118] Alan W. Black,et al. A Dataset for Document Grounded Conversations , 2018, EMNLP.
[119] Minlie Huang,et al. Story Ending Generation with Incremental Encoding and Commonsense Knowledge , 2018, AAAI.
[120] Peter Clark,et al. Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering , 2018, EMNLP.
[121] Omer Levy,et al. Ultra-Fine Entity Typing , 2018, ACL.
[122] Christophe Gravier,et al. T-REx: A Large Scale Alignment of Natural Language with Knowledge Base Triples , 2018, LREC.
[123] Samuel R. Bowman,et al. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding , 2018, BlackboxNLP@EMNLP.
[124] Luke S. Zettlemoyer,et al. Deep Contextualized Word Representations , 2018, NAACL.
[125] Tao Zhang,et al. Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges , 2018, IEEE Signal Processing Magazine.
[126] Danqi Chen,et al. Position-aware Attention and Supervised Data Improve Slot Filling , 2017, EMNLP.
[127] Leon Derczynski,et al. Results of the WNUT2017 Shared Task on Novel and Emerging Entity Recognition , 2017, NUT@EMNLP.
[128] Richard Socher,et al. Learned in Translation: Contextualized Word Vectors , 2017, NIPS.
[129] Pasquale Minervini,et al. Convolutional 2D Knowledge Graph Embeddings , 2017, AAAI.
[130] Bin Liang,et al. CN-DBpedia: A Never-Ending Chinese Knowledge Extraction System , 2017, IEA/AIE.
[131] Lukasz Kaiser,et al. Attention is All you Need , 2017, NIPS.
[132] Eunsol Choi,et al. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension , 2017, ACL.
[133] Kyunghyun Cho,et al. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine , 2017, ArXiv.
[134] Catherine Havasi,et al. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge , 2016, AAAI.
[135] Jianfeng Gao,et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset , 2016, CoCo@NIPS.
[136] Xiang Li,et al. Commonsense Knowledge Base Completion , 2016, ACL.
[137] Jian Zhang,et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text , 2016, EMNLP.
[138] Zhiyong Lu,et al. BioCreative V CDR task corpus: a resource for chemical disease relation extraction , 2016, Database J. Biol. Databases Curation.
[139] Nathanael Chambers,et al. A Corpus and Cloze Evaluation for Deeper Understanding of Commonsense Stories , 2016, NAACL.
[140] Özlem Uzuner,et al. Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1 , 2015, J. Biomed. Informatics.
[141] Xiang Zhang,et al. Character-level Convolutional Networks for Text Classification , 2015, NIPS.
[142] Daniel S. Weld,et al. Design Challenges for Entity Linking , 2015, TACL.
[143] Geoffrey E. Hinton,et al. Distilling the Knowledge in a Neural Network , 2015, ArXiv.
[144] Jeffrey Pennington,et al. GloVe: Global Vectors for Word Representation , 2014, EMNLP.
[145] Markus Krötzsch,et al. Wikidata: A Free Collaborative Knowledgebase , 2014, Commun. ACM.
[146] Suresh Manandhar,et al. SemEval-2014 Task 4: Aspect Based Sentiment Analysis , 2014, SemEval.
[147] Núria Queralt-Rosinach,et al. Extraction of relations between genes and diseases from text and large-scale data analysis: implications for translational research , 2014, BMC Bioinformatics.
[148] Zhiyong Lu,et al. NCBI disease corpus: A resource for disease name recognition and concept normalization , 2014, J. Biomed. Informatics.
[149] Jeffrey Dean,et al. Distributed Representations of Words and Phrases and their Compositionality , 2013, NIPS.
[150] Paloma Martínez,et al. The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions , 2013, J. Biomed. Informatics.
[151] Christopher Potts,et al. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank , 2013, EMNLP.
[152] Andrew Chou,et al. Semantic Parsing on Freebase from Question-Answer Pairs , 2013, EMNLP.
[153] Jeffrey Dean,et al. Efficient Estimation of Word Representations in Vector Space , 2013, ICLR.
[154] Iryna Gurevych,et al. Wiktionary: a new rival for expert-built lexicons? Exploring the possibilities of collaborative lexicography , 2012.
[155] Laura Inés Furlong,et al. The EU-ADR corpus: Annotated drugs, diseases, targets, and their relationships , 2012, J. Biomed. Informatics.
[156] Shuying Shen,et al. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text , 2011, J. Am. Medical Informatics Assoc.
[157] Tom Michael Mitchell,et al. Toward an Architecture for Never-Ending Language Learning , 2010, AAAI.
[158] Richard Tzong-Han Tsai,et al. Overview of BioCreative II gene mention recognition , 2008, Genome Biology.
[159] Praveen Paritosh,et al. Freebase: a collaboratively created graph database for structuring human knowledge , 2008, SIGMOD Conference.
[160] Neville Ryant,et al. A large-scale classification of English verbs , 2008, Lang. Resour. Evaluation.
[161] Jens Lehmann,et al. DBpedia: A Nucleus for a Web of Open Data , 2007, ISWC/ASWC.
[162] Gina-Anne Levow,et al. The Third International Chinese Language Processing Bakeoff: Word Segmentation and Named Entity Recognition , 2006, SIGHAN@COLING/ACL.
[163] W. Scott. Dictionary of Sociology , 2005.
[164] Nigel Collier,et al. Introduction to the Bio-entity Recognition Task at JNLPBA , 2004, NLPBA/BioNLP.
[165] Erik F. Tjong Kim Sang,et al. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition , 2003, CoNLL.
[166] Erik T. Mueller,et al. Open Mind Common Sense: Knowledge Acquisition from the General Public , 2002, OTM.
[167] Jürgen Schmidhuber,et al. Long Short-Term Memory , 1997, Neural Computation.
[169] George A. Miller,et al. WordNet: A Lexical Database for English , 1995, HLT.
[170] Christopher D. Manning,et al. GreaseLM: Graph REASoning Enhanced Language Models , 2022, ICLR.
[171] Vijay Sadashivaiah,et al. Improving Language Model Predictions via Prompts Enriched with Knowledge Graphs , 2022, DL4KG@ISWC.
[172] Frederick Liu,et al. Tracing Knowledge in Language Models Back to the Training Data , 2022, ArXiv.
[173] Shafiq R. Joty,et al. Knowledge Based Multilingual Language Model , 2021, ArXiv.
[174] Anubhav Jain,et al. The Impact of Domain-Specific Pre-Training on Named Entity Recognition Tasks in Materials Science , 2021, SSRN Electronic Journal.
[175] Yice Zhang,et al. CN-HIT-IT.NLP at SemEval-2020 Task 4: Enhanced Language Representation with Multiple Knowledge Triples , 2020, SemEval.
[176] Ion Androutsopoulos,et al. LEGAL-BERT: "Preparing the Muppets for Court" , 2020, EMNLP.
[177] Yejin Choi,et al. ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning , 2019, AAAI.
[178] Ming-Wei Chang,et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding , 2019, NAACL.
[179] Jonathan Berant,et al. CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge , 2019, NAACL.
[180] Alec Radford. Improving Language Understanding by Generative Pre-Training , 2018.
[181] Partha Talukdar,et al. HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding , 2018, EMNLP.
[182] Heng Ji,et al. Cross-lingual Name Tagging and Linking for 282 Languages , 2017, ACL.
[183] Anália Lourenço,et al. Overview of the BioCreative VI chemical-protein interaction Track , 2017.
[184] Özlem Uzuner,et al. JAMIA Focus on Medical Record De-identification , 2007.