Self-supervised context-aware COVID-19 document exploration through atlas grounding
Matthew B. Blaschko | Tinne Tuytelaars | Dusan Grujicic | Gorjan Radevski