Nayeon Lee | Yejin Bang | Pascale Fung | Andrea Madotto
[1] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[2] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[3] Luke S. Zettlemoyer, et al. Deep Contextualized Word Representations, 2018, NAACL.
[4] Colin Raffel, et al. How Much Knowledge Can You Pack Into the Parameters of a Language Model?, 2020, EMNLP.
[5] Sebastian Riedel, et al. UCL Machine Reading Group: Four Factor Framework For Fact Finding (HexaF), 2018, FEVER@EMNLP.
[6] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[7] William Yang Wang. “Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection, 2017, ACL.
[8] Preslav Nakov, et al. Integrating Stance Detection and Fact Checking in a Unified Corpus, 2018, NAACL.
[9] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[10] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[11] Hinrich Schütze, et al. Book Reviews: Foundations of Statistical Natural Language Processing, 1999, CL.
[12] Oren Etzioni, et al. Open Information Extraction from the Web, 2007, CACM.
[13] Nayer M. Wanas, et al. Web-based statistical fact checking of textual documents, 2010, SMUC '10.
[14] Smaranda Muresan, et al. Where is Your Evidence: Improving Fact-checking by Justification Modeling, 2018.
[15] Iryna Gurevych, et al. UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification, 2018, FEVER@EMNLP.
[16] Haonan Chen, et al. Combining Fact Extraction and Verification with Neural Semantic Matching Networks, 2018, AAAI.
[17] Rémi Louf, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[18] Johan Bollen, et al. Computational Fact Checking from Knowledge Networks, 2015, PLoS ONE.
[19] Andreas Vlachos, et al. FEVER: a Large-scale Dataset for Fact Extraction and VERification, 2018, NAACL.
[20] Jiliang Tang, et al. Multi-Source Multi-Class Fake News Detection, 2018, COLING.
[21] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[22] Sebastian Riedel, et al. Language Models as Knowledge Bases?, 2019, EMNLP.
[23] Huan Liu, et al. Understanding User Profiles on Social Media for Fake News Detection, 2018, IEEE Conference on Multimedia Information Processing and Retrieval (MIPR).
[24] Davide Eynard, et al. Fake News Detection on Social Media using Geometric Deep Learning, 2019, ArXiv.
[25] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[26] Pascale Fung, et al. Improving Large-Scale Fact-Checking using Decomposable Attention Models and Lexical Tagging, 2018, EMNLP.
[27] Ming-Wei Chang, et al. REALM: Retrieval-Augmented Language Model Pre-Training, 2020, ICML.
[28] Chu-Ren Huang, et al. Fake News Detection Through Multi-Perspective Speaker Profiles, 2017, IJCNLP.
[29] Gerhard Weikum, et al. DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning, 2018, EMNLP.