Recovering Lexically and Semantically Reused Texts
[1] Lincoln A. Mullen, et al. The Spine of American Law: Digital Text Analysis and U.S. Legal Practice, 2018.
[2] Bhavana Dalvi, et al. Pretrained Language Models for Sequential Sentence Classification, 2019, EMNLP.
[3] Samuel R. Bowman, et al. A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, 2017, NAACL.
[4] Arman Cohan, et al. Longformer: The Long-Document Transformer, 2020, arXiv.
[5] Dragomir R. Radev, et al. Identifying Non-Explicit Citing Sentences for Citation-Based Summarization, 2010, ACL.
[6] Grigori Sidorov, et al. A Winning Approach to Text Alignment for Text Reuse Detection at PAN 2014, 2014, CLEF.
[7] Iryna Gurevych, et al. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging, 2017, EMNLP.
[8] Christopher Potts, et al. A Large Annotated Corpus for Learning Natural Language Inference, 2015, EMNLP.
[9] Marco Büchler, et al. Non-Literal Text Reuse in Historical Texts: An Approach to Identify Reuse Transformations and its Application to Bible Reuse, 2016, EMNLP.
[10] Kyle Lo, et al. S2ORC: The Semantic Scholar Open Research Corpus, 2020, ACL.
[11] Diyi Yang, et al. Hierarchical Attention Networks for Document Classification, 2016, NAACL.
[12] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[13] Matthias Hagen, et al. Overview of the 1st International Competition on Plagiarism Detection, 2009.
[14] Deepa Gupta, et al. Detection of Idea Plagiarism Using Syntax-Semantic Concept Extractions with Genetic Algorithm, 2017, Expert Syst. Appl.
[15] David A. Smith, et al. Source Attribution: Recovering the Press Releases Behind Health Science News, 2020, ICWSM.
[16] Jamie Callan, et al. Deeper Text Understanding for IR with Contextual Neural Language Modeling, 2019, SIGIR.
[17] Christopher D. Manning, et al. Effective Approaches to Attention-based Neural Machine Translation, 2015, EMNLP.
[18] Dragomir R. Radev, et al. Reference Scope Identification in Citing Sentences, 2012, NAACL.
[19] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[20] C. Lee Giles, et al. ParsCit: An Open-source CRF Reference String Parsing Package, 2008, LREC.
[21] Chris Brockett, et al. Automatically Constructing a Corpus of Sentential Paraphrases, 2005, IJCNLP.
[22] Jure Leskovec, et al. QUOTUS: The Structure of Political Media Coverage as Revealed by Quoting Patterns, 2015, WWW.
[23] David A. Smith, et al. Tracing the Flow of Policy Ideas in Legislatures: A Text Reuse Approach, 2015.
[24] W. Bruce Croft, et al. Search Engines: Information Retrieval in Practice, 2009.
[25] John Lee, et al. A Computational Model of Text Reuse in Ancient Literary Texts, 2007, ACL.
[26] Doug Downey, et al. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks, 2020, ACL.
[27] Jure Leskovec, et al. Meme-tracking and the Dynamics of the News Cycle, 2009, KDD.
[28] Jianfeng Gao, et al. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset, 2018.
[29] Steven Bird, et al. NLTK: The Natural Language Toolkit, 2002, ACL.
[30] Natalia Gimelshein, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library, 2019, NeurIPS.
[31] Eneko Agirre, et al. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, 2017, *SEMEVAL.
[32] Jeffrey Pennington, et al. GloVe: Global Vectors for Word Representation, 2014, EMNLP.
[33] Noah A. Smith, et al. To Tune or Not to Tune? Adapting Pretrained Representations to Diverse Tasks, 2019, RepL4NLP@ACL.
[34] W. Bruce Croft, et al. Neural Ranking Models with Weak Supervision, 2017, SIGIR.
[35] Eric Xing, et al. Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling, 2020, ICLR.
[36] Dragomir R. Radev, et al. The ACL Anthology Reference Corpus: A Reference Dataset for Bibliographic Research in Computational Linguistics, 2008, LREC.
[37] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[38] Jimmy J. Lin, et al. Applying BERT to Document Retrieval with Birch, 2019, EMNLP.
[39] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[40] Mihai Surdeanu, et al. The Stanford CoreNLP Natural Language Processing Toolkit, 2014, ACL.