Hai Zhao | Zhuosheng Zhang | Junlong Li | Xi Zhou | Xiang Zhou
[1] Quoc V. Le, et al. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, 2020, ICLR.
[2] Wilson L. Taylor. “Cloze Procedure”: A New Tool for Measuring Readability, 1953.
[3] Yu Sun, et al. ERNIE: Enhanced Representation through Knowledge Integration, 2019, ArXiv.
[4] Claire Cardie, et al. DREAM: A Challenge Data Set and Models for Dialogue-Based Reading Comprehension, 2019, TACL.
[5] Xu Tan, et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation, 2019, ICML.
[6] Jaewoo Kang, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining, 2019, Bioinform.
[7] Dilek Z. Hakkani-Tür, et al. MMM: Multi-stage Multi-task Learning for Multi-choice Reading Comprehension, 2020, AAAI.
[8] Ying Chen, et al. Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network, 2018, ACL.
[9] Rajesh Ranganath, et al. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission, 2019, ArXiv.
[10] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[11] Dilek Z. Hakkani-Tür, et al. Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations, 2019, INTERSPEECH.
[12] Natasha Jaques, et al. Approximating Interactive Human Evaluation with Self-Play for Open-Domain Dialog Systems, 2019, NeurIPS.
[13] Kevin Gimpel, et al. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, 2019, ICLR.
[14] Yoshua Bengio, et al. Generative Adversarial Nets, 2014, NIPS.
[15] Alec Radford, et al. Improving Language Understanding by Generative Pre-Training, 2018.
[16] Doug Downey, et al. Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks, 2020, ACL.
[17] Yiming Yang, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019, NeurIPS.
[18] Iryna Gurevych, et al. Dialogue Coherence Assessment Without Explicit Dialogue Act Labels, 2019, ACL.
[19] Jason Weston, et al. Personalizing Dialogue Agents: I have a dog, do you have pets too?, 2018, ACL.
[20] Yue Zhang, et al. MuTual: A Dataset for Multi-Turn Dialogue Reasoning, 2020, ACL.
[21] Taesun Whang, et al. An Effective Domain Adaptive Post-Training Method for BERT in Response Selection, 2020, INTERSPEECH.
[22] Jianfeng Gao, et al. DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation, 2020, ACL.
[23] Hai Zhao, et al. Dual Multi-head Co-attention for Multi-choice Reading Comprehension, 2020, ArXiv.
[24] Hui Wan. Multi-task Learning with Multi-head Attention for Multi-choice Reading Comprehension, 2020, ArXiv.
[25] Joelle Pineau, et al. Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses, 2017, ACL.
[26] Quoc V. Le, et al. Towards a Human-like Open-Domain Chatbot, 2020, ArXiv.
[27] Hao Tian, et al. ERNIE 2.0: A Continual Pre-training Framework for Language Understanding, 2019, AAAI.
[28] Ellen M. Voorhees, et al. The TREC-8 Question Answering Track Report, 1999, TREC.
[29] Maxine Eskenazi, et al. Unsupervised Evaluation of Interactive Dialog with DialoGPT, 2020, SIGDIAL.
[30] Zhoujun Li, et al. Sequential Match Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots, 2016, ArXiv.
[31] Alon Lavie, et al. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments, 2005, IEEvaluation@ACL.
[32] Tatsuya Kawahara, et al. Designing Precise and Robust Dialogue Response Evaluators, 2020, ACL.
[33] Xiaoyu Shen, et al. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset, 2017, IJCNLP.
[34] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[35] Evgeny A. Stepanov, et al. Coherence Models for Dialogue, 2018, INTERSPEECH.
[36] Salim Roukos, et al. Bleu: a Method for Automatic Evaluation of Machine Translation, 2002, ACL.
[37] Hannes Schulz, et al. Relevance of Unsupervised Metrics in Task-Oriented Dialogue for Evaluating Natural Language Generation, 2017, ArXiv.
[38] Jason Weston, et al. What makes a good conversation? How controllable attributes affect human judgments, 2019, NAACL.
[39] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[40] Kilian Q. Weinberger, et al. BERTScore: Evaluating Text Generation with BERT, 2019, ICLR.
[41] Mary Williamson, et al. Can You Put it All Together: Evaluating Conversational Agents’ Ability to Blend Skills, 2020, ACL.
[42] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, ArXiv.
[43] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[44] Iz Beltagy, et al. SciBERT: A Pretrained Language Model for Scientific Text, 2019, EMNLP.
[45] Mirella Lapata, et al. Modeling Local Coherence: An Entity-Based Approach, 2005, ACL.
[46] Joelle Pineau, et al. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems, 2015, SIGDIAL.
[47] Dongyan Zhao, et al. RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems, 2017, AAAI.