Junji Tomita | Kyosuke Nishida | Hisako Asano | Yasuhito Ohsugi | Itsumi Saito
[1] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[2] Eunsol Choi, et al. QuAC: Question Answering in Context, 2018, EMNLP.
[3] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[4] Mike Lewis, et al. Generative Question Answering: Learning to Answer the Whole Question, 2018, ICLR.
[5] Ilya Sutskever, et al. Language Models are Unsupervised Multitask Learners, 2019.
[6] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[7] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[8] Mark Yatskar, et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC, 2018, NAACL.
[9] Danqi Chen, et al. CoQA: A Conversational Question Answering Challenge, 2018, TACL.
[10] Jason Weston, et al. Reading Wikipedia to Answer Open-Domain Questions, 2017, ACL.
[11] Chenguang Zhu, et al. SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering, 2018, ArXiv.
[12] Eunsol Choi, et al. FlowQA: Grasping Flow in History for Conversational Machine Comprehension, 2019, ICLR.
[13] Ali Farhadi, et al. Bidirectional Attention Flow for Machine Comprehension, 2016, ICLR.