[1] Geoffrey E. Hinton, et al. Layer Normalization, 2016, ArXiv.
[2] Percy Liang, et al. From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood, 2017, ACL.
[3] Eunsol Choi, et al. QuAC: Question Answering in Context, 2018, EMNLP.
[4] Quoc V. Le, et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension, 2018, ICLR.
[5] Yoav Artzi, et al. Situated Mapping of Sequential Instructions to Actions with Single-step Reward Observation, 2018, ACL.
[6] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[7] Chenguang Zhu, et al. SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering, 2018, ArXiv.
[8] Iain Murray, et al. BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning, 2019, ICML.
[9] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[10] Mark Yatskar, et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC, 2018, NAACL.
[11] Danqi Chen, et al. CoQA: A Conversational Question Answering Challenge, 2018, TACL.
[12] Percy Liang, et al. Know What You Don’t Know: Unanswerable Questions for SQuAD, 2018, ACL.
[13] W. Bruce Croft, et al. Attentive History Selection for Conversational Question Answering, 2019, CIKM.
[14] Dan Klein, et al. Unified Pragmatic Models for Generating and Following Instructions, 2017, NAACL.
[15] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[16] Eunsol Choi, et al. Conversational Machine Comprehension, 2019.
[17] Yelong Shen, et al. FusionNet: Fusing via Fully-Aware Attention with Application to Machine Comprehension, 2017, ICLR.
[18] Percy Liang, et al. Simpler Context-Dependent Logical Forms via Model Projections, 2016, ACL.