Fuzhen Zhuang | Furu Wei | Zihan Zhang | Shaohan Huang | Qi Zhang | Ting Jiang | Deqing Wang | Haizhen Huang | Liangjie Zhang
[1] Victor O. K. Li, et al. Non-Autoregressive Neural Machine Translation, 2017, ICLR.
[2] Hao Zhou, et al. Imitation Learning for Non-Autoregressive Neural Machine Translation, 2019, ACL.
[3] Ming Zhou, et al. ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training, 2020, Findings.
[4] Jiancheng Lv, et al. GLGE: A New General Language Generation Evaluation Benchmark, 2021, Findings.
[5] Taku Kudo, et al. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing, 2018, EMNLP.
[6] Di He, et al. Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation, 2018, NeurIPS.
[7] Ankush Gupta, et al. A Deep Generative Framework for Paraphrase Generation, 2017, AAAI.
[8] Jianfeng Gao, et al. UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training, 2020, ICML.
[9] Ming Zhou, et al. Neural Question Generation from Text: A Preliminary Study, 2017, NLPCC.
[10] Omer Levy, et al. Mask-Predict: Parallel Decoding of Conditional Masked Language Models, 2019, EMNLP.
[11] Noah A. Smith, et al. Deep Encoder, Shallow Decoder: Reevaluating the Speed-Quality Tradeoff in Machine Translation, 2020, arXiv.
[12] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, arXiv.
[13] Nikit Begwani, et al. Diversity driven Query Rewriting in Search Advertising, 2021, KDD.
[14] Lukasz Kaiser, et al. Attention is All you Need, 2017, NIPS.
[15] Mirella Lapata, et al. Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization, 2018, EMNLP.
[16] Xu Tan, et al. MASS: Masked Sequence to Sequence Pre-training for Language Generation, 2019, ICML.
[17] Jason Lee, et al. Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement, 2018, EMNLP.
[18] Omer Levy, et al. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension, 2019, ACL.
[19] Myle Ott, et al. fairseq: A Fast, Extensible Toolkit for Sequence Modeling, 2019, NAACL.
[20] Omer Levy, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019, arXiv.
[21] Di He, et al. Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input, 2018, AAAI.
[22] Dayiheng Liu, et al. BANG: Bridging Autoregressive and Non-autoregressive Generation with Large Scale Pretraining, 2021, ICML.
[23] Jie Zhou, et al. Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information, 2019, arXiv.
[24] Furu Wei, et al. Dictionary-Guided Editing Networks for Paraphrase Generation, 2018, AAAI.
[25] Jian Zhang, et al. SQuAD: 100,000+ Questions for Machine Comprehension of Text, 2016, EMNLP.
[26] Jiatao Gu, et al. Fully Non-autoregressive Neural Machine Translation: Tricks of the Trade, 2020, Findings.
[27] Ming-Wei Chang, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, 2019, NAACL.
[28] Jakob Uszkoreit, et al. Insertion Transformer: Flexible Sequence Generation via Insertion Operations, 2019, ICML.
[29] Yao Zhao, et al. Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks, 2018, EMNLP.
[30] Robert L. Mercer, et al. The Mathematics of Statistical Machine Translation: Parameter Estimation, 1993, CL.
[31] Changhan Wang, et al. Levenshtein Transformer, 2019, NeurIPS.
[32] Xiaodong Liu, et al. Unified Language Model Pre-training for Natural Language Understanding and Generation, 2019, NeurIPS.
[33] Furu Wei, et al. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers, 2020, NeurIPS.
[34] Tie-Yan Liu, et al. Incorporating BERT into Neural Machine Translation, 2020, ICLR.