ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models
[1] Angela Fan, et al. Improving Text-to-Text Pre-trained Models for the Graph-to-Text Task, 2020, WEBNLG.
[2] Colin Raffel, et al. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, 2019, J. Mach. Learn. Res.
[3] Zheng Zhang, et al. Fork or Fail: Cycle-Consistent Training with Many-to-One Mappings, 2020, AISTATS.
[4] Frank Hutter, et al. Fixing Weight Decay Regularization in Adam, 2017, ArXiv.
[5] Jianfeng Gao, et al. Deep Reinforcement Learning for Dialogue Generation, 2016, EMNLP.
[6] Wojciech Zaremba, et al. Reinforcement Learning Neural Turing Machines - Revised, 2015.
[7] Richard Socher, et al. A Deep Reinforced Model for Abstractive Summarization, 2017, ICLR.
[8] Eric Xing, et al. Connecting the Dots Between MLE and RL for Sequence Prediction, 2019.
[9] Maja Popović, et al. chrF++: words helping character n-grams, 2017, WMT.
[10] Oshin Agarwal, et al. Large Scale Knowledge Graph Based Synthetic Corpus Generation for Knowledge-Enhanced Language Model Pre-training, 2020, ArXiv.
[11] Joelle Pineau, et al. An Actor-Critic Algorithm for Sequence Prediction, 2016, ICLR.
[12] Tin Lay Nwe, et al. Automatic Myanmar Image Captioning using CNN and LSTM-Based Language Model, 2020, SLTU.
[13] Lysandre Debut, et al. HuggingFace's Transformers: State-of-the-art Natural Language Processing, 2019, ArXiv.
[14] Oshin Agarwal, et al. Machine Translation Aided Bilingual Data-to-Text Generation and Semantic Parsing, 2020, WEBNLG.
[15] Inkit Padhi, et al. DualTKB: A Dual Learning Bridge between Text and Knowledge Base, 2020, EMNLP.
[16] Alon Lavie, et al. METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments, 2007, WMT@ACL.
[17] Michael White, et al. Leveraging Large Pretrained Models for WebNLG 2020, 2020, WEBNLG.
[18] Weinan Zhang, et al. CycleGT: Unsupervised Graph-to-Text and Text-to-Graph Generation via Cycle Training, 2020, WEBNLG.
[19] Ruotian Luo. A Better Variant of Self-Critical Sequence Training, 2020, ArXiv.
[20] Marc'Aurelio Ranzato, et al. Sequence Level Training with Recurrent Neural Networks, 2015, ICLR.
[21] Iryna Gurevych, et al. Investigating Pretrained Language Models for Graph-to-Text Generation, 2020, ArXiv.
[22] Zheng Zhang, et al. P²: A Plan-and-Pretrain Approach for Knowledge Graph-to-Text Generation, 2020, WEBNLG.
[23] Mohammed J. Zaki, et al. Reinforcement Learning Based Graph-to-Sequence Model for Natural Question Generation, 2019, ICLR.
[24] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[25] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.
[26] Vaibhava Goel, et al. Self-Critical Sequence Training for Image Captioning, 2016, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).