Richard Socher | Caiming Xiong | Wojciech Kryscinski | Romain Paulus
[1] Aharon Ben-Tal, et al. Characterization of Pareto and Lexicographic Optimal Solutions, 1980.
[2] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[3] Yishay Mansour, et al. Policy Gradient Methods for Reinforcement Learning with Function Approximation, 1999, NIPS.
[4] Alex Alves Freitas, et al. Automatic Text Summarization Using a Machine Learning Approach, 2002, SBIA.
[5] Richard M. Schwartz, et al. Hedge Trimmer: A Parse-and-Trim Approach to Headline Generation, 2003, HLT-NAACL.
[6] Chin-Yew Lin, et al. ROUGE: A Package for Automatic Evaluation of Summaries, 2004, ACL.
[7] Richard S. Sutton, et al. Reinforcement Learning: An Introduction, 1998, IEEE Trans. Neural Networks.
[8] Paul Over, et al. DUC in context, 2007, Inf. Process. Manag.
[9] Yann LeCun, et al. Regularization of Neural Networks using DropConnect, 2013, ICML.
[10] Yasemin Altun, et al. Overcoming the Lack of Parallel Data in Sentence Compression, 2013, EMNLP.
[11] Ferenc Huszár, et al. How (not) to Train your Generative Model: Scheduled Sampling, Likelihood, Adversary?, 2015, ArXiv.
[12] Pieter Abbeel, et al. Gradient Estimation Using Stochastic Computation Graphs, 2015, NIPS.
[13] Jason Weston, et al. A Neural Attention Model for Abstractive Sentence Summarization, 2015, EMNLP.
[14] Fabrizio Silvestri, et al. HEADS: Headline Generation as Sequence Prediction Using an Abstract Feature-Rich Space, 2015, NAACL.
[15] Yuta Kikuchi, et al. The New York Times Annotated Corpus as a Large-Scale Summarization Resource, 2015.
[16] Phil Blunsom, et al. Teaching Machines to Read and Comprehend, 2015, NIPS.
[17] Samy Bengio, et al. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks, 2015, NIPS.
[18] Yoshua Bengio, et al. Neural Machine Translation by Jointly Learning to Align and Translate, 2014, ICLR.
[19] Dan Klein, et al. Learning-Based Single-Document Summarization with Compression and Anaphoricity Constraints, 2016, ACL.
[20] Alexander M. Rush, et al. Sequence-to-Sequence Learning as Beam-Search Optimization, 2016, EMNLP.
[21] Yoshua Bengio, et al. Professor Forcing: A New Algorithm for Training Recurrent Networks, 2016, NIPS.
[22] Bowen Zhou, et al. Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond, 2016, CoNLL.
[23] Alán Aspuru-Guzik, et al. Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models, 2017, ArXiv.
[24] Bowen Zhou, et al. SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents, 2016, AAAI.
[25] Hakan Inan, et al. Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling, 2016, ICLR.
[26] Feng Liu, et al. Actor-Critic Sequence Training for Image Captioning, 2017, ArXiv.
[27] Lior Wolf, et al. Using the Output Embedding to Improve Language Models, 2016, EACL.
[28] Christopher D. Manning, et al. Get To The Point: Summarization with Pointer-Generator Networks, 2017, ACL.
[29] Richard Socher, et al. A Deep Reinforced Model for Abstractive Summarization, 2017, ICLR.
[30] Min Yang, et al. Generative Adversarial Network for Abstractive Text Summarization, 2017, AAAI.
[31] Adam Coates, et al. Cold Fusion: Training Seq2Seq Models Together with Language Models, 2017, INTERSPEECH.
[32] Ramakanth Pasunuru, et al. Multi-Reward Reinforced Summarization with Saliency and Entailment, 2018, NAACL.
[33] Andrew M. Dai, et al. MaskGAN: Better Text Generation via Filling in the ______, 2018, ICLR.
[34] Pratik Rane, et al. Self-Critical Sequence Training for Image Captioning, 2018.
[35] Ji Wang, et al. Pretraining-Based Natural Language Generation for Text Summarization, 2019, CoNLL.