Neural Abstractive Text Summarization

Abstractive text summarization is a complex task whose goal is to generate a concise version of a text without necessarily reusing sentences from the original source, while still preserving its meaning and key content. We address this problem by modeling it as sequence-to-sequence learning and exploiting Recurrent Neural Networks (RNNs). This work discusses our ongoing research on abstractive text summarization, in which we investigate methods to infuse prior knowledge into deep neural networks. We believe that such approaches can outperform state-of-the-art models in generating well-formed and meaningful summaries.
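To make the sequence-to-sequence formulation concrete, the sketch below shows a minimal RNN encoder-decoder summarizer in PyTorch. The vocabulary size, hidden dimensions, special-token ids, and greedy decoding loop are illustrative assumptions for this sketch, not the configuration used in our experiments.

```python
# Minimal sketch of a sequence-to-sequence summarizer (assumed PyTorch setup);
# all hyperparameters below are illustrative, not the experimental configuration.
import torch
import torch.nn as nn

VOCAB_SIZE = 10000     # assumed vocabulary size
EMB_DIM = 128          # assumed embedding dimension
HIDDEN_DIM = 256       # assumed hidden state size
SOS_ID, EOS_ID = 1, 2  # assumed special-token ids


class Seq2SeqSummarizer(nn.Module):
    """Encoder-decoder with GRUs: the encoder reads the source document,
    the decoder generates the summary one token at a time."""

    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SIZE, EMB_DIM)
        self.encoder = nn.GRU(EMB_DIM, HIDDEN_DIM, batch_first=True)
        self.decoder = nn.GRU(EMB_DIM, HIDDEN_DIM, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, src_ids, tgt_ids):
        # Encode the source; the final hidden state conditions the decoder.
        _, hidden = self.encoder(self.embedding(src_ids))
        # Decode conditioned on the reference summary tokens (teacher-forcing
        # style; the one-position input/target shift is omitted for brevity).
        dec_out, _ = self.decoder(self.embedding(tgt_ids), hidden)
        return self.out(dec_out)  # (batch, tgt_len, vocab) logits

    @torch.no_grad()
    def summarize(self, src_ids, max_len=30):
        # Greedy decoding: pick the most likely token at each step.
        _, hidden = self.encoder(self.embedding(src_ids))
        token = torch.full((src_ids.size(0), 1), SOS_ID, dtype=torch.long)
        summary = []
        for _ in range(max_len):
            dec_out, hidden = self.decoder(self.embedding(token), hidden)
            token = self.out(dec_out).argmax(dim=-1)
            summary.append(token)
            if (token == EOS_ID).all():
                break
        return torch.cat(summary, dim=1)


if __name__ == "__main__":
    model = Seq2SeqSummarizer()
    src = torch.randint(3, VOCAB_SIZE, (2, 50))  # toy source batch
    tgt = torch.randint(3, VOCAB_SIZE, (2, 12))  # toy reference summaries
    logits = model(src, tgt)
    loss = nn.CrossEntropyLoss()(logits.reshape(-1, VOCAB_SIZE), tgt.reshape(-1))
    print(loss.item(), model.summarize(src).shape)
```

In this sketch the decoder is conditioned only on the encoder's final hidden state; attention over encoder states and knowledge-infusion mechanisms, which are the focus of the research described above, would extend this basic architecture.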
