KG-to-Text Generation with Slot-Attention and Link-Attention

The Knowledge Graph (KG)-to-text generation task aims to generate a textual description for structured knowledge that can be viewed as a series of slot-value records. Previous seq2seq models for this task fail to capture the connections between a slot type and its slot value, as well as the connections among multiple slots, and they cannot handle out-of-vocabulary (OOV) words. To overcome these problems, this paper proposes a novel KG-to-text generation model with a hybrid of slot-attention and link-attention. To evaluate the proposed model, we conduct experiments on a real-world dataset; the results demonstrate that our model achieves significantly higher performance than previous models in terms of BLEU and ROUGE scores.
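The abstract does not spell out how the two attention mechanisms are combined, so the following is only a minimal, hypothetical sketch of one decoder step: slot-attention scores each slot-type/value record against the decoder state, while link-attention redistributes that relevance along slot-slot connections given by the KG structure. The module name `SlotLinkAttention`, the 50/50 mixing weight, and all tensor layouts are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: slot-attention + link-attention for one decoder step.
# Module names, shapes, and the mixing scheme are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotLinkAttention(nn.Module):
    def __init__(self, slot_dim: int, hidden_dim: int):
        super().__init__()
        # Scores each slot record against the current decoder state (slot-attention).
        self.slot_score = nn.Linear(slot_dim + hidden_dim, 1)
        # Scores pairwise slot-slot links (link-attention).
        self.link_score = nn.Bilinear(slot_dim, slot_dim, 1)

    def forward(self, slots: torch.Tensor, adj: torch.Tensor,
                dec_state: torch.Tensor) -> torch.Tensor:
        # slots:     (batch, num_slots, slot_dim)   encoded slot-type/value records
        # adj:       (batch, num_slots, num_slots)  1 where two slots are connected
        # dec_state: (batch, hidden_dim)            current decoder hidden state
        batch, num_slots, slot_dim = slots.shape

        # Slot-attention: relevance of each record to the decoder state.
        expanded = dec_state.unsqueeze(1).expand(-1, num_slots, -1)
        slot_logits = self.slot_score(torch.cat([slots, expanded], dim=-1)).squeeze(-1)
        slot_weights = F.softmax(slot_logits, dim=-1)            # (batch, num_slots)

        # Link-attention: pairwise scores, masked to the KG's slot-slot links.
        left = slots.unsqueeze(2).expand(-1, -1, num_slots, -1).reshape(batch, -1, slot_dim)
        right = slots.unsqueeze(1).expand(-1, num_slots, -1, -1).reshape(batch, -1, slot_dim)
        link_logits = self.link_score(left, right).view(batch, num_slots, num_slots)
        link_logits = link_logits.masked_fill(adj == 0, -1e9)    # large negative, avoids NaN rows
        link_weights = F.softmax(link_logits, dim=-1)            # row-normalised

        # Mix direct relevance with relevance propagated from linked slots (assumed 50/50).
        propagated = torch.bmm(slot_weights.unsqueeze(1), link_weights).squeeze(1)
        mixed = 0.5 * slot_weights + 0.5 * propagated

        # Context vector for the decoder; in a full model this would also feed
        # a copy mechanism so OOV slot values can be emitted verbatim.
        return torch.bmm(mixed.unsqueeze(1), slots).squeeze(1)   # (batch, slot_dim)


if __name__ == "__main__":
    attn = SlotLinkAttention(slot_dim=32, hidden_dim=64)
    slots = torch.randn(2, 5, 32)
    adj = torch.randint(0, 2, (2, 5, 5)).float() + torch.eye(5)  # self-loops keep rows non-empty
    context = attn(slots, adj.clamp(max=1), torch.randn(2, 64))
    print(context.shape)  # torch.Size([2, 32])
```

In this sketch the copy mechanism for OOV words is only hinted at in a comment; a full model would combine the returned context with a pointer/copy distribution over the slot values.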
