Selection and Generation: Learning towards Multi-Product Advertisement Post Generation

As E-commerce thrives, high-quality online advertising copywriting has attracted increasing attention. Unlike copywriting for a single product, an advertisement (AD) post consists of an attractive topic that meets customer needs and description copywriting for several products under that topic. A good AD post highlights the characteristics of each product, thus helping customers choose among the candidate products. Hence, multi-product AD post generation is both meaningful and important. We propose a novel end-to-end model named S-MG Net to generate AD posts. Targeting this challenging real-world problem, we split the AD post generation task into two subprocesses: (1) select a set of products via the SelectNet (Selection Network), and (2) generate a post covering the selected products via the MGenNet (Multi-Generator Network). Concretely, SelectNet first captures the post topic and the relationships among the products to output the representative products. Then, MGenNet generates the description copywriting for each product. Experiments conducted on a large-scale real-world AD post dataset demonstrate that our proposed model achieves impressive performance in terms of both automatic metrics and human evaluations.

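The abstract only describes the two-stage pipeline at a high level, so the following is a minimal sketch of how a SelectNet/MGenNet split might be wired together in PyTorch. All module internals, dimensions, the GRU encoders/decoders, and the shared-decoder simplification are assumptions made purely for illustration; they are not the paper's actual architecture.

```python
# Hypothetical sketch of the two-stage S-MG Net pipeline (select, then generate).
# Every design detail below is an assumption; the abstract does not specify them.
import torch
import torch.nn as nn


class SelectNet(nn.Module):
    """Scores candidate products against the post topic (assumed design)."""

    def __init__(self, hidden_size: int = 128):
        super().__init__()
        # GRU over the product sequence models inter-product relationships.
        self.product_encoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.scorer = nn.Linear(2 * hidden_size, 1)

    def forward(self, topic_vec, product_vecs):
        # topic_vec: (batch, hidden); product_vecs: (batch, n_products, hidden)
        context, _ = self.product_encoder(product_vecs)
        topic_exp = topic_vec.unsqueeze(1).expand_as(context)
        scores = self.scorer(torch.cat([context, topic_exp], dim=-1)).squeeze(-1)
        return torch.sigmoid(scores)  # selection probability per product


class MGenNet(nn.Module):
    """Generates a description per selected product (assumed: one shared GRU decoder)."""

    def __init__(self, vocab_size: int = 1000, hidden_size: int = 128):
        super().__init__()
        self.decoder = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, product_vec, max_len: int = 20):
        # Condition each decoding step on the product representation.
        inputs = product_vec.unsqueeze(1).repeat(1, max_len, 1)
        hidden = product_vec.unsqueeze(0).contiguous()
        states, _ = self.decoder(inputs, hidden)
        return self.out(states)  # (batch, max_len, vocab) token logits


if __name__ == "__main__":
    batch, n_products, hidden = 2, 5, 128
    topic = torch.randn(batch, hidden)
    products = torch.randn(batch, n_products, hidden)

    # Stage 1: score and pick a representative product per post.
    select_probs = SelectNet(hidden)(topic, products)
    top_idx = select_probs.argmax(dim=1)
    chosen = products[torch.arange(batch), top_idx]

    # Stage 2: generate the copywriting for the chosen product.
    logits = MGenNet(hidden_size=hidden)(chosen)
    print(select_probs.shape, logits.shape)
```

In practice the selection stage would likely pick several products and the generation stage would run once per selected product; the single-product decoding above just keeps the sketch short.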