SentiGAN: Generating Sentimental Texts via Mixture Adversarial Networks

Generating texts with different sentiment labels is attracting increasing attention in the area of natural language generation. Recently, Generative Adversarial Nets (GANs) have shown promising results in text generation. However, the texts generated by GANs usually suffer from poor quality, lack of diversity, and mode collapse. In this paper, we propose a novel framework, SentiGAN, which has multiple generators and one multi-class discriminator, to address the above problems. In our framework, multiple generators are trained simultaneously, aiming at generating texts of different sentiment labels without supervision. We propose a penalty-based objective for the generators to force each of them to generate diversified examples of a specific sentiment label. Moreover, the combination of multiple generators and one multi-class discriminator makes each generator focus on accurately generating its own examples of a specific sentiment label. Experimental results on four datasets demonstrate that our model consistently outperforms several state-of-the-art text generation methods in both the sentiment accuracy and the quality of the generated texts.
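The multi-generator / multi-class-discriminator setup described above can be illustrated with a minimal numpy sketch. This is a toy stand-in, not the paper's implementation: the "generators" below are single softmax distributions over a toy vocabulary and the "discriminator" is a fixed random linear classifier, whereas the paper uses trained sequence models. The vocabulary size, sequence length, and K = 2 sentiment classes are all assumed for illustration. The sketch shows only the structural idea of the penalty-based objective: for each generator i, the penalty is the discriminator's average disbelief that its samples carry sentiment i, and generator i would be trained to minimize this quantity.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 2       # number of sentiment classes / generators (assumed)
VOCAB = 50  # toy vocabulary size (assumed)
SEQ = 8     # toy sequence length (assumed)
EMB = 16    # toy feature dimension (assumed)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def generator_sample(logits, n):
    """Toy 'generator': draws token ids i.i.d. from a softmax over the vocab."""
    probs = softmax(logits)                      # (VOCAB,)
    return rng.choice(VOCAB, size=(n, SEQ), p=probs)

def discriminator(tokens, emb, out):
    """Multi-class discriminator: K sentiment classes plus one 'fake' class."""
    feats = emb[tokens].mean(axis=1)             # (n, EMB) mean token embedding
    return softmax(feats @ out)                  # (n, K + 1) class probabilities

# Random toy parameters (the real model would learn these adversarially).
gen_logits = [rng.normal(size=VOCAB) for _ in range(K)]
d_emb = rng.normal(size=(VOCAB, EMB))
d_out = rng.normal(size=(EMB, K + 1))

# Penalty-based objective: generator i is penalized by how little the
# discriminator believes its samples express sentiment label i.
penalties = []
for i in range(K):
    x = generator_sample(gen_logits[i], n=32)
    scores = discriminator(x, d_emb, d_out)      # (32, K + 1)
    penalty = float(np.mean(1.0 - scores[:, i])) # G_i minimizes this
    penalties.append(penalty)
    print(f"generator {i}: penalty = {penalty:.3f}")
```

Because each generator is scored only against its own sentiment column of the (K + 1)-way discriminator output, lowering one generator's penalty does not help another generator, which is the structural reason each generator specializes on one label.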
