Multi-Task Pre-Training for Plug-and-Play Task-Oriented Dialogue System

Pre-trained language models have recently been shown to benefit task-oriented dialogue (TOD) systems. Despite their success, existing methods often formulate the task as a cascaded generation problem, which can lead to error accumulation across sub-tasks and greater data annotation overhead. In this study, we present PPTOD, a unified plug-and-play model for task-oriented dialogue. In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task-completion skills from heterogeneous dialogue corpora. We extensively test our model on three benchmark TOD tasks: end-to-end dialogue modelling, dialogue state tracking, and intent classification. Experimental results show that PPTOD achieves a new state of the art on all evaluated tasks in both high-resource and low-resource scenarios. Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent, as judged by human annotators.
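
The abstract describes a plug-and-play formulation in which each TOD sub-task (dialogue state tracking, intent classification, response generation) is cast as prompted text-to-text generation over a single pre-trained model rather than a cascaded pipeline. The sketch below illustrates this idea under stated assumptions: it uses a Hugging Face T5 backbone, and the task-prompt strings and the `t5-small` checkpoint are illustrative stand-ins, not the released PPTOD model or its exact prompts.

```python
# Minimal sketch (not the released PPTOD code): casting TOD sub-tasks as
# prompted text-to-text generation with a single T5-style backbone.
# The checkpoint name and prompt strings below are illustrative assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def run_tod_subtask(task_prompt: str, dialogue_context: str) -> str:
    """Prefix the dialogue context with a task-specific prompt and decode."""
    input_text = f"{task_prompt} {dialogue_context}"
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_length=128)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

context = "[user] i need a cheap restaurant in the centre of town"

# The same model serves every sub-task; only the prompt changes ("plug-and-play").
belief_state = run_tod_subtask("translate dialogue to belief state:", context)
user_intent  = run_tod_subtask("translate dialogue to user intent:", context)
response     = run_tod_subtask("translate dialogue to system response:", context)
```

Because the sub-tasks share one model and differ only in the prompt, they can be trained jointly on heterogeneous, partially annotated corpora and invoked independently (in parallel) at inference time, which is what removes the cascaded error accumulation the abstract refers to.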
