Coreference Augmentation for Multi-Domain Task-Oriented Dialogue State Tracking

Dialogue State Tracking (DST), the process of inferring user goals by estimating belief states from the dialogue history, plays a critical role in task-oriented dialogue systems. The coreference phenomenon observed in multi-turn conversations is not addressed by existing DST models, leading to suboptimal performance. In this paper, we propose the Coreference Dialogue State Tracker (CDST), which explicitly models coreference. In particular, at each turn, the proposed model jointly predicts the coreferred domain-slot pair and extracts the coreference value from the dialogue context. Experimental results on the MultiWOZ 2.1 dataset show that the proposed model achieves a state-of-the-art joint goal accuracy of 56.47%.
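
To make the joint prediction concrete, below is a minimal PyTorch sketch of one plausible shape for such a head, assuming a BERT-style encoder that yields per-token and pooled representations of the dialogue context. All names here (CoreferenceDSTHead, slot_classifier, span_head) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch, not the authors' code: a head that jointly
# (1) classifies which domain-slot pair is coreferred at the current turn
# and (2) extracts the coreference value as a span over context tokens.
import torch
import torch.nn as nn

class CoreferenceDSTHead(nn.Module):
    def __init__(self, hidden_size: int, num_domain_slots: int):
        super().__init__()
        # Classifies the coreferred domain-slot pair (plus a "none" class)
        # from a pooled representation of the dialogue context.
        self.slot_classifier = nn.Linear(hidden_size, num_domain_slots + 1)
        # Produces start/end logits over context tokens for the value span.
        self.span_head = nn.Linear(hidden_size, 2)

    def forward(self, token_states: torch.Tensor, pooled_state: torch.Tensor):
        # token_states: [batch, seq_len, hidden]; pooled_state: [batch, hidden]
        slot_logits = self.slot_classifier(pooled_state)
        start_logits, end_logits = self.span_head(token_states).split(1, dim=-1)
        return slot_logits, start_logits.squeeze(-1), end_logits.squeeze(-1)
```

In use, such a head would sit on top of a pretrained encoder such as BERT; at inference time, the argmax over slot_logits selects the coreferred domain-slot pair (or "none"), and the argmax start and end positions recover the coreference value span from the dialogue context.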
