Generating Strategic Dialogue for Negotiation with Theory of Mind

We propose a framework that integrates Theory of Mind (ToM) into utterance generation for task-oriented dialogue. Our approach models and infers the personality types of opponents, predicts their responses, and uses this information to adapt the agent's high-level negotiation strategy. We introduce a probabilistic formulation of first-order ToM and evaluate our approach on the CraigslistBargain dataset. Experiments show that our ToM-based method achieves a 40% higher dialogue agreement rate than baselines on a mixed population of opponents. We also show that our model exhibits diverse negotiation behavior against different opponent types.
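To make the first-order ToM idea concrete, the following is a minimal sketch of how an agent might maintain a probabilistic belief over opponent personality types and adapt its strategy accordingly. It is not the paper's actual model; the type set, likelihood table, and strategy mapping are hypothetical placeholders.

```python
import numpy as np

# Hypothetical opponent personality types and dialogue acts.
OPPONENT_TYPES = ["cooperative", "competitive", "neutral"]
ACTIONS = ["accept", "counter", "reject"]

# Hypothetical likelihoods P(action | type), one row per opponent type.
LIKELIHOOD = np.array([
    [0.5, 0.4, 0.1],   # cooperative
    [0.1, 0.5, 0.4],   # competitive
    [0.3, 0.4, 0.3],   # neutral
])

def update_belief(belief, observed_action):
    """First-order ToM step: Bayesian update of the belief over opponent
    types after observing the opponent's latest dialogue act."""
    a = ACTIONS.index(observed_action)
    posterior = belief * LIKELIHOOD[:, a]
    return posterior / posterior.sum()

def choose_strategy(belief):
    """Adapt the agent's high-level strategy to the most likely type."""
    strategies = {
        "cooperative": "propose a fair split early",
        "competitive": "concede slowly and justify the price",
        "neutral": "probe with a moderate counter-offer",
    }
    return strategies[OPPONENT_TYPES[int(np.argmax(belief))]]

# Usage: start from a uniform prior and update as opponent acts arrive.
belief = np.ones(len(OPPONENT_TYPES)) / len(OPPONENT_TYPES)
for act in ["counter", "reject"]:
    belief = update_belief(belief, act)
print(belief, choose_strategy(belief))
```

In this sketch the belief update is the "inference about the opponent's mind" and the strategy lookup stands in for conditioning the utterance generator on that belief; the paper's approach additionally predicts the opponent's response before selecting a strategy.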
