Mirroring to Build Trust in Digital Assistants

We describe experiments toward building a conversational digital assistant that takes into account the preferred conversational style of the user. In particular, these experiments are designed to measure whether users prefer and trust an assistant whose conversational style matches their own. To this end, we conducted a user study in which subjects interacted with a digital assistant that responded in a way that either matched or did not match their conversational style. Using self-reported personality attributes and subjects' feedback on the interactions, we built models that reliably predict a user's preferred conversational style.
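As a rough illustration of the modeling step described above, the sketch below trains a simple classifier that predicts a preferred conversational style from self-reported personality scores. The feature set, the binary style labels, and the synthetic data are all assumptions for illustration; the abstract does not specify which model or features the authors used.

```python
# Hedged sketch: predicting a user's preferred conversational style
# from self-reported personality attributes. All data here is synthetic
# and the feature/label definitions are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for survey data: five self-reported personality
# scores per subject (e.g. trait ratings on a 1-5 scale).
X = rng.uniform(1, 5, size=(200, 5))

# Synthetic labels: 1 = prefers one conversational style, 0 = the other.
# The label is loosely tied to the first trait so the model has signal.
y = (X[:, 0] + rng.normal(0, 0.5, size=200) > 3).astype(int)

# A linear classifier evaluated with 5-fold cross-validation, mirroring
# the "build a model, check it predicts reliably" workflow.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy: %.2f" % scores.mean())
```

In a real study, `X` would hold the questionnaire responses and `y` the style preference inferred from each subject's feedback on the matched versus unmatched interactions.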
