Laughter Valence Prediction in Motivational Interviewing Based on Lexical and Acoustic Cues

Motivational Interviewing (MI) is a goal-oriented psychotherapy counseling style that aims to instill positive change in a client through discussion. Since the discourse takes the form of semi-structured natural conversation, it often involves a variety of non-verbal social and affective behaviors, such as laughter. Laughter carries information related to affect, mood, and personality, and can offer a window into a person's mental state. In this work, we conduct an analytical study on predicting the valence of laughter (positive, neutral, or negative) from lexical and acoustic cues within the context of MI. We hypothesize that the valence of a laughter event can be predicted from a window of past and future context around it, and design models that incorporate context from both text and audio. Through these experiments we validate the relation of the two modalities to perceived laughter valence. Based on the outputs of the prediction experiment, we perform a follow-up analysis of the results, including: (i) identification of the optimal past and future context in the audio and lexical channels, (ii) investigation of differences in prediction patterns for the counselor and the client, and (iii) analysis of feature patterns across the two modalities.
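The context-window formulation described above can be sketched in a minimal form: feature vectors from a few past and future utterances around each laughter event are concatenated and passed to a classifier. All names, window sizes, the toy data, and the choice of a linear-kernel SVM are illustrative assumptions, not the authors' actual pipeline or features.

```python
# Hypothetical sketch of context-window laughter-valence classification.
# Window sizes, feature dimensions, and the SVM classifier are assumptions
# for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def context_window(features, idx, past=2, future=2):
    """Concatenate feature vectors from `past` utterances before and
    `future` utterances after the laughter at position `idx`,
    zero-padding beyond session boundaries."""
    dim = features.shape[1]
    window = []
    for offset in range(-past, future + 1):
        j = idx + offset
        if 0 <= j < len(features):
            window.append(features[j])
        else:
            window.append(np.zeros(dim))  # pad at session edges
    return np.concatenate(window)

# Toy session: 50 utterance-level feature vectors (e.g. lexical + acoustic).
session = rng.normal(size=(50, 8))
laughter_positions = [5, 12, 30, 44]
X = np.stack([context_window(session, i) for i in laughter_positions])
y = [0, 1, 2, 1]  # valence labels: 0=negative, 1=neutral, 2=positive

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X).tolist())
```

Varying the `past` and `future` arguments would correspond to the search for optimal context length mentioned in the abstract; in practice, separate windows could be extracted per modality (text vs. audio) before fusion.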
