Detecting Attempts at Humor in Multiparty Meetings

Systems designed for the automatic summarization of meetings have considered the propositional content of contributions by each speaker, but not the explicit techniques that speakers use to downgrade the perceived seriousness of those contributions. We analyze one such technique, namely attempts at humor. We find that speech spent on attempts at humor accounts for only a small fraction of total speaking time, but that it correlates strongly with laughter, which is more frequent. Contextual features describing the temporal and multiparticipant distribution of manually transcribed laughter yield error rates for the detection of attempts at humor that are four times lower than those obtained using oracle lexical information. Furthermore, we show that similar performance can be achieved by considering only the speaker's laughter, indicating that meeting participants explicitly signal their attempts at humor by laughing themselves. Finally, we present evidence suggesting that, on small time scales, the production of attempts at humor and their ratification via laughter often involves only two participants, belying the allegedly multiparty nature of the interaction.
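To make the "contextual laughter features" idea concrete, the sketch below shows one plausible way to summarize the temporal and multiparticipant distribution of transcribed laughter around a candidate talk spurt. The window size, feature names, and overall setup are illustrative assumptions for exposition, not the paper's actual feature set or parameters.

```python
# Hypothetical sketch: laughter-context features for one talk spurt.
# Assumes manually transcribed laughter bouts per participant; all names,
# the 5-second window, and the feature set are assumptions for illustration.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Interval:
    start: float  # seconds
    end: float    # seconds


def overlap(a: Interval, b: Interval) -> float:
    """Duration (seconds) for which two intervals overlap."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))


def laughter_context_features(
    spurt: Interval,
    speaker: str,
    laughter: Dict[str, List[Interval]],  # participant -> laughter bouts
    window: float = 5.0,                  # assumed context window, seconds
) -> Dict[str, float]:
    """Summarize who laughs, and for how long, around one talk spurt."""
    ctx = Interval(spurt.start - window, spurt.end + window)
    feats = {
        "speaker_laughs": 0.0,    # does the speaker laugh near their own spurt?
        "n_other_laughers": 0.0,  # how many other participants laugh nearby?
        "other_laugh_time": 0.0,  # total laughter time by others in the window
    }
    for participant, bouts in laughter.items():
        t = sum(overlap(ctx, b) for b in bouts)
        if t <= 0.0:
            continue
        if participant == speaker:
            feats["speaker_laughs"] = 1.0
        else:
            feats["n_other_laughers"] += 1.0
            feats["other_laugh_time"] += t
    return feats


if __name__ == "__main__":
    # Toy example: speaker A talks from 10.0 to 12.5 s; A and B laugh shortly after.
    laughter = {
        "A": [Interval(12.6, 13.4)],
        "B": [Interval(12.8, 14.0)],
        "C": [],
    }
    print(laughter_context_features(Interval(10.0, 12.5), "A", laughter))
```

Features of this kind (here fed to any standard classifier) would capture both the speaker-laughter signal and the two-party concentration of humor-plus-laughter exchanges that the abstract reports.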
