Analysis of laughter events in real science classes by using multiple environment sensor data

The extraction of sound events in environments where many people are present is a challenging problem. To tackle this problem, we have been developing a sound environment intelligence system that can determine who is talking, where, and when, by integrating multiple microphone arrays with human tracking technologies. We installed the developed system in the science room of an elementary school and collected data from real science classes over a period of one month. In the present paper, among the sound activities occurring in the science classes, we focus on the analysis of laughter events, since laughter conveys important social functions in communication. Laughter events were extracted using visual displays of the spatio-temporal information provided by the developed system. Subjective evaluation of the laughter events revealed relationships between laughter type (including production, style, and vowel-quality aspects), communicative function, and appropriateness in the classroom context.
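The integration described above, attributing a localized sound event to a tracked person, can be illustrated with a minimal sketch. This is not the authors' implementation; the function and variable names are hypothetical, and it assumes the microphone arrays yield a 2-D position for each sound event while the tracker yields 2-D positions for each person.

```python
# Illustrative sketch (hypothetical names, not the paper's code):
# attribute a localized sound event to the nearest tracked person.
import math

def associate_sound_with_person(sound_xy, tracked_people, max_dist=1.0):
    """Return the id of the tracked person closest to the localized
    sound position, or None if nobody is within max_dist metres.

    sound_xy       -- (x, y) estimated by the microphone arrays
    tracked_people -- dict mapping person id to (x, y) from the tracker
    """
    best_id, best_dist = None, max_dist
    for pid, (px, py) in tracked_people.items():
        d = math.hypot(sound_xy[0] - px, sound_xy[1] - py)
        if d < best_dist:
            best_id, best_dist = pid, d
    return best_id

# Example: a laugh localized at (2.1, 0.9) is attributed to person "A".
people = {"A": (2.0, 1.0), "B": (5.0, 3.0)}
print(associate_sound_with_person((2.1, 0.9), people))  # -> A
```

In practice such an association would also use the time dimension (matching the event's time stamp against the tracker's trajectory at that instant), which is what the spatio-temporal displays mentioned above make visible.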
