Speaking with avatars - influence of social interaction on movement behavior in interactive hearing experiments

This study investigated to what extent social interaction influences the motion behavior of normal-hearing listeners in interactive hearing experiments involving audiovisual virtual reality. To answer this question, an experiment with eleven participants was performed using two levels of virtualization, two levels of interactivity, and two noise levels. The participants' task was either to communicate with two real or two virtual interlocutors (conditions 'real' and 'active') or to listen passively to a conversation between three virtual characters (condition 'passive'). During the experiment, the gaze, head, and body motion of the participants were recorded. An analysis of variance showed that gaze and saccade behavior did not differ between the 'real' and 'active' conditions, but did differ between the 'active' and 'passive' conditions. No such significant effect was found for the head-motion-related parameters. A classifier was trained to distinguish between conditions based on motion and pose features. Its performance was higher for the comparison between the 'active' and 'passive' conditions than for the comparison between the 'real' and 'active' conditions, indicating that the measured differences in motion behavior were not sufficient to distinguish the 'real' from the 'active' condition.
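As a minimal sketch of the condition-classification analysis described above: the abstract does not specify the classifier, the feature set, or the cross-validation scheme, so the random-forest estimator, the placeholder motion/pose summary features, and the leave-participants-out folds below are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: classifying experimental condition from motion/pose
# features. Estimator, feature layout, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: one row per trial, columns are summary statistics of
# gaze/head/body motion (e.g. saccade rate, head-rotation RMS, torso sway).
n_trials, n_features = 120, 12
X = rng.normal(size=(n_trials, n_features))
conditions = rng.choice(["real", "active", "passive"], size=n_trials)
participants = rng.integers(0, 11, size=n_trials)  # eleven participants

def condition_accuracy(cond_a: str, cond_b: str) -> float:
    """Cross-validated accuracy for separating two conditions,
    keeping each participant's trials within a single fold."""
    mask = np.isin(conditions, [cond_a, cond_b])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    cv = GroupKFold(n_splits=5)
    scores = cross_val_score(clf, X[mask], conditions[mask],
                             groups=participants[mask], cv=cv)
    return scores.mean()

# The reported pattern corresponds to the first accuracy exceeding the second.
print("active vs passive:", condition_accuracy("active", "passive"))
print("real   vs active :", condition_accuracy("real", "active"))
```

Grouping the cross-validation folds by participant avoids inflating accuracy through person-specific motion idiosyncrasies; on real recordings the two printed accuracies would quantify how separable the condition pairs are, mirroring the comparison reported in the abstract.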
