Movement and Gaze Behavior in Virtual Audiovisual Listening Environments Resembling Everyday Life

Recent achievements in hearing aid development, such as visually guided hearing aids, make it increasingly important to study movement behavior in everyday situations in order to develop test methods and evaluate hearing aid performance. In this work, audiovisual virtual environments (VEs) were designed for communication conditions in a living room, a lecture hall, a cafeteria, a train station, and a street environment. Movement behavior (head movement, gaze direction, and torso rotation) and electroencephalography (EEG) signals were measured in these VEs in the laboratory for 22 younger and 19 older normal-hearing participants. These data establish a reference against which future studies can compare the movement behavior of hearing-impaired listeners and hearing aid users. Questionnaires were used to evaluate the subjective experience in the VEs. A test–retest comparison showed that the measured movement behavior is reproducible and that the measures of movement behavior used in this study are reliable. Moreover, evaluation of the questionnaires indicated that the VEs are sufficiently realistic: the participants rated the experienced acoustic realism of the VEs positively, and although the experienced visual realism was rated lower, the participants still felt present and involved in the VEs to some extent. Analysis of the movement data showed that movement behavior depends on the VE and on the age of the participant, and that it is predictable in multitalker conversations and in the presence of moving distractors. The VEs and a database of the collected data are publicly available.
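
The abstract reports a test–retest comparison showing that the movement measures are reproducible and reliable. As a hedged illustration only (not the authors' actual analysis pipeline), the Python sketch below shows one common way such reliability could be quantified: compute a per-participant summary of head-yaw movement for each session and correlate the two sessions across participants. The measure (spread of head yaw), the function names, and the simulated data are assumptions introduced here for illustration.

```python
# Minimal sketch, assuming per-participant head-yaw time series (in degrees)
# from two identical measurement sessions. Illustrative only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

def movement_measure(yaw_deg: np.ndarray) -> float:
    """Per-recording summary statistic: spread of head yaw (degrees)."""
    return float(np.std(yaw_deg))

# Simulated data: 20 participants, two sessions of 60 s at 120 Hz.
n_participants, n_samples = 20, 60 * 120
trait = rng.uniform(5.0, 25.0, n_participants)  # stable per-person yaw spread
session1 = [rng.normal(0.0, s, n_samples) for s in trait]
session2 = [rng.normal(0.0, s * rng.uniform(0.9, 1.1), n_samples) for s in trait]

m1 = np.array([movement_measure(y) for y in session1])
m2 = np.array([movement_measure(y) for y in session2])

# A high correlation across participants indicates a reliable (reproducible) measure.
r, p = pearsonr(m1, m2)
print(f"test-retest correlation r = {r:.2f} (p = {p:.3g})")
```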
