A Systematic Review of the Methods and Experiments Aimed at Reducing Front-Back Confusions in Free-Field and Virtual Auditory Environments

Human spatial representation is determined by the interaction of a wide range of stimuli, including visual, neuromotor, and acoustic information. In a virtual acoustic environment that aims to reproduce real-world sound perception by rendering audio stimuli over headphones, the accuracy of the audio presentation is of the utmost importance. A well-known problem affecting binaural audio localization, however, is front-back confusion: the listener perceives sounds coming from the front as coming from the back, and vice versa. Over the years, many theoretical and practical approaches have been devised to reduce the incidence of front-back confusions, including head movement, source movement, sound filtering using early reflections, the simulation of reverberant environments, and anthropometric estimation. This paper studies, reviews, and compares the most relevant methods and experiments designed to decrease the rate of front-back confusion errors for synthesized 3D sound delivered over headphones in virtual auditory displays and in real-world conditions.
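As a brief illustration of why such confusions arise (a sketch for intuition, not a method from the review): under a simple spherical-head model, the interaural time difference (ITD) depends on the sine of the source azimuth, which is identical for a source in front and its mirrored position behind the listener. The sketch below uses Rayleigh's low-frequency approximation with an assumed head radius of 8.75 cm; the function name and parameters are illustrative.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Approximate ITD via Rayleigh's low-frequency spherical-head model:
    ITD = (3a / c) * sin(theta), with azimuth measured from straight ahead."""
    return 3.0 * head_radius_m / speed_of_sound * math.sin(math.radians(azimuth_deg))

# A source 30 degrees to the right in front, and its front-back mirror at 150 degrees:
front = itd_seconds(30.0)
back = itd_seconds(150.0)
# sin(30 deg) == sin(150 deg), so the ITDs coincide: the timing cue alone
# cannot distinguish front from back, which is the front-back ambiguity.
assert abs(front - back) < 1e-12
```

This symmetry (the "cone of confusion") is exactly what the dynamic cues surveyed in the paper, such as head and source movement, help resolve: rotating the head changes the two ITDs in opposite directions, disambiguating front from back.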
