FEASIBILITY OF MULTIPLE NON-SPEECH SOUNDS PRESENTATION USING HEADPHONES

This paper describes a study of listeners' ability to segregate spatially separated sources of non-speech sounds. Short sounds from musical instruments were played over headphones at different spatial positions using either stereo panning or 3-D audio processing with Head-Related Transfer Functions (HRTFs). The number of sound positions was limited to five in this study. One, three, or five sound items were played to the listener; when multiple sounds were presented, four different onset conditions were used, ranging from simultaneous to fully successive replay. The subjects had to spatially discriminate one sound item, i.e. identify a given instrument and locate its position. Performance was assessed by measuring response time and error rate. A preference grading was also included in this test to compare the two headphone presentation techniques employed.
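
The two presentation techniques named above can be illustrated with a minimal sketch: constant-power stereo panning places a mono source between the ears by adjusting left/right gains, while HRTF-based rendering convolves the source with a pair of head-related impulse responses for a given position. The code below is only an illustration of these general techniques; the sample rate, tone stimulus, and placeholder impulse responses are assumptions and do not reflect the stimuli or HRTF set actually used in the study.

```python
# Sketch of the two headphone presentation techniques: constant-power
# stereo panning and binaural rendering by HRTF convolution.
# All signals and impulse responses below are illustrative placeholders.
import numpy as np
from scipy.signal import fftconvolve

FS = 44100  # sample rate in Hz (assumed)

def pan_stereo(mono, azimuth_deg):
    """Constant-power panning: map azimuth in [-90, 90] degrees to a
    left/right gain pair and return an (n, 2) stereo array."""
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)  # 0 .. pi/2
    gain_l, gain_r = np.cos(theta), np.sin(theta)
    return np.column_stack((mono * gain_l, mono * gain_r))

def render_hrtf(mono, hrir_left, hrir_right):
    """Binaural rendering: convolve the mono source with the left- and
    right-ear head-related impulse responses for one source position."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.column_stack((left, right))

# Example: a short 440 Hz tone placed 45 degrees to the right.
t = np.arange(int(0.3 * FS)) / FS
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)

stereo_panned = pan_stereo(tone, 45.0)

# Placeholder impulse responses; a real system would load measured HRIRs
# for the desired source position instead of these toy filters.
hrir_l = np.zeros(256); hrir_l[10] = 0.8   # attenuated, delayed left ear
hrir_r = np.zeros(256); hrir_r[0] = 1.0    # direct path to right ear
binaural = render_hrtf(tone, hrir_l, hrir_r)
```

In a study of this kind, each of the five spatial positions would correspond to a fixed panning gain pair or a fixed pair of measured HRIRs, and the rendered stereo buffers for the one, three, or five items would be mixed with the chosen onset offsets before playback.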
