APPLYING SPATIAL AUDIO TO HUMAN INTERFACES: 25 YEARS OF NASA EXPERIENCE

From a human factors engineering standpoint, the inclusion of spatial audio within a human-machine interface is advantageous in several respects. Demonstrated benefits include the ability to monitor multiple streams of speech and non-speech warning tones by exploiting a ‘cocktail party’ advantage, and improved performance in aurally guided visual search. Other potential benefits include the spatial coordination and interaction of multimodal events, and the evaluation of new communication technologies and alerting systems through virtual simulation. Many of these technologies were developed at NASA Ames Research Center, beginning in 1985. This paper reviews examples and describes the advantages of spatial sound in NASA-related technologies, including space operations, aeronautics, and search and rescue. The work has involved hardware and software development as well as basic and applied research.
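
To make the underlying rendering technique concrete, the following is a minimal sketch, in Python with NumPy/SciPy, of binaural spatialization by convolving a monaural source with a left/right pair of head-related impulse responses (HRIRs). The signal and 128-tap HRIRs below are synthetic placeholders, not measurements from any NASA system; real-time displays of the kind reviewed here additionally interpolate measured HRIRs and update them as the listener's head moves.

```python
# Minimal sketch of the core operation behind binaural spatial audio rendering:
# convolve a monaural source with left- and right-ear head-related impulse
# responses (HRIRs) for the desired direction. All data here are synthetic
# placeholders used only to illustrate the signal flow.
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo for one static direction."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    out = np.stack([left, right], axis=-1)
    # Normalize to avoid clipping when writing to a fixed-point audio file.
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out

# Example with synthetic data: a white-noise burst and made-up 128-tap HRIRs.
fs = 44100
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 2)          # 0.5 s noise burst
hrir_l = rng.standard_normal(128) * np.hanning(128)
hrir_r = np.roll(hrir_l, 8)                    # crude stand-in for interaural delay
binaural = spatialize(source, hrir_l, hrir_r)  # shape: (samples, 2)
```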
