3D sound memory in virtual environments

Virtual auditory environments (VAEs) are created by processing digital sounds so that they convey a 3D location to the listener. This technology has the potential to augment systems in which an operator tracks the positions of targets. Prior work has established that listeners can locate sounds in VAEs; however, less is known about listener memory for virtual sounds. In this study, three experimental tasks assessed listener recall of sound positions and identities using free and cued recall, with one or more delays. Overall, accuracy degraded as listeners recalled the environment; however, listeners exhibited less degradation under free recall.
