Real-Time Implementation of a General Model for Spatial Processing of Sounds

We perceive sounds in a spatial context. Without visual cues, we can often tell the direction or distance from which a sound comes. We also perceive things about the apparent acoustic environment of sounds, such as whether they seem to come from a reverberant cave or a padded cell. Multichannel recordings can portray the spatial characteristics of recorded sounds independent of listening conditions. In ways analogous to looking through windows, we can discern things about one acoustic environment through headphones or loudspeakers while we move about in another. Ideally, spatial processing of sounds would allow us to have complete control over the acoustic environment heard through the loudspeakers. Each sound located within this heard environment could have a specified "size," direction, distance, and apparent motion. We can use computers to gain such control over the spatial characteristics of sounds, but for musical applications we must always specify the acoustic processing we believe will produce the intended psychological effect. Spatial processing therefore involves the simultaneous consideration of two sets of problems: the physical characteristics of a space to be simulated and the psychological characteristics of sounds presented to listeners over loudspeakers.

The work described in this article consists of (1) a conceptual model for representing the problem of spatial processing and (2) a description of an implementation of this model in the context of the Cmusic sound synthesis program (Moore 1982).
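To make the idea of specifying a sound's direction and distance concrete, here is a minimal Python sketch of two of the simplest cues such processing might compute for each source: inverse-distance attenuation, constant-power panning between a pair of loudspeakers, and a propagation delay. The function name, the 1/distance rule, and the panning law are illustrative assumptions for this sketch, not the actual model or the Cmusic interface described in the article.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature


def spatialize(azimuth_deg, distance_m):
    """Return (left_gain, right_gain, delay_s) for a point source.

    Illustrative assumptions only:
    - amplitude falls off roughly as 1/distance,
    - constant-power panning maps azimuth onto two loudspeakers,
    - the delay models propagation time for the apparent distance.
    """
    distance_m = max(distance_m, 1.0)          # avoid blow-up very close to the listener
    gain = 1.0 / distance_m                    # inverse-distance attenuation
    delay_s = distance_m / SPEED_OF_SOUND      # propagation delay in seconds

    # Map azimuth (-90 deg = hard left, +90 deg = hard right) to a pan angle in [0, pi/2]
    pan = (max(-90.0, min(90.0, azimuth_deg)) + 90.0) / 180.0 * (math.pi / 2.0)
    left_gain = gain * math.cos(pan)           # cos^2 + sin^2 = 1 keeps total power constant
    right_gain = gain * math.sin(pan)
    return left_gain, right_gain, delay_s


if __name__ == "__main__":
    # A source 30 degrees to the right of center, 5 meters away
    print(spatialize(30.0, 5.0))
```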

References

[1] S. S. Stevens et al. Neural events and psychophysical law. Science, 1971.

[2] Davide Rocchesso et al. Sound Spatializations in Real Time by First-Reflection Simulation. ICMC, 1994.

[3] Juan G. Roederer et al. Introduction to the Physics and Psychophysics of Music. 1973.

[4] F. Richard Moore et al. Spatialization of Sounds over Loudspeakers. 1989.

[5] Manfred R. Schroeder et al. Toward Better Acoustics for Concert Halls. 1980.

[6] F. Richard Moore et al. The Computer Audio Research Laboratory at UCSD. 1982.

[7] J. A. Molino. Psychophysical verification of predicted interaural differences in localizing distant sound sources. The Journal of the Acoustical Society of America, 1974.

[8] Klaus Wendt. The Transmission of Room Information. 1961.

[9] Max V. Mathews et al. The Technology of Computer Music. 1970.

[10] M. Schroeder. Binaural dissimilarity and optimum ceilings for concert halls: More lateral sound diffusion. 1979.

[11] Miller Puckette et al. Pure Data. ICMC, 1997.

[12] Mark B. Gardner. Binaural Detection of Single Frequency Signals in the Presence of Noise. 1961.

[13] J. Blauert. Spatial Hearing: The Psychophysics of Human Sound Localization. 1983.

[14] John M. Chowning et al. The Simulation of Moving Sound Sources. 1970.

[15] James A. Moorer et al. About This Reverberation Business. 1978.

[16] David Zicarelli et al. An Extensible Real-time Signal Processing Environment for Max. ICMC, 1998.