In implementing a spatial auditory display, many engineering compromises must be made to achieve a practical system. One such compromise involves devising methods for interpolating between the head-related transfer functions (HRTFs) used to synthesize spatial stimuli, in order to achieve smooth motion trajectories and locations at finer resolutions than the empirical data provide. The perceptual consequences of interpolation can only be assessed by psychophysical studies. This paper compares three subjects' localization judgments for stimuli synthesized from non-interpolated HRTFs, simple linear interpolations of the empirical HRTFs, non-interpolated minimum-phase approximations of the HRTFs, and linear interpolations of the minimum-phase HRTFs. The empirical HRTFs were derived from a different subject (SDO) in a previous study by Wightman and Kistler (1989); SDO's data are provided with the Convolvotron synthetic 3D audio system. In general, the three subjects showed the same high rates of front-back and up-down confusions observed in a recent experiment using non-individualized (non-interpolated) transforms from SDO. However, there were no obvious differences in localization accuracy between the different synthesis conditions.
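The two operations compared in the abstract can be illustrated in a minimal sketch: reconstructing a minimum-phase impulse response from an HRTF magnitude spectrum (here via the standard real-cepstrum method), and sample-by-sample linear interpolation between two impulse responses. This is an assumption-laden illustration of the general techniques, not the paper's actual synthesis code; the function names and the toy spectra are hypothetical.

```python
import numpy as np

def min_phase_hrir(mag):
    """Reconstruct a minimum-phase impulse response from a magnitude
    spectrum (full FFT length, conjugate-symmetric, nonzero) using the
    real-cepstrum folding method. Sketch only; real HRTF processing
    must handle measurement noise and spectral zeros."""
    n = len(mag)
    cep = np.real(np.fft.ifft(np.log(mag)))   # real cepstrum of the magnitude
    win = np.zeros(n)                          # fold negative-time cepstrum
    win[0] = 1.0                               # onto positive time
    win[1:n // 2] = 2.0
    win[n // 2] = 1.0
    return np.real(np.fft.ifft(np.exp(np.fft.fft(cep * win))))

def interp_hrir(h_a, h_b, alpha):
    """Simple linear interpolation between two impulse responses;
    alpha in [0, 1] moves from h_a to h_b. For minimum-phase HRIRs the
    onset delay has been removed, which is what makes this sample-wise
    blend plausible; applied to raw empirical HRIRs, differing delays
    can cause comb-filter artifacts."""
    return (1.0 - alpha) * np.asarray(h_a) + alpha * np.asarray(h_b)
```

The motivation for interpolating minimum-phase rather than raw empirical responses is that minimum-phase reconstruction time-aligns the impulse responses, so a linear blend interpolates the spectral shape instead of mixing misaligned onsets.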
[1] F. L. Wightman et al., "Perceptual consequences of engineering compromises in synthesis of virtual auditory objects," 1992.
[2] F. Wightman et al., "A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction," The Journal of the Acoustical Society of America, 1992.
[3] F. L. Wightman et al., "Headphone simulation of free-field listening. I: Stimulus synthesis," The Journal of the Acoustical Society of America, 1989.
[4] F. L. Wightman et al., "Localization using nonindividualized head-related transfer functions," The Journal of the Acoustical Society of America, 1993.
[5] S. H. Foster et al., "A Virtual Display System for Conveying Three-Dimensional Acoustic Information," 1988.