Localising synthesised spatial audio filtered through a generalised HRTF

The presentation of spatial audio through binaural headphones requires the source signal first to be filtered through Head-Related Transfer Functions (HRTFs), which provide localisation cues in the form of Interaural Time Differences (ITDs), Interaural Intensity Differences (IIDs), and spectral cues. These cues arise naturally from sound interacting with an individual's physical features, such as the shape of the pinna, head, and shoulders. Incorporating a generalised (non-individual) HRTF into an audio display is cost-effective; however, non-individual HRTFs provide less identifiable cues for sound localisation. The purpose of the current study was to determine localisation accuracy in azimuth for white-noise signals filtered through a generalised HRTF database using Sound Lab (SLAB). These data provide baseline information for subsequent experiments on optimising the presentation of sound using non-individual spatial cues. Eleven untrained participants listened to signals presented randomly at 10-degree increments in azimuth and at -20, 0, and +20 degrees elevation. Azimuth localisation accuracy was comparable to that reported in previous studies. The generalised HRTF caused localisation estimates to be biased considerably toward the ipsilateral side of the interaural axis, and approximately 20% of signals resulted in front-back confusions. Asymmetrical performance was also observed, which may support previous claims that spatial audio processing ability differs between the cerebral hemispheres.
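As a brief illustration of one of the cues discussed above, the sketch below estimates the ITD for a source at a given azimuth using the classic Woodworth spherical-head approximation, ITD = (r/c)(θ + sin θ). This is not the method used in the study (SLAB applies measured HRTF filters); the head radius and speed of sound here are generic assumed values, not parameters from the experiment.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c_m_per_s=343.0):
    """Approximate interaural time difference (seconds) for a distant
    source, using the Woodworth spherical-head model:
        ITD = (r / c) * (theta + sin(theta)),
    where theta is the azimuth in radians (0 = straight ahead,
    90 = directly opposite one ear). Head radius and speed of sound
    are assumed typical values, not measured ones."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c_m_per_s) * (theta + math.sin(theta))

# ITD grows from zero on the median plane to roughly 0.66 ms at 90 degrees,
# which is the cue magnitude the auditory system uses for azimuth judgements.
for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> {woodworth_itd(az) * 1e6:6.1f} us")
```

Because the model is symmetric about the interaural axis, a source at 90 + x degrees produces the same ITD as one at 90 - x degrees, which is one way to see why front-back confusions of the kind reported above arise when spectral cues from a non-individual HRTF are ambiguous.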