Real-time GPU-based convolution: a follow-up

The generation of accurate spatial audio is computationally very demanding and is therefore typically overlooked in games and virtual environment applications, diminishing the user's sense of presence and immersion. Previous work has examined the application of the graphics processing unit (GPU) to the generation of real-time spatial audio. In particular, a GPU-based convolution method was developed that allowed for real-time convolution between an arbitrarily sized auditory signal and a filter. Despite the large computational savings, that GPU-based method introduced noise and artifacts into the lower-order bytes of the resulting output signal, which may have had perceptual consequences. This work builds upon the previous GPU-based convolution method and describes a GPU-based convolution method that, by employing a superior GPU, eliminates the noise and artifacts of the previous method while providing further computational savings.
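As a rough illustration of the underlying technique (a minimal sketch of frequency-domain convolution on the GPU, not the method described in this work, whose implementation details are not given here), the code below applies the convolution theorem with cuFFT: the signal block and the filter are transformed, their spectra are multiplied pointwise, and an inverse transform yields the convolved output. All sizes and names here (sigLen, filtLen, complexMulScale) are hypothetical.

```cuda
// fft_convolve.cu -- minimal sketch of GPU frequency-domain convolution.
// Build (assuming the CUDA toolkit): nvcc fft_convolve.cu -lcufft
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
#include <cufft.h>

// Pointwise complex multiply with 1/N scaling (cuFFT transforms are unnormalized).
__global__ void complexMulScale(cufftComplex* a, const cufftComplex* b,
                                int n, float scale) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        cufftComplex x = a[i], y = b[i];
        a[i].x = (x.x * y.x - x.y * y.y) * scale;
        a[i].y = (x.x * y.y + x.y * y.x) * scale;
    }
}

int main() {
    const int sigLen  = 4096;  // one block of the input signal (hypothetical size)
    const int filtLen = 512;   // e.g. an HRTF-length filter (hypothetical size)
    int fftLen = 1;            // next power of two >= sigLen + filtLen - 1
    while (fftLen < sigLen + filtLen - 1) fftLen <<= 1;
    const int specLen = fftLen / 2 + 1;  // length of a real-to-complex spectrum

    // Zero-padded host buffers: an impulse signal and a ramp filter, so the
    // convolution result should reproduce the filter (an easy sanity check).
    float *h_sig  = (float*)calloc(fftLen, sizeof(float));
    float *h_filt = (float*)calloc(fftLen, sizeof(float));
    h_sig[0] = 1.0f;
    for (int i = 0; i < filtLen; ++i) h_filt[i] = (float)i / filtLen;

    float *d_sig, *d_filt;
    cufftComplex *d_sigF, *d_filtF;
    cudaMalloc(&d_sig,  fftLen * sizeof(float));
    cudaMalloc(&d_filt, fftLen * sizeof(float));
    cudaMalloc(&d_sigF,  specLen * sizeof(cufftComplex));
    cudaMalloc(&d_filtF, specLen * sizeof(cufftComplex));
    cudaMemcpy(d_sig,  h_sig,  fftLen * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_filt, h_filt, fftLen * sizeof(float), cudaMemcpyHostToDevice);

    cufftHandle fwd, inv;
    cufftPlan1d(&fwd, fftLen, CUFFT_R2C, 1);
    cufftPlan1d(&inv, fftLen, CUFFT_C2R, 1);

    // Convolution theorem: transform both, multiply spectra, inverse transform.
    cufftExecR2C(fwd, d_sig,  d_sigF);
    cufftExecR2C(fwd, d_filt, d_filtF);
    int threads = 256, blocks = (specLen + threads - 1) / threads;
    complexMulScale<<<blocks, threads>>>(d_sigF, d_filtF, specLen, 1.0f / fftLen);
    cufftExecC2R(inv, d_sigF, d_sig);  // time-domain result overwrites d_sig

    cudaMemcpy(h_sig, d_sig, fftLen * sizeof(float), cudaMemcpyDeviceToHost);
    printf("y[0..3] = %f %f %f %f (expect ~filter taps)\n",
           h_sig[0], h_sig[1], h_sig[2], h_sig[3]);

    cufftDestroy(fwd); cufftDestroy(inv);
    cudaFree(d_sig); cudaFree(d_filt); cudaFree(d_sigF); cudaFree(d_filtF);
    free(h_sig); free(h_filt);
    return 0;
}
```

Note that a sketch like this runs entirely in single-precision floating point, one plausible way to avoid the kind of lower-order-byte artifacts the earlier method exhibited on more limited GPU hardware.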
