Personalized 3D sound rendering for content creation, delivery, and presentation

Advanced models for 3D audio rendering are increasingly needed in the networked electronic media world and play a central role within the strategic research objectives identified in the NEM research agenda. This paper presents a model for sound spatialization that offers additional features with respect to existing systems: it is parametrized according to anthropometric information of the user, and it is based on audio processing with low-order filters, allowing a significant reduction of computational cost. This technology can offer a transversal contribution to the NEM research objectives concerning content creation and adaptation, intelligent delivery, and augmented media presentation, by improving the quality of the immersive experience in contexts where realistic spatialization and personalized sound reproduction are key requirements, in particular mobile contexts with headphone-based rendering.
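To illustrate the kind of anthropometry-parametrized, low-order processing the abstract alludes to, the sketch below implements a first-order head-shadow filter in the style of Brown and Duda's spherical-head model, whose corner frequency scales with the listener's head radius. This is an illustrative stand-in under stated assumptions, not necessarily the authors' model; the function names and default values are hypothetical.

```python
import math

def head_shadow_coeffs(azimuth_deg, head_radius_m=0.0875, fs=44100, c=343.0):
    """First-order head-shadow filter coefficients (Brown-Duda style).

    The head radius is the anthropometric parameter: a larger head lowers
    the shadow corner frequency beta = 2c/a. Hypothetical helper, shown
    only as an example of low-order anthropometry-driven filtering.
    """
    theta_min, alpha_min = 150.0, 0.1
    # Azimuth-dependent high-frequency gain: ~2 toward the ear,
    # alpha_min on the fully shadowed side.
    alpha = (1 + alpha_min / 2) + (1 - alpha_min / 2) * math.cos(
        math.pi * azimuth_deg / theta_min)
    beta = 2.0 * c / head_radius_m        # corner, scales with head size
    k = 2.0 * fs
    # Bilinear transform of H(s) = (alpha*s + beta) / (s + beta)
    b0 = (k * alpha + beta) / (k + beta)
    b1 = (beta - k * alpha) / (k + beta)
    a1 = (beta - k) / (k + beta)
    return b0, b1, a1

def apply_filter(x, coeffs):
    """Run the one-pole, one-zero filter over a signal (direct form I)."""
    b0, b1, a1 = coeffs
    y, x1, y1 = [], 0.0, 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 - a1 * y1
        y.append(yn)
        x1, y1 = xn, yn
    return y
```

By construction the filter has unity gain at DC and gain alpha at high frequencies, so a single multiply-add per sample per ear suffices to approximate interaural level differences, consistent with the low computational cost the abstract emphasizes.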