Perceptual control of convolution-based room simulators

Reverberation processing has been studied intensively in audio and acoustics research for many years. Early approaches used feedback delay networks to control the temporal distribution of reflections and to simulate the statistical properties of room reverberation. Thanks to the increase in processing power and the development of low-latency convolution algorithms, a new generation of reverberation processors has emerged. These processors apply room impulse responses (RIRs) measured in real concert halls and thus offer natural and authentic-sounding reverberation. Extending this approach to higher-order spherical microphone arrays provides the means for analyzing the spatiotemporal distribution of acoustic energy. This space-time-frequency representation of the acoustic wave field is also referred to in the literature as a directional room impulse response (DRIR). The objective of the presented work is to develop a perceptually motivated signal-processing environment based on the analysis and re-synthesis of DRIRs. It first extracts perceptual features (e.g., source presence and listener envelopment) from measured DRIRs and thus provides a perceptual signature of the measured room. The room-acoustic behavior can then be modified along the various perceptual dimensions, preserving the microstructure of the original RIRs, before being re-synthesized for use with convolution-based reverberation processors.
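To make the underlying convolution-based processing concrete, the following Python sketch convolves a dry signal with a measured RIR and illustrates one hypothetical way a single perceptual dimension (here, late-reverberation level) could be adjusted while leaving the early-reflection microstructure untouched. The function names, the 80 ms early/late split, and the gain parameter are illustrative assumptions and do not correspond to the specific perceptual model described here.

```python
import numpy as np
from scipy.signal import fftconvolve


def convolution_reverb(dry, rir):
    """Apply a measured room impulse response to a dry signal by convolution."""
    wet = fftconvolve(dry, rir)
    # Normalize to avoid clipping
    return wet / np.max(np.abs(wet))


def scale_late_reverberation(rir, fs, gain_db, split_ms=80.0):
    """Illustrative modification of one perceptual dimension: scale the late
    part of the RIR (after `split_ms`) while keeping the early reflections,
    i.e. the microstructure, untouched."""
    split = int(fs * split_ms / 1000.0)
    modified = rir.copy()
    modified[split:] *= 10.0 ** (gain_db / 20.0)
    return modified


# Example with placeholder signals: white noise as dry input,
# an exponentially decaying noise burst as a stand-in for a measured RIR.
fs = 48000
dry = np.random.randn(fs)
rir = np.random.randn(fs) * np.exp(-np.linspace(0.0, 8.0, fs))
wet = convolution_reverb(dry, scale_late_reverberation(rir, fs, gain_db=-6.0))
```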