Comparison of Modal Versus Delay-and-Sum Beamforming in the Context of Data-Based Binaural Synthesis

Several approaches to data-based binaural synthesis have been published that capture a sound field by means of a spherical microphone array. The captured sound field is typically decomposed into plane waves, which are then auralized using head-related transfer functions (HRTFs). The decomposition into plane waves is often based on modal beamforming techniques, which represent the captured sound field with respect to surface spherical harmonics. An efficient and numerically stable approximation to modal beamforming is the delay-and-sum technique. This paper compares these two beamforming techniques in the context of data-based binaural synthesis. Their frequency- and time-domain properties are investigated, as well as the perceptual properties of the resulting binaural synthesis according to a binaural model.
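The delay-and-sum principle mentioned above can be illustrated with a minimal sketch: phase-align (delay) the microphone signals for a chosen look direction and average them, so that a plane wave from that direction sums coherently. The sketch below is an assumption-laden toy, not the paper's implementation: it uses a hypothetical open-sphere array of 32 randomly placed omnidirectional microphones (no scattering body), a single frequency, and a horizontal-plane scan.

```python
import numpy as np

# Hypothetical open-sphere array (an assumption; the paper's array geometry may differ):
# N omnidirectional mics on a sphere of radius R, no scattering body.
rng = np.random.default_rng(0)
N, R, c, f = 32, 0.05, 343.0, 2000.0   # mics, radius [m], speed of sound [m/s], frequency [Hz]
k = 2 * np.pi * f / c                  # wavenumber

# Random microphone positions on the sphere surface
v = rng.normal(size=(N, 3))
mics = R * v / np.linalg.norm(v, axis=1, keepdims=True)

# Simulate a unit-amplitude plane wave arriving from direction d0
d0 = np.array([1.0, 0.0, 0.0])
p = np.exp(1j * k * (mics @ d0))       # complex sound pressure at the mics

def delay_and_sum(look):
    """Steer to `look`: undo the expected phase (i.e. the delay) and average."""
    w = np.exp(-1j * k * (mics @ look)) / N
    return np.abs(np.sum(w * p))

# Scan look directions in the horizontal plane
az = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
resp = np.array([delay_and_sum(np.array([np.cos(a), np.sin(a), 0.0])) for a in az])

# The beamformer output peaks at the true arrival direction (0 degrees here)
print(np.degrees(az[np.argmax(resp)]))
```

In a modal beamformer the same steering would instead be expressed in the spherical-harmonic domain with mode-strength (radial) equalization filters; the delay-and-sum weights above avoid those potentially ill-conditioned filters, which is the numerical-stability advantage the abstract refers to.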
