Correlation and stationarity of speech radiation: consequences for linear multichannel filtering

Speech processing with multichannel microphone systems often relies on slowly adapting linear filters. Such systems can extract a single source from a mixture (and suppress the others) if the speech radiation can be described by a linear, time-invariant transfer function. Here, we test this assumption using a two-channel microphone array and a human talker as the speech source. We measure the correlation between the signals received at the two microphones for individual phonemes using the magnitude squared coherence. Stationarity is assessed by comparing optimal filters between different phoneme pairs using the system distance. We find that, particularly for fricatives, the coherence of the speech signals radiated in different directions is very low. We also find that the transfer functions from the mouth to the microphones differ significantly between vowels, depending on the locations of the two microphones. These measurements show that the general mixing model does not hold for speech with arbitrary microphone setups, and that multichannel microphone systems must be carefully designed.
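The two measures named above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: a Welch-style estimate of the magnitude squared coherence C_xy(f) = |S_xy|^2 / (S_xx S_yy) between two microphone signals, and the (assumed here as the normalized filter mismatch in dB) system distance between two filter impulse responses. Segment length and windowing are arbitrary choices for the sketch.

```python
import numpy as np

def magnitude_squared_coherence(x, y, nperseg=256):
    """Welch-style estimate of C_xy(f) = |S_xy|^2 / (S_xx * S_yy),
    with spectra averaged over half-overlapping Hann-windowed segments."""
    window = np.hanning(nperseg)
    step = nperseg // 2
    n_seg = (len(x) - nperseg) // step + 1
    Sxx = Syy = Sxy = 0.0
    for k in range(n_seg):
        seg = slice(k * step, k * step + nperseg)
        X = np.fft.rfft(window * x[seg])
        Y = np.fft.rfft(window * y[seg])
        Sxx = Sxx + np.abs(X) ** 2      # auto-spectrum of x
        Syy = Syy + np.abs(Y) ** 2      # auto-spectrum of y
        Sxy = Sxy + X * np.conj(Y)      # cross-spectrum
    return np.abs(Sxy) ** 2 / (Sxx * Syy)

def system_distance_db(h1, h2):
    """System distance between two impulse responses, taken here as the
    normalized mismatch energy 10*log10(||h1 - h2||^2 / ||h1||^2)."""
    return 10.0 * np.log10(np.sum((h1 - h2) ** 2) / np.sum(h1 ** 2))
```

For identical inputs the coherence estimate is 1 at every frequency; for independent noise it drops toward 1/n_seg as more segments are averaged, which is why low measured coherence between the two channels is informative.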