Statistical graphical models for scene analysis, source separation and other audio applications

Manuel J. Reyes Gómez

The problem of separating overlapping sound sources has long been a research goal in sound processing, not least because of the apparent ease with which we as listeners achieve perceptual separation and isolation of sound sources in our everyday experience. Human listeners use prior knowledge of all the sound classes they have encountered throughout their lives to impose constraints on the form that the elements of a mixture can take, and they use information obtained from partial observation of the unmixed context to disambiguate components whose energy is locally swamped by interfering sources. Researchers working on this problem (Ellis 1996) argue that, just as human listeners rely on such top-down knowledge, prior constraints on the form that mixture components can take are the critical ingredient in making source separation systems work.

In this thesis, we propose to encode these constraints as models that capture the statistical distributions of the features of mixture components, using the framework of statistical graphical models, and then to use those models to estimate obscured or corrupted portions of a signal from partial observations. Our overarching goal is to explain the observed mixture as a composition of the models of the individual sources.

After reviewing the basic statistical tools, this dissertation describes three models of this kind. The first uses multiple-microphone recordings from reverberant rooms, combined in a filter-and-sum setup; the filter coefficients are optimized to match the system output against a model of speech taken from a speech recognizer. The second model addresses the more difficult case of a single-channel recording, and handles the tractability problems posed by the very large number of states required by decomposing the signal into subbands. The final model provides very precise fits to source signals without an enormous dictionary of prototypes; instead it exploits the observation that much of a real-world signal can be described as systematic local spectral deformations of adjacent time frames, so that by inferring these deformations between occasional spectral templates, the entire sound is accurately described. For this last model, we show in detail how a mixture of two sources can be segmented at points where local deformations do not provide an adequate explanation, delineating regions dominated by a single source. Individual sources can then be reconstructed by interpolating the deformation parameters, yielding estimates of the mixture components even when they are hidden behind high-energy maskers.

Although acoustic scene analysis and source separation are used as motivating and illustrative applications throughout, the intrinsic descriptions of the nature of sound sources captured by these models could have other, broader applications in signal recognition, compression, and modification, and, beyond audio, in other domains where signal properties have the appropriate nontrivial local structure.
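As a concrete illustration of the filter-and-sum front end used by the first model, the following minimal Python sketch convolves each microphone channel with its own FIR filter and sums the results. The channel count, filter length, and the hand-set delay-and-sum coefficients are illustrative assumptions only; the thesis optimizes the coefficients against a speech model rather than setting them by hand.

```python
# Minimal filter-and-sum sketch (assumed setup, not the thesis's exact code):
# each microphone channel gets its own FIR filter; outputs are summed.
import numpy as np

def filter_and_sum(channels: np.ndarray, filters: np.ndarray) -> np.ndarray:
    """channels: (n_mics, n_samples); filters: (n_mics, filter_len).
    Returns the single-channel output of length n_samples + filter_len - 1."""
    n_mics, n_samples = channels.shape
    y = np.zeros(n_samples + filters.shape[1] - 1)
    for m in range(n_mics):
        # Convolve each channel with its filter, then accumulate.
        y += np.convolve(channels[m], filters[m])
    return y

# Toy example: the target reaches the second microphone one sample late,
# so a delay-and-sum choice of filters realigns the two channels.
rng = np.random.default_rng(0)
target = rng.standard_normal(1000)
channels = np.stack([target, np.roll(target, 1)])
filters = np.zeros((2, 8))
filters[0, 1] = 0.5   # delay channel 0 by one sample
filters[1, 0] = 0.5   # pass channel 1 unchanged
enhanced = filter_and_sum(channels, filters)
```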
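The subband decomposition behind the second model can be sketched as splitting a magnitude spectrogram into contiguous frequency bands, each of which would then be modeled separately (for example, by its own small hidden Markov model) instead of one full-band model whose state inventory must cover every spectral shape. This is a simplified illustration assuming a Hann-windowed STFT and equal-width bands; `stft_mag` and `split_subbands` are hypothetical helper names, not code from the thesis.

```python
# Sketch of subband decomposition for tractability (assumed parameters).
import numpy as np

def stft_mag(x: np.ndarray, n_fft: int = 512, hop: int = 256) -> np.ndarray:
    """Magnitude STFT; rows are frequency bins, columns are time frames."""
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * window
                       for i in range(n_frames)], axis=1)
    return np.abs(np.fft.rfft(frames, axis=0))

def split_subbands(spec: np.ndarray, n_bands: int = 4) -> list[np.ndarray]:
    """Split the bin axis into contiguous, roughly equal-width subbands."""
    edges = np.linspace(0, spec.shape[0], n_bands + 1).astype(int)
    return [spec[edges[b]:edges[b + 1]] for b in range(n_bands)]

rng = np.random.default_rng(0)
spec = stft_mag(rng.standard_normal(16000))
bands = split_subbands(spec, n_bands=4)
# With K states per band and B bands, the factorial combination spans K**B
# full-band spectral shapes while each per-band model needs only K states.
```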
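The deformation idea behind the final model can be caricatured with whole-bin integer frequency shifts: each spectrogram frame is explained as a small shift of its predecessor, and the shifts in a masked region are filled in by interpolating from the surrounding frames. This only illustrates the infer-then-interpolate pattern under strong simplifying assumptions (circular integer shifts, linear interpolation); the thesis uses richer local transformations.

```python
# Sketch of local-deformation inference and interpolation (assumptions above).
import numpy as np

def best_shift(prev: np.ndarray, cur: np.ndarray, max_shift: int = 3) -> int:
    """Integer frequency shift of `prev` that best matches `cur` (L2 error).
    np.roll wraps circularly, a simplification acceptable for this toy."""
    errs = [np.sum((np.roll(prev, s) - cur) ** 2)
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

def reconstruct(anchor: np.ndarray, shifts: list[int]) -> np.ndarray:
    """Re-synthesize all frames by accumulating shifts from an anchor frame."""
    frames, cur = [anchor], anchor
    for s in shifts:
        cur = np.roll(cur, s)
        frames.append(cur)
    return np.stack(frames, axis=1)

# Toy signal: a spectral peak drifting upward by one bin per frame.
n_bins, n_frames = 64, 10
spec = np.zeros((n_bins, n_frames))
for t in range(n_frames):
    spec[20 + t, t] = 1.0

shifts = [best_shift(spec[:, t], spec[:, t + 1]) for t in range(n_frames - 1)]
# Suppose frames 5-6 are hidden behind a high-energy masker: interpolate
# their deformation parameters from the neighboring observed frames.
shifts[4:6] = [int(round(np.mean([shifts[3], shifts[6]])))] * 2
estimate = reconstruct(spec[:, 0], shifts)
```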

[1] Michael I. Jordan et al. Learning Spectral Clustering. NIPS, 2003.
[2] Scott Rickard et al. Blind separation of speech mixtures via time-frequency masking. IEEE Transactions on Signal Processing, 2004.
[3] Michael I. Jordan et al. An Introduction to Graphical Models. 2001.
[4] R. M. Warren. Perceptual Restoration of Missing Speech Sounds. Science, 1970.
[5] Jeff A. Bilmes et al. Data-driven extensions to HMM statistical dependencies. ICSLP, 1998.
[6] Andreas Stolcke et al. The Meeting Project at ICSI. HLT, 2001.
[7] John R. Hershey et al. Audio-Visual Sound Separation Via Hidden Markov Models. NIPS, 2001.
[8] Daniel P. W. Ellis et al. Multi-channel source separation by factorial HMMs. Proc. ICASSP, 2003.
[9] John R. Hershey et al. Single microphone source separation using high resolution signal reconstruction. Proc. ICASSP, 2004.
[10] Keansub Lee et al. Minimal-impact audio-based personal archives. CARPE, 2004.
[11] David Pearce et al. The Aurora experimental framework for the performance evaluation of speech recognition systems under noisy conditions. INTERSPEECH, 2000.
[12] Michael I. Jordan et al. On Spectral Clustering: Analysis and an algorithm. NIPS, 2001.
[13] Guy J. Brown et al. Separation of speech from interfering sounds based on oscillatory correlation. IEEE Transactions on Neural Networks, 1999.
[14] X. Jin. Factor graphs and the Sum-Product Algorithm. 2002.
[15] William T. Freeman et al. Understanding belief propagation and its generalizations. 2003.
[16] Hervé Bourlard et al. Subband-based speech recognition. Proc. ICASSP, 1997.
[17] Richard M. Stern et al. Speech recognizer-based microphone array processing for robust hands-free speech recognition. Proc. ICASSP, 2002.
[18] Brendan J. Frey et al. Learning flexible sprites in video layers. Proc. CVPR, 2001.
[19] Sam T. Roweis et al. One Microphone Source Separation. NIPS, 2000.
[20] D. E. Davies et al. Array signal processing. 1983.
[21] William T. Freeman et al. Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology. Neural Computation, 1999.
[22] Mitchel Weintraub et al. A theory and computational model of auditory monaural sound separation. 1985.
[23] S. Chen. Speaker, Environment and Channel Change Detection and Clustering via the Bayesian Information Criterion. 1998.
[24] Aapo Hyvärinen et al. Survey on Independent Component Analysis. 1999.
[25] Michael I. Jordan et al. Factorial Hidden Markov Models. Machine Learning, 1995.
[26] Daniel Patrick Whittlesey Ellis. Prediction-driven computational auditory scene analysis. 1996.
[27] Michael I. Jordan et al. Blind One-microphone Speech Separation: A Spectral Learning Approach. NIPS, 2004.
[28] Geoffrey E. Hinton et al. A View of the EM Algorithm that Justifies Incremental, Sparse, and Other Variants. Learning in Graphical Models, 1998.
[29] C. Burrus et al. Array Signal Processing. 1989.
[30] Daniel P. W. Ellis et al. Decoding speech in the presence of other sources. Speech Communication, 2005.
[31] M. J. Reyes-Gomez et al. Multi-channel source separation by beamforming trained with factorial HMMs. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2003.
[32] B. Kollmeier et al. Real-time multiband dynamic compression and noise reduction for binaural hearing aids. Journal of Rehabilitation Research and Development, 1993.
[33] Sam T. Roweis et al. Factorial models and refiltering for speech separation and denoising. INTERSPEECH, 2003.
[34] Brendan J. Frey et al. Epitomic analysis of appearance and shape. Proc. ICCV, 2003.
[35] Martin Cooke. Modelling auditory processing and organisation. Distinguished Dissertations in Computer Science, 1993.
[36] Guy J. Brown. Computational auditory scene analysis: a representational approach. 1993.
[37] Assaf Zomet et al. Learning to Perceive Transparency from the Statistics of Natural Scenes. NIPS, 2002.