Mask estimation based on sound localisation for missing data speech recognition