Speech Segregation Using an Event-synchronous Auditory Image and STRAIGHT
[1] Roy D. Patterson, et al. Speech segregation based on fundamental event information using an auditory vocoder. INTERSPEECH, 2003.
[2] Hideki Kawahara, et al. Restructuring speech representations using a pitch-adaptive time-frequency smoothing and an instantaneous-frequency-based F0 extraction: Possible role of a repetitive structure in sounds. Speech Communication, 1999.
[3] Alan V. Oppenheim, et al. Evaluation of an adaptive comb filtering method for enhancing speech degraded by white noise addition. 1978.
[4] Roy D. Patterson, et al. Speech segregation using event synchronous auditory vocoder. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003.
[5] Roy D. Patterson, et al. Time-domain modeling of peripheral auditory processing: A modular architecture and a software platform. The Journal of the Acoustical Society of America, 1995.
[6] T. W. Parsons. Separation of speech from interfering speech by means of harmonic selection. 1976.
[7] Roy D. Patterson, et al. Speech segregation using an auditory vocoder with event-synchronous enhancements. IEEE Transactions on Audio, Speech, and Language Processing, 2006.
[8] B. Gold, et al. The channel vocoder. 1967.
[9] Tomohiro Nakatani, et al. Robust fundamental frequency estimation against background noise and spectral distortion. INTERSPEECH, 2002.
[10] Roy D. Patterson, et al. Complex sounds and auditory images. 1992.