Logatom articulation index evaluation of speech enhanced by blind source separation and single-channel noise reduction

The subjective logatom articulation index of speech signals enhanced by various digital signal processing methods has been measured. To improve intelligibility, the convolutive blind source separation (BSS) algorithm by Parra and Spence [1] has been used in combination with classical denoising algorithms. The effectiveness of these algorithms has been investigated for speech material recorded in two spatial configurations. It has been shown that the BSS algorithm can substantially improve speech recognition, and that combining BSS with single-microphone denoising methods increases the logatom articulation index further.
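
To make the processing chain concrete, the sketch below illustrates the kind of single-microphone denoising stage that can follow the BSS step: a Boll-style spectral subtraction post-filter (listed below as [33]) applied to one separated output channel. This is a minimal Python sketch under stated assumptions, not the configuration evaluated in this study; the function name, the STFT parameters, the over-subtraction factor, the spectral floor, and the assumption of a speech-free lead-in for the noise estimate are all illustrative choices.

import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(separated, fs, noise_seconds=0.25, alpha=2.0, floor=0.02):
    """Boll-style spectral subtraction applied to one BSS output channel (illustrative)."""
    nperseg, noverlap = 512, 384                     # illustrative STFT parameters
    _, _, S = stft(separated, fs=fs, nperseg=nperseg, noverlap=noverlap)
    mag, phase = np.abs(S), np.angle(S)
    # Estimate the noise magnitude spectrum from an assumed speech-free lead-in segment
    hop = nperseg - noverlap
    noise_frames = max(int(noise_seconds * fs / hop), 1)
    noise_mag = mag[:, :noise_frames].mean(axis=1, keepdims=True)
    # Over-subtract the noise estimate and keep a small spectral floor to limit musical noise
    cleaned = np.maximum(mag - alpha * noise_mag, floor * noise_mag)
    _, enhanced = istft(cleaned * np.exp(1j * phase), fs=fs,
                        nperseg=nperseg, noverlap=noverlap)
    return enhanced

In a practical system the noise estimate would be updated adaptively (e.g. during detected speech pauses) rather than taken from a fixed lead-in segment.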

[1] Richard M. Schwartz et al., Enhancement of speech corrupted by acoustic noise, ICASSP, 1979.

[2] Hui Zhang et al., Real-Time Implementation of an Efficient Speech Enhancement Algorithm for Digital Hearing Aids, 2006.

[3] Aleksander Sek et al., Speech intelligibility in various spatial configurations of background noise, 2005.

[4] Schuster et al., Separation of a mixture of independent signals using time delayed correlations, Physical Review Letters, 1994.

[5] Jeffrey J. DiGiovanni et al., A psychophysical evaluation of spectral enhancement, Journal of Speech, Language, and Hearing Research, 2005.

[6] Hugo Fastl et al., Investigations into speech intelligibility in the presence of different masking noises for hearing aids with variable attack and release times, 2005.

[7] S. Amari et al., Approximate maximum likelihood source separation using the natural gradient, IEEE Third Workshop on Signal Processing Advances in Wireless Communications (SPAWC'01), 2001.

[8] Kiyotoshi Matsuoka et al., A neural net for blind separation of nonstationary signals, Neural Networks, 1995.

[9] Andrzej Cichocki et al., Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications, 2002.

[11] Douglas D. O'Shaughnessy, Speech Communications: Human and Machine, 2nd edition, 2000.

[12] Fan-Gang Zeng et al., Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance, The Journal of the Acoustical Society of America, 2006.

[13] Harald Höge, Basic parameters in speech processing: the need for evaluation, 2007.

[14] Lucas C. Parra et al., Convolutive blind separation of non-stationary sources, IEEE Transactions on Speech and Audio Processing, 2000.

[15] Paris Smaragdis, Information theoretic approaches to source separation, 1997.

[16] Lucas C. Parra et al., On-line Blind Source Separation of Non-Stationary Signals, 2001.

[17] Pascal Scalart et al., Speech enhancement based on a priori signal to noise estimation, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 1996.

[18] Seungjin Choi, Blind Source Separation and Independent Component Analysis: A Review, 2004.

[19] Stefan Brachmanski, Estimation of logatom intelligibility with the STI method for Polish speech transmitted via communication channels, 2004.

[20] Eric Moulines et al., A blind source separation technique using second-order statistics, IEEE Transactions on Signal Processing, 1997.

[21] Moeness G. Amin et al., New approach for blind source separation using time-frequency distributions, Optics & Photonics, 1996.

[22] Jong Ho Won et al., Improving performance in noise for hearing aids and cochlear implants using coherent modulation filtering, Hearing Research, 2008.

[23] Hiroshi Sawada et al., Blind Source Separation of Convolutive Mixtures of Speech in Frequency Domain, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 2005.

[24] Masataka Goto et al., Real-time sound source localization and separation system and its application to automatic speech recognition, INTERSPEECH, 2001.

[25] Monique Boymans et al., Interactive fitting of multiple algorithms implemented in the same digital hearing aid, International Journal of Audiology, 2007.

[26] Jing Liu et al., Foreground auditory scene analysis for hearing aids, Pattern Recognition Letters, 2007.

[27] Christine Serviere et al., Blind separation of convolutive audio mixtures using nonstationarity, 2003.

[28] Soo-Young Lee, Blind Source Separation and Independent Component Analysis: A Review, 2005.

[29] Paris Smaragdis, Efficient blind separation of convolved sound mixtures, Proceedings of the 1997 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 1997.

[30] David Malah et al., Speech enhancement using a minimum mean-square error log-spectral amplitude estimator, IEEE Transactions on Acoustics, Speech, and Signal Processing, 1984.

[31] Robert Trimble et al., Hearing Aid Speech Enhancement: A Multiresolution Analysis Approach, EuroIMSA, 2005.

[33] S. Boll et al., Suppression of acoustic noise in speech using spectral subtraction, 1979.

[34] Andrzej Cichocki et al., Second Order Nonstationary Source Separation, Journal of VLSI Signal Processing, 2002.