Robust gender-dependent acoustic-phonetic modelling in continuous speech recognition based on a new automatic male/female classification

The authors present a new automatic male/female classification method based on the locations of the first two formants in the frequency domain. This classification relies on a new automatic formant extraction that is faster than a peak-picking technique. Gender-dependent acoustic-phonetic models derived from this classification are used in the INRS continuous speech recognition system with the ATIS corpora. An improvement of 14% is obtained with these models compared with the baseline speaker-independent system.
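The abstract does not specify the formant-extraction algorithm or the decision rule; the sketch below is only a minimal illustration of classifying a speaker from the average locations of F1 and F2, assuming a standard LPC-root formant estimate in place of the authors' faster extraction and using illustrative cutoff frequencies (`f1_cut`, `f2_cut`) that are not taken from the paper.

```python
# Minimal sketch: male/female decision from average F1/F2 locations.
# The LPC-root formant estimate and the cutoff values are assumptions,
# not the extraction method or thresholds used in the paper.
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def lpc(frame, order):
    """Autocorrelation-method LPC coefficients in [1, -a1, ..., -ap] form."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = solve_toeplitz((r[:-1], r[:-1]), r[1:])
    return np.concatenate(([1.0], -a))

def first_two_formants(frame, fs, order=12):
    """Estimate F1 and F2 (Hz) from the angles of the LPC polynomial roots."""
    frame = lfilter([1.0, -0.97], [1.0], frame) * np.hamming(len(frame))  # pre-emphasis + window
    roots = np.roots(lpc(frame, order))
    roots = roots[np.imag(roots) > 0]                    # one root per conjugate pair
    freqs = np.sort(np.angle(roots) * fs / (2 * np.pi))  # pole angles -> frequencies in Hz
    freqs = freqs[freqs > 90.0]                          # drop near-DC poles
    return freqs[0], freqs[1]

def classify_gender(frames, fs, f1_cut=550.0, f2_cut=1650.0):
    """Label a speaker 'female' if mean F1 and F2 exceed the assumed cutoffs."""
    f1s, f2s = zip(*(first_two_formants(f, fs) for f in frames))
    return "female" if np.mean(f1s) > f1_cut and np.mean(f2s) > f2_cut else "male"
```

In such a scheme, the per-speaker gender label would then select which gender-dependent acoustic-phonetic model set is used during recognition.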