Capturing Complementary Information via Reversed Filter Bank and Parallel Implementation with MFCC for Improved Text-Independent Speaker Identification

A state-of-the-art speaker identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme for generalized representation of these features. Over the years, mel-frequency cepstral coefficients (MFCC), modeled on the human auditory system, have been used as a standard acoustic feature set for SI applications. However, due to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new set of features based on a complementary filter bank structure that improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features, which are difficult to extract, the proposed feature set incurs little computational burden during extraction. When combined with MFCC via a parallel implementation of speaker models, the proposed feature set improves on the performance baseline of the MFCC-based system. The proposition is validated by experiments conducted on two databases of different kinds, namely YOHO (microphone speech) and POLYCOST (telephone speech), with two classifier paradigms, namely Gaussian mixture models (GMM) and polynomial classifiers (PC), and for various model orders.
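The abstract does not spell out the filter-bank construction, so the following Python sketch is only an illustration of the general idea: it builds a standard triangular mel filter bank and a complementary "reversed" counterpart by mirroring the bank along the frequency axis, so that the narrow, densely spaced filters fall in the high-frequency region instead of the low one. All function names, parameter values, and the mirroring shortcut are assumptions for illustration, not details taken from the paper.

# Sketch of a standard mel filter bank and a "reversed" complementary bank.
# The mirroring in reversed_filter_bank() is an illustrative assumption,
# not the paper's exact construction.
import numpy as np


def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)


def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def mel_filter_bank(n_filters=20, n_fft=512, sr=8000):
    """Standard triangular mel filter bank (rows: filters, cols: FFT bins)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        for k in range(lo, mid):          # rising edge of the triangle
            fb[i, k] = (k - lo) / max(mid - lo, 1)
        for k in range(mid, hi):          # falling edge of the triangle
            fb[i, k] = (hi - k) / max(hi - mid, 1)
    return fb


def reversed_filter_bank(n_filters=20, n_fft=512, sr=8000):
    """Complementary bank: the mel bank mirrored along the frequency axis,
    placing the narrow, densely packed filters at high frequencies."""
    return mel_filter_bank(n_filters, n_fft, sr)[::-1, ::-1]


def cepstral_features(frame, fb, n_ceps=13):
    """Log filter-bank energies followed by a type-II DCT, as in MFCC."""
    spectrum = np.abs(np.fft.rfft(frame, n=2 * (fb.shape[1] - 1))) ** 2
    energies = np.log(fb @ spectrum + 1e-10)
    n = len(energies)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), np.arange(n) + 0.5) / n)
    return basis @ energies


if __name__ == "__main__":
    frame = np.random.randn(256)  # toy 32 ms frame at 8 kHz
    mfcc_like = cepstral_features(frame, mel_filter_bank())
    reversed_like = cepstral_features(frame, reversed_filter_bank())
    print(mfcc_like.shape, reversed_like.shape)  # (13,) (13,)

In the parallel implementation described in the abstract, each feature stream would presumably train its own speaker model (e.g., a per-speaker GMM), with the two streams' scores fused, for instance by a weighted sum of log-likelihoods, before the final identification decision; the exact fusion rule is given in the paper, not here.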