Further Optimisations of Constant Q Cepstral Processing for Integrated Utterance Verification and Text-Dependent Speaker Verification

Many authentication applications involving automatic speaker verification (ASV) demand robust performance using short-duration, fixed or prompted text utterances. Text constraints not only reduce the phone mismatch between enrolment and test utterances, which generally leads to improved performance, but also provide an ancillary level of security, which can take the form of explicit utterance verification (UV). An integrated UV + ASV system should then verify that access attempts contain not only the expected speaker but also the expected text content. This paper presents such a system and introduces new features which are used for both the UV and ASV tasks. Based upon multi-resolution, spectro-temporal analysis, the new features not only generally outperform Mel-frequency cepstral coefficients, but are also shown to be complementary to more traditional parameterisations when systems are fused at score level. Finally, the joint operation of UV and ASV greatly decreases false acceptances for unmatched-text trials.

Index Terms: speaker verification, utterance verification, text dependent, constant Q transform.
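The constant Q cepstral processing named in the title follows, in general terms, a CQT, log power, uniform resampling, DCT pipeline. The following is a minimal sketch of such a feature extractor, assuming librosa and SciPy are available; the parameter values (fmin, bins per octave, number of coefficients) are illustrative placeholders, not the paper's configuration.

```python
import numpy as np
import librosa
from scipy.fft import dct
from scipy.interpolate import interp1d

def cqcc_like(y, sr, fmin=32.7, bins_per_octave=24, n_octaves=7, n_coeff=20):
    """CQCC-style features: CQT -> log power -> uniform resampling -> DCT."""
    n_bins = bins_per_octave * n_octaves

    # Constant Q transform: geometrically spaced bins give multi-resolution
    # spectro-temporal analysis (finer frequency resolution at low
    # frequencies, finer time resolution at high frequencies).
    C = np.abs(librosa.cqt(y, sr=sr, fmin=fmin, n_bins=n_bins,
                           bins_per_octave=bins_per_octave))

    # Log power spectrum; the small floor avoids log(0).
    log_power = np.log(C ** 2 + 1e-10)

    # Resample the geometric frequency axis onto a uniform linear grid so
    # that the subsequent DCT yields cepstral-like coefficients.
    geo_freqs = librosa.cqt_frequencies(n_bins=n_bins, fmin=fmin,
                                        bins_per_octave=bins_per_octave)
    lin_grid = np.linspace(geo_freqs[0], geo_freqs[-1], num=n_bins)
    lin_spec = interp1d(geo_freqs, log_power, axis=0, kind='linear')(lin_grid)

    # The DCT decorrelates the resampled log spectrum; keep the first
    # n_coeff coefficients per frame.
    return dct(lin_spec, type=2, axis=0, norm='ortho')[:n_coeff].T
```

In practice these static coefficients would typically be augmented with delta and acceleration coefficients before modelling, as is standard for cepstral front-ends.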

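The joint UV + ASV operation described in the abstract can be realised either as a cascaded accept/reject decision or as score-level fusion. The sketch below illustrates both strategies under assumed thresholds and a single fusion weight; these are placeholders to be tuned on development data, not values reported by the paper.

```python
def integrated_decision(asv_score, uv_score,
                        asv_threshold=0.0, uv_threshold=0.0,
                        fuse=False, weight=0.5):
    """Combine speaker verification (ASV) and utterance verification (UV)
    scores for a single trial.

    Two common strategies:
      * cascaded / AND decision: both scores must exceed their thresholds,
        which suppresses false acceptances on wrong-text trials;
      * score-level fusion: a weighted sum compared against one threshold.
    """
    if fuse:
        fused = weight * asv_score + (1.0 - weight) * uv_score
        return fused >= asv_threshold
    return (asv_score >= asv_threshold) and (uv_score >= uv_threshold)


# Example: a trial with a strong speaker match but mismatched text content
# is rejected by the cascaded decision.
print(integrated_decision(asv_score=2.1, uv_score=-1.3))  # False
```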