IITG-Indigo Submissions for NIST 2018 Speaker Recognition Evaluation and Post-Challenge Improvements

This paper describes the submissions of team Indigo at the Indian Institute of Technology Guwahati (IITG) to the NIST 2018 Speaker Recognition Evaluation (SRE18) challenge. These speaker verification (SV) systems were developed for the fixed training condition task in SRE18. The evaluation data in SRE18 is derived from two corpora: (i) Call My Net 2 (CMN2) and (ii) Video Annotation for Speech Technology (VAST). The VAST set is obtained by extracting audio from videos containing prominent music and background noise, and thus helps in assessing the robustness of the SV systems. A number of sub-systems are developed that differ in front-end modeling paradigms, backend classifiers, and the suppression of repeating patterns in the data. The fusion of sub-systems was submitted as the primary system, which achieved an actual detection cost function (actDCF) of 0.77 and an equal error rate (EER) of 13.79% on the SRE18 evaluation data. Post-challenge efforts include domain adaptation of the scores and voice activity detection using a deep neural network. With these enhancements, on the VAST trials, the best single sub-system achieves relative reductions of 38.4% and 11.6% in actDCF and EER, respectively.
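The two metrics quoted above can be computed directly from the target (same-speaker) and non-target (different-speaker) trial scores. The sketch below is a minimal, illustrative implementation: the EER is the operating point where the false-rejection and false-acceptance rates cross, while the actual DCF applies the NIST cost model at a fixed decision threshold. The cost parameters shown (`c_miss`, `c_fa`, `p_target`) are placeholders, not the official SRE18 values.

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: the point where the false-rejection rate of
    targets equals the false-acceptance rate of non-targets."""
    scores = np.concatenate([target_scores, nontarget_scores])
    labels = np.concatenate([np.ones(len(target_scores)),
                             np.zeros(len(nontarget_scores))])
    order = np.argsort(scores)          # sweep the threshold upward
    labels = labels[order]
    # Thresholding at each sorted score: targets at or below are false
    # rejections; non-targets above are false acceptances.
    frr = np.cumsum(labels) / len(target_scores)
    far = 1.0 - np.cumsum(1 - labels) / len(nontarget_scores)
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0

def actual_dcf(target_scores, nontarget_scores, threshold,
               c_miss=1.0, c_fa=1.0, p_target=0.05):
    """Detection cost at a fixed threshold (actDCF). The default cost
    parameters are illustrative, not the official SRE18 settings."""
    p_miss = np.mean(target_scores < threshold)
    p_fa = np.mean(nontarget_scores >= threshold)
    return c_miss * p_target * p_miss + c_fa * (1 - p_target) * p_fa
```

For well-separated score distributions the EER approaches zero, and the actDCF at a threshold between the two distributions is likewise zero; overlapping distributions push both metrics up.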
