SynAug: Synthesis-Based Data Augmentation for Text-Dependent Speaker Verification

Text-dependent speaker verification systems trained on large amounts of labelled data exhibit remarkable performance. However, collecting speech with the target transcript from many speakers is a lengthy and expensive process. In this work, we propose a synthesis-based data augmentation method (SynAug) that expands the training set with text-controlled synthesized speech from additional speakers. The performance of SynAug is evaluated on the RSR2015 dataset. Experimental results show that, for the i-vector framework, the proposed method boosts system performance significantly, especially in the low-resource condition where the amount of genuine speech is extremely limited. Moreover, when combined with traditional data augmentation methods such as adding noise and reverberation, the systems can be further strengthened in extremely low-resource situations.
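The traditional noise-based augmentation mentioned above can be illustrated with a minimal sketch. The function name, the SNR parameterization, and the use of NumPy arrays are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def add_noise(speech, noise, snr_db):
    """Mix a noise signal into a speech signal at a target SNR in dB.

    Illustrative sketch only; real augmentation pipelines typically
    also sample the noise segment and SNR at random per utterance.
    """
    # Tile the noise if it is shorter than the speech, then truncate.
    if len(noise) < len(speech):
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)
    noise = noise[:len(speech)]

    # Scale the noise so that speech_power / noise_power hits the target SNR.
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + scale * noise
```

Reverberation augmentation is analogous but convolves each utterance with a room impulse response instead of adding a scaled noise signal.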