Automatic Recognition of Speaker Age and Gender Based on Deep Neural Networks

In this article, we present a novel approach, based on deep neural networks, to the paralinguistic task of recognizing a speaker's age and gender from voice. The proposed models were trained and tested on the German speech corpus aGender. We conducted experiments with different network topologies, including neural networks with fully-connected and convolutional layers. In joint recognition of speaker age and gender, our system reached an unweighted accuracy of 48.41%. In separate age and gender recognition setups, it reached 57.53% and 88.80%, respectively. For speaker age recognition, the applied deep neural networks outperform existing traditional classification methods.
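As a minimal illustration only (not the authors' exact architecture), the sketch below shows how a small convolutional classifier for the joint age-gender task could be set up in PyTorch. The input representation (log-mel spectrogram patches), the layer sizes, and the use of 7 joint classes are assumptions made for the example, not details taken from the paper.

```python
# A minimal sketch, assuming log-mel spectrogram patches as input and a
# 7-class joint age-gender target; all layer sizes are illustrative.
import torch
import torch.nn as nn

class AgeGenderCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # (1, 64, 100) -> (16, 64, 100)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> (16, 32, 50)
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> (32, 16, 25)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 25, 128),                 # fully-connected head
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(128, n_classes),                    # logits over joint classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of log-mel patches, shape (B, 1, 64, 100)
        return self.classifier(self.features(x))

# Example forward pass on a random batch
model = AgeGenderCNN()
logits = model(torch.randn(8, 1, 64, 100))
print(logits.shape)  # torch.Size([8, 7])
```

For separate age or gender recognition, the same backbone would simply use a different number of output classes in the final linear layer.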
