Singer identification based on vocal and instrumental models

We propose a novel method for identifying the singer of a query song in an audio database containing over 100 popular songs by solo singers. The rhythmic structure of each song is analyzed with our proposed rhythm-tracking method, and the song is segmented into beat-spaced time frames, within which the harmonic structure is quasi-stationary. This inter-beat time resolution is used both for feature extraction and for training the classifiers: a support vector machine (SVM) for vocal/instrumental boundary detection and Gaussian mixture models (GMMs) for modeling each singer. Combining the similarities among the instrumental accompaniment in songs by the same singer with the vocal model improves singer identification, yielding an accuracy of over 87%.
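The two-stage pipeline described above (SVM vocal/instrumental detection followed by per-singer GMM scoring) can be illustrated with a minimal sketch. This is not the authors' implementation: the feature generator, the SVM and GMM hyperparameters, and the synthetic data are all placeholder assumptions standing in for real beat-level spectral features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def frames(mean, n=200, dim=13):
    # Stand-in for beat-level feature vectors (e.g. MFCC-like features);
    # real systems would extract these from the beat-spaced frames.
    return rng.normal(mean, 1.0, size=(n, dim))

# Stage 1: an SVM separates vocal frames from purely instrumental ones.
vocal_train = frames(2.0)
inst_train = frames(-2.0)
X = np.vstack([vocal_train, inst_train])
y = np.array([1] * len(vocal_train) + [0] * len(inst_train))
svm = SVC(kernel="rbf").fit(X, y)

# Stage 2: one GMM per singer, trained on that singer's vocal frames.
singers = {"singer_a": frames(2.5), "singer_b": frames(1.5)}
gmms = {name: GaussianMixture(n_components=4, random_state=0).fit(feats)
        for name, feats in singers.items()}

def identify(query_frames):
    # Keep only the frames the SVM labels as vocal, then pick the singer
    # whose GMM assigns them the highest average log-likelihood.
    vocal = query_frames[svm.predict(query_frames) == 1]
    scores = {name: gmm.score(vocal) for name, gmm in gmms.items()}
    return max(scores, key=scores.get)

# A query song containing both vocal and instrumental frames.
query = np.vstack([frames(2.5, n=50), frames(-2.0, n=50)])
print(identify(query))
```

The sketch omits the rhythm tracking and the instrumental-similarity model that the paper combines with the vocal GMMs; it shows only how vocal-frame filtering and likelihood-based singer selection fit together.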