Differentiation of Speech and Song Using Occurrence Pattern of Delta Energy

Differentiating speech from song in an acoustic signal is a challenging problem and a significant part of automatic audio classification. Most previous work has addressed speech/non-speech classification; comparatively little has targeted the speech/song distinction, and those studies have relied mostly on frequency-domain and perceptual features. In this work, a low-dimensional acoustic feature set is proposed. Song differs from speech chiefly in the presence of an instrumental accompaniment, which raises the energy of a song signal relative to a speech signal. Short-time energy (STE), an acoustic feature, can capture this observation. To study energy variation more precisely, features based on very small frame-to-frame changes in energy (delta energy) and its co-occurrence matrix are considered. Several well-known classifiers are employed for classification, and the experimental results are compared with existing methodologies to demonstrate the efficiency of the proposed system.
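The pipeline the abstract describes — short-time energy, its frame-to-frame difference (delta energy), and a co-occurrence matrix over quantized delta-energy values — can be sketched as below. This is a minimal illustration, not the authors' implementation: the frame length, hop size, number of quantization bins, and lag are hypothetical choices made for the example.

```python
import numpy as np

def short_time_energy(signal, frame_len=1024, hop=512):
    """Frame-wise short-time energy: sum of squared samples per frame."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([
        np.sum(signal[i * hop : i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

def delta_energy(ste):
    """First-order difference of the STE contour (delta energy)."""
    return np.diff(ste)

def cooccurrence_matrix(delta, n_bins=8, lag=1):
    """Co-occurrence counts of quantized delta-energy values at a given lag."""
    # Quantize the delta values into n_bins uniform levels.
    edges = np.linspace(delta.min(), delta.max(), n_bins + 1)
    q = np.clip(np.digitize(delta, edges) - 1, 0, n_bins - 1)
    # Count pairs (value at t, value at t + lag).
    mat = np.zeros((n_bins, n_bins), dtype=int)
    for a, b in zip(q[:-lag], q[lag:]):
        mat[a, b] += 1
    return mat
```

Statistics of such a co-occurrence matrix (e.g. its entropy or contrast) could then serve as the compact feature vector fed to a classifier, consistent with the low-dimensional feature the abstract proposes.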
