Extracting Emotions from Music Data

Music is not merely a set of sounds; it evokes emotions that listeners perceive subjectively. The growing amount of audio data available on CDs and on the Internet creates a need for content-based searching of these files, and a user may wish to find pieces that match a specific mood. The goal of this paper is to develop tools for such a search. A method for objective description (parameterization) of audio files is proposed, and experiments on a set of music pieces are described. The results are summarized in the concluding section.
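The paper itself specifies the proposed parameterization; purely as a rough illustration of what an objective description of an audio file can look like (timbre, brightness, and tempo descriptors collapsed into a fixed-length vector), here is a minimal sketch. It uses the librosa library, and the function name describe and the chosen features are assumptions for illustration, not the method of the paper.

```python
# Illustrative only: librosa and the feature choices below are
# assumptions, not the parameterization proposed in the paper.
import numpy as np
import librosa

def describe(path):
    """Summarize an audio file as a fixed-length descriptor vector."""
    # librosa resamples to 22050 Hz mono by default
    y, sr = librosa.load(path)

    # Timbre: 13 mel-frequency cepstral coefficients per frame
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # Brightness: spectral centroid per frame
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

    # Rough global tempo estimate in beats per minute
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Collapse frame-level features to means and deviations so that
    # every file yields a vector of the same length
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        centroid.mean(axis=1), np.atleast_1d(tempo),
    ])
```

Fixed-length vectors of this kind are what make content-based search tractable: once every piece is reduced to the same descriptor space, a standard classifier trained on mood labels can index and retrieve pieces by emotion.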
