Emotion Based MIDI Files Retrieval System

This chapter presents a query answering system (QAS) associated with a MIDI music database and a query language whose atomic expressions represent various types of emotions. A system for automatic indexing of music by emotions is one of the main modules of the QAS. Its construction required building a training database, manually indexing the training instances, finding a collection of features describing musical segments, and finally building classifiers. A hierarchical model of emotions consisting of two levels, L1 and L2, was used. A collection of harmonic and rhythmic attributes extracted from music files allowed emotions in music to be detected with an average accuracy of 83% at level L1. The presented QAS is a collection of personalized search engines (PSEs), each one based on a personalized system for automatic indexing of music by emotions. In order to use the QAS, a user profile has to be built and compared with the representative profiles of the PSEs. The nearest one is identified and used to answer the user's query.
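The nearest-profile step above can be sketched as follows. This is a minimal illustration, not the chapter's actual algorithm: it assumes each profile is a fixed-length vector of emotion preferences and uses Euclidean distance as the similarity measure; the profile names, vector values, and distance choice are all hypothetical.

```python
import math


def nearest_pse(user_profile, pse_profiles):
    """Return the name of the PSE whose representative profile is
    closest (Euclidean distance) to the user's profile.

    user_profile : list of floats, one per emotion dimension
    pse_profiles : dict mapping PSE name -> representative profile
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(pse_profiles, key=lambda name: dist(user_profile, pse_profiles[name]))


# Hypothetical representative profiles: each dimension could record
# agreement with one L1 emotion label during profile building.
pse_profiles = {
    "pse_a": [0.9, 0.1, 0.4, 0.2],
    "pse_b": [0.2, 0.8, 0.5, 0.7],
}
user = [0.8, 0.2, 0.3, 0.1]
print(nearest_pse(user, pse_profiles))  # -> pse_a
```

The user's query would then be answered by the selected personalized search engine, whose indexing model best matches how that user labels emotions.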
