A two-layer model for music pleasure regression

We adopt a two-layer model for music pleasure regression. The first layer estimates the pleasure orientation of a song, and the second layer applies a regressor specific to that orientation to predict the degree of pleasure. When the first-layer orientation is assumed to be perfect, routing each instance to its corresponding regressor yields a large improvement over a one-layer model. By tuning the confidence threshold of the orientation classifier, the full two-layer model also outperforms the one-layer model.
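To make the routing concrete, below is a minimal sketch of the two-layer idea under stated assumptions: a scikit-learn-style SVM classifier with probability outputs for the orientation layer, two SVR regressors for the orientation-specific second layer, a median split of the pleasure ratings to derive binary orientation labels, and a fall-back to a single global (one-layer) regressor when the classifier's confidence is below the tunable threshold. These choices are illustrative, not the paper's exact configuration.

```python
# Sketch of a two-layer pleasure regressor (assumptions: scikit-learn API,
# binary orientation from a median split, global-regressor fallback for
# low-confidence instances). Not the paper's exact setup.
import numpy as np
from sklearn.svm import SVC, SVR


class TwoLayerPleasureRegressor:
    def __init__(self, confidence_threshold=0.7):
        self.threshold = confidence_threshold       # tunable confidence cut-off
        self.orienter = SVC(probability=True)       # layer 1: pleasure orientation
        self.reg_pos = SVR()                        # layer 2: regressor for "positive" songs
        self.reg_neg = SVR()                        # layer 2: regressor for "negative" songs
        self.reg_global = SVR()                     # one-layer fallback model

    def fit(self, X, y_pleasure):
        X, y_pleasure = np.asarray(X), np.asarray(y_pleasure)
        orientation = (y_pleasure >= np.median(y_pleasure)).astype(int)  # assumed label rule
        self.orienter.fit(X, orientation)
        self.reg_pos.fit(X[orientation == 1], y_pleasure[orientation == 1])
        self.reg_neg.fit(X[orientation == 0], y_pleasure[orientation == 0])
        self.reg_global.fit(X, y_pleasure)
        return self

    def predict(self, X):
        X = np.asarray(X)
        proba = self.orienter.predict_proba(X)          # columns follow orienter.classes_
        pos_col = list(self.orienter.classes_).index(1)
        confident = proba.max(axis=1) >= self.threshold
        positive = proba[:, pos_col] >= 0.5
        y = self.reg_global.predict(X)                  # one-layer prediction as the default
        for mask, reg in [(confident & positive, self.reg_pos),
                          (confident & ~positive, self.reg_neg)]:
            if mask.any():
                y[mask] = reg.predict(X[mask])          # route to orientation-specific regressor
        return y
```

Sweeping `confidence_threshold` on a validation set reproduces the trade-off described above: a threshold of 0 routes every instance through the second layer, while a threshold of 1 reduces the model to the one-layer baseline.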
