Mood and Emotional Classification