A system for dynamic playlist generation driven by multimodal control signals and descriptors

This work describes a general approach to multimedia playlist generation and description, and an application of that approach to music information retrieval. The example system we implemented updates a musical playlist on the fly based on prior information (musical preferences), descriptors of the song currently being played, and fine-grained, semantically rich descriptors of the user's gestures, environmental conditions, and similar control signals. The system incorporates a learning component that infers the user's preferences. Subjective tests were conducted on the usability and quality of the recommendation system.
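The selection logic described above can be sketched as a weighted blend of the three signal sources: a learned preference score, similarity to the currently playing song, and a match against live context descriptors. This is a minimal illustrative sketch, not the authors' implementation; the `Track`, `similarity`, and `next_track` names, the feature dictionaries, and the weight values are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Track:
    title: str
    features: dict  # e.g. {"tempo": 0.8, "energy": 0.5}, values normalized to [0, 1]

def similarity(a: dict, b: dict) -> float:
    """Inverse mean absolute distance over the feature keys the two dicts share."""
    keys = set(a) & set(b)
    if not keys:
        return 0.0
    dist = sum(abs(a[k] - b[k]) for k in keys) / len(keys)
    return 1.0 - dist

def next_track(candidates, current, preference, context, weights=(0.4, 0.3, 0.3)):
    """Pick the next song by blending:
       - a learned preference score (prior information),
       - similarity to the song now playing,
       - a match to live context descriptors (gestures, environment)."""
    w_pref, w_cur, w_ctx = weights
    def score(t):
        return (w_pref * preference.get(t.title, 0.0)
                + w_cur * similarity(t.features, current.features)
                + w_ctx * similarity(t.features, context))
    return max(candidates, key=score)

# Illustrative use: an energetic context and current song favor the energetic candidate.
now = Track("now_playing", {"tempo": 0.8, "energy": 0.8})
upbeat = Track("upbeat", {"tempo": 0.8, "energy": 0.8})
calm = Track("calm", {"tempo": 0.1, "energy": 0.1})
chosen = next_track([upbeat, calm], now, preference={}, context={"tempo": 0.8, "energy": 0.8})
```

In a real system the preference scores would come from the learning component and the context dictionary from gesture and environment sensors; re-running `next_track` as these inputs change yields the on-the-fly playlist updates the paper describes.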