Active Music Listening Interfaces Based on Signal Processing

This paper introduces our research on building "active music listening interfaces", an approach that aims to enrich end users' music listening experiences by applying music-understanding technologies based on signal processing. Active music listening is a way of listening to music through active interaction. We have developed seven such interfaces, including interfaces for skipping sections of no interest within a musical piece while viewing a graphical overview of the entire song structure, for displaying virtual dancers or song lyrics synchronized with the music, for changing the timbre of instrument sounds in compact-disc recordings, and for browsing a large music collection to encounter interesting musical pieces or artists. These interfaces demonstrate the importance of music-understanding technologies and the benefits they offer to end users. We hope this work will help turn music listening into a more active, immersive experience.
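As a rough illustration of the section-skipping idea described above, the sketch below shows how playback could jump past sections a listener marks as uninteresting, given a precomputed song-structure analysis. This is a hypothetical example, not the paper's actual implementation; the `Section` type, the section labels, and `next_position` are all assumptions made for illustration.

```python
# Hypothetical sketch: skip playback past unwanted sections, assuming a
# structure analyzer has already labeled the piece (illustrative only).
from dataclasses import dataclass

@dataclass
class Section:
    label: str    # e.g. "intro", "verse", "chorus"
    start: float  # section start time in seconds
    end: float    # section end time in seconds

def next_position(sections, now, skip_labels):
    """Return the playback position after skipping unwanted sections.

    If `now` falls inside a section whose label is in `skip_labels`,
    jump to the start of the next wanted section; otherwise keep `now`.
    """
    for i, sec in enumerate(sections):
        if sec.start <= now < sec.end and sec.label in skip_labels:
            # Scan forward for the next section the listener wants to hear.
            for nxt in sections[i + 1:]:
                if nxt.label not in skip_labels:
                    return nxt.start
            return sec.end  # nothing wanted remains; play out this section
    return now

sections = [
    Section("intro", 0.0, 10.0),
    Section("verse", 10.0, 40.0),
    Section("chorus", 40.0, 60.0),
]
print(next_position(sections, 5.0, {"intro", "verse"}))  # jumps to 40.0
```

A real listening station would pair such a function with the graphical overview, letting the user click directly on a section of the structure display rather than enumerate labels.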
