Extraction of Individual Music Preference Based on Deep Time-series CCA

This paper presents a novel method for extracting individual music preference. To realize this extraction, we propose a new variant of canonical correlation analysis (CCA), named Deep Time-series Canonical Correlation Analysis (DTCCA). In contrast to standard CCA, DTCCA can simultaneously deal not only with the correlation between the two sets of input features but also with the time-series relations within each input. Since both audio and EEG signals have time-series structure, the latent space of features projected by DTCCA reflects the relationship between a listener's EEG features and the corresponding audio features more effectively than that of standard CCA.
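The abstract gives no implementation details, so the following is only a minimal sketch of what a DTCCA-style objective could look like in PyTorch: recurrent (LSTM) encoders stand in for the time-series modeling of each view, and the projected features are trained with the standard Deep CCA correlation loss. The names (`SeqEncoder`, `cca_loss`), the dimensions, and the choice of LSTM encoders are all illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    """Hypothetical LSTM encoder: maps a feature sequence to a fixed-length projection."""
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, in_dim); summarize the sequence by the final hidden state.
        _, (h, _) = self.lstm(x)
        return self.proj(h.squeeze(0))  # (batch, out_dim)

def cca_loss(h1: torch.Tensor, h2: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Negative sum of canonical correlations between two views (Deep CCA objective)."""
    n = h1.size(0)
    h1 = h1 - h1.mean(dim=0, keepdim=True)
    h2 = h2 - h2.mean(dim=0, keepdim=True)
    # Regularized covariance and cross-covariance estimates.
    s11 = h1.t() @ h1 / (n - 1) + eps * torch.eye(h1.size(1))
    s22 = h2.t() @ h2 / (n - 1) + eps * torch.eye(h2.size(1))
    s12 = h1.t() @ h2 / (n - 1)
    # Whiten the cross-covariance; its singular values are the canonical correlations.
    l1_inv = torch.linalg.inv(torch.linalg.cholesky(s11))
    l2_inv = torch.linalg.inv(torch.linalg.cholesky(s22))
    t = l1_inv @ s12 @ l2_inv.t()
    return -torch.linalg.svdvals(t).sum()

# Toy usage with random stand-ins for per-trial EEG and audio feature sequences;
# the feature dimensions and sequence length are arbitrary for illustration.
eeg_enc = SeqEncoder(in_dim=32, hidden_dim=64, out_dim=10)
audio_enc = SeqEncoder(in_dim=20, hidden_dim=64, out_dim=10)
eeg = torch.randn(16, 50, 32)    # (batch, time steps, EEG feature dim)
audio = torch.randn(16, 50, 20)  # (batch, time steps, audio feature dim)
loss = cca_loss(eeg_enc(eeg), audio_enc(audio))
loss.backward()  # encoders are trained to maximize the canonical correlations
```

Under this reading, minimizing `cca_loss` drives the two encoders toward a shared latent space in which the projected EEG and audio features are maximally correlated, while the recurrent encoders account for the temporal dependencies within each signal.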