Cutting Music Source Separation Some Slakh: A Dataset to Study the Impact of Training Data Quality and Quantity
Ethan Manilow | Gordon Wichern | Prem Seetharaman | Jonathan Le Roux