Building Musically-relevant Audio Features through Multiple Timescale Representations
[1] Mathieu Lagrange, et al. Multi-scale temporal fusion by boosting for music classification, 2011, ISMIR.
[2] Joakim Andén, et al. Multiscale Scattering for Audio Classification, 2011, ISMIR.
[3] Jason Weston, et al. Multi-Tasking with Joint Semantic Spaces for Large-Scale Music Annotation and Retrieval, 2011.
[4] Douglas Eck, et al. Temporal Pooling and Multiscale Learning for Automatic Annotation and Ranking of Music Audio, 2011, ISMIR.
[5] J. Bergstra. Algorithms for Classifying Recorded Music by Genre, 2006.
[6] Michael I. Mandel, et al. Evaluation of Algorithms Using Games: The Case of Music Tagging, 2009, ISMIR.
[7] Gert R. G. Lanckriet, et al. Semantic Annotation and Retrieval of Music using a Bag of Systems Representation, 2011, ISMIR.
[8] Chin-Hui Lee, et al. On the importance of modeling temporal information in music tag annotation, 2009, IEEE International Conference on Acoustics, Speech and Signal Processing.
[9] Edward H. Adelson, et al. The Laplacian Pyramid as a Compact Image Code, 1983, IEEE Trans. Commun.
[10] Lawrence D. Jackel, et al. Backpropagation Applied to Handwritten Zip Code Recognition, 1989, Neural Computation.
[11] Honglak Lee, et al. Unsupervised feature learning for audio classification using convolutional deep belief networks, 2009, NIPS.
[12] Matthias Mauch, et al. Structural Change on Multiple Time Scales as a Correlate of Musical Complexity, 2011, ISMIR.