On the Potential of Simple Framewise Approaches to Piano Transcription
Rainer Kelz | Matthias Dorfer | Filip Korzeniowski | Sebastian Böck | Andreas Arzt | Gerhard Widmer
[1] Roland Badeau et al. Multipitch Estimation of Piano Sounds Using a New Probabilistic Spectral Smoothness Principle, 2010, IEEE Transactions on Audio, Speech, and Language Processing.
[2] Yoshua Bengio et al. Understanding the difficulty of training deep feedforward neural networks, 2010, AISTATS.
[3] Markus Schedl et al. Polyphonic piano note transcription with recurrent neural networks, 2012, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[4] Tillman Weyde et al. A hybrid recurrent neural network for music transcription, 2015, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[5] Dan Klein et al. Unsupervised Transcription of Piano Music, 2014, NIPS.
[6] Kurt Hornik et al. Multilayer feedforward networks are universal approximators, 1989, Neural Networks.
[7] Benjamin Schrauwen et al. End-to-end learning for music audio, 2014, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[8] Thomas Brox et al. Striving for Simplicity: The All Convolutional Net, 2014, ICLR.
[9] Yoshua Bengio et al. Modeling Temporal Dependencies in High-Dimensional Sequences: Application to Polyphonic Music Generation and Transcription, 2012, ICML.
[10] Simon Dixon et al. An End-to-End Neural Network for Polyphonic Piano Music Transcription, 2015, IEEE/ACM Transactions on Audio, Speech, and Language Processing.
[11] H. Zou et al. Regularization and variable selection via the elastic net, 2005, Journal of the Royal Statistical Society, Series B.
[12] Jürgen Schmidhuber et al. LSTM: A Search Space Odyssey, 2015, IEEE Transactions on Neural Networks and Learning Systems.
[13] Colin Raffel et al. Lasagne: First release, 2015.
[14] Yoshua Bengio. Practical Recommendations for Gradient-Based Training of Deep Architectures, 2012, Neural Networks: Tricks of the Trade.
[15] Judith C. Brown. Calculation of a constant Q spectral transform, 1991, Journal of the Acoustical Society of America.
[16] Katharina Eggensperger et al. Towards an Empirical Foundation for Assessing Bayesian Optimization of Hyperparameters, 2013.
[17] David D. Cox et al. Hyperopt: A Python Library for Optimizing the Hyperparameters of Machine Learning Algorithms, 2013, SciPy.
[18] David Sussillo et al. Random Walks: Training Very Deep Nonlinear Feed-Forward Networks with Smart Initialization, 2014, arXiv.
[19] Thomas Fillon et al. YAAFE, an Easy to Use and Efficient Audio Feature Extraction Software, 2010, ISMIR.
[20] Kevin Leyton-Brown et al. An Efficient Approach for Assessing Hyperparameter Importance, 2014, ICML.
[21] Geoffrey E. Hinton et al. Learning representations by back-propagating errors, 1986, Nature.
[22] Florian Krebs et al. madmom: A New Python Audio and Music Signal Processing Library, 2016, ACM Multimedia.
[23] Sergey Ioffe et al. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, 2015, ICML.
[24] Geoffrey E. Hinton et al. Deep Learning, 2015, Nature.
[25] Jasper Snoek et al. Practical Bayesian Optimization of Machine Learning Algorithms, 2012, NIPS.
[26] Jian Sun et al. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, 2015, IEEE International Conference on Computer Vision (ICCV).
[27] Mark D. Plumbley et al. Polyphonic piano transcription using non-negative Matrix Factorisation with group sparsity, 2014, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[28] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2), 1983, Soviet Mathematics Doklady.
[29] Guigang Zhang et al. Deep Learning, 2016, Int. J. Semantic Comput.
[30] Jimmy Ba et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[31] Nitish Srivastava et al. Dropout: a simple way to prevent neural networks from overfitting, 2014, Journal of Machine Learning Research.
[32] L. F. Abbott et al. Random Walk Initialization for Training Very Deep Feedforward Networks, 2014, arXiv:1412.6558.
[33] Boris Polyak. Some methods of speeding up the convergence of iteration methods, 1964, USSR Computational Mathematics and Mathematical Physics.
[34] Peter M. Williams. Bayesian Regularization and Pruning Using a Laplace Prior, 1995, Neural Computation.