Andreas Dengel | Sebastian Palacio | Federico Raue | Jörn Hees | Fatemeh Azimi | Jean-Francois Jacques Nicolas Nies
[1] Jürgen Schmidhuber, et al. Long Short-Term Memory, 1997, Neural Computation.
[2] Sergio Gomez Colmenarejo, et al. Hybrid computing using a neural network with dynamic external memory, 2016, Nature.
[3] D. Weinshall, et al. Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks, 2018, ICML.
[4] G. Peterson. A day of great illumination: B. F. Skinner's discovery of shaping, 2004, Journal of the Experimental Analysis of Behavior.
[5] Lyle H. Ungar, et al. Active Learning for Logistic Regression: An Evaluation, 2007, Machine Learning.
[6] Jason Weston, et al. Curriculum learning, 2009, ICML '09.
[7] Pieter Abbeel, et al. Reverse Curriculum Generation for Reinforcement Learning, 2017, CoRL.
[8] John Schulman, et al. Teacher–Student Curriculum Learning, 2017, IEEE Transactions on Neural Networks and Learning Systems.
[9] Jimmy Ba, et al. Adam: A Method for Stochastic Optimization, 2014, ICLR.
[10] Wojciech Zaremba, et al. Learning to Execute, 2014, arXiv.
[11] Andrew McCallum, et al. Active Bias: Training More Accurate Neural Networks by Emphasizing High Variance Samples, 2017, NIPS.
[12] Andrew Zisserman, et al. Spatial Transformer Networks, 2015, NIPS.
[13] Terence D. Sanger, et al. Neural network learning control of robot manipulators using gradually increasing task difficulty, 1994, IEEE Transactions on Robotics and Automation.
[14] I. Pavlov, et al. Conditioned reflexes: An investigation of the physiological activity of the cerebral cortex, 2010, Annals of Neurosciences.
[15] Samy Bengio, et al. Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks, 2015, NIPS.
[16] Andreas Dengel, et al. A Reinforcement Learning Approach for Sequential Spatial Transformer Networks, 2019, ICANN.
[17] Wei Wu, et al. Dynamic Curriculum Learning for Imbalanced Data Classification, 2019, IEEE/CVF International Conference on Computer Vision (ICCV).
[18] Daphna Weinshall, et al. On The Power of Curriculum Learning in Training Deep Networks, 2019, ICML.
[19] Abhinav Gupta, et al. Training Region-Based Object Detectors with Online Hard Example Mining, 2016, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[20] Andreas Dengel, et al. Hybrid-S2S: Video Object Segmentation with Recurrent Networks and Correspondence Matching, 2020.
[21] Kai A. Krueger, et al. Flexible shaping: How learning in small steps helps, 2009, Cognition.
[22] Pascal Vincent, et al. The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training, 2009, AISTATS.
[23] Geoffrey E. Hinton, et al. Distilling the Knowledge in a Neural Network, 2015, arXiv.
[24] Douglas L. T. Rohde, et al. Language acquisition in the absence of explicit negative evidence: how important is starting small?, 1999, Cognition.
[25] Li Fei-Fei, et al. MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels, 2017, ICML.
[26] Dong Xu, et al. SPFTN: A Self-Paced Fine-Tuning Network for Segmenting Objects in Weakly Labelled Videos, 2017, IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[27] Alex Graves, et al. Recurrent Models of Visual Attention, 2014, NIPS.
[28] Richard S. Sutton, et al. Training and Tracking in Robotics, 1985, IJCAI.
[29] J. Stenton, et al. Learning how to teach, 1973, Nursing Mirror and Midwives Journal.
[30] Alex Graves, et al. Automated Curriculum Learning for Neural Networks, 2017, ICML.
[31] J. Elman. Learning and development in neural networks: the importance of starting small, 1993, Cognition.