Curriculum learning
Yoshua Bengio | Jérôme Louradour | Ronan Collobert | Jason Weston
[1] E. Allgower, et al. Numerical Continuation Methods, 1990.
[2] Johan Håstad, et al. On the power of small-depth threshold circuits, 1990, Proceedings of the 31st Annual Symposium on Foundations of Computer Science.
[3] Eugene L. Allgower, et al. Numerical continuation methods: an introduction, 1990, Springer Series in Computational Mathematics.
[4] David Haussler, et al. Unsupervised learning of distributions on binary vectors using two layer networks, 1991, NIPS.
[5] J. Elman. Learning and development in neural networks: the importance of starting small, 1993, Cognition.
[6] David A. Cohn, et al. Active Learning with Statistical Models, 1996, NIPS.
[7] Terence D. Sanger, et al. Neural network learning control of robot manipulators using gradually increasing task difficulty, 1994, IEEE Trans. Robotics Autom.
[8] Derényi, et al. Generalization in the programed teaching of a perceptron, 1994, Physical Review E.
[9] Sebastian Thrun, et al. Explanation-based neural network learning: a lifelong learning approach, 1995.
[10] J. J. Moré, et al. Smoothing techniques for macromolecular global optimization, 1995.
[11] Thomas F. Coleman, et al. Parallel continuation-based global optimization for molecular conformation and protein folding, 1994, J. Glob. Optim.
[12] Jorge J. Moré, et al. Global Continuation for Distance Geometry Problems, 1995, SIAM J. Optim.
[13] Douglas L. T. Rohde, et al. Language acquisition in the absence of explicit negative evidence: how important is starting small?, 1999, Cognition.
[14] Yoshua Bengio, et al. A Neural Probabilistic Language Model, 2003, J. Mach. Learn. Res.
[15] Jean-Luc Gauvain, et al. Connectionist language modeling for large vocabulary continuous speech recognition, 2002, IEEE International Conference on Acoustics, Speech, and Signal Processing.
[16] G. Peterson. A day of great illumination: B. F. Skinner's discovery of shaping, 2004, Journal of the Experimental Analysis of Behavior.
[17] Yoshua Bengio, et al. Greedy Layer-Wise Training of Deep Networks, 2006, NIPS.
[18] Geoffrey E. Hinton, et al. Reducing the Dimensionality of Data with Neural Networks, 2006, Science.
[19] Yee Whye Teh, et al. A Fast Learning Algorithm for Deep Belief Nets, 2006, Neural Computation.
[20] Marc'Aurelio Ranzato, et al. Efficient Learning of Sparse Representations with an Energy-Based Model, 2006, NIPS.
[21] Geoffrey E. Hinton, et al. Restricted Boltzmann machines for collaborative filtering, 2007, ICML '07.
[22] Marc'Aurelio Ranzato, et al. Sparse Feature Learning for Deep Belief Networks, 2007, NIPS.
[23] Geoffrey E. Hinton, et al. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure, 2007, AISTATS.
[24] Yoshua Bengio, et al. An empirical evaluation of deep architectures on problems with many factors of variation, 2007, ICML '07.
[25] Yoshua Bengio. Learning Deep Architectures for AI, 2007, Found. Trends Mach. Learn.
[26] Geoffrey E. Hinton, et al. Using Deep Belief Nets to Learn Covariance Kernels for Gaussian Processes, 2007, NIPS.
[27] Jason Weston, et al. A unified architecture for natural language processing: deep neural networks with multitask learning, 2008, ICML '08.
[28] Jason Weston, et al. Deep learning via semi-supervised embedding, 2008, ICML '08.
[29] Yoshua Bengio, et al. Extracting and composing robust features with denoising autoencoders, 2008, ICML '08.
[30] Pascal Vincent, et al. The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training, 2009, AISTATS.
[31] Kai A. Krueger, et al. Flexible shaping: How learning in small steps helps, 2009, Cognition.