Learning Curves for Stochastic Gradient Descent in Linear Feedforward Networks
[1] Bernard Widrow et al. 30 years of adaptive neural networks: perceptron, Madaline, and backpropagation, 1990, Proc. IEEE.
[2] Michael I. Jordan et al. Advances in Neural Information Processing Systems, 1995.
[3] Marwan A. Jabri et al. Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks, 1991, Neural Comput.
[4] Heskes et al. Learning processes in neural networks, 1991, Physical Review A.
[5] Gert Cauwenberghs et al. A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization, 1992, NIPS.
[6] Marwan A. Jabri et al. Summed Weight Neuron Perturbation: An O(N) Improvement Over Weight Perturbation, 1992, NIPS.
[7] Michael Biehl et al. On-Line Learning with a Perceptron, 1994.
[8] Kurt Hornik et al. Learning in linear neural networks: a survey, 1995, IEEE Trans. Neural Networks.
[9] Gert Cauwenberghs et al. An analog VLSI recurrent neural network learning a continuous-time trajectory, 1996, IEEE Trans. Neural Networks.
[10] Alexander J. Smola et al. Neural Information Processing Systems, 1997, NIPS.
[11] Anthony C. C. Coolen et al. Statistical mechanical analysis of the dynamics of learning in perceptrons, 1997, Stat. Comput.
[12] Ronald J. Williams. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning, 1992, Machine Learning.