Asymptotics of Gradient-based Neural Network Training Algorithms
[1] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[2] Kurt Hornik, et al. Convergence of learning algorithms with constant learning rates, 1991, IEEE Trans. Neural Networks.
[3] H. White. Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models, 1989.
[4] William A. Sethares, et al. Weak convergence and local stability properties of fixed step size recursive algorithms, 1993, IEEE Trans. Inf. Theory.
[5] William Finnoff, et al. Diffusion Approximations for the Constant Learning Rate Backpropagation Algorithm and Resistance to Local Minima, 1992, Neural Computation.
[6] Marc A. Berger, et al. An Introduction to Probability and Stochastic Processes, 1992.
[7] John E. Moody, et al. Weight Space Probability Densities in Stochastic Learning: I. Dynamics and Equilibria, 1992, NIPS.
[8] Todd K. Leen, et al. Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times, 1992, NIPS.
[9] Michel Loève, et al. Probability Theory I, 1977.
[10] P. Bickel, et al. Mathematical Statistics: Basic Ideas and Selected Topics, 1977.
[11] R. Tweedie. Criteria for classifying general Markov chains, 1976, Advances in Applied Probability.