Layered Neural Networks with Gaussian Hidden Units as Universal Approximations