Self-Organizing Neural Networks
The learning algorithms considered for the single perceptron, the linear adaline, and the multilayer perceptron belong to the class of supervised learning algorithms. In this case the training data is divided into input signals, x(n), and target signals, d(n). A typical learning algorithm is driven by error signals ε(n), which are the differences between the actual network output, y(n), and the desired (or target) output for a given input. For pattern learning, we can express the weight update in the following general form ∆w(n) = L(w(n), x(n), ε(n)) where L represents a learning algorithm. If we say that a neural network describes a model of the data, then a multilayer perceptron describes the data in the form of a hypersurface which approximates the functional relationship between x(n) and d(n).
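As a minimal sketch of the general update rule ∆w(n) = L(w(n), x(n), ε(n)), the following NumPy snippet implements one concrete choice of L: the LMS (adaline) rule, L = η·ε(n)·x(n). The function name, the learning rate η = 0.1, and the toy target d = 2x₁ − x₂ are illustrative assumptions, not taken from the text.

```python
import numpy as np

def lms_update(w, x, d, eta=0.01):
    """One LMS (adaline) step: dw(n) = eta * eps(n) * x(n),
    an instance of the general form dw(n) = L(w(n), x(n), eps(n))."""
    y = w @ x                 # actual network output y(n)
    eps = d - y               # error signal eps(n) = d(n) - y(n)
    return w + eta * eps * x  # updated weight vector w(n+1)

# Toy pattern-learning run: a noise-free linear target (illustrative)
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(2000):
    x = rng.uniform(-1.0, 1.0, size=2)  # input signal x(n)
    d = 2.0 * x[0] - x[1]               # target signal d(n)
    w = lms_update(w, x, d, eta=0.1)

print(np.round(w, 2))
```

On this noise-free linear target the weights converge toward [2, −1], i.e. the network recovers the functional relationship between x(n) and d(n) exactly; with a nonlinear target, a multilayer perceptron would be needed to approximate it as a hypersurface.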