Construction and interpretation of multi-layer-perceptrons
Denker, Schwartz et al. (1987) begin their paper with the sentence: "Since antiquity, man has dreamed of building a device that would 'learn from examples', 'form generalizations', and 'discover the rules' behind patterns in the data." This paper offers an idea of how to construct a binary multi-layer perceptron (MLP) from a set of primitives that we introduce, in which the hidden nodes have a definite meaning. This can be used in two directions. First, if one has theoretical knowledge of the mapping performed by an MLP, this background can be used to design essential parts of the hidden layer and of the output layer, which may help to generate a good starting point for the usual backpropagation algorithm. Second, if one has no idea at all which rules guide the mapping of the MLP, we show specific cases where an interpretation is possible. The construction method is illustrated on the standard examples of the "two-or-more-clumps" problem and the parity problem.
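To make the idea of hidden nodes with a definite meaning concrete, here is a minimal sketch (an illustration in the spirit of the abstract, not the authors' exact primitives) of a hand-constructed binary MLP for the n-bit parity problem: hidden unit k has the interpretable meaning "at least k input bits are on", and alternating output weights convert that count into its parity.

```python
def step(x):
    # Heaviside threshold activation for binary units
    return 1 if x >= 0 else 0

def parity_mlp(bits):
    """Hand-constructed MLP computing the parity of a list of 0/1 inputs.

    Illustrative sketch only: the hidden-unit semantics ("at least k
    ones") are an assumption chosen to mirror the paper's theme of
    interpretable hidden nodes, not the paper's own construction.
    """
    n = len(bits)
    s = sum(bits)
    # Hidden layer: unit k fires iff at least k input bits are 1
    hidden = [step(s - k) for k in range(1, n + 1)]
    # Output layer: weights +1, -1, +1, ... turn the active-unit
    # count into 1 for odd parity, 0 for even parity
    net = sum(((-1) ** k) * h for k, h in enumerate(hidden))
    return step(net - 0.5)
```

With m input bits set, exactly the hidden units k = 1..m fire, so the alternating sum is 1 when m is odd and 0 when m is even; every weight and threshold is fixed by construction rather than learned.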