Construction and interpretation of multi-layer perceptrons

Denker, Schwartz et al. (1987) open their paper with the sentence: "Since antiquity, man has dreamed of building a device that would "learn from examples", "form generalizations", and "discover the rules" behind patterns in the data". This paper presents a method for constructing a binary multi-layer perceptron (MLP) from a set of primitives, introduced here, such that the hidden nodes have a definite meaning. This construction can be used in two directions. First, if one has theoretical knowledge of the mapping an MLP is to perform, this knowledge can be used to design essential parts of the hidden layer and the output layer, which may provide a good starting point for the usual back-propagation algorithm. Second, if one has no prior idea what rules govern the mapping of a trained MLP, we show specific cases in which an interpretation of the hidden nodes is possible. The construction method is illustrated with the standard examples of the "two-or-more-clumps" problem and the parity problem.
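As a rough illustration of what "hidden nodes with definite meaning" can look like, the following sketch hand-constructs a binary MLP for the N-bit parity problem. It is not the paper's original construction, only a commonly cited one consistent with its spirit: hidden unit k fires exactly when at least k input bits are on, and alternating output weights turn these counts into the parity bit. All function and variable names are illustrative.

```python
import numpy as np
from itertools import product

def step(x):
    # Hard-threshold (Heaviside) unit, as used in binary perceptrons
    return (x > 0).astype(int)

def parity_mlp(x):
    """x: binary vector of length n; returns 1 iff an odd number of bits are set."""
    n = len(x)
    W_hidden = np.ones((n, n))            # every hidden unit sums all inputs
    b_hidden = -(np.arange(n) + 0.5)      # unit k has threshold k + 0.5: "at least k+1 inputs on"
    h = step(W_hidden @ x + b_hidden)     # h_k has the definite meaning "count of ones > k"
    w_out = np.array([(-1) ** k for k in range(n)])  # alternating +1/-1 output weights
    return step(w_out @ h - 0.5)

# Exhaustive check on all 4-bit inputs
for bits in product([0, 1], repeat=4):
    assert parity_mlp(np.array(bits)) == sum(bits) % 2
```

Such hand-set weights could also serve as the kind of informed starting point for back propagation mentioned above, rather than a random initialization.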