For a three-layer neural network (one hidden layer) whose inputs and outputs are not restricted to binary values, the authors obtain conditions under which a given set of input patterns can be mapped to an arbitrary set of output patterns. It is shown that, when the hidden units take binary values, J-1 hidden units are necessary and sufficient to map J input patterns to arbitrary outputs. When the number of binary hidden units tends to infinity, the resulting network is proved to simulate a three-layer network with infinitely many hidden units whose activation function is absolutely integrable, and its outputs can be made arbitrary for continuous-valued inputs. It is also shown that an infinite number of hidden units is not only sufficient but also necessary when the activation function of the hidden units is discontinuous at no more than countably many points.