Proposes two neural network models, Percognitron I and II, for position- and deformation-invariant visual pattern-recognition systems. The number of synapses between the Us4 and Uc4 levels of the Neocognitron is increased to achieve full interconnection between the nodes of the two levels. Percognitron I then adapts the excitatory synapses between the Us4 and Uc4 levels using a single-layer perceptron-type adaptation, while Percognitron II adapts the excitatory synapses between the Us4 and Uc4 levels and between the Us4 and Uc3 levels using a backpropagation-type adaptation. The rate of adaptation is controlled by a user-supplied gain factor for each adapted level. The autonomy of the Percognitrons, which have a fully interconnected fourth layer, is briefly illustrated in comparison with D. H. Hubel and T. N. Wiesel's (1962, 1965) hierarchical model. The Percognitron is shown to recognize handwritten Arabic numerals effectively, and the proposed approach successfully recognizes a greater variety of patterns than the Neocognitron, including distorted or shifted patterns.
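
To make the perceptron-type adaptation concrete, the following is a minimal sketch, assuming a fully interconnected Us4-to-Uc4 stage represented as NumPy arrays with a sigmoid output. The variable names (us4, uc4, W, gain), the layer sizes, and the non-negativity clamp on the excitatory weights are illustrative assumptions, not the paper's exact equations; Percognitron II would extend the same error-driven update one level further back with a backpropagation-type chain rule, each adapted level using its own gain factor.

    import numpy as np

    # Illustrative sketch (not the paper's exact formulation): a fully
    # interconnected Us4 -> Uc4 stage whose excitatory synapses are adapted
    # with a single-layer perceptron-type rule, as in Percognitron I.
    rng = np.random.default_rng(0)

    n_us4, n_uc4 = 64, 10                              # assumed level sizes
    W = rng.uniform(0.0, 0.1, size=(n_uc4, n_us4))     # excitatory synapses, kept >= 0

    def forward(us4):
        """Fully interconnected Us4 -> Uc4 mapping (assumed sigmoid output)."""
        return 1.0 / (1.0 + np.exp(-W @ us4))

    def percognitron1_update(us4, target, gain=0.1):
        """Perceptron-type adaptation of the Us4 -> Uc4 synapses.
        `gain` plays the role of the user-supplied gain factor for this level."""
        global W
        uc4 = forward(us4)
        W += gain * np.outer(target - uc4, us4)        # error-driven weight change
        W = np.maximum(W, 0.0)                         # assumption: keep synapses excitatory

    # Example: one adaptation step on a random Us4 activity pattern
    us4 = rng.random(n_us4)
    target = np.zeros(n_uc4)
    target[3] = 1.0                                    # assumed one-hot desired Uc4 response
    percognitron1_update(us4, target, gain=0.1)
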
[1] Geoffrey E. Hinton, et al. Learning internal representations by error propagation, 1986.
[2] D. H. Hubel, et al. Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat, 1965, Journal of Neurophysiology.
[3] Takayuki Ito, et al. Neocognitron: A neural network model for a mechanism of visual pattern recognition, 1983, IEEE Transactions on Systems, Man, and Cybernetics.
[4] D. Hubel, et al. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex, 1962, The Journal of Physiology.
[5] A. A. Mullin, et al. Principles of neurodynamics, 1962.