Networks of goal seeking neurons (GSNs) are weightless models designed to overcome several limitations of PLN networks. Weightless (Boolean) models are well known for their ease of implementation in both hardware and software. Several hardware implementations and general improvements have been proposed for these models, especially to improve performance on their main target task: classification. In this work, the combination of two earlier improvements is proposed and tested on classifiers that use GSN networks as building blocks, on a handwritten digit recognition task. Both improvements aim to maximize the number of learned examples. The GSN expansion strategy reduces saturation problems, while the adaptive presentation order reorders the training patterns based on the behavior of the GSN network itself. GSN uses a fast one-shot learning algorithm that is not allowed to modify already-used memory positions; consequently, during training, several examples are refused by the network because they cannot be learned. This refusal behavior indicates that the GSN network is saturated. By expanding the network when needed, it is possible to overcome this limitation; the adaptive-order procedure further improves learning by allowing difficult examples to be presented before easier ones. Simulation results show that the combined improvement leads to significant performance gains. Moreover, some of these ideas can be applied to many other neural models.
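The core mechanisms described above (one-shot learning that refuses to overwrite used positions, refusal as a saturation signal, and adaptive reordering of refused examples) can be illustrated with a minimal Python sketch. This is an illustrative RAM-node abstraction, not the paper's actual GSN implementation; the names `RAMNode` and `train_adaptive` are hypothetical, and a real GSN network would involve multiple pyramid-connected nodes whose position usage depends on presentation order.

```python
UNDEF = None  # undefined (unused) cell content

class RAMNode:
    """A single weightless RAM node mapping an n-bit address to a stored bit."""
    def __init__(self, n_inputs):
        self.cells = [UNDEF] * (2 ** n_inputs)

    def learn(self, address, target):
        """One-shot write: succeeds only if the addressed position is
        unused or already stores the target; used positions may not be
        modified, so a conflicting example is refused."""
        if self.cells[address] in (UNDEF, target):
            self.cells[address] = target
            return True
        return False  # refusal: saturation signal for this address

    def recall(self, address):
        return self.cells[address]

def train_adaptive(node, examples, epochs=5):
    """Adaptive presentation order: examples refused in one epoch are
    moved to the front of the queue for the next epoch, so 'difficult'
    patterns are presented before easier ones."""
    queue = list(examples)
    refused = []
    for _ in range(epochs):
        refused = [ex for ex in queue if not node.learn(*ex)]
        if not refused:
            break
        accepted = [ex for ex in queue if ex not in refused]
        queue = refused + accepted  # hard examples first next time
    return refused  # examples still refused indicate saturation

node = RAMNode(3)
leftover = train_adaptive(node, [(0b101, 1), (0b010, 0), (0b101, 0)])
# (0b101, 0) conflicts with the earlier write (0b101, 1), so it is
# refused every epoch; in a full GSN network this would trigger the
# expansion strategy (adding capacity) rather than discarding it.
```

In this single-node sketch a conflicting example can never be absorbed by reordering alone; in a multi-node GSN network, presentation order changes which internal positions get used, which is why the adaptive order and the expansion strategy are complementary.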