In this paper, starting from a general discussion of neural network dynamics from the standpoint of statistical mechanics, we discuss three different strategies for dealing with the problem of pattern recognition in neural nets. In particular, we emphasize the role of matching the intrinsic correlations within the input patterns in achieving optimal pattern recognition. In this context, the first two strategies, which we have applied to different problems and discuss in this paper, consist essentially in adding either white noise or colored noise (deterministic chaos) during input pattern pre-processing, in order to make class separation easier for a classical backpropagation algorithm: the former when the input patterns are too correlated among themselves, the latter when, on the contrary, they are too noisy. The third, more radical strategy, which we have applied to very hard pattern recognition problems in HEP (high-energy physics) experiments, consists in an automatic (dynamic) redefinition of the net topology itself based on the inner correlations of the inputs.
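As an illustration of the first two strategies, the following is a minimal sketch (the paper gives no code; the function names, the logistic map as the chaotic noise source, and the amplitude parameter are all assumptions for illustration) of additively perturbing input patterns with white or chaotic noise before handing them to a backpropagation classifier:

```python
import numpy as np

def logistic_map_noise(shape, r=4.0, x0=0.37):
    """Deterministic-chaos perturbation from the logistic map
    x_{n+1} = r * x_n * (1 - x_n); a hypothetical stand-in for
    the paper's colored-noise (deterministic chaos) source."""
    n = int(np.prod(shape))
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x.reshape(shape) - 0.5  # center around zero

def preprocess(patterns, mode="white", amplitude=0.05, rng=None):
    """Additively perturb input patterns before backpropagation training.

    mode="white": Gaussian white noise, to help separate patterns
                  that are too correlated among themselves.
    mode="chaos": logistic-map perturbation, a deterministic-chaos
                  alternative for patterns that are too noisy.
    """
    rng = rng or np.random.default_rng(0)
    if mode == "white":
        return patterns + amplitude * rng.standard_normal(patterns.shape)
    elif mode == "chaos":
        return patterns + amplitude * logistic_map_noise(patterns.shape)
    raise ValueError(f"unknown mode: {mode}")

# Example: two nearly identical (highly correlated) patterns
# become distinguishable after the white-noise perturbation.
p = np.tile(np.linspace(0.0, 1.0, 8), (2, 1))
print(preprocess(p, mode="white"))
```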