A study is presented to compare the performance of multilayer perceptrons, radial basis function networks, and probabilistic neural networks for classification. In many classification problems, probabilistic neural networks have outperformed other neural classifiers. Unfortunately, with this kind of network, the number of operations required to classify one pattern depends directly on the number of training patterns. This excessive computational cost makes the method difficult to implement in many real-time applications. In contrast, multilayer perceptrons have a low computational cost after training, but the training set size required to achieve low error rates is generally large. In this paper we propose an alternative method for training multilayer perceptrons, using data knowledge derived from probabilistic neural network theory. Once the probability density functions have been estimated by the probabilistic neural network, a new training set can be generated by sampling from these estimated probability density functions. Results demonstrate that a multilayer perceptron trained with this enlarged training set achieves results as good as those obtained with a probabilistic neural network, but at a lower computational cost.
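A minimal sketch of the proposed pipeline is given below, assuming Gaussian Parzen kernels for the class-conditional density estimates (as in Specht's probabilistic neural network) and using scikit-learn's KernelDensity and MLPClassifier as stand-ins; the bandwidth, the number of synthetic patterns per class, and the network architecture are illustrative choices, not values from the paper.

```python
# Sketch: estimate class-conditional PDFs with Parzen windows (the PNN step),
# sample synthetic patterns from them to enlarge the training set, then train
# an MLP on the enlarged set. All hyperparameters are placeholder assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.neural_network import MLPClassifier

def enlarge_training_set(X, y, n_new_per_class=500, bandwidth=0.5):
    """Fit one Gaussian kernel density estimate per class and draw
    synthetic patterns from each estimated class-conditional PDF."""
    X_parts, y_parts = [X], [y]
    for c in np.unique(y):
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
        kde.fit(X[y == c])
        X_parts.append(kde.sample(n_new_per_class))
        y_parts.append(np.full(n_new_per_class, c))
    return np.vstack(X_parts), np.concatenate(y_parts)

# Toy usage: two Gaussian classes. After training, classification requires
# only the MLP forward pass, whose cost is independent of the training set
# size, unlike evaluating the PNN itself.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.repeat([0, 1], 50)

X_aug, y_aug = enlarge_training_set(X, y)
mlp = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X_aug, y_aug)
print(mlp.score(X, y))
```

Note the trade-off this sketch is meant to expose: the density estimation and sampling cost is paid once, offline, so the expensive pattern-by-pattern PNN evaluation is replaced at test time by a fixed-cost MLP forward pass.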