Alternative learning methods for training neural network classifiers

Neural networks have proven very useful in the field of pattern classification, mapping input patterns into one of several categories. One widely used neural network paradigm is the multi-layer perceptron trained with back-propagation of errors -- often called a back-propagation network (BPN). Rather than being explicitly programmed, a BPN `learns' this mapping by exposure to a training set: a collection of input pattern samples paired with their corresponding output classifications. Proper construction of this training set is crucial to successful training of a BPN. One criterion for proper construction is that each class must be adequately represented. A class that appears less often in the training data may not be learned as completely or correctly, impairing the network's discrimination ability. This is due to the implicit setting of a priori probabilities that results from unequal sample sizes. The degree of impairment is a function of (among other factors) the relative number of samples of each class used for training. This paper addresses the problem of unequal representation in training sets by proposing two alternative methods of learning. One adjusts the learning rate for each class to achieve user-specified goals. The other uses a genetic algorithm to set the connection weights, with a fitness function based on these same goals. These methods are tested on both artificial and real-world training data.
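As a rough illustration of the first idea, the sketch below (Python/NumPy, not taken from the paper) trains a one-hidden-layer back-propagation network on a hypothetical imbalanced two-class problem and simply scales the learning rate by inverse class frequency; the paper's actual class-dependent rate rule, which is driven by user-specified goals, may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class problem with unequal representation:
# 90 samples of class 0, 10 samples of class 1.
X0 = rng.normal(loc=-1.0, scale=1.0, size=(90, 2))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(10, 2))
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(90, dtype=int), np.ones(10, dtype=int)])

# One-hidden-layer perceptron with sigmoid units.
n_in, n_hidden, n_out = 2, 8, 2
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative per-class learning rates: the under-represented class
# gets a proportionally larger step so its errors carry more weight.
class_counts = np.bincount(y, minlength=n_out)
base_eta = 0.1
eta_per_class = base_eta * class_counts.max() / class_counts  # e.g. [0.1, 0.9]

targets = np.eye(n_out)[y]  # one-hot target vectors

for epoch in range(200):
    for i in rng.permutation(len(X)):          # on-line (per-sample) updates
        x, t = X[i], targets[i]
        eta = eta_per_class[y[i]]              # learning rate chosen by class

        # Forward pass.
        h = sigmoid(x @ W1 + b1)
        o = sigmoid(h @ W2 + b2)

        # Backward pass (standard sigmoid deltas).
        delta_o = (o - t) * o * (1.0 - o)
        delta_h = (delta_o @ W2.T) * h * (1.0 - h)

        # Weight updates scaled by the class-specific learning rate.
        W2 -= eta * np.outer(h, delta_o)
        b2 -= eta * delta_o
        W1 -= eta * np.outer(x, delta_h)
        b1 -= eta * delta_h

# Report per-class accuracy after training.
pred = np.argmax(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), axis=1)
for c in range(n_out):
    print(f"class {c}: accuracy {np.mean(pred[y == c] == c):.2f}")
```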