Evaluation of the Noise Injection in High Dimensions for ANN Classifiers

Noise injection into the training samples has been shown to improve the generalization ability of artificial neural network (ANN) classifiers. In this paper, we investigate the positive effect of noise injection on the generalization ability of ANN classifiers in high dimensions. We further show that the noise injection technique is particularly useful in situations where the true Bayes error is small.

We consider ANN classifiers with one hidden layer. The input neurons correspond to the components of the pattern vector to be classified, the hidden layer has m neurons, and the output neurons correspond to the pattern class labels. For simplicity, we focus on the two-class problem, so the number of output neurons is 2. Every neuron in each layer except the output layer is fully connected to the neurons of the next layer. The back-propagation (BP) algorithm [1] was used to train the ANN classifiers. The initial weights of a network were distributed uniformly in [-0.5, 0.5]. Learning was terminated when the mean-squared error over the training set dropped below a specified threshold, or when the mean-squared error no longer changed; the maximum number of iterations was set to 10000. Since the rate of convergence is considerably affected by the learning rate c, we set c = 0.1 based on preliminary experiments.
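The training setup described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy two-class data, the noise level sigma, and the MSE stopping threshold are assumptions (the excerpt does not specify them), while the architecture (one hidden layer with m neurons, 2 output neurons), the uniform weight initialization in [-0.5, 0.5], the learning rate c = 0.1, the 10000-iteration cap, and the per-iteration noise injection into the training samples follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy two-class data in d dimensions (not from the paper).
d, n, m = 10, 100, 8  # input dimension, sample count, hidden neurons m
X = np.vstack([rng.normal(-1.0, 1.0, (n // 2, d)),
               rng.normal(+1.0, 1.0, (n // 2, d))])
Y = np.zeros((n, 2))                  # two output neurons, one per class label
Y[:n // 2, 0] = 1.0
Y[n // 2:, 1] = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initial weights distributed uniformly in [-0.5, 0.5], as in the text.
W1 = rng.uniform(-0.5, 0.5, (d, m)); b1 = rng.uniform(-0.5, 0.5, m)
W2 = rng.uniform(-0.5, 0.5, (m, 2)); b2 = rng.uniform(-0.5, 0.5, 2)

c = 0.1          # learning rate from the text
sigma = 0.1      # assumed noise standard deviation (not given in the excerpt)
threshold = 1e-3 # assumed MSE stopping threshold (not given in the excerpt)
prev_mse = np.inf

for it in range(10000):  # maximum number of iterations from the text
    # Noise injection: perturb the training samples afresh each iteration.
    Xn = X + rng.normal(0.0, sigma, X.shape)

    # Forward pass through the single hidden layer.
    H = sigmoid(Xn @ W1 + b1)
    O = sigmoid(H @ W2 + b2)

    err = O - Y
    mse = np.mean(err ** 2)
    # Stop when the MSE drops below the threshold or stops changing.
    if mse < threshold or abs(prev_mse - mse) < 1e-9:
        break
    prev_mse = mse

    # Back-propagation of the mean-squared-error gradient.
    dO = err * O * (1.0 - O)
    dH = (dO @ W2.T) * H * (1.0 - H)
    W2 -= c * H.T @ dO / n;  b2 -= c * dO.mean(axis=0)
    W1 -= c * Xn.T @ dH / n; b1 -= c * dH.mean(axis=0)

# Training accuracy on the clean samples.
acc = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)).argmax(axis=1)
              == Y.argmax(axis=1))
```

Because fresh noise is drawn every iteration, the "MSE unchanged" stopping rule rarely fires in this sketch; in practice the threshold or the iteration cap terminates training.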