Two factors known to have a direct influence on the classification accuracy of any neural network are (1) the complexity of the network and (2) the representational accuracy of the training data. While pruning algorithms are used to tackle the complexity problem, no direct solution is known for the second. The most commonly followed approach is to select the training data at random from the sample space. Despite its simplicity, this approach does not guarantee that the training will be optimal. In this brief paper, we present a new method that is specific to the difference boosting neural network (DBNN) but could probably be extended to other networks as well. The method is iterative and fast, and it automatically selects an optimal, minimal training set from a larger sample. We test the performance of the new method on some of the well-known datasets from the UCI repository for benchmarking machine learning tools and show that, in almost all cases, its performance is better than that of any published method of comparable network complexity, while requiring only a fraction of the usual training data, thereby making learning faster and more generic.
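The abstract describes the selection procedure only at a high level, so the sketch below shows one plausible way such an iterative, automated training-set selection loop could be organized. It is not the authors' algorithm: the DBNN itself is replaced by a stand-in Gaussian naive Bayes classifier from scikit-learn, and the seed size, the misclassification-driven growth rule, and the stopping criterion are all illustrative assumptions.

```python
# A minimal sketch of one plausible iterative training-set selection loop.
# The actual DBNN update rule is not given in the text above, so a Gaussian
# naive Bayes classifier stands in for the network; every name and parameter
# below is an illustrative assumption, not the authors' published method.
import numpy as np
from sklearn.naive_bayes import GaussianNB


def select_training_subset(X, y, seed_size=10, max_rounds=50, rng_seed=0):
    """Iteratively grow a small training subset from a larger labelled pool.

    Starts from a random seed set, trains a classifier on it, and moves a few
    of the currently misclassified pool examples into the subset each round,
    until the remaining pool is classified correctly or the round limit hits.
    """
    rng = np.random.default_rng(rng_seed)
    n = len(y)
    selected = rng.choice(n, size=min(seed_size, n), replace=False).tolist()

    for _ in range(max_rounds):
        model = GaussianNB().fit(X[selected], y[selected])
        pool = np.setdiff1d(np.arange(n), selected)
        if pool.size == 0:
            break
        wrong = pool[model.predict(X[pool]) != y[pool]]
        if wrong.size == 0:
            # every remaining example is already handled by the small subset
            break
        # grow the subset with a handful of the misclassified examples
        selected.extend(wrong[:5].tolist())

    return np.array(selected), model
```

In an actual application of the method, the stand-in classifier would presumably be replaced by the DBNN itself, so that the subset returned is the minimal training set on which the network generalizes to the full sample.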