Dynamic Feature Set Training of Neural Nets for Classification

A system that performs classification of data typically comprises two components: a feature extractor and a classifier. The extracted features can be alternative representations of the input data, and the classifier can be a neural network. Two important problems in the design of such a robust classification system are selecting the optimal subset of features and determining the architecture and parameters of the neural net. We have developed an evolutionary programming (EP) approach to neural network classifier design that automatically selects the best subset of features, the neural net architecture, and the neural net weights to yield optimal classifier performance. The measure used to determine the optimal feature subset and neural net architecture is a tradeoff between reducing the training misclassification error, the number of features retained, and the network's computational complexity. Using this approach we have developed a system that uses a hierarchical neural net to classify the severity of coronary artery disease (CAD) from multiple-input ECG waveform representations collected during exercise tests. In developing the CAD classifier we found that some features were important during training to achieve good system performance. Once trained, the final system did not require these features to maintain this performance; but if these features were not considered during training, the final system's performance was substantially degraded. This suggests that dynamic feature sets may be important in the development and training of neural net systems even though they may not be required for the final system. Presented here is a detailed description of the system design, the EP training solution, and the results.
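The core idea described above — evolving a feature mask together with network weights under a fitness that trades off training error, feature count, and complexity — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the synthetic data, network size, mutation rates, and penalty weights (`lam_feat`, `lam_cplx`) are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-class data: only the first 3 of 8 features are informative.
# (Illustrative stand-in for the paper's ECG-derived feature representations.)
n, d = 200, 8
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

HID = 4  # hidden units in a single-hidden-layer net (an assumed size)

def init_candidate():
    # A candidate couples a binary feature mask with the net's weights,
    # so EP searches feature subset and weights jointly.
    return {
        "mask": rng.random(d) < 0.5,          # which features are retained
        "W1": rng.normal(0, 0.5, (d, HID)),   # input-to-hidden weights
        "W2": rng.normal(0, 0.5, HID),        # hidden-to-output weights
    }

def predict(c, X):
    Xm = X * c["mask"]                        # zero out unselected features
    h = np.tanh(Xm @ c["W1"])
    return (h @ c["W2"] > 0).astype(int)

def fitness(c, lam_feat=0.01, lam_cplx=0.0005):
    # Tradeoff described in the abstract: training misclassification error,
    # number of features retained, and computational complexity (here taken
    # as the effective weight count -- an assumed proxy).
    err = np.mean(predict(c, X) != y)
    n_feat = c["mask"].sum()
    n_wts = n_feat * HID + HID
    return err + lam_feat * n_feat + lam_cplx * n_wts

def mutate(c):
    # EP-style mutation: flip feature-mask bits, perturb weights with
    # Gaussian noise (rates and scales are arbitrary choices here).
    return {
        "mask": c["mask"] ^ (rng.random(d) < 0.1),
        "W1": c["W1"] + rng.normal(0, 0.1, c["W1"].shape),
        "W2": c["W2"] + rng.normal(0, 0.1, c["W2"].shape),
    }

# EP loop: each parent spawns one offspring; the fitter half survives.
pop = [init_candidate() for _ in range(20)]
for gen in range(50):
    pop = pop + [mutate(c) for c in pop]
    pop.sort(key=fitness)
    pop = pop[:20]

best = pop[0]
```

Because the mask evolves alongside the weights, a feature can be active during much of training yet absent from the final best candidate — the situation the abstract calls a dynamic feature set.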