Determining hearing threshold from brain stem evoked potentials. Optimizing a neural network to improve classification performance

Feed-forward neural networks trained with back-propagation are an effective tool for automating the classification of biomedical signals. Most neural network research to date has aimed at accelerating learning speed. In the medical context, however, generalisation may be more important than learning speed. For the brain stem auditory evoked potential classification task described in this study, the authors found that parameter values giving the fastest learning could result in poor generalisation. To achieve maximum generalisation, it was necessary to fine-tune the network's gain, momentum, batch size, and hidden-layer size. Although this tuning can be time-consuming, especially with larger training sets, the authors' results suggest that fine-tuning parameters can have important clinical consequences, which justifies the time involved. In the authors' case, tuning the parameters for high generalisation had the additional effect of reducing false-negative classifications, with only a small sacrifice in learning speed.
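The tuning procedure the abstract describes amounts to searching over the four named parameters and selecting the setting that performs best on held-out data rather than the one that trains fastest. The sketch below is not the authors' implementation; it is a minimal illustration of that idea, assuming scikit-learn's MLPClassifier as a stand-in for the back-propagation network and synthetic placeholder data in place of the averaged evoked-potential waveforms and expert labels.

```python
# Minimal sketch (not the authors' code): grid search over the four parameters
# named in the abstract -- gain (learning rate), momentum, batch size, and
# hidden-layer size -- selecting the setting that generalises best to a
# held-out validation set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder assumption: X would hold averaged evoked-potential waveforms,
# y the expert labels (response present / absent); shapes are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 128))
y = rng.integers(0, 2, size=400)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0)

best_score, best_params = -np.inf, None
for lr in (0.01, 0.1, 0.5):             # "gain" (learning rate)
    for mom in (0.0, 0.5, 0.9):         # momentum
        for batch in (10, 50, 200):     # batch size
            for hidden in (5, 10, 20):  # hidden-layer size
                net = MLPClassifier(hidden_layer_sizes=(hidden,),
                                    solver="sgd",
                                    learning_rate_init=lr,
                                    momentum=mom,
                                    batch_size=batch,
                                    max_iter=500,
                                    random_state=0)
                net.fit(X_train, y_train)
                # Validation accuracy as the generalisation estimate,
                # rather than training speed or training-set accuracy.
                score = net.score(X_val, y_val)
                if score > best_score:
                    best_score, best_params = score, (lr, mom, batch, hidden)

print("best validation accuracy:", best_score)
print("(gain, momentum, batch size, hidden units):", best_params)
```

In a clinical setting the selection criterion could equally be weighted toward minimising false negatives, as the abstract notes, by scoring candidates on sensitivity instead of plain accuracy.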
