Limits to performance gains in combined neural classifiers

The performance of a single classifier is often inadequate in difficult classification problems. In such cases, several researchers have combined the outputs of multiple classifiers to obtain better performance. However, the amount of improvement possible through such combination techniques is generally not known. This article presents two approaches to estimating performance limits in hybrid networks. First, we present a framework that estimates Bayes error rates when linear combiners are used. Then we discuss a more general method that provides decision confidences and error bounds based on error types arising from the training data. The methods are illustrated for a difficult four-class problem involving underwater acoustic data. For this data, we compute the single-classifier and combiner classification performances, as well as the Bayes error rate and an error bound.
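To make the notion of a linear combiner concrete, the following is a minimal sketch of an equal-weight averaging combiner over the class-posterior estimates of several neural classifiers, compared against a single classifier. The synthetic four-class data, the use of scikit-learn's MLPClassifier, and the equal weighting are illustrative assumptions only; they are not the framework or the underwater acoustic data described in the article.

```python
# Illustrative sketch only: equal-weight averaging (linear) combiner over
# the posterior estimates of several classifiers, compared against a
# single classifier on synthetic data (assumed setup, not the article's).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Train several classifiers that differ only in their random initialization.
clfs = [MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                      random_state=seed).fit(X_tr, y_tr) for seed in range(5)]

# Error rate of a single classifier.
single_err = 1.0 - clfs[0].score(X_te, y_te)

# Linear combiner: average the class-posterior estimates across classifiers,
# then choose the class with the highest averaged posterior.
avg_post = np.mean([c.predict_proba(X_te) for c in clfs], axis=0)
combined_err = np.mean(avg_post.argmax(axis=1) != y_te)

print(f"single classifier error:  {single_err:.3f}")
print(f"averaging combiner error: {combined_err:.3f}")
```

The gap between the two printed error rates is the kind of combination gain whose limits the article analyzes: the combined error can approach, but never fall below, the Bayes error rate for the problem.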