Improving classification accuracy by using confidence measures to combine classifiers
The idea of comparing and combining classifiers to improve the accuracy of classification is currently the focus of extensive theoretical and experimental research in the areas of machine learning and pattern recognition. We contribute to this body of work by developing ways to combine off-the-shelf classifiers. Some of our techniques show significant improvement over the state of the art at the cost of restricting classification to part of the data.
Our concept is straightforward: use confidence measures to predict which classifier is expected to perform best on a given sample, then apply that classifier to the sample. Experimental results show that proceeding in this manner over all samples can improve classification accuracy. Our partial classification technique achieves a significant improvement in accuracy on a large portion of the data, and two classifiers can be combined to achieve higher accuracy on the entire dataset than either classifier attains on its own. We present several techniques for using confidence measures to combine classifiers, each applying a classifier to the subset of the data for which it is most appropriate, and demonstrate the resulting gains in accuracy experimentally.
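The selection scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two toy classifiers, their confidence heuristic (distance from a decision threshold), and the `reject_below` abstention parameter are all assumptions introduced for the example.

```python
# Sketch of confidence-based classifier combination.
# Each classifier returns (predicted_label, confidence in [0, 1]).
# The combiner applies, per sample, whichever classifier is more
# confident; with a rejection threshold it abstains on low-confidence
# samples, which corresponds to partial classification.

def clf_a(x):
    # Hypothetical classifier: thresholds on the first feature.
    label = 1 if x[0] > 0.5 else 0
    conf = min(abs(x[0] - 0.5) * 2, 1.0)  # distance from the boundary
    return label, conf

def clf_b(x):
    # Hypothetical classifier: thresholds on the second feature.
    label = 1 if x[1] > 0.5 else 0
    conf = min(abs(x[1] - 0.5) * 2, 1.0)
    return label, conf

def combine(x, classifiers, reject_below=0.0):
    """Apply the most confident classifier; return None to abstain."""
    label, conf = max((clf(x) for clf in classifiers),
                      key=lambda pair: pair[1])
    return label if conf >= reject_below else None

# Each sample is routed to whichever classifier is most confident;
# the last sample is rejected because neither classifier is confident.
for x in [(0.9, 0.4), (0.45, 0.1), (0.52, 0.51)]:
    print(combine(x, [clf_a, clf_b], reject_below=0.2))
```

Raising `reject_below` trades coverage for accuracy: the combiner classifies fewer samples but only those on which some classifier is confident, mirroring the partial-classification trade-off discussed above.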