Expanding context against weighted voting of classifiers

In this paper we propose a new method for integrating the predictions of multiple classifiers in Data Mining and Machine Learning tasks. The method assumes that each classifier stands in its own context and that the contexts are partially ordered. The order is defined by a monotone quality function that maps each context to a value in the interval [0,1]. A classifier whose context has higher quality is expected to predict better than a classifier whose context has lower quality. The objective is to generate the opinion of a `virtual' classifier that stands in a context of quality 1; because it has the best possible context, this virtual classifier should achieve the best prediction accuracy. To this end, we fit a regression in which each prediction is weighted by the quality of the corresponding classifier's context, and evaluating this regression at the point 1 yields the best opinion. Experiments on vowel recognition tasks showed the validity of the approach.
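To make the combination step concrete, the following is a minimal sketch in Python, assuming a linear regression of each class score against context quality with weights equal to the qualities themselves; the function name, the data layout, and the choice of a linear model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def virtual_opinion(predictions, qualities):
    """Quality-weighted regression of class scores against context quality,
    evaluated at quality = 1 (the `virtual' classifier's context).

    predictions: (n_classifiers, n_classes) per-class scores   [assumed layout]
    qualities:   (n_classifiers,) context qualities in [0, 1]
    """
    predictions = np.asarray(predictions, dtype=float)
    q = np.asarray(qualities, dtype=float)

    # Design matrix for a line a + b*q; weight each row by sqrt(quality)
    # so the least-squares fit uses the qualities as regression weights.
    X = np.column_stack([np.ones_like(q), q])
    w = np.sqrt(q)
    coef, *_ = np.linalg.lstsq(X * w[:, None], predictions * w[:, None],
                               rcond=None)          # coef has shape (2, n_classes)

    # Evaluate the fitted line at q = 1: the opinion of the virtual classifier.
    return coef[0] + coef[1]

# Example: three classifiers voting over three classes (made-up numbers)
preds = [[0.6, 0.3, 0.1],
         [0.5, 0.4, 0.1],
         [0.2, 0.5, 0.3]]
quals = [0.9, 0.7, 0.3]
print(virtual_opinion(preds, quals))
```

In this reading, classifiers in higher-quality contexts pull the fitted line more strongly, and the extrapolation to quality 1 plays the role of the best-context opinion described above.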