Stacked generalization is a classification technique that aims to improve on the performance of individual classifiers by combining them under a hierarchical architecture. In many applications this technique performs better than other classification schemes, but in other applications its performance degrades for reasons that are not well understood. Although it has been applied in several domains, it remains unclear under which circumstances stacked generalization actually increases performance. In this work, the performance of stacked generalization is analyzed in terms of the performance parameters of the individual classifiers within the architecture. The analysis shows that, for the stacked generalization architecture to succeed, the individual classifiers should learn the training set by sharing its members among themselves.
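As a concrete illustration of the two-level architecture (a minimal sketch, not the paper's experimental setup), stacked generalization can be expressed with scikit-learn's StackingClassifier; the particular base learners, meta-learner, and dataset below are illustrative assumptions only.

```python
# A minimal sketch of stacked generalization.
# Assumptions: iris data, k-NN and a decision tree as level-0 (base)
# classifiers, logistic regression as the level-1 (meta) classifier.
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0 classifiers: their predictions become the feature vector
# that is fed to the level-1 classifier.
base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
]

# Level-1 classifier trained on the base classifiers' outputs.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```

The cv=5 setting makes the meta-level training data consist of out-of-fold predictions, so the level-1 classifier learns from base-classifier behavior on unseen data rather than from their (possibly overfit) training-set outputs, which is the essence of Wolpert's scheme.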