Stabilizing classifiers for very small sample sizes

In this paper we consider the possibilities for constructing linear classifiers from very small sample sizes. We propose a stability measure and present a study on the performance and stability of the following techniques: regularization by the ridge estimate of the covariance matrix, bootstrapping followed by aggregation ("bagging"), and editing combined with pseudo-inversion. It is shown that these techniques allow a smooth transition between the nearest mean classifier and the Fisher discriminant (1936, 1940) based on large sample sizes. Especially for highly correlated data, very good results are obtained compared with the nearest mean method.
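To make the first of these techniques concrete, the following is a minimal sketch (not the authors' code) of a ridge-regularized Fisher discriminant for two classes, assuming a pooled covariance estimate; the parameter name `lam` is hypothetical notation. With `lam = 0` the classifier is the Fisher discriminant, while for large `lam` the ridge estimate is dominated by the identity matrix and the rule reduces to the nearest mean classifier, which illustrates the smooth transition described above.

```python
import numpy as np

def ridge_fisher(X1, X2, lam):
    """Linear discriminant (w, b) using a ridge estimate of the pooled covariance.

    X1, X2 : (n_i, d) arrays of training samples for the two classes.
    lam    : ridge parameter; 0 gives the Fisher discriminant,
             large values approach the nearest mean classifier.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled sample covariance of the two classes.
    S = ((n1 - 1) * np.cov(X1, rowvar=False)
         + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
    # Ridge estimate: shrink the covariance toward the identity,
    # keeping it invertible even when n1 + n2 is smaller than d.
    S_ridge = S + lam * np.eye(S.shape[0])
    w = np.linalg.solve(S_ridge, m1 - m2)   # Fisher direction when lam = 0
    b = -0.5 * w @ (m1 + m2)                # threshold midway between the means
    return w, b

# Usage: assign x to class 1 if w @ x + b > 0. For large lam,
# S_ridge is approximately lam * I, so w is proportional to (m1 - m2)
# and the decision rule becomes the nearest mean classifier.
```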