Linear feature extraction using a sufficient statistic

The objective of feature extraction is to compress the data while preserving the Bayes classification error of the original data. A sufficient statistic of minimum dimension achieves this objective. This paper derives a non-iterative linear feature extractor that approximates the minimal-dimension linear sufficient statistic operator for the classification of Gaussian distributions. The new framework alleviates the bias of a similar existing formulation towards the parameters of a reference class. Moreover, it is a heteroscedastic extension of linear discriminant analysis and captures the discriminative information in both the first and second central moments of the data. The proposed method can improve on the performance of comparable feature extractors at equal, or even lower, computational complexity.
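
As a rough, self-contained illustration of the kind of extractor the abstract describes (non-iterative, heteroscedastic, using both class means and class covariances), the NumPy sketch below builds a linear projection from whitened mean differences and from the directions in which each whitened class covariance deviates from the identity. The function name `heteroscedastic_linear_features`, the choice of the average covariance as the whitening reference, and the way candidate directions are pooled are illustrative assumptions, not the operator derived in the paper.

```python
import numpy as np

def heteroscedastic_linear_features(X, y, n_components):
    """Sketch of a non-iterative linear feature extractor for Gaussian
    classes that uses both class means and class covariances.

    Assumptions (not from the paper): the average class covariance is the
    whitening reference, and the subspace is spanned by whitened mean
    differences plus leading eigenvectors of each whitened class
    covariance's deviation from the identity.
    """
    classes = np.unique(y)
    d = X.shape[1]
    mus = np.array([X[y == c].mean(axis=0) for c in classes])
    Sigmas = np.array([np.cov(X[y == c], rowvar=False) for c in classes])

    # Whiten with the average covariance so no single class acts as reference.
    Sigma_bar = Sigmas.mean(axis=0) + 1e-9 * np.eye(d)  # small ridge for stability
    evals, evecs = np.linalg.eigh(Sigma_bar)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T        # Sigma_bar^{-1/2}

    mu_bar = mus.mean(axis=0)
    directions = []
    for mu_c, Sigma_c in zip(mus, Sigmas):
        # First-moment information: whitened mean difference.
        directions.append(W @ (mu_c - mu_bar))
        # Second-moment information: directions in which the whitened class
        # covariance deviates most from the identity.
        D = W @ Sigma_c @ W.T - np.eye(d)
        w, V = np.linalg.eigh(D)
        order = np.argsort(-np.abs(w))
        directions.extend(V[:, order[:n_components]].T)

    # Orthonormal basis of the collected directions; keep n_components of them.
    B = np.linalg.qr(np.array(directions).T)[0][:, :n_components]
    A = B.T @ W   # final linear map: x -> A x
    return A

# Usage: project a toy two-class Gaussian data set (different means and
# covariances) down to 2 features.
rng = np.random.default_rng(0)
X0 = rng.multivariate_normal([0, 0, 0, 0], np.diag([1, 1, 1, 4]), 200)
X1 = rng.multivariate_normal([1, 0, 0, 0], np.diag([1, 1, 1, 1]), 200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)
A = heteroscedastic_linear_features(X, y, n_components=2)
Z = X @ A.T   # compressed features retaining mean and covariance differences
```

In this sketch, whitening with the average covariance (rather than one class's covariance) mirrors the abstract's point about avoiding bias towards a reference class; the actual construction of the minimal-dimension operator is given in the paper itself.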