An information-theoretic approach to the detection of minority subsets

Unsupervised learning techniques such as clustering are useful for obtaining a summary of a dataset. However, applying them to large databases can be computationally expensive. Alternatively, useful information can be retrieved from subsets of the data in a more efficient yet still effective manner. This paper addresses the problem of finding a small subset of minority instances whose distribution differs significantly from that of the majority. In general, such a subset can overlap substantially with the majority, which makes conventional distribution estimation problematic. This paper proposes a new approach to estimating a minority distribution based on the information-theoretic framework, an extension of rate distortion theory to unsupervised learning tasks. Specifically, the proposed method (a) estimates parameters that maximize the divergence between the minority and majority distributions, (b) penalizes redundancy in the data representation via the mutual information between the observed and hidden variables, and (c) employs a hard-assignment approximation to avoid computing trivial conditional probabilities. The algorithm has no problem-dependent parameters, and its time and space complexities are linear in the size of the minority subset. Experiments on artificial datasets show that the proposed method detects, with significantly high precision and sensitivity, minority subsets that overlap substantially with the majority. It also substantially outperforms one-class classification and mixture estimation methods on real-world benchmark datasets for text and satellite imagery classification.
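The abstract does not give the paper's objective function, but the interplay of points (a) and (c) can be illustrated with a toy sketch: fit a majority and a minority Gaussian by alternating hard assignments (no soft posteriors are computed), then report the KL divergence of the fitted minority component from the majority one. All names, the 1-D Gaussian model, and the initialization below are illustrative assumptions, not the paper's algorithm, and the toy data are well separated rather than overlapping as in the paper's target setting.

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def kl_gauss(mu1, s1, mu0, s0):
    """KL( N(mu1, s1^2) || N(mu0, s0^2) ): divergence of minority from majority."""
    return math.log(s0 / s1) + (s1 ** 2 + (mu1 - mu0) ** 2) / (2 * s0 ** 2) - 0.5

def hard_assign_fit(data, iters=20):
    """Two-component fit using hard assignments only, loosely mirroring
    points (a) and (c) of the abstract (a hypothetical sketch)."""
    mu0, s0 = min(data), 1.0  # majority init (arbitrary illustrative choice)
    mu1, s1 = max(data), 1.0  # minority init
    minority = []
    for _ in range(iters):
        # Hard assignment: each point goes to the more likely component.
        minority = [x for x in data if gauss_pdf(x, mu1, s1) > gauss_pdf(x, mu0, s0)]
        majority = [x for x in data if x not in minority]  # floats are distinct here
        if not minority or not majority:
            break
        # Refit each component on its hard-assigned points.
        mu1 = sum(minority) / len(minority)
        s1 = max(1e-6, math.sqrt(sum((x - mu1) ** 2 for x in minority) / len(minority)))
        mu0 = sum(majority) / len(majority)
        s0 = max(1e-6, math.sqrt(sum((x - mu0) ** 2 for x in majority) / len(majority)))
    return (mu0, s0), (mu1, s1), minority

random.seed(0)
# 300 majority points around 0, 30 minority points around 8.
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(8, 0.5) for _ in range(30)]
(mu0, s0), (mu1, s1), minority = hard_assign_fit(data)
print(len(minority), round(kl_gauss(mu1, s1, mu0, s0), 2))
```

The hard assignment replaces the soft conditional probabilities of a standard mixture fit with a winner-take-all rule, which is the flavor of approximation point (c) alludes to; the divergence value quantifies how far the recovered minority distribution sits from the majority.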