Document classification is the task of categorizing or grouping documents of various types. Each document is represented as a bag of words, a representation with no natural Euclidean structure: similarity between documents is measured through relative word counts, yet endowing the vector of term frequencies with a Euclidean metric has no obvious justification. A more appropriate and commonly used assumption is that the data lie on a statistical manifold, i.e., a manifold of probabilistic generative models. In this paper, we propose computing a low-dimensional, information-based embedding of documents into Euclidean space. One component of our approach, motivated by information geometry, is the use of the Fisher information distance to define similarities between documents. The other is the computation of the Fisher metric over a lower-dimensional statistical manifold estimated nonparametrically from the data. We demonstrate that, on the classification task, this information-driven embedding outperforms both a standard PCA embedding and other Euclidean embeddings of the term-frequency vector.
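As a concrete illustration of the Fisher information distance between documents (a sketch, not the paper's implementation): when each document is modeled as a multinomial distribution over a shared vocabulary, the Fisher information (geodesic) distance has the closed form 2 arccos(Σᵢ √(pᵢ qᵢ)), i.e., twice the angle between the square-root probability vectors on the unit sphere. The function name and toy documents below are hypothetical.

```python
import math
from collections import Counter

def fisher_info_distance(doc_a, doc_b):
    """Fisher information (geodesic) distance between two documents,
    each modeled as a multinomial over the union vocabulary.
    Equals 2*arccos of the Bhattacharyya coefficient."""
    counts_a = Counter(doc_a.lower().split())
    counts_b = Counter(doc_b.lower().split())
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    # Bhattacharyya coefficient between the term-frequency distributions
    bc = sum(math.sqrt((counts_a[w] / total_a) * (counts_b[w] / total_b))
             for w in vocab)
    # Clamp to guard against floating-point overshoot before arccos
    return 2.0 * math.acos(min(1.0, bc))

d_same = fisher_info_distance("the cat sat", "the cat sat")   # ~0: identical distributions
d_diff = fisher_info_distance("the cat sat", "dogs bark loud")  # pi: disjoint support
```

Identical term-frequency distributions give distance 0, while documents with no words in common attain the maximum geodesic distance of π on the multinomial simplex.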