Learning hierarchical latent class models

Hierarchical latent class (HLC) models generalize latent class models. As models for cluster analysis, they suit a wider range of applications than latent class models because they relax the conditional independence assumption, which is often violated in practice. They also facilitate the discovery of latent causal structures and the induction of probabilistic models that capture complex dependencies while retaining low inferential complexity. In this paper, we investigate the problem of inducing HLC models from data. Two fundamental issues of general latent structure discovery are identified, and methods to address them for HLC models are proposed. Building on these proposals, we develop an algorithm for learning HLC models and demonstrate the feasibility of learning HLC models large enough to be of practical interest.
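To make the structural difference concrete, the sketch below (not code from the paper) contrasts a latent class model, where a single latent variable renders all observed variables conditionally independent, with a hierarchical latent class model, where several latent variables form a tree and the observed variables sit at the leaves. The variable names (Y, Y1, Y2, X1..X6) are illustrative placeholders, assuming a small example with six observed variables.

```python
# Illustrative sketch: latent class (LC) vs. hierarchical latent class (HLC)
# models represented as rooted trees. Internal nodes are latent variables,
# leaves are observed variables. Names are hypothetical, not from the paper.
from collections import defaultdict

def build_tree(edges):
    """Return {parent: [children]} for a list of (parent, child) edges."""
    tree = defaultdict(list)
    for parent, child in edges:
        tree[parent].append(child)
    return tree

def leaves(tree, root):
    """The observed variables are the leaves of the rooted tree."""
    found = []
    def walk(node):
        children = tree.get(node, [])
        if not children:
            found.append(node)
        for child in children:
            walk(child)
    walk(root)
    return found

# LC model: a single latent variable Y with all observed variables as children,
# so X1..X6 are assumed conditionally independent given Y.
lc_edges = [("Y", x) for x in ("X1", "X2", "X3", "X4", "X5", "X6")]

# HLC model: latent variables Y, Y1, Y2 form a tree, and observed variables
# attach to different latent parents. Given the root Y alone, X1 and X2 can
# remain dependent (their dependence is mediated by Y1), which relaxes the
# conditional independence assumption of the LC model.
hlc_edges = [("Y", "Y1"), ("Y", "Y2"),
             ("Y1", "X1"), ("Y1", "X2"), ("Y1", "X3"),
             ("Y2", "X4"), ("Y2", "X5"), ("Y2", "X6")]

for name, edges in (("LC", lc_edges), ("HLC", hlc_edges)):
    tree = build_tree(edges)
    print(name, "observed leaves:", leaves(tree, "Y"))
```

Both structures are trees, which is why inference in either model stays cheap; the HLC model simply allows extra latent variables between the root and the observations, capturing dependencies the LC model cannot.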