Tractable Undirected Approximations for Graphical Models

Graphical models provide a broad framework for probabilistic inference, with applications to such diverse areas as speech recognition (hidden Markov models), medical diagnosis (belief networks) and artificial intelligence (Boltzmann machines). However, exact inference typically requires time exponential in the number of nodes in the graph. We present a general framework for a class of approximating models, based on the Kullback-Leibler divergence between the distribution defined by an approximating graph and that defined by the original graph. We concentrate here on undirected approximations of both intractable directed and undirected graphical models. Simulation results on a small benchmark problem suggest that this method compares favourably against others previously reported in the literature.
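The abstract does not specify the form of the approximating graph, but the simplest instance of a KL-based approximation is mean field: a fully factorized distribution Q fitted to an intractable undirected model P by minimizing KL(Q||P). The sketch below is an illustrative example of that general idea, not the paper's own algorithm. It fits a factorized Q to a small Boltzmann machine (small enough that the KL divergence can be checked by exact enumeration); the network size, weights, and update schedule are all assumptions chosen for the demonstration.

```python
import itertools
import numpy as np

# Hypothetical small Boltzmann machine over n binary (0/1) units:
#   P(x) ∝ exp( x·W·x / 2 + b·x ),  W symmetric, zero diagonal.
rng = np.random.default_rng(0)
n = 4
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)

# Exact enumeration of all 2^n states (tractable only for tiny n;
# this is what the approximation avoids for large graphs).
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
log_p_unnorm = 0.5 * np.einsum("si,ij,sj->s", states, W, states) + states @ b
mx = log_p_unnorm.max()
log_Z = mx + np.log(np.exp(log_p_unnorm - mx).sum())
log_p = log_p_unnorm - log_Z

def kl_q_p(m):
    """KL(Q||P) for a factorized Q with Q(x_i = 1) = m_i."""
    log_q = states @ np.log(m) + (1.0 - states) @ np.log(1.0 - m)
    q = np.exp(log_q)
    return float(q @ (log_q - log_p))

# Mean-field coordinate updates: each step m_i = sigmoid(b_i + W_i·m)
# minimizes KL(Q||P) with respect to m_i, so the divergence never increases.
m = np.full(n, 0.5)
kl_init = kl_q_p(m)
for _ in range(50):
    for i in range(n):
        m[i] = 1.0 / (1.0 + np.exp(-(b[i] + W[i] @ m)))
kl_final = kl_q_p(m)
```

Each coordinate update has a closed form precisely because Q factorizes; richer approximating graphs (e.g. trees or chains) trade this simplicity for a tighter fit, which is the design space the framework above is concerned with.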