An Application of the Variational Bayesian Approach to Probabilistic Context-Free Grammars

We present an efficient learning algorithm for probabilistic context-free grammars based on the variational Bayesian approach. Although the maximum likelihood method has traditionally been used for learning probabilistic language models, Bayesian learning is, in principle, less prone to overfitting than the maximum likelihood method. We show that the computational complexity of our algorithm is equal to that of the Inside-Outside algorithm. We also report experimental results comparing the precision of our algorithm with that of the Inside-Outside algorithm.
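To make the complexity claim concrete, the sketch below shows the inside (CKY-style) dynamic program that a single iteration of the Inside-Outside algorithm runs; a variational Bayesian update can reuse the same chart recursion with rule probabilities replaced by variational weights, which is why the per-iteration cost is the same O(n^3 |G|). The toy grammar, rule probabilities, and function name are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical toy PCFG in Chomsky normal form; probabilities are
# illustrative only.
binary_rules = {                 # A -> B C : probability
    ("S", ("NP", "VP")): 1.0,
    ("VP", ("V", "NP")): 1.0,
}
unary_rules = {                  # A -> terminal : probability
    ("NP", "she"): 0.5,
    ("NP", "fish"): 0.5,
    ("V", "eats"): 1.0,
}

def inside_probability(words):
    """Inside pass: beta[(i, j, A)] = P(A derives words[i:j]).

    Both EM (Inside-Outside) and a variational Bayesian update run
    this same cubic-time dynamic program over spans, so their
    per-iteration complexity coincides.
    """
    n = len(words)
    beta = defaultdict(float)
    # Base case: spans of length 1 covered by lexical rules.
    for i, w in enumerate(words):
        for (A, term), p in unary_rules.items():
            if term == w:
                beta[(i, i + 1, A)] += p
    # Recursion: combine adjacent sub-spans with binary rules.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (A, (B, C)), p in binary_rules.items():
                    beta[(i, j, A)] += p * beta[(i, k, B)] * beta[(k, j, C)]
    return beta[(0, n, "S")]

print(inside_probability(["she", "eats", "fish"]))  # → 0.25
```

In a variational Bayesian version, the point estimates in `binary_rules` and `unary_rules` would be replaced by geometric-mean weights derived from Dirichlet posteriors over rule probabilities; the chart computation itself is unchanged.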