Gaussian Hierarchical Latent Dirichlet Allocation: Bringing Polysemy Back

Topic models are widely used to discover the latent representations of a set of documents. Two canonical models are latent Dirichlet allocation (LDA) and Gaussian LDA: the former represents each latent topic as a multinomial distribution over words, while the latter represents it as a multivariate Gaussian distribution over pre-trained word embedding vectors. Compared with LDA, Gaussian LDA is limited in that it does not capture the polysemy of a word such as "bank." In this paper, we show that Gaussian LDA can recover the ability to capture polysemy by introducing a hierarchical structure into the set of topics that the model can use to represent a given document. Our Gaussian hierarchical LDA significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations than hierarchical LDA. Extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy across a wide range of corpora and word embedding vectors.
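To make the generative story concrete, the following is a minimal Python sketch of a Gaussian topic hierarchy of the kind the abstract describes: a nested Chinese restaurant process draws a root-to-leaf path per document, each tree node carries a Gaussian topic over embeddings, and tokens mix over the levels of the path. The fixed depth L, isotropic covariance sigma2, the parent-plus-noise topic means, and all names (gamma, alpha, sample_ncrp_path, generate_document) are illustrative assumptions for this sketch, not the authors' implementation or inference procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 3               # fixed tree depth (root at level 0), assumed for this sketch
D = 50              # word-embedding dimensionality
gamma = 1.0         # nested-CRP concentration: larger => more subtopics spawned
alpha = np.ones(L)  # Dirichlet prior over the L levels within a document
sigma2 = 0.1        # isotropic topic covariance, a simplifying assumption

# Each tree node holds a Gaussian topic mean; children drift from their parent,
# so shallow nodes stay general and deep nodes become specific.
tree = {(): {"mu": rng.normal(size=D), "children": [], "count": 0}}

def sample_ncrp_path(tree):
    """Sample a root-to-leaf path with the nested Chinese restaurant process:
    at each level, reuse a child proportionally to its visit count, or open a
    new child with probability proportional to gamma."""
    node, path = (), [()]
    for _ in range(L - 1):
        children = tree[node]["children"]
        weights = np.array([tree[c]["count"] for c in children] + [gamma], float)
        k = rng.choice(len(weights), p=weights / weights.sum())
        if k == len(children):                       # open a new subtopic
            child = node + (k,)
            tree[child] = {"mu": tree[node]["mu"]
                                 + rng.normal(scale=0.5, size=D),
                           "children": [], "count": 0}
            children.append(child)
        else:
            child = children[k]
        tree[child]["count"] += 1
        node = child
        path.append(child)
    return path

def generate_document(tree, n_words=20):
    """Draw one document: a path through the tree, per-document level
    proportions, then one embedding per token from its level's Gaussian."""
    path = sample_ncrp_path(tree)
    theta = rng.dirichlet(alpha)                     # mixing over the L levels
    levels = rng.choice(L, size=n_words, p=theta)
    return np.stack([rng.normal(tree[path[l]]["mu"], np.sqrt(sigma2))
                     for l in levels])

docs = [generate_document(tree) for _ in range(5)]
print(len(tree) - 1, "subtopics created;", docs[0].shape, "= (tokens, dim)")
```

Under this construction, an ambiguous word like "bank" can sit near a general node whose distinct subtrees specialize toward finance and toward rivers, which is the mechanism by which the hierarchy restores polysemy to a Gaussian topic model.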
