Learning higher-order sequential structure with cloned HMMs

Variable-order sequence modeling is an important problem in artificial and natural intelligence. While overcomplete Hidden Markov Models (HMMs) have, in theory, the capacity to represent long-term temporal structure, they often fail to learn and converge to local minima. We show that by constraining HMMs with a simple sparsity structure inspired by biology, we can make them learn variable-order sequences efficiently. We call this model the cloned HMM (CHMM) because the sparsity structure enforces that many hidden states map deterministically to the same emission state. CHMMs with over one billion parameters can be efficiently trained on GPUs without being severely affected by the credit-diffusion problem of standard HMMs. Unlike n-grams and sequence memoizers, CHMMs can model temporal dependencies at arbitrarily long distances and recognize contexts with 'holes' in them. Compared to Recurrent Neural Networks and their Long Short-Term Memory extensions (LSTMs), CHMMs are generative models that can natively handle uncertainty. Moreover, CHMMs return a higher-order graph that represents the temporal structure of the data, which can be useful for community detection and for building hierarchical models. Our experiments show that CHMMs can beat n-grams, sequence memoizers, and LSTMs on character-level language modeling tasks. CHMMs can therefore be a viable alternative to these methods in some tasks that require variable-order sequence modeling and the handling of uncertainty.
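To make the clone structure concrete, the sketch below shows a forward-pass likelihood computation under the deterministic-emission constraint described above. It is a minimal illustration written from this description, not the authors' released implementation: it assumes each observed symbol owns a contiguous block of `n_clones` hidden states, and names such as `chmm_loglik` and the toy sequence are illustrative.

```python
# Minimal sketch of the cloned-HMM idea: hidden state h deterministically
# emits symbol h // n_clones, so the forward message only ever has mass on
# the clones of the currently observed symbol. Assumed layout, not the
# authors' code.
import numpy as np

def chmm_loglik(T, pi, seq, n_clones):
    """Log-likelihood of an observed symbol sequence under a cloned HMM.

    T  : (n_states, n_states) transition matrix, rows sum to 1
    pi : (n_states,) initial hidden-state distribution
    seq: list of observed symbols in {0, ..., n_symbols - 1}
    """
    def block(sym):
        # contiguous block of hidden states (clones) assigned to `sym`
        return slice(sym * n_clones, (sym + 1) * n_clones)

    alpha = pi[block(seq[0])].copy()      # belief over clones of the first symbol
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for prev, cur in zip(seq[:-1], seq[1:]):
        # propagate through the (n_clones x n_clones) block of T that maps
        # clones of `prev` to clones of `cur`; the deterministic emissions
        # rule out every other entry of the transition matrix
        alpha = alpha @ T[block(prev), block(cur)]
        loglik += np.log(alpha.sum())     # accumulate log P(x_t | x_{1:t-1})
        alpha /= alpha.sum()
    return loglik

# Toy usage: 3 symbols with 4 clones each and random row-normalised transitions.
rng = np.random.default_rng(0)
n_symbols, n_clones = 3, 4
n_states = n_symbols * n_clones
T = rng.random((n_states, n_states))
T /= T.sum(axis=1, keepdims=True)
pi = np.full(n_states, 1.0 / n_states)
print(chmm_loglik(T, pi, [0, 1, 2, 1, 0], n_clones))
```

Because the emission matrix is fixed and block-structured, only the transition matrix is learned (e.g., with EM), which is what keeps training tractable even at very large state counts.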
