Regularization and nonlinearities for neural language models: when are they needed?

Neural language models (LMs) based on recurrent neural networks (RNNs) are among the most successful word- and character-level LMs. Why do they work so well, and in particular better than linear neural LMs? Possible explanations are that RNNs are implicitly better regularized, or that their nonlinearities give them a higher capacity for storing patterns, or both. Here we argue for the first explanation in the limit of little training data and for the second in the regime of large amounts of text. We show state-of-the-art performance on the popular, small Penn dataset when RNN LMs are regularized with random dropout. Nonetheless, we obtain even better performance from a simplified, much less expressive linear RNN whose recurrent matrix has no off-diagonal entries. We call this model an impulse-response LM (IRLM). Using random dropout, column normalization and annealed learning rates, IRLMs develop neurons that keep a memory of up to 50 words in the past and achieve a perplexity of 102.5 on the Penn dataset. On two large datasets, however, the same regularization methods are unsuccessful for both models, and the RNN's expressivity allows it to overtake the IRLM by 10 and 20 percent in perplexity, respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the Microsoft Research Sentence Completion (MRSC) task. We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units and show that the LCUs alone achieve a state-of-the-art accuracy of 60.8% on the MRSC task. Our analysis indicates that a fruitful direction of research for neural LMs lies in developing more accessible internal representations, and it suggests an optimization regime of very high momentum terms for effectively training such models.
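
To make the architectural contrast concrete, here is a minimal sketch (not the authors' code) of a single IRLM hidden-state update, assuming only what the abstract states: a linear recurrence whose recurrent matrix has no off-diagonal entries, optionally regularized with random dropout and with one plausible reading of column normalization. All names, shapes, and helper functions (irlm_step, W_in, d, column_normalize) are illustrative assumptions.

```python
import numpy as np

def irlm_step(x_t, h_prev, W_in, d, rng=None, dropout_p=0.0):
    """One hypothetical time step of the IRLM hidden state.

    x_t     : (V,)   one-hot (or embedded) input word
    h_prev  : (H,)   previous hidden state
    W_in    : (H, V) input-to-hidden weights
    d       : (H,)   diagonal of the recurrent matrix; |d_i| near 1 gives
                     unit i a long memory, small |d_i| a short one
    """
    # Purely linear update with a diagonal recurrence: each unit only sees
    # its own past value plus the current input projection.
    h_t = d * h_prev + W_in @ x_t

    # Random dropout on hidden units (inverted scaling), as one of the
    # regularizers mentioned in the abstract.
    if rng is not None and dropout_p > 0.0:
        mask = rng.random(h_t.shape) > dropout_p
        h_t = h_t * mask / (1.0 - dropout_p)
    return h_t

def column_normalize(W, eps=1e-8):
    """Rescale each column of W to unit L2 norm -- one plausible reading of
    the 'column normalization' regularizer; the exact scheme is not
    specified in the abstract."""
    return W / (np.linalg.norm(W, axis=0, keepdims=True) + eps)
```

By contrast, a standard RNN LM would update its state with a full recurrent matrix and a nonlinearity, e.g. h_t = tanh(W_rec h_{t-1} + W_in x_t). The diagonal, linear restriction is what makes each IRLM unit behave as an independent impulse-response filter whose decay rate d_i sets how many words of context it retains.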
