LSTM (Long Short-Term Memory) recurrent neural networks have been highly successful in a number of application areas. This technical report describes the use of the MNIST and UW3 databases for benchmarking LSTM networks and explores the effect of different architectural and hyperparameter choices on performance. Significant findings include: (1) LSTM performance depends smoothly on learning rates, (2) batching and momentum have no significant effect on performance, (3) softmax training outperforms least-squares training, (4) peephole units are not useful, (5) the standard non-linearities (tanh and sigmoid) perform best, and (6) bidirectional training combined with CTC outperforms other methods.
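To make finding (6) concrete, the following is a minimal sketch of a bidirectional LSTM trained with CTC loss. It assumes PyTorch; the class name BiLSTMCTC and all dimensions are invented for illustration and do not reflect the report's actual implementation or hyperparameters.

```python
# Hypothetical sketch: bidirectional LSTM + CTC (finding 6).
# Assumes PyTorch; names and sizes are illustrative, not from the report.
import torch
import torch.nn as nn

class BiLSTMCTC(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_classes):
        super().__init__()
        # Bidirectional LSTM over the input sequence
        # (e.g. pixel columns of a text-line image).
        self.lstm = nn.LSTM(n_inputs, n_hidden, bidirectional=True)
        # Linear projection to class scores; CTC requires an extra
        # blank label, so the output size is n_classes + 1.
        self.proj = nn.Linear(2 * n_hidden, n_classes + 1)

    def forward(self, x):
        # x: (seq_len, batch, n_inputs)
        y, _ = self.lstm(x)
        # nn.CTCLoss expects log-probabilities.
        return self.proj(y).log_softmax(dim=-1)

# Example training step on random data.
seq_len, batch, n_inputs, n_hidden, n_classes = 48, 4, 28, 100, 10
model = BiLSTMCTC(n_inputs, n_hidden, n_classes)
ctc = nn.CTCLoss(blank=n_classes)  # use the last index as the blank
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(seq_len, batch, n_inputs)
targets = torch.randint(0, n_classes, (batch, 5))
input_lengths = torch.full((batch,), seq_len, dtype=torch.long)
target_lengths = torch.full((batch,), 5, dtype=torch.long)

log_probs = model(x)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
opt.step()
```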