Training recurrent neural networks with temporal input encodings

Investigates the learning of deterministic finite-state automata (DFAs) by recurrent networks that have a single input neuron, where each input symbol is represented as a temporal pattern and each string as a sequence of temporal patterns. The authors empirically demonstrate that obvious temporal encodings can make learning very difficult or even impossible. Based on preliminary results, the authors formulate some hypotheses about 'good' temporal encodings, i.e. encodings that do not significantly increase training time compared to training networks with multiple input neurons.
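To illustrate the contrast the abstract draws, the sketch below shows one plausible temporal encoding (each symbol spread over several time steps on a single input channel) next to the conventional one-hot encoding (one input neuron per symbol). The specific bit patterns and function names are assumptions for illustration, not the encodings studied by the authors.

```python
import numpy as np

# Hypothetical temporal encoding: each symbol becomes a short bit
# pattern, fed one value per time step to a single input neuron.
TEMPORAL = {"a": [1.0, 0.0], "b": [1.0, 1.0]}  # assumed patterns

# Conventional encoding: one input neuron per symbol,
# one time step per symbol.
ONE_HOT = {"a": [1.0, 0.0], "b": [0.0, 1.0]}

def encode_temporal(string):
    """Concatenate per-symbol temporal patterns: shape (T, 1)."""
    steps = [v for sym in string for v in TEMPORAL[sym]]
    return np.array(steps).reshape(-1, 1)

def encode_one_hot(string):
    """Stack one-hot vectors: shape (len(string), |alphabet|)."""
    return np.array([ONE_HOT[sym] for sym in string])

x = encode_temporal("ab")  # 4 time steps, single input neuron
y = encode_one_hot("ab")   # 2 time steps, two input neurons
```

Note that the temporal encoding lengthens every sequence, which is one reason such encodings can slow or prevent learning relative to the multi-neuron case.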