First-order vs. Second-order Single Layer Recurrent Neural Networks

We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforward neurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states, i.e., states with identical transition and acceptance behavior, so the language recognized is unchanged. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs.
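For concreteness, the update rules usually denoted by these terms can be sketched as follows; the symbols ($s$, $x$, $w$, $u$, $f$) are our notational assumptions drawn from the standard SLRNN formulation, not definitions given in this abstract. A first-order SLRNN combines state and input activations additively before thresholding, whereas a second-order SLRNN combines them multiplicatively:

\[
\text{first-order: } s_i(t+1) = f\!\Big(\sum_j w_{ij}\, s_j(t) + \sum_k u_{ik}\, x_k(t)\Big),
\qquad
\text{second-order: } s_i(t+1) = f\!\Big(\sum_{j,k} w_{ijk}\, s_j(t)\, x_k(t)\Big),
\]

where $s_j(t)$ are the recurrent state activations, $x_k(t)$ the external inputs, and $f$ a hard-limiting (threshold) function. The product terms $s_j(t)\,x_k(t)$ in the second-order rule allow a single layer to encode state-input transition pairs of a finite-state machine directly, which is the standard intuition behind the separation result stated above.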