On recurrent neural networks and representing finite-state recognizers
A discussion of the representational abilities of single-layer recurrent neural networks (SLRNNs) is presented. The fact that SLRNNs cannot implement all finite-state recognizers is addressed. However, there are methods that can be used to expand the representational abilities of SLRNNs, and some of these are explained; the authors call such systems augmented SLRNNs. Some possibilities for augmented SLRNNs are: adding a layer of feedforward neurons to the SLRNN, allowing the SLRNN an extra time step to compute the solution, and increasing the order of the SLRNN. Significantly, for some problems, certain augmented SLRNNs must implement a non-minimal finite-state recognizer that is equivalent to the desired finite-state recognizer. Simulations demonstrate the use of both an SLRNN and an augmented SLRNN for the problem of learning an odd-parity finite-state recognizer using a gradient descent method.
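To make the simulation concrete, the following is a minimal sketch of the kind of experiment the abstract describes: a small first-order single-layer recurrent network trained by gradient descent (full backpropagation through time) to recognize odd parity on fixed-length bit strings. The architecture, sizes, hyperparameters, and training details here are illustrative assumptions, not the authors' exact setup, and convergence on parity is known to be sensitive to initialization and the learning rate.

```python
# Minimal sketch (assumed setup, not the paper's exact experiment):
# a first-order single-layer recurrent network trained by gradient
# descent (full BPTT) on the odd-parity recognition task.
import numpy as np

rng = np.random.default_rng(0)

H = 4        # hidden units (assumption)
T = 8        # sequence length (assumption)
LR = 0.5     # learning rate (assumption)

# Parameters of the single-layer first-order recurrent network.
W_xh = rng.normal(0, 0.5, (H, 1))
W_hh = rng.normal(0, 0.5, (H, H))
b_h = np.zeros((H, 1))
w_y = rng.normal(0, 0.5, (1, H))
b_y = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Run the recurrence over a binary sequence x; return states and output."""
    hs = [np.zeros((H, 1))]
    for t in range(len(x)):
        hs.append(sigmoid(W_xh * x[t] + W_hh @ hs[-1] + b_h))
    y = sigmoid(w_y @ hs[-1] + b_y)  # accept/reject decision at sequence end
    return hs, float(y[0, 0])

for step in range(5000):
    x = rng.integers(0, 2, T)   # random bit string
    target = x.sum() % 2        # 1 iff an odd number of 1s
    hs, y = forward(x)

    # Backpropagation through time for squared error (y - target)^2 / 2.
    dy = (y - target) * y * (1 - y)
    gW_xh = np.zeros_like(W_xh)
    gW_hh = np.zeros_like(W_hh)
    gb_h = np.zeros_like(b_h)
    gw_y = dy * hs[-1].T
    gb_y = dy
    dh = w_y.T * dy
    for t in reversed(range(T)):
        dz = dh * hs[t + 1] * (1 - hs[t + 1])  # through the sigmoid
        gW_xh += dz * x[t]
        gW_hh += dz @ hs[t].T
        gb_h += dz
        dh = W_hh.T @ dz
    for p, g in ((W_xh, gW_xh), (W_hh, gW_hh), (b_h, gb_h), (w_y, gw_y)):
        p -= LR * g  # plain gradient descent step
    b_y -= LR * gb_y

# Quick check: fraction of random strings classified correctly after training.
correct = 0
for _ in range(200):
    x = rng.integers(0, 2, T)
    _, y = forward(x)
    correct += int((y > 0.5) == bool(x.sum() % 2))
print(f"accuracy on random length-{T} strings: {correct / 200:.2f}")
```

One of the augmentations mentioned above, increasing the order of the network, would replace the additive recurrence here with multiplicative (second-order) terms combining input and state; the other two augmentations correspond to inserting a feedforward layer before the output or running one extra recurrent step before reading the decision.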