Encoding of sequential translators in discrete-time recurrent neural nets

In recent years, there has been considerable interest in the use of discrete-time recurrent neural nets (DTRNN) to learn finite-state tasks, and in the computational power of DTRNN, particularly in connection with finite-state computation. This paper describes a simple strategy to devise stable encodings of sequential finite-state translators (SFST) in a second-order DTRNN with units having bounded, strictly growing, continuous sigmoid activation functions. The strategy relies on bounding criteria based on a study of the conditions under which the DTRNN is actually behaving as an SFST.
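
To make the encoding idea concrete, the following is a minimal sketch (not the paper's exact construction, and without its bounding criteria for the weight gain): a hypothetical two-state, two-symbol sequential translator (a Mealy machine) is placed into a second-order DTRNN by assigning each transition a large positive weight `H` and every other weight `-H`, so that sigmoid units stay close to their high/low saturation values. The machine, the gain `H`, and all variable names here are illustrative assumptions.

```python
# Sketch: encoding a tiny sequential translator (Mealy machine) in a
# second-order DTRNN with sigmoid units. Illustrative only; the paper's
# construction derives the bounds on H that guarantee stability.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical 2-state, 2-symbol translator:
# delta[(q, a)] = next state, lam[(q, a)] = output symbol.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
lam   = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}

nQ, nA, nZ = 2, 2, 2   # number of states, input symbols, output symbols
H = 10.0               # weight gain; assumed large enough for stability

# Second-order weights: Wx[i, j, k] drives state unit i high exactly when
# the previous state is j, the input symbol is k, and delta(j, k) = i.
Wx = -H * np.ones((nQ, nQ, nA))
Wy = -H * np.ones((nZ, nQ, nA))
for (q, a), q_next in delta.items():
    Wx[q_next, q, a] = H
for (q, a), z in lam.items():
    Wy[z, q, a] = H

def step(x, a):
    """One DTRNN step: second-order products of state and one-hot input."""
    u = np.eye(nA)[a]
    x_next = sigmoid(np.einsum('ijk,j,k->i', Wx, x, u))  # next-state units
    y      = sigmoid(np.einsum('ijk,j,k->i', Wy, x, u))  # output units
    return x_next, y

# Run the net on an input string; the high output unit names the symbol.
x = np.eye(nQ)[0]  # start in state 0 (one-hot; sigmoids keep it near-binary)
for a in [0, 1, 1, 0]:
    x, y = step(x, a)
    print('input', a, '-> output', int(np.argmax(y)))
```

With a sufficiently large gain, the state units remain near 0 or 1 after every step, so the net's behavior, read through a threshold, coincides with the translator's; characterizing how large the gain must be for this to hold is the role of the bounding criteria mentioned above.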