Efficient encoding of finite automata in discrete-time recurrent neural networks

A number of researchers have used discrete-time recurrent neural networks (DTRNN) to learn finite-state machines (FSM) from samples of input and output strings. Trained DTRNN usually show FSM behaviour for strings up to a certain length, but not beyond it; this failure is usually called instability. Other authors have shown that DTRNN may actually behave as FSM for strings of any length and have devised strategies to construct such DTRNN; in these strategies, an m-state deterministic FSM is encoded using Θ(m) state units. This paper shows that a more efficient sigmoid DTRNN encoding exists for a subclass of deterministic finite automata, namely those for which an equivalent nondeterministic finite automaton (NFA) is smaller, because an n-state NFA may be encoded directly in a DTRNN with Θ(n) units.
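To illustrate the kind of one-unit-per-state encoding the abstract refers to, the following is a minimal sketch, not the paper's construction: it assumes a second-order sigmoid DTRNN of the style commonly used for automaton encoding, a large gain H with bias -H/2 so that activations saturate near 0 or 1, and a small hypothetical NFA (strings over {a, b} ending in "ab"); all names, the value of H, and the example automaton are illustrative assumptions.

```python
import numpy as np

# Hypothetical 3-state NFA (not from the paper): accepts strings over {a, b}
# that end in "ab".  State 0 is the start state and loops on both symbols;
# state 2 is accepting.  Note the nondeterminism: on 'a', state 0 goes to
# both 0 and 1.
N_STATES = 3
ALPHABET = {"a": 0, "b": 1}
DELTA = {
    0: [(0, 0), (0, 1)],  # on 'a': 0 -> 0 and 0 -> 1
    1: [(0, 0), (1, 2)],  # on 'b': 0 -> 0 and 1 -> 2
}
ACCEPTING = {2}

H = 20.0  # gain; any sufficiently large value keeps activations near {0, 1}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid state unit per NFA state (Theta(n) units).  Second-order
# weight tensor W[j, i, k]: unit j receives weight H from unit i when input
# symbol k is present; every unit has bias -H/2.  If at least one
# predecessor of j is active (near 1), the net input is about +H/2 and the
# unit saturates near 1; otherwise it is about -H/2 and the unit stays near 0.
W = np.zeros((N_STATES, N_STATES, len(ALPHABET)))
for k, edges in DELTA.items():
    for i, j in edges:
        W[j, i, k] = H
b = np.full(N_STATES, -H / 2.0)

def run(string):
    """Drive the DTRNN with one-hot inputs; return True if an accepting
    state unit is active (above 0.5) after the last symbol."""
    x = np.zeros(N_STATES)
    x[0] = 1.0  # start state active
    for c in string:
        u = np.zeros(len(ALPHABET))
        u[ALPHABET[c]] = 1.0
        # x_j(t+1) = g( sum_{i,k} W[j,i,k] x_i(t) u_k(t) + b_j )
        x = sigmoid(np.einsum("jik,i,k->j", W, x, u) + b)
    return any(x[q] > 0.5 for q in ACCEPTING)

for s in ["ab", "aab", "abb", "ba", "abab"]:
    print(s, run(s))  # True, True, False, False, True
```

Because an active unit simply marks membership of its state in the NFA's current state set, the network tracks all nondeterministic branches in parallel with n units, whereas encoding the equivalent deterministic automaton would require one unit per subset state, which can be exponentially larger.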