Annealed RNN Learning of Finite State Automata

In recurrent neural network (RNN) learning of finite state automata (FSA), we discuss how the neuro gain (β) influences the stability of the state representation and the performance of learning. We formally show the existence of a critical neuro gain (β0): for any β larger than β0, an RNN maintains a stable representation of the states of an acquired FSA. Based on the existence of β0 and on the need to avoid local minima, we propose a new RNN learning method that schedules β during training, called annealed RNN learning. Our experiments show that annealed RNN learning outperforms learning with a constant β.
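
As a rough illustration of the idea, the following is a minimal Python sketch of a sigmoid RNN update with a neuro gain β and a schedule that anneals β upward during training. The function names (gained_sigmoid, rnn_step, beta_schedule), the linear schedule, and the numeric values are illustrative assumptions; the abstract only states that β is scheduled, not the exact schedule used in the paper.

import numpy as np

def gained_sigmoid(x, beta):
    # Sigmoid activation with neuro gain beta: sigma(beta * x).
    return 1.0 / (1.0 + np.exp(-beta * x))

def rnn_step(x_t, h_prev, W_in, W_rec, b, beta):
    # One recurrent state update; beta scales the pre-activation.
    return gained_sigmoid(W_in @ x_t + W_rec @ h_prev + b, beta)

def beta_schedule(epoch, beta_init=0.5, beta_final=4.0, n_epochs=100):
    # Linearly anneal beta from a small value toward a large one.
    # Assumption: training starts with beta below the critical gain beta0
    # (smooth dynamics, easier optimization) and ends above beta0, where
    # the state representation of the acquired FSA is stable.
    t = min(epoch / max(n_epochs - 1, 1), 1.0)
    return beta_init + t * (beta_final - beta_init)

A small β early in training keeps the sigmoid nearly linear, which smooths the error surface and helps avoid local minima; raising β past β0 later sharpens the units toward near-binary states, stabilizing the FSA representation.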