Second-order recurrent neural networks for grammatical inference

It is shown that a recurrent, second-order neural network using a real-time, forward training algorithm readily learns to infer regular grammars from positive and negative string training samples. Numerous simulations showing the effects of initial conditions, training set size and order, and neuron architecture are presented. All simulations were performed with random initial weight strengths and usually converged after approximately one hundred epochs of training. The authors discuss a quantization algorithm for dynamically extracting finite-state automata during and after training. For a well-trained neural net, the extracted automata constitute an equivalence class of state machines that are reducible to the minimal machine of the inferred grammar. It is then shown through simulations that many of the neural net state machines are dynamically stable and correctly classify long unseen strings.
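
As a rough illustration of the ideas summarized above, the sketch below (not the authors' code; class and function names such as SecondOrderRNN and quantize, and parameters such as n_states and q, are assumptions for illustration) shows a second-order recurrent state update of the form S_j(t+1) = g(sum_{i,k} W_ijk S_i(t) I_k(t)), followed by a simple binning step of the kind a quantization-based automaton extraction could start from.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SecondOrderRNN:
    """Minimal second-order recurrent cell: state and input interact multiplicatively."""

    def __init__(self, n_states=4, n_symbols=2, seed=0):
        rng = np.random.default_rng(seed)
        # Random small initial weights, echoing the random initial weight strengths
        # used in the simulations described above.
        self.W = rng.uniform(-1.0, 1.0, size=(n_states, n_states, n_symbols))
        self.n_states = n_states
        self.n_symbols = n_symbols

    def step(self, state, symbol):
        one_hot = np.zeros(self.n_symbols)
        one_hot[symbol] = 1.0
        # S_j(t+1) = g( sum_{i,k} W[i, j, k] * S_i(t) * I_k(t) )
        return sigmoid(np.einsum('i,ijk,k->j', state, self.W, one_hot))

    def run(self, string):
        state = np.zeros(self.n_states)
        state[0] = 1.0                      # fixed start state
        for sym in string:
            state = self.step(state, sym)
        return state                        # one neuron can be read out as accept/reject

def quantize(state, q=2):
    # Partition each analog state dimension into q bins; the resulting discrete
    # tuple can serve as a candidate finite-automaton state during extraction.
    return tuple(np.minimum((state * q).astype(int), q - 1))

net = SecondOrderRNN()
print(quantize(net.run([0, 1, 1, 0])))
```

In an extraction pass of this kind, the quantized state tuples reached while feeding strings would become the nodes of a transition graph, which can then be reduced with standard DFA minimization; the details of the authors' algorithm are given in the paper itself.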