Using Prior Knowledge in a NNPDA to Learn Context-Free Languages

Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, the success of these models has mostly been limited to regular languages. We have previously demonstrated that the Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., $a^nb^n$ and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which a priori knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (e.g., $a^nb^ncb^ma^m$).
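To make the learning task concrete, the sketch below (not from the paper; all names are hypothetical) generates the kind of labeled example strings an NNPDA would be trained on for the $a^nb^n$ language, using a single-counter check that mimics the stack depth of a deterministic pushdown automaton.

```python
# Illustrative sketch only: labeled examples for the a^n b^n language,
# of the kind an NNPDA might be trained on. Function names are hypothetical.
from itertools import product


def is_anbn(s: str) -> bool:
    """Recognize a^n b^n (n >= 1) with a single counter,
    mimicking a deterministic pushdown automaton's stack depth."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:            # an 'a' after a 'b' is illegal
                return False
            count += 1            # push
        elif ch == 'b':
            seen_b = True
            count -= 1            # pop
            if count < 0:         # attempted pop of an empty stack
                return False
        else:
            return False
    return seen_b and count == 0  # stack must be empty at end of input


def make_examples(max_len: int):
    """Enumerate all strings over {a, b} up to max_len, with labels."""
    for n in range(1, max_len + 1):
        for tup in product('ab', repeat=n):
            s = ''.join(tup)
            yield s, is_anbn(s)


if __name__ == "__main__":
    for s, label in make_examples(4):
        print(f"{s:>4}  {'positive' if label else 'negative'}")
```

Such an enumeration yields both positive strings (ab, aabb, ...) and negative ones (ba, aab, ...); the counter plays the role that the external stack plays in the NNPDA.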