Babies, Variables, and Connectionist Networks

Recent studies have shown that infants possess what appear to be highly useful language-acquisition skills. On the one hand, they can segment a stream of unmarked syllables into words based only on the statistical regularities present in it. On the other, they can abstract beyond these input-specific regularities and generalize to rules. It has been argued that these abilities reflect two separate learning mechanisms: the former is simply associationist, whereas the latter requires variables. In this paper we present a neural network model demonstrating that when a network is made of the right stuff, specifically when it has the ability to represent sameness and the ability to represent relations, a simple associationist learning mechanism suffices to perform both of these tasks.