Language Learning from Texts: Mindchanges, Limited Memory, and Monotonicity

Abstract The paper explores language learning in the limit under various constraints on the number of mindchanges, memory, and monotonicity. We define language learning with limited (long-term) memory and prove that learning with limited memory is exactly as powerful as learning via set-driven machines (machines whose hypotheses depend only on the set of data seen, not on their order). Further, we show that every language learnable via a set-driven machine is learnable via a conservative machine, i.e., one making only justifiable mindchanges. We obtain a variety of separation results for learning with a bounded number of mindchanges or limited memory under restrictions on monotonicity. A surprising result is that there are families of languages that can be monotonically learned with at most one mindchange, but can neither be weak-monotonically nor conservatively learned. Many separation results have a variant: if a criterion A can be separated from a criterion B, it is often possible to find a family L of languages that is learnable under both A and B, yet the number of mindchanges or the long-term memory can be bounded under criterion A but not under criterion B.