On the Power of Monotonic Language Learning

In the present paper strong-monotonic, monotonic, and weak-monotonic reasoning is studied in the context of algorithmic language learning theory, from positive data as well as from positive and negative data. Strong-monotonicity describes the requirement to produce only better and better generalizations as more and more data are fed to the inference device. Monotonic learning reflects the eventual interplay between generalization and restriction during the process of inferring a language; however, it is demanded that for any two hypotheses the one output later has to be at least as good as the previously produced one with respect to the language to be learnt. Weak-monotonicity is the analogue of cumulativity in learning theory. We relate all these notions to one another as well as to previously studied modes of identification, thereby in particular obtaining a strong hierarchy. These results have been presented at the 2nd International Workshop on Nonmonotonic and Inductive Logics.
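The three constraints sketched above are commonly formalized as follows (a sketch using standard notation from inductive inference, not taken from this abstract: M denotes the learning machine, t_x an initial segment of a presentation of the target language L, and L(M(t_x)) the language generated by the hypothesis M outputs on t_x):

```latex
% Standard formalizations (assumed notation; x <= y ranges over segment lengths):
\begin{align*}
\text{strong-monotonic:} \quad & L(M(t_x)) \subseteq L(M(t_y)) \\
\text{monotonic:}        \quad & L(M(t_x)) \cap L \subseteq L(M(t_y)) \cap L \\
\text{weak-monotonic:}   \quad & \mathrm{content}(t_y) \subseteq L(M(t_x))
                                 \;\Longrightarrow\; L(M(t_x)) \subseteq L(M(t_y))
\end{align*}
```

Intuitively, the monotonic condition weakens the strong-monotonic one by requiring inclusion only on the part of each hypothesis that is correct with respect to L, while the weak-monotonic condition requires inclusion only as long as the incoming data do not contradict the current hypothesis, mirroring cumulativity in nonmonotonic reasoning.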