Characterization of language learning from informant under various monotonicity constraints

Abstract The present paper deals with monotonic and dual monotonic language learning from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner always has to produce better and better generalizations when fed more and more data on the concept to be learned. The three versions of dual monotonicity formalize the requirement that the inference device produce exclusively specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic and dual weak-monotonic as well as finite language learning from positive and negative data in terms of recursively generable finite sets. Thereby, we elaborate a unifying approach to monotonic language learning by showing that there is exactly one learning algorithm which can perform any monotonic inference task.
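The learning model sketched in the abstract can be illustrated with a toy example. The following sketch is an assumption for illustration only, not the paper's construction: it learns the hypothetical class of languages L_k = {0, 1, ..., k} from an informant, i.e. a stream of labeled examples (x, label) with label true iff x belongs to the target language. The conjectures grow with each mind change, so the sequence of hypotheses satisfies the strong-monotonicity requirement L_{h_i} ⊆ L_{h_{i+1}}.

```python
# Hedged sketch: identification in the limit from informant for the toy
# class L_k = {0, 1, ..., k}. The class, the learner, and all names are
# illustrative assumptions, not the paper's characterization.

def learner(informant):
    """Yield a hypothesis (an index k standing for L_k) after each example.

    `informant` is an iterable of (x, label) pairs, where label is True
    iff x lies in the target language. The conjecture is the largest
    positive example seen so far, hence every new hypothesis contains
    all earlier ones (strong monotonicity); negative examples never
    force a retraction for this class.
    """
    k = 0
    for x, positive in informant:
        if positive and x > k:
            k = x
        yield k

# Informant presentation for the target L_3 = {0, 1, 2, 3}.
informant = [(5, False), (1, True), (3, True), (4, False), (2, True)]
print(list(learner(informant)))  # conjectures stabilize on k = 3
```

After finitely many examples the learner converges to the correct index and never changes its mind again, which is exactly the "identification in the limit" success criterion of Gold that the paper's monotonicity constraints refine.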
