Characterizations of Monotonic and Dual Monotonic Language Learning

The present paper deals with monotonic and dual monotonic language learning from positive as well as from both positive and negative examples. The three notions of monotonicity formalize, to different degrees of strictness, the requirement that the learner produce better and better generalizations when fed more and more data on the concept to be learned. Dually, the three versions of dual monotonicity formalize the requirement that the inference device produce specializations that fit the target language better and better. We characterize strong-monotonic, monotonic, weak-monotonic, dual strong-monotonic, dual monotonic, and monotonic & dual monotonic learning, as well as finite language learning from positive data, in terms of recursively generable finite sets. These characterizations provide a unifying framework for learning from positive data under the various monotonicity constraints. Moreover, they yield additional insight into what a natural learning algorithm should look like.
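For orientation, the flavor of such characterizations can be seen in Angluin's (1980) classical tell-tale condition, which the characterizations above refine; the following restatement is a sketch of that well-known result, not one of the paper's own theorems. An indexed family $(L_i)_{i \in \mathbb{N}}$ of nonempty recursive languages is learnable in the limit from positive data iff there is an effective procedure that, on input $i$, enumerates a finite set $T_i$ satisfying

\[
T_i \subseteq L_i \quad\text{and}\quad \neg\exists j \,\bigl( T_i \subseteq L_j \subsetneq L_i \bigr).
\]

Intuitively, $T_i$ is a finite "tell-tale" for $L_i$: once every element of $T_i$ has occurred in the input, no proper sublanguage of $L_i$ within the family remains consistent, so conjecturing $L_i$ can no longer be an overgeneralization. The characterizations discussed here are stated in terms of recursively generable finite sets, i.e., finite sets produced by a total effective procedure that halts after outputting all of their elements, which is a stronger effectivity requirement than the mere enumerability in Angluin's condition.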
