Prescribed learning of r.e. classes

This work extends the studies of Angluin, Lange and Zeugmann on how learning depends on the hypothesis space chosen for the language class, carried out in the setting of learning uniformly recursive language classes. Their study formulated the concepts of class-comprising learning (where the learner may choose a uniformly recursively enumerable superclass as the hypothesis space) and class-preserving learning (where the learner must choose a uniformly recursively enumerable hypothesis space for the same class). Subsequent investigations have likewise considered uniformly recursively enumerable hypothesis spaces. The present work extends these lines of research by asking whether learners can be synthesized effectively from a given hypothesis space in the context of learning uniformly recursively enumerable language classes. To this end, we introduce the concepts of prescribed learning (where there must be a learner for every uniformly recursively enumerable hypothesis space for the same class) and uniform learning (like prescribed learning, except that the learner must be synthesized effectively from an index of the hypothesis space). It is shown that these four types of learnability coincide for explanatory learning, while some or all of them differ for other learning criteria; for conservative learning, all four types are distinct. Several results are obtained for vacillatory and behaviourally correct learning: three of the four types can be separated, but the relation between prescribed and uniform learning remains open. It is also shown that every behaviourally correct learnable class (not necessarily uniformly recursively enumerable) has a prudent learner, that is, a learner using a hypothesis space such that the learner learns every set in the hypothesis space. Moreover, the prudent learner can be constructed effectively from any learner for the class.
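To make the underlying notion of explanatory (Gold-style) learning in the limit concrete, the following is a minimal sketch, not from the paper itself: it assumes a toy uniformly recursive hypothesis space W_n = {0, ..., n} (a choice made here only for illustration) and a learner that, on any text (presentation of positive data, with pauses) for some W_n, converges to the correct index by conjecturing the maximum element seen so far.

```python
# Toy sketch of explanatory (Ex) learning in the limit.
# Assumed hypothesis space for this example only: W_n = {0, ..., n},
# a uniformly recursive family of languages indexed by n.

def learner(prefix):
    """Conjecture an index after seeing a finite prefix of positive data.

    For the class {W_n : n >= 0}, guessing the maximum element observed
    so far stabilises on the correct index on any text for W_n.
    None plays the role of the pause symbol '#'.
    """
    data = [x for x in prefix if x is not None]
    return max(data) if data else 0

def final_conjecture(text):
    """Return the learner's conjecture after the whole finite text prefix."""
    return learner(text)

# A text for W_3 = {0, 1, 2, 3}, with a pause:
text = [1, None, 0, 3, 2, 3, 1]
```

Here the sequence of conjectures on longer and longer prefixes of the text is 1, 1, 1, 3, 3, 3, 3, so the learner has converged to the index 3 of W_3; prescribed and uniform learning, as defined above, ask whether such a learner exists for (or can be computed from) every admissible hypothesis space for the class.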
