Learning all subfunctions of a function

Sublearning, a model for learning the subconcepts of a concept, is presented. Sublearning a class of total recursive functions informally means learning all functions from that class together with all of their subfunctions. While in language learning it is known to be impossible to learn any infinite language together with all of its sublanguages, the situation changes for sublearning of functions. Several types of sublearning are defined and compared to each other as well as to other learning types; for example, in some cases sublearning coincides with robust learning. Furthermore, whereas in ordinary function learning there are classes that cannot be learned consistently, all sublearnable classes of some natural types can be learned consistently. Moreover, the power of sublearning is characterized in several terms, thereby establishing a close connection to measurable classes and variants of this notion. As a consequence, there are rich classes that can be sublearned without any self-referential coding.
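To make the notion concrete, the following is a minimal sketch (not from the paper) of Gold-style identification in the limit for a deliberately simple class: the constant functions together with all of their subfunctions. Here a subfunction of f is taken to be any restriction of f to a subset of its domain, and a hypothesis is simply the constant value conjectured; the learner `learner` and the default conjecture 0 are illustrative choices, not the paper's construction.

```python
# Toy illustration of learning in the limit (EX-identification) for the
# class of constant functions and all of their subfunctions.
# A sample is a finite list of (argument, value) pairs drawn from the
# graph of some (sub)function of a constant function.

def learner(sample):
    """Conjecture a constant from a finite sample of (x, f(x)) pairs.

    Returns the single value occurring in the sample, or a default
    conjecture (0, an arbitrary choice) on the empty sample. On any
    subfunction of a constant function, the conjecture stabilizes as
    soon as one data point has been seen, so the class is learned in
    the limit together with all its subfunctions."""
    values = {y for (_, y) in sample}
    if len(values) == 1:
        return values.pop()
    return 0  # default conjecture before any data arrives

# Feed the learner ever-larger samples of the subfunction that maps
# every even number to 7; the sequence of conjectures converges to 7.
data = [(x, 7) for x in range(0, 20, 2)]
conjectures = [learner(data[:n]) for n in range(len(data) + 1)]
assert conjectures[0] == 0 and all(c == 7 for c in conjectures[1:])
```

The point of the sketch is only the convergence behavior: the conjecture changes at most once and is eventually correct on every subfunction, which is exactly what fails for infinite languages and their sublanguages in the language-learning setting mentioned above.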
