Robust Learning Is Rich

Intuitively, a class of objects is robustly learnable if not only the class itself is learnable but all of its computable transformations remain learnable as well. In that sense, robust learnability seems to be a desirable property in every field of learning. We study this phenomenon within the paradigm of inductive inference. Here a class of recursive functions is called robustly learnable under the criterion I iff all of its images under general recursive operators are learnable under the criterion I. M. Fulk (1990, in "31st Annual IEEE Symposium on Foundations of Computer Science," pp. 405–410, IEEE Comput. Soc. Press, Los Alamitos, CA) showed the existence of a nontrivial class which is robustly learnable under the criterion Ex. However, several of the hierarchies (such as the anomaly hierarchies for Ex and Bc) do not stand robustly. Hence, until now it was not clear whether robust learning is really rich. The main intention of this paper is to give strong evidence that robust learning is rich. Our main result, proved by a priority construction, is that the mind change hierarchy for Ex stands robustly. Moreover, the hierarchies of team learning for both Ex and Bc stand robustly as well. In several contexts, we observe the surprising fact that a more complex topological structure of the classes to be learned leads to positive robustness results, whereas an easy topological structure yields negative results. We also show the counterintuitive fact that even some self-referential classes can be learned robustly. Some of our results point out the difficulty of robust learning when only a bounded number of mind changes is allowed. Further results concerning uniformly robust learning are derived.
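For readers who want the central notions at a glance, the following sketch restates them in standard notation (here Ex_n denotes learning in the limit with at most n mind changes; the shorthand RobustI is introduced here for exposition and need not match the paper's own notation):

\[
  \mathcal{C} \in \mathrm{Robust}I \;\Longleftrightarrow\; \Theta(\mathcal{C}) \in I \ \text{for every general recursive operator}\ \Theta .
\]

Under this reading, the main result that the mind change hierarchy for Ex stands robustly can be stated, for all $n \in \mathbb{N}$, as

\[
  \mathrm{RobustEx}_{n+1} \setminus \mathrm{Ex}_{n} \neq \emptyset ,
\]

i.e., for every n there is a class all of whose images under general recursive operators are Ex_{n+1}-learnable, yet the class itself is not Ex_n-learnable (one natural formalization; the paper's exact statement may differ in detail).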

[1] E. Mark Gold. Language Identification in the Limit. Information and Control, 1967.

[2] Dana Angluin and Carl H. Smith. Inductive Inference: Theory and Methods. ACM Computing Surveys, 1983.

[3] On the role of search for learning. COLT '89, 1989.

[4] R. Lathe. PhD by thesis. Nature, 1988.

[5] Thomas Zeugmann. On Barzdin's Conjecture. AII, 1986.

[6] Lenore Blum and Manuel Blum. Toward a Mathematical Theory of Inductive Inference. Information and Control, 1975.

[7] John Stillwell et al. Symmetry. American Mathematical Monthly, 2000.

[8] Frank Stephan et al. Avoiding coding tricks by hyperrobust learning. Theoretical Computer Science, 2002.

[9] Andris Ambainis et al. Transformations that Preserve Learnability. ALT, 1996.

[10] Hartley Rogers, Jr. Theory of Recursive Functions and Effective Computability, 1969.

[11] Daniel N. Osherson, Michael Stob, and Scott Weinstein. Systems That Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists, 1990.

[12] Daniel N. Osherson, Michael Stob, and Scott Weinstein. Aggregating Inductive Expertise. Information and Control, 1986.

[13] W. A. Rosenblith. Information and Control in Organ Systems, 1959.

[14] Carl H. Smith et al. On the power of learning robustly. COLT '98, 1998.

[15] Rusins Freivalds et al. Inductive Inference of Recursive Functions: Qualitative Theory. Baltic Computer Science, 1991.

[16] Reinhard Klette and Rolf Wiehagen. Research in the theory of inductive inference by GDR mathematicians - A survey. Information Sciences, 1980.

[17] Manuel Blum. A Machine-Independent Theory of the Complexity of Recursive Functions. Journal of the ACM, 1967.

[18] Carl H. Smith. The Power of Pluralism for Automatic Program Synthesis. Journal of the ACM, 1982.

[19] Patrick Brézillon et al. Lecture Notes in Artificial Intelligence, 1999.

[20] Mark A. Fulk. Robust Separations in Inductive Inference. In 31st Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1990.

[21] Carl H. Smith et al. Probability and Plurality for Aggregations of Learning Machines. Information and Computation, 1987.

[22] John Case and Carl H. Smith. Comparison of Identification Criteria for Machine Inductive Inference. Theoretical Computer Science, 1983.

[23] Rolf Wiehagen and Thomas Zeugmann. Learning and Consistency. GOSLER Final Report, 1995.