Team Learning of Computable Languages

Abstract. A team of learning machines is a multiset of learning machines. A team is said to learn a concept successfully if each member of some nonempty subset of the team, of a predetermined size, learns the concept. Team learning of languages may be viewed as a suitable theoretical model for studying computational limits on the use of multiple heuristics in learning from examples. Team learning of recursively enumerable languages has been studied extensively. However, it may be argued that, from a practical point of view, all languages of interest are computable. This paper gives theoretical results about team learnability of computable (recursive) languages. These results mainly concern two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) identification in the limit of grammars for computable languages; (b) identification in the limit of decision procedures for computable languages; (c) identification in the limit of grammars for indexed families of computable languages; and (d) identification in the limit of grammars for indexed families with a recursively enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented.
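To make the team-success criterion concrete, the following is a minimal sketch, not a construction from the paper: it uses a toy indexed family L_i = {0, 1, ..., i} of finite languages and treats stability over the last half of a finite text prefix as a stand-in for convergence in the limit. All names here (max_learner, stubborn_learner, team_identifies) are illustrative assumptions; the paper's formal setting is in terms of Turing machines and texts for recursive languages.

```python
# A minimal sketch of team identification in the limit from positive data,
# using the toy indexed family L_i = {0, 1, ..., i}. Convergence "in the
# limit" is approximated by stability over the last half of a finite text.

from typing import Callable, List, Sequence

Hypothesis = int                                   # index i naming language L_i
Learner = Callable[[Sequence[int]], Hypothesis]    # finite text prefix -> guess

def max_learner(prefix: Sequence[int]) -> Hypothesis:
    """Conjectures L_i where i is the largest example seen so far."""
    return max(prefix) if prefix else 0

def stubborn_learner(prefix: Sequence[int]) -> Hypothesis:
    """Always conjectures L_0; it fails on every L_i with i > 0."""
    return 0

def team_identifies(team: List[Learner], text: Sequence[int],
                    target: Hypothesis, required: int) -> bool:
    """Checks whether at least `required` members of the team have
    stabilized on the target index over the last half of the text."""
    half = len(text) // 2
    converged = 0
    for learn in team:
        guesses = [learn(text[:n + 1]) for n in range(len(text))]
        if all(g == target for g in guesses[half:]):
            converged += 1
    return converged >= required

# A text (enumeration with repetitions) for L_3 = {0, 1, 2, 3}.
text = [0, 2, 1, 3, 3, 0, 2, 1, 3, 0]
# A team is a multiset, so the same machine may occur more than once.
team = [max_learner, stubborn_learner, max_learner]
print(team_identifies(team, text, target=3, required=2))   # True
print(team_identifies(team, text, target=3, required=3))   # False
```

In this sketch the team of three succeeds when two successes are required, because both copies of max_learner stabilize on index 3, but fails when all three are required; this is the "m out of n" success criterion that the redundancy results in the paper quantify.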
