The type of learning which we consider is finite learning (FIN-type), where a learner is permitted to conjecture only one program for the function which it is trying to learn. In this paper we investigate the relative learning capabilities of probabilistic and pluralistic learners when they are allowed to conjecture programs which have errors in them. Pluralistic learners are teams of learners which cooperate in trying to learn a function. We determine the exact point at which probabilistic learners are more powerful than deterministic (i.e., a team of size one) learners. The "bootstrapping technique" of Freivalds has been widely used in finite learning for determining the capabilities of probabilistic and team learners. However, when the learners are allowed to produce programs that may commit errors, "bootstrapping" cannot be employed. For probability p > ~, we show that a probabilistic learner with success probability p can be replaced with a deterministic learner. We also show that the cut-off point ~ is indeed tight. Quite surprisingly, in the case of PFIN-learning, the cut-off point is $, different from that of FIN-type learning. Simple techniques such as "majority vote" are insufficient for transforming a probabilistic learner into a deterministic one. As with BC-type learning, the capability cut-off points for finite learning depend on the number of errors, which contrasts with the situation for learning in the limit. Finally, we consider similar questions for FIN-type learners which are allowed to produce programs that make an a priori unbounded (but finite) number of errors.
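For readers less familiar with the notation, the following LaTeX fragment sketches the standard notions of finite, anomalous, probabilistic, team, and Popperian learning as they are commonly defined in the inductive-inference literature (e.g., Gold; Osherson, Stob, and Weinstein; Smith). It is a reading aid only, and the symbols FIN^a, FIN<p>, and [m,n]FIN are assumed conventions that may differ in detail from those used in the body of the paper.

% Standard definitions (assumed conventions), stated informally.
% Here $f$ ranges over total recursive functions and $M$ is a learning machine
% that is fed the graph of $f$ one value at a time.
\begin{itemize}
  \item $M$ \textbf{FIN}-learns $f$ if $M$ outputs exactly one conjecture and
        that conjecture is a program computing $f$.
  \item $M$ \textbf{FIN}$^a$-learns $f$ if its single conjecture computes a
        function differing from $f$ on at most $a$ arguments
        ($a = *$ allows any finite number of errors).
  \item A probabilistic machine \textbf{FIN}$\langle p\rangle$-learns $f$ if it
        FIN-learns $f$ with probability at least $p$.
  \item A team $[m,n]$\textbf{FIN}-learns $f$ if at least $m$ of its $n$ member
        machines FIN-learn $f$; a deterministic learner is the team $[1,1]$.
  \item \textbf{PFIN} (Popperian FIN) additionally requires every conjecture to
        be a program for a total function.
\end{itemize}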