Relations between probabilistic and team one-shot learners (extended abstract)

A typical way to increase the power of a learning paradigm is to allow randomization and to require successful learning only with some probability p. Another standard approach is to allow a team of s learners working in parallel and to demand only that at least r of them learn correctly. These two variants are compared here for one-shot learning of total recursive functions, in which a learner may perform an unbounded but finite amount of computation and must halt with a single correct program after receiving only finitely many values of the function to be learned.
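
To make the comparison concrete, the two success criteria can be stated as follows. This is a sketch only: the class notation FIN⟨p⟩ and [r,s]FIN is the notation commonly used in the inductive-inference literature and is assumed here, not fixed by the abstract itself.

% Probabilistic one-shot learning of a class U of total recursive functions:
% a single randomized learner M must, with probability at least p, halt with
% a program for the target function f after reading finitely many values.
\[
U \in \mathrm{FIN}\langle p\rangle \;\iff\;
\exists\, \text{probabilistic learner } M\ \forall f \in U:\;
\Pr\bigl[\, M \text{ on } f(0), f(1), \dots \text{ halts with a program for } f \,\bigr] \ge p .
\]
% Team one-shot learning: s deterministic learners run in parallel, and at
% least r of them must halt with a program for the target function.
\[
U \in [r,s]\mathrm{FIN} \;\iff\;
\exists\, M_1, \dots, M_s\ \forall f \in U:\;
\bigl|\{\, i : M_i \text{ halts with a program for } f \,\}\bigr| \ge r .
\]

Comparing the two paradigms then amounts to asking, for which values of p, r, and s the classes FIN⟨p⟩ and [r,s]FIN coincide or separate.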