Relations between probabilistic and team one-shot learners (extended abstract)
A typical way to increase the power of a learning paradigm is to allow randomization and require successful learning only with some probability p. Another standard approach is to allow a team of s learners working in parallel and to demand only that at least r of them learn correctly. These two variants are compared for the model of learning total recursive functions, where the learning algorithm is allowed an unbounded but finite amount of computation and must halt with a correct program after receiving only finitely many values of the function to be learned.
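As an illustration of how the two parameters relate, consider the simplified case (an assumption for exposition only, not the paper's model: team members there are deterministic, not independent random trials) where each of s team members independently succeeds with probability p. The chance that at least r of them learn correctly is then a binomial tail, sketched below.

```python
from math import comb

def team_success_prob(p: float, s: int, r: int) -> float:
    """Probability that at least r of s independent learners,
    each succeeding with probability p, learn correctly.
    (Illustrative sketch; the team learners in the paper are
    deterministic, so this is only a heuristic comparison.)"""
    return sum(comb(s, k) * p**k * (1 - p)**(s - k)
               for k in range(r, s + 1))

# For p = 0.5, s = 3, r = 2:
# C(3,2)*0.25*0.5 + C(3,3)*0.125 = 0.375 + 0.125 = 0.5,
# matching a single probabilistic learner with p = 0.5.
```

Under this independence assumption, a [2, 3]-team of p = 0.5 learners succeeds exactly as often as one p = 0.5 probabilistic learner, hinting at the kind of trade-offs between r, s, and p that the paper investigates rigorously.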