A parametrization scheme for classifying models of learnability

Abstract We present a systematic framework for classifying, comparing, and defining models of PAC learnability. Apart from the obvious "uniformity" parameters, we present a novel "solid learnability" notion that indicates when the class in question can be successfully learned by the most straightforward algorithms, namely, any consistent algorithm. We analyze known models in terms of our new parametrization scheme and investigate the relative strength of notions of learnability that correspond to different parameter values. In addition, we consider "proximity" between concept classes. We define notions of "covering" one class by another and show that, with respect to learnability, they play a role similar to that of reductions in computational complexity: the learnability of a class implies the learnability of any class it covers. We apply the covering technique to resolve some open questions raised by Benedek and Itai (1991, Theoret. Comput. Sci. 86, 377-389; 1989, Inform. and Comput. 82, 247-261) and Linial et al. (1991, Inform. and Comput. 90, 33-49). The notions we discuss are information-theoretic: we concentrate on the question of learnability rather than the computational complexity of the learning process.