Randomized Hypotheses and Minimum Disagreement Hypotheses for Learning with Noise

In this paper we prove various results about PAC learning in the presence of malicious and random classification noise. Our main theme is the use of randomized hypotheses for learning with small sample sizes and high malicious noise rates. We show an algorithm that PAC learns any target class of VC-dimension d using randomized hypotheses and order of d/ε training examples (up to logarithmic factors), while tolerating malicious noise rates even slightly larger than the information-theoretic bound ε/(1+ε) for deterministic hypotheses. Combined with previous results, this implies that a lower bound of order d/Δ + ε/Δ² on the sample size, where η = ε/(1+ε) − Δ is the malicious noise rate, applies only when deterministic hypotheses are used. We then show that the information-theoretic upper bound ε/(1+ε) on the noise rate for deterministic hypotheses can be replaced by 2ε/(1+2ε) if randomized hypotheses are used. Investigating further the use of randomized hypotheses, we show a strategy for learning the powerset of d elements using an optimal sample size of order dε/Δ² (up to logarithmic factors) while tolerating a noise rate η = 2ε/(1+2ε) − Δ. We complement this result by proving that a sample size of this order is also necessary for any class C of VC-dimension d.
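
As a quick arithmetic sketch (ours, not part of the paper; the function and variable names are invented for illustration), the following Python snippet evaluates the two noise-rate thresholds and the sample-size orders quoted above for concrete values of ε, Δ, and d, making the gap between the deterministic and randomized bounds concrete.

    def deterministic_noise_bound(eps):
        # Information-theoretic malicious noise bound ε/(1+ε)
        # for learners restricted to deterministic hypotheses.
        return eps / (1 + eps)

    def randomized_noise_bound(eps):
        # Larger bound 2ε/(1+2ε) achievable with randomized hypotheses.
        return 2 * eps / (1 + 2 * eps)

    d, eps, delta = 10, 0.1, 0.02

    print(deterministic_noise_bound(eps))  # 0.0909... = ε/(1+ε)
    print(randomized_noise_bound(eps))     # 0.1666... = 2ε/(1+2ε), nearly twice as large

    # Sample-size orders (logarithmic factors omitted):
    m_randomized = d / eps            # ~ d/ε for the randomized-hypothesis learner
    m_powerset = d * eps / delta**2   # ~ dε/Δ² for the powerset strategy at η = 2ε/(1+2ε) − Δ
    print(m_randomized, m_powerset)   # 100.0 2500.0

For small ε the randomized threshold 2ε/(1+2ε) is roughly twice the deterministic threshold ε/(1+ε), which is the gap the results above exploit.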