Learning 2µ DNF Formulas and kµ Decision Trees

We consider the learnability of “kµ” DNF formulas and decision trees: DNF formulas and decision trees in which every variable appears at most k times, for a constant k. The learning model is Valiant's distribution-free model augmented with membership queries. We present polynomial-time learning algorithms for 2µ DNF formulas and for kµ decision trees (where k is an arbitrary constant). In contrast, learning 3µ DNF formulas in this model is as hard as learning arbitrary DNF formulas.
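As a concrete illustration of the read-k restriction (this sketch is not from the paper; the representation of a DNF as a list of terms, each a list of signed literals, is an assumption for illustration):

```python
from collections import Counter

def is_read_k(dnf, k):
    """Check whether a DNF formula is 'k-mu', i.e. every variable
    appears at most k times across all terms.

    dnf: list of terms; each term is a list of literal strings,
         where a negated literal is prefixed with '~' (e.g. '~x1').
    """
    counts = Counter()
    for term in dnf:
        for lit in term:
            counts[lit.lstrip("~")] += 1  # count the underlying variable
    return all(c <= k for c in counts.values())

# (x1 AND x2) OR (~x1 AND x3) OR (x2 AND ~x3):
# each variable appears exactly twice, so this is a 2-mu DNF.
phi = [["x1", "x2"], ["~x1", "x3"], ["x2", "~x3"]]
```

Under this encoding, `is_read_k(phi, 2)` holds while `is_read_k(phi, 1)` does not, which is exactly the boundary the paper studies: read-twice formulas are efficiently learnable with membership queries, while read-thrice formulas are as hard as the general case.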
