Efficient algorithms in computational learning theory
This thesis presents new positive and negative results in the theory of machine learning. We give efficient algorithms for various learning problems and establish new relationships among seemingly unrelated problems and techniques in learning theory.
We give an optimal characterization of Disjunctive Normal Form (DNF) formulae as thresholded real-valued polynomials. Using this characterization we obtain the fastest known algorithm for the well-studied problem of learning an arbitrary DNF formula from examples drawn from a fixed but arbitrary probability distribution. The running time of the new algorithm is exponential in the cube root of the number of variables in the DNF formula. Using different techniques, we also present the fastest known algorithms for learning DNF in two other well-studied learning models.
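For concreteness, the characterization and the resulting running time can be sketched as follows (an illustrative restatement with degree and running time given only up to constant and hidden logarithmic factors, and with s denoting the number of terms of the DNF; the exact theorem statement and constants are those of the thesis):

    % Sketch: every s-term DNF is a low-degree polynomial threshold function.
    For any DNF formula $f = T_1 \vee \cdots \vee T_s$ over $\{0,1\}^n$
    there is a real polynomial $p$ with
    \[
      \deg(p) = O\!\bigl(n^{1/3}\log s\bigr)
      \qquad\text{and}\qquad
      f(x) = \operatorname{sign}\bigl(p(x)\bigr)\ \text{for all } x \in \{0,1\}^n .
    \]
    % A degree-d polynomial threshold function is a halfspace over the
    % $n^{O(d)}$ monomials of degree at most $d$, so it can be learned by
    % linear programming over that expanded feature space. This gives
    \[
      \text{running time} \;=\; n^{O(n^{1/3}\log s)} \;=\; 2^{\tilde{O}(n^{1/3})}
      \quad\text{for } s = \operatorname{poly}(n),
    \]
    % i.e., time exponential in (roughly) the cube root of the number of variables.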
We give a new family of algorithms for the classical problem of learning a linear threshold function. Our algorithms, which are based on a new smooth boosting technique, can tolerate high levels of noise in the data. These new algorithms provably match the performance bounds of the Perceptron and Winnow algorithms, two of the best-known algorithms in machine learning, and reveal a surprising connection between boosting and learning linear threshold functions.
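For reference, the two classical mistake-driven algorithms named above can be sketched as follows (a minimal illustrative sketch of the standard Perceptron and a Winnow-style multiplicative learner, not the thesis's boosting-based algorithms; the array shapes, epoch count, and learning rate eta are assumptions made for the sketch):

    import numpy as np

    def perceptron(X, y, epochs=10):
        """Classical Perceptron: additive update on each mistake.
        X: (m, n) array of examples; y: labels in {-1, +1}."""
        m, n = X.shape
        w = np.zeros(n)
        for _ in range(epochs):
            for x_i, y_i in zip(X, y):
                if y_i * np.dot(w, x_i) <= 0:   # mistake
                    w += y_i * x_i              # additive update
        return w

    def winnow(X, y, epochs=10, eta=1.0):
        """Winnow-style learner: multiplicative update on each mistake.
        Assumes nonnegative features and a fixed threshold of n."""
        m, n = X.shape
        w = np.ones(n)
        theta = n
        for _ in range(epochs):
            for x_i, y_i in zip(X, y):
                pred = 1 if np.dot(w, x_i) >= theta else -1
                if pred != y_i:                          # mistake
                    w *= np.exp(eta * y_i * x_i)         # multiplicative update
        return w

The additive Perceptron update is classically analyzed in terms of the geometric margin of the data, while the multiplicative Winnow-style update gives mistake bounds that scale logarithmically with the number of attributes; these are the respective performance bounds that the boosting-based algorithms of the thesis are shown to match.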
We also use techniques from cryptography to show that even simple learning problems can exhibit strong tradeoffs between running time and the amount of data required for successful learning. As one aspect of this tradeoff, we give the first proof that attribute-efficient learning (a model of learning from very few examples) can be computationally hard.