Limitations on Inductive Learning
ABSTRACT
This paper explores the proposition that inductive learning from examples is fundamentally limited to learning only a small fraction of the total space of possible hypotheses. We begin by defining the notion of an algorithm reliably learning a good approximation to a concept C. An empirical study of three algorithms (the classical algorithm for maximally specific conjunctive generalizations, ID3, and back-propagation for feed-forward networks of logistic units) demonstrates that each of these algorithms performs very poorly at learning concepts defined over the space of Boolean feature vectors with 3 variables. Simple counting arguments allow us to prove an upper bound on the maximum number of concepts reliably learnable from m training examples.
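To illustrate the style of counting argument the abstract refers to (this is a sketch under assumed definitions, not the paper's actual bound): with 3 Boolean variables there are 2^3 = 8 distinct instances and 2^(2^3) = 256 possible concepts, while a deterministic learner maps each possible training set of m labeled examples to a single hypothesis, so it can output at most as many distinct hypotheses as there are distinct training sets. The short Python sketch below works out that arithmetic; the function names and the specific bound counted here are illustrative assumptions, not quantities taken from the paper.

from math import comb

# Illustrative counting argument (an assumed sketch, not the paper's exact bound):
# over n Boolean variables there are 2**n instances and 2**(2**n) concepts.
# A deterministic learner that maps each training set of m distinct labeled
# examples to one hypothesis can emit at most as many distinct hypotheses
# as there are distinct training sets.

def num_concepts(n_vars: int) -> int:
    """Total number of Boolean concepts over n_vars variables."""
    return 2 ** (2 ** n_vars)

def max_distinct_training_sets(n_vars: int, m: int) -> int:
    """Upper bound: choose m of the 2**n_vars instances, each labeled 0 or 1."""
    instances = 2 ** n_vars
    return comb(instances, m) * (2 ** m)

if __name__ == "__main__":
    n = 3
    for m in range(2 ** n + 1):
        bound = max_distinct_training_sets(n, m)
        print(f"m={m}: at most {bound} distinct hypotheses "
              f"out of {num_concepts(n)} concepts")

The bound counted here is deliberately loose; it only shows how a counting argument can cap the number of concepts a learner could reliably distinguish from m examples.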