Publisher Summary

This chapter reviews a technique for detecting incorrect bias with arbitrary reliability. When a system is confronted with a new concept learning problem, it is unlikely to be able to select the correct bias before it begins its learning task. In a few cases the learner may be able to use prior knowledge to induce an appropriate bias, but in general it may be necessary to employ an algorithm that makes strong performance guarantees when its bias is correct; testing whether the algorithm performs as promised then forms the basis for concluding that the bias is bad. Any reasonable PAC-learning algorithm can be converted into an algorithm that either returns an accurate concept or reports a bad bias, and whose output can be regarded as 1 − δ reliable whether or not its bias is correct. The Valiant framework derives its power, and its ability to provide strong performance guarantees, from the assumption that the target concept is a member of the given concept class. When that assumption holds, the guarantees on accuracy, reliability, and computational complexity hold as well; when the bias can be incorrect, however, the model alone makes no guarantees on performance.
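The conversion described above can be made concrete with a hypothesis-filtering wrapper: run the PAC learner with tightened parameters, then test its output on a fresh sample and blame the bias if the hypothesis fails the test. The Python sketch below is one way to realize this under stated assumptions; the oracle interfaces (pac_learner, sample_oracle, target_label) and the Hoeffding-style test-sample size are illustrative, not the chapter's exact protocol.

```python
import math

def reliable_learn(pac_learner, sample_oracle, target_label, epsilon, delta):
    """Wrap a PAC learner so that, with probability at least 1 - delta,
    it either returns an epsilon-accurate hypothesis or reports that
    the learner's bias is bad.

    Assumed (hypothetical) interfaces:
      pac_learner(epsilon, delta) -> hypothesis, a callable x -> label,
          drawing its own labeled examples from the fixed distribution D
      sample_oracle() -> a fresh instance x drawn from D
      target_label(x) -> the true label of x
    """
    # Step 1: ask the learner for accuracy epsilon/2 at confidence
    # delta/2.  If its bias is correct, the returned hypothesis has
    # error at most epsilon/2 with probability at least 1 - delta/2.
    h = pac_learner(epsilon / 2.0, delta / 2.0)

    # Step 2: estimate the error of h on a fresh test sample.  A
    # one-sided Hoeffding bound says m >= (8 / epsilon^2) * ln(2 / delta)
    # examples keep the empirical error within epsilon/4 of the true
    # error, except with probability at most delta/2.
    m = math.ceil((8.0 / epsilon ** 2) * math.log(2.0 / delta))
    mistakes = 0
    for _ in range(m):
        x = sample_oracle()
        if h(x) != target_label(x):
            mistakes += 1
    empirical_error = mistakes / m

    # Step 3: accept only if the empirical error is consistent with a
    # true error of at most epsilon/2; otherwise blame the bias.  If we
    # accept, the true error exceeds epsilon with probability at most
    # delta; if the bias is correct, a rejection requires either the
    # learner or the test to fail, which also has probability at most
    # delta by a union bound.
    if empirical_error <= 0.75 * epsilon:
        return h
    return "BAD BIAS"
```

This mirrors the summary's claim: whenever the wrapper returns a hypothesis, that hypothesis is ε-accurate with probability at least 1 − δ, correct bias or not, and a "BAD BIAS" report is 1 − δ reliable evidence against the bias.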