Approximate Testing and Its Relationship to Learning

Abstract

Testing plays an integral part in many areas of computer science. In relation to computational learning theory, testing can be viewed as an inverse process to learning: testing algorithms create a set of examples for a given target concept that distinguishes it from other concepts, while learning algorithms use a given set of examples to correctly infer an unknown concept. In this paper we develop a model for approximate testing of concepts, which relates to the PAC (probably approximately correct) model of learning as well as to other learning models. In approximate testing, a concept that passes the given tests is required only to be correct to within a given error tolerance rather than exactly correct. We define what it means for a concept class to be approximately testable, and we investigate general properties of a concept class that make it testable or untestable. We define a new measure, similar to the Vapnik-Chervonenkis dimension, called the testing dimension of a concept class, and show how it yields untestability results for certain concept classes. We also compare our testing model to several different learning models, and we discuss the topics of nonredundant test sets and generic test sets.
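As an illustrative sketch of the approximate-testing guarantee mentioned above (the notation here is our own choice for illustration; the abstract itself fixes no symbols): writing $c$ for the target concept, $c'$ for a concept that passes the tests, $D$ for a distribution over the instance space, and $\varepsilon$ for the error tolerance, passing the tests is meant to certify only that

\[
  \operatorname{err}_D(c') \;=\; \Pr_{x \sim D}\bigl[\, c'(x) \neq c(x) \,\bigr] \;\leq\; \varepsilon ,
\]

that is, $c'$ may disagree with $c$ on a set of probability at most $\varepsilon$, rather than being required to match $c$ exactly.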