Detecting Novel Classes with Applications to Fault Diagnosis

Existing empirical learning algorithms such as decision trees and neural networks usually measure the quality of the learned model in terms of classification error. However, as machine learning algorithms are adapted for practical applications, other aspects of classifier performance become critically important. In particular, in this paper we consider the problem of designing a classifier that can reject examples from classes on which it has not been trained. This is a significant practical problem in domains such as fault diagnosis: in essence, the learned model must know the boundaries of its own experience. We formalise the problem using a probabilistic model and describe a new learning algorithm based on ideas from statistical kernel density estimation and mixture models. Empirical results for a NASA fault diagnosis task demonstrate that the new model can identify data from novel classes where conventional classifiers fail to do so.
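The core idea, detecting inputs that lie outside the density of the training data, can be sketched as follows. This is a minimal illustration rather than the paper's algorithm: it fits a per-class Gaussian kernel density estimate and rejects an input as novel when its log-density under every known class falls below a threshold; the bandwidth and threshold values here are hypothetical placeholders.

```python
import numpy as np

def gaussian_kde_logpdf(x, data, bandwidth):
    """Log-density of point x under a Gaussian KDE fitted to data (n, d)."""
    d = data.shape[1]
    diffs = (x - data) / bandwidth  # (n, d) standardised distances to each sample
    log_kernels = (-0.5 * np.sum(diffs**2, axis=1)
                   - d * np.log(bandwidth)
                   - 0.5 * d * np.log(2 * np.pi))
    # Average the n kernel densities in log space for numerical stability.
    return np.logaddexp.reduce(log_kernels) - np.log(len(data))

def classify_with_reject(x, class_data, bandwidth=0.5, log_density_threshold=-5.0):
    """Return the most likely known class, or 'novel' if x lies in a
    low-density region under every known class."""
    scores = {c: gaussian_kde_logpdf(x, d, bandwidth)
              for c, d in class_data.items()}
    best = max(scores, key=scores.get)
    return "novel" if scores[best] < log_density_threshold else best

# Two known classes drawn from well-separated Gaussians (synthetic data).
rng = np.random.default_rng(0)
classes = {
    "nominal": rng.normal(loc=0.0, scale=1.0, size=(200, 2)),
    "fault_A": rng.normal(loc=5.0, scale=1.0, size=(200, 2)),
}
print(classify_with_reject(np.array([0.1, -0.2]), classes))   # near "nominal"
print(classify_with_reject(np.array([20.0, 20.0]), classes))  # far from both -> "novel"
```

A conventional classifier would assign the far-away point to whichever known class is nearest; the density threshold is what lets the model say "none of the above" instead.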