Sensitivity of fusion performance to classifier model variations

During the design of classifier fusion tools, it is important to evaluate the performance of the fuser. In many cases, the output of the classifiers must be simulated to provide the range of fusion inputs needed to evaluate the fuser throughout the design space. One fundamental question is how that output should be distributed, in particular for multi-class classifiers with continuous output. Using the wrong distribution may lead to fusion tools that are either overly optimistic or that otherwise distort the outcome; either case may yield a fuser that performs sub-optimally in practice. It is therefore imperative to establish the bounds of different classifier output distributions. In addition, one must take into account the design space, which may be of considerable complexity; exhaustively simulating the entire design space may be a lengthy undertaking, so the simulation has to be guided to populate the relevant areas of the design space. Finally, it is crucial to quantify performance throughout the design of the fuser. This paper addresses these issues by introducing a simulator that allows the evaluation of different classifier output distributions, combined with a design-of-experiments setup and built-in performance evaluation. We show results from an application to diagnostic decision fusion on aircraft engines.
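The core idea — simulating multi-class continuous classifier outputs under different candidate distributions and measuring how a fuser performs on each — can be sketched as follows. This is a minimal illustration, not the paper's simulator: the two distributions (a Dirichlet biased toward the true class, and a noisy one-hot vector), the averaging fuser, and all parameter values are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_output(true_class, n_classes, dist="dirichlet"):
    """Simulate one classifier's continuous multi-class output.

    Illustrative distributions only (not from the paper):
    - 'dirichlet': probability vector biased toward the true class
    - 'gaussian':  one-hot vector plus Gaussian noise, renormalized
    """
    if dist == "dirichlet":
        alpha = np.ones(n_classes)
        alpha[true_class] = 5.0  # assumed sharpness parameter
        return rng.dirichlet(alpha)
    if dist == "gaussian":
        out = np.eye(n_classes)[true_class] + rng.normal(0.0, 0.3, n_classes)
        out = np.clip(out, 1e-6, None)
        return out / out.sum()
    raise ValueError(f"unknown distribution: {dist}")

def evaluate_fusion(dist, n_classifiers=3, n_classes=4, n_trials=2000):
    """Fuse simulated outputs by averaging and report fused accuracy."""
    correct = 0
    for _ in range(n_trials):
        y = rng.integers(n_classes)
        outputs = [simulate_output(y, n_classes, dist)
                   for _ in range(n_classifiers)]
        fused = np.mean(outputs, axis=0)   # simple averaging fuser
        correct += int(np.argmax(fused) == y)
    return correct / n_trials

for d in ("dirichlet", "gaussian"):
    print(d, evaluate_fusion(d))
```

Comparing the fused accuracy across candidate output distributions exposes how sensitive the fuser's apparent performance is to the simulation assumptions; in a fuller setup, a design-of-experiments layer would vary parameters such as the number of classifiers, classes, and noise levels rather than fixing them as here.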
