Techniques for evaluating classifiers in application
When gauging the generalization capability of a classifier, a good evaluation technique should adhere to certain principles. First, the technique should evaluate the selected classifier itself, not merely its architecture. Second, a solution should be assessable at design time and, further, throughout its application. Additionally, the technique should be insensitive to how the data are presented and should cover a significant portion of the classifier's domain. Such principles call for methods that go beyond supervised learning and statistical training techniques such as cross-validation. In this paper, we discuss the evaluation of generalization in application. For illustration, we present a method for the multilayer perceptron (MLP) that draws on the unlabeled data collected during the operational use of a given classifier. These results support self-supervised learning and computational methods that isolate unstable, nonrepresentational regions of the classifier.
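As a minimal illustrative sketch (not the authors' method), the idea of evaluating a trained MLP in application using unlabeled operational data can be approximated by probing prediction stability: inputs whose predicted class flips under small perturbations are flagged as lying in unstable regions. The noise scale, probe count, and flagging threshold below are illustrative assumptions, and scikit-learn's `MLPClassifier` stands in for the classifier under evaluation.

```python
# Sketch: flag unstable regions of an MLP using unlabeled operational inputs.
# All thresholds and noise scales are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Design time: train the classifier on labeled data.
X_train, y_train = make_classification(n_samples=500, n_features=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# Application time: unlabeled inputs collected in operational use
# (simulated here by drawing new samples and discarding the labels).
X_oper, _ = make_classification(n_samples=200, n_features=10, random_state=1)

def instability_score(model, X, n_probes=20, noise_scale=0.05):
    """Fraction of small random perturbations that change the predicted class."""
    base = model.predict(X)
    flips = np.zeros(len(X))
    for _ in range(n_probes):
        perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
        flips += (model.predict(perturbed) != base)
    return flips / n_probes

scores = instability_score(clf, X_oper)
unstable = scores > 0.3  # illustrative threshold
print(f"{unstable.sum()} of {len(X_oper)} operational inputs lie in unstable regions")
```

The design choice here is that no operational labels are required: stability is measured purely from the classifier's own outputs on unlabeled data, which is the spirit of evaluating generalization during application rather than only at design time.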