Optimized Precision - A New Measure for Classifier Performance Evaluation

All learning algorithms attempt to improve the accuracy of a classification system. However, the effectiveness of such a system depends on the heuristic the learning paradigm uses to measure performance. This paper demonstrates that using Precision (P) to evaluate the performance of classifiers on imbalanced data sets can steer the solution towards sub-optimal answers. We then present a novel performance heuristic, 'Optimized Precision (OP)', to negate these detrimental effects. We also analyze the impact of these observations on the training performance of ensemble learners and Multi-Classifier Systems (MCS), and provide guidelines for their proper training.
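As a minimal illustration of the pitfall the abstract describes (not the paper's method; the 95/5 class split and helper names are assumptions for the sketch), a trivial classifier that always predicts the majority class can score highly on an overall-correctness measure while never detecting the minority class:

```python
# Illustrative sketch: overall correctness hides minority-class failure
# on an imbalanced test set. The 95/5 split is an arbitrary example.

def accuracy(y_true, y_pred):
    """Fraction of correctly classified samples (overall correctness)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sensitivity(y_true, y_pred, positive=1):
    """True-positive rate on the (minority) positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    pos = sum(t == positive for t in y_true)
    return tp / pos if pos else 0.0

# 95 majority-class (0) samples, 5 minority-class (1) samples.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100  # degenerate classifier: always predict the majority class

print(accuracy(y_true, y_pred))     # 0.95 -- looks strong
print(sensitivity(y_true, y_pred))  # 0.0  -- minority class entirely missed
```

A heuristic that rewards only the 0.95 figure would rank this degenerate classifier above one that sacrifices a little overall correctness to actually detect the minority class, which is the imbalance the OP measure is designed to penalize.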