Lift-Per-Drift: An Evaluation Metric for Classification Frameworks with Concept Drift Detection

Data streams with concept drift change over time. Detecting drift allows remedial action, but this can come at a cost, e.g. training a new classifier. Prequential accuracy is commonly used to evaluate the impact of drift detection frameworks on data stream classification, but recent work shows that frequent, periodic drift detection can provide better accuracy than state-of-the-art drift detection techniques. We discuss how sequentiality, the degree of consecutive matching class labels across instances, allows high accuracy without a classifier learning to differentiate classes. We propose a novel metric: lift-per-drift (lpd). This measures drift detection performance through its impact on classification accuracy, penalised by the number of drifts detected in a dataset. This metric solves three problems: lpd cannot be increased by frequent, periodic drift detection; lpd clearly shows when using drift detection increases classifier error; and lpd does not require knowledge of where real drifts occurred. We show how lpd can be tuned to be sensitive to the cost of each drift. Our experiments show that lpd is not artificially increased through sequentiality, that lpd highlights when drift detection has caused a loss in accuracy, and that it is sensitive to changes in true-positive and false-positive drift detection rates.
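For illustration, the sketch below shows one way an lpd-style quantity could be computed from prequential accuracies. The formula used here (accuracy lift from enabling drift detection, divided by a cost-weighted count of detected drifts), the function name lift_per_drift, the drift_cost parameter, and the zero-drift convention are assumptions made for this example, not the paper's exact definition.

```python
def lift_per_drift(acc_with_detection: float,
                   acc_without_detection: float,
                   n_drifts_detected: int,
                   drift_cost: float = 1.0) -> float:
    """Illustrative lpd-style computation (assumed form, not the paper's formula).

    acc_with_detection    -- prequential accuracy when the drift detector is used
    acc_without_detection -- prequential accuracy of the same learner with no detector
    n_drifts_detected     -- number of drifts signalled over the stream
    drift_cost            -- relative cost assigned to each detected drift
    """
    lift = acc_with_detection - acc_without_detection
    penalty = drift_cost * n_drifts_detected
    if penalty == 0:
        # No drifts detected: no lift can be attributed to detection (convention).
        return 0.0
    return lift / penalty


# Example: detection raises accuracy from 0.80 to 0.85 at the cost of 4 detected drifts.
print(lift_per_drift(0.85, 0.80, 4))  # 0.0125 lift per drift
```

Under this assumed form, signalling many spurious drifts inflates the denominator rather than the score, and a detector that lowers accuracy yields a negative value, which mirrors the properties the abstract claims for lpd.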