Outlier (also called deviation or exception) detection is an important task in data mining. Among the techniques for identifying outliers, the deviation-based approach has many advantages and has drawn much attention. Although a linear algorithm for sequential deviation detection has been proposed, it is not stable and often misses many deviation points. In this paper, we present three algorithms for detecting deviations. The running time of the first algorithm is proportional to the square of the dataset length, while that of the second is proportional to the square of the number of distinct data values. The two algorithms produce the same result, but the latter is much more efficient than the former. In the third algorithm, a deviation factor is defined to help identify deviation points. Although it yields only approximate results, it is the most efficient of the three, especially for large datasets with many distinct values.
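The abstract does not spell out the algorithms themselves, so the Python sketch below only illustrates the general deviation-based idea they build on (as described in [1] and in Han et al.'s textbook): pick a dissimilarity function such as variance and flag the points whose removal reduces it the most. The function name deviation_candidates and the min_reduction threshold are illustrative assumptions, not part of the paper; the brute-force loop is quadratic in the dataset length, in the spirit of the first algorithm described above.

from statistics import pvariance

def deviation_candidates(values, min_reduction):
    # Flag every point whose removal lowers the variance of the remaining
    # data by at least min_reduction.  The nested work (one pvariance call
    # per element) makes this quadratic in len(values).
    base = pvariance(values)
    flagged = []
    for i, v in enumerate(values):
        rest = values[:i] + values[i + 1:]
        if len(rest) > 1 and base - pvariance(rest) >= min_reduction:
            flagged.append(v)
    return flagged

if __name__ == "__main__":
    data = [3, 4, 4, 5, 4, 3, 90, 4, 5]                      # 90 stands out
    print(deviation_candidates(data, min_reduction=50.0))    # -> [90]

The threshold plays the role of a crude deviation criterion here; the paper's deviation factor and its distinct-value optimization would replace this per-element rescan with a more refined measure.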
[1] Prabhakar Raghavan et al. A Linear Method for Deviation Detection in Large Databases. KDD, 1996.
[2] S. Muthukrishnan et al. Mining Deviants in a Time Series Database. VLDB, 1999.
[3] Jiawei Han et al. Data Mining: Concepts and Techniques. 2000.
[4] Fan Ming-hui. Review of Outlier Detection. 2006.
[5] Yannis E. Ioannidis et al. Balancing histogram optimality and practicality for query result size estimation. SIGMOD '95, 1995.
[6] Jiawei Han et al. Data Mining: Concepts and Techniques, Second Edition. The Morgan Kaufmann Series in Data Management Systems, 2006.
[7] Jian Pei et al. Data Mining: Concepts and Techniques, 3rd Edition. 2011.