Anomaly Detection Using an Ensemble of Feature Models

We present a new approach to semi-supervised anomaly detection. Given a set of training examples believed to come from the same distribution or class, the task is to learn a model that can later distinguish examples that do not belong to that class. Traditional approaches typically compare the position of a new data point to the set of "normal" training data points in a chosen representation of the feature space. For some data sets, however, the normal data may not occupy discernible positions in feature space, yet still exhibit consistent relationships among some features that fail to appear in anomalous examples. Our approach learns to predict the values of training set features from the values of other features. After forming an ensemble of such predictors, we apply it to new data points. To combine the contribution of each predictor in the ensemble, we have developed a novel, information-theoretic anomaly measure that, our experiments show, selects against noisy and irrelevant features. Our results on 47 data sets show that, for most data sets, this approach significantly improves performance over current state-of-the-art feature-space distance and density-based approaches.
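The core idea above can be illustrated with a minimal sketch. This is not the paper's method: the per-feature predictors here are simple k-nearest-neighbor regressors over the remaining features, and the combination rule is a variance-normalized squared prediction error rather than the paper's information-theoretic measure. All function names are invented for illustration.

```python
import math

def fit_feature_models(train, k=3):
    """Here the 'model' for each feature j is just the training set:
    at query time we predict feature j of a point from the k training
    points nearest in the remaining features (a simple stand-in for
    learned per-feature predictors)."""
    return {"train": train, "k": k}

def predict_feature(model, x, j):
    """Predict feature j of x from its neighbors in the other features."""
    train, k = model["train"], model["k"]
    def dist(t):  # distance over all features except j
        return math.sqrt(sum((t[i] - x[i]) ** 2
                             for i in range(len(x)) if i != j))
    neighbors = sorted(train, key=dist)[:k]
    return sum(t[j] for t in neighbors) / len(neighbors)

def anomaly_score(model, x):
    """Sum of squared per-feature prediction errors, each normalized by
    that feature's training variance (a crude surrogate for an
    information-theoretic combination of the predictors)."""
    train = model["train"]
    score = 0.0
    for j in range(len(x)):
        err = x[j] - predict_feature(model, x, j)
        mean_j = sum(t[j] for t in train) / len(train)
        var_j = sum((t[j] - mean_j) ** 2 for t in train) / len(train) or 1.0
        score += err * err / var_j
    return score
```

On training data where the second feature is twice the first, a point that sits inside the training range but breaks that relationship scores higher than one that preserves it, even though both are "near" the training data in raw feature space:

```python
model = fit_feature_models([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)])
anomaly_score(model, (2.5, 1.0))  # breaks the y = 2x relation: high score
anomaly_score(model, (2.5, 5.0))  # consistent with it: low score
```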
