Benchmark Datasets for Fault Detection and Classification in Sensor Data

Data measured and collected by embedded sensors often contains faults, i.e., data points which do not accurately represent the physical phenomenon monitored by the sensor. Such data faults may be caused by deployment conditions outside the operational bounds of the node, or by short- or long-term hardware, software, or communication problems. Applications, on the other hand, expect accurate sensor data, and recent literature proposes algorithmic solutions for fault detection and classification in sensor data. However, the field lacks a set of \emph{benchmark sensor datasets} against which the performance of such solutions can be evaluated. A benchmark dataset should ideally satisfy the following criteria: (a) it is based on real-world raw sensor data from various types of sensor deployments; (b) it contains faulty data points (natural or artificially injected) reflecting various problems in the deployment, including missing data points; and (c) all data points are annotated with the \emph{ground truth}, i.e., whether or not the data point is accurate and, if faulty, the type of fault. We prepare and publish three such benchmark datasets, together with the algorithmic methods used to create them: a dataset of 280 subsets of temperature and light data from 10 indoor \emph{Intel Lab} sensors, a dataset of 140 subsets of outdoor temperature data from \emph{SensorScope} sensors, and a dataset of 224 subsets of outdoor temperature data from 16 \emph{Smart Santander} sensors. The three benchmark datasets total 5,783,504 data points and contain injected data faults of the following types known from the literature: random, malfunction, bias, drift, polynomial drift, and combinations thereof. We also present the algorithmic procedures and a software tool for preparing further such benchmark datasets.
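To make the injected fault types concrete, the minimal Python sketch below shows one way such faults could be injected into a univariate sensor time series while recording the ground-truth annotation. The function name, its parameters, and the exact shape assumed for each fault type are illustrative assumptions for this sketch, not the interface or definitions of the published tool.

    # Sketch: inject one of the named fault types into a window of a sensor
    # time series and return the faulty series plus a ground-truth mask.
    # The concrete fault shapes below are assumed interpretations of the
    # fault types named in the abstract, not the tool's exact definitions.
    import numpy as np

    def inject_fault(values, kind, start, length, magnitude=5.0, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        faulty = np.array(values, dtype=float)
        mask = np.zeros(len(faulty), dtype=bool)   # ground truth: which points are faulty
        window = slice(start, start + length)
        t = np.arange(length)

        if kind == "random":              # isolated noise of random amplitude
            faulty[window] += rng.normal(0.0, magnitude, size=length)
        elif kind == "malfunction":       # sensor stuck, repeating one reading
            faulty[window] = faulty[start]
        elif kind == "bias":              # constant offset added to the signal
            faulty[window] += magnitude
        elif kind == "drift":             # offset growing linearly over time
            faulty[window] += magnitude * t / length
        elif kind == "polynomial_drift":  # offset growing polynomially over time
            faulty[window] += magnitude * (t / length) ** 2
        else:
            raise ValueError(f"unknown fault type: {kind}")

        mask[window] = True
        return faulty, mask

Combination faults can then be produced by applying several such injections to overlapping or adjacent windows of the same series.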
