On GPU-accelerated tuning for a payload anomaly-based network intrusion detection scheme

In network intrusion detection, anomaly-based solutions complement signature-based solutions in mitigating zero-day attacks, but they require extensive training to effectively model what the normal pattern for a given system (or service) looks like. Although training typically happens off-line, where processing speed matters less than in the detection stage (which runs on-line in real time), continuous analysis and retuning may be attractive depending on the deployment scenario. The computation required to perform automatic retuning (or retraining) of the system may compete for resources with other important system tasks. Thus, a mechanism by which retuning can take place without affecting the actual system workload is important. In this paper, we describe a layered anomaly detection algorithm based on simple statistics, together with a parallel implementation of its training algorithm. We focus on the use of graphics processing units (GPUs) to allow a cost-efficient implementation with minimal impact on CPU load, so that day-to-day server workloads are largely unaffected. Our results show potential for significant performance improvements.
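
To make the training step concrete, the following is a minimal CUDA sketch of the kind of simple per-byte statistic such a trainer might offload to the GPU: a 256-bin byte-frequency histogram over a batch of packet payloads, normalized into a frequency model on the host. The kernel name, the single-histogram layout, and the synthetic input are illustrative assumptions; the paper's actual layered algorithm and data layout are not reproduced here.

    // Illustrative sketch only (assumed, not the authors' code): GPU
    // computation of a 1-gram byte-frequency model, a simple statistic a
    // payload anomaly detector could be trained on.
    #include <cstdio>
    #include <cstdint>
    #include <cuda_runtime.h>

    // One thread per payload byte; counts accumulate into a global 256-bin
    // histogram with atomics. A real trainer would likely keep one histogram
    // per traffic bucket (e.g., per service port); one bucket keeps this short.
    __global__ void byteHistogram(const uint8_t *payloads, size_t nBytes,
                                  unsigned int *hist /* 256 bins */)
    {
        size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
        if (i < nBytes)
            atomicAdd(&hist[payloads[i]], 1u);
    }

    int main()
    {
        // Synthetic stand-in for a captured batch of normal-traffic payloads.
        const size_t n = 1 << 20;
        uint8_t *h_data = new uint8_t[n];
        for (size_t i = 0; i < n; ++i) h_data[i] = (uint8_t)(i % 251);

        uint8_t *d_data = nullptr;
        unsigned int *d_hist = nullptr;
        cudaMalloc((void **)&d_data, n);
        cudaMalloc((void **)&d_hist, 256 * sizeof(unsigned int));
        cudaMemcpy(d_data, h_data, n, cudaMemcpyHostToDevice);
        cudaMemset(d_hist, 0, 256 * sizeof(unsigned int));

        const int threads = 256;
        const int blocks = (int)((n + threads - 1) / threads);
        byteHistogram<<<blocks, threads>>>(d_data, n, d_hist);

        unsigned int h_hist[256];
        cudaMemcpy(h_hist, d_hist, sizeof(h_hist), cudaMemcpyDeviceToHost);

        // Normalize counts into the per-byte frequency model consulted at
        // detection time; deviation from it would flag anomalous payloads.
        double freq[256];
        for (int b = 0; b < 256; ++b)
            freq[b] = (double)h_hist[b] / (double)n;
        printf("freq[0x00] = %.6f\n", freq[0]);

        cudaFree(d_data);
        cudaFree(d_hist);
        delete[] h_data;
        return 0;
    }

Because such per-byte counting is embarrassingly parallel and touches no CPU-side state while running, it fits the stated goal of retuning on the GPU without disturbing day-to-day server workloads.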