Testing network-based intrusion detection signatures using mutant exploits

Misuse-based intrusion detection systems rely on models of attacks to identify the manifestation of intrusive behavior. Therefore, the ability of these systems to reliably detect attacks is strongly affected by the quality of their models, often called "signatures." A perfect model would detect every instance of an attack without making mistakes, that is, it would produce a 100% detection rate with zero false alarms. Unfortunately, writing good models (or good signatures) is hard. Attacks that exploit a specific vulnerability may do so in completely different ways, and writing models that account for all possible variations is very difficult. For this reason, it would be beneficial to have testing tools that can evaluate the "goodness" of detection signatures. This work describes a technique to test and evaluate misuse detection models for network-based intrusion detection systems. The technique is based on a mechanism that generates a large number of variations of an exploit by applying mutant operators to an exploit template. These mutant exploits are then run against a victim host protected by a network-based intrusion detection system. The system's success or failure in detecting these variations provides a quantitative basis for evaluating the quality of the corresponding detection model.
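
To make the idea concrete, the sketch below shows, in Python, how mutant operators could be composed over an exploit template to produce variants that preserve the exploit's effect while changing the bytes a sensor observes. The HTTP request template, the operator names, and the specific operators (percent-encoding the URI, inserting redundant "/./" path components, padding with a benign header) are illustrative assumptions for this sketch, not the operator set or tooling used in this work; a real harness would additionally send each mutant to the victim host and check the IDS alert log for a matching detection.

```python
import itertools

# Hypothetical exploit template: a malicious HTTP request whose request line a
# signature might match literally (URI and host are illustrative, not a real exploit).
TEMPLATE_REQUEST = "GET /cgi-bin/vulnerable.cgi?cmd=/bin/id HTTP/1.1\r\nHost: victim\r\n\r\n"


# --- Illustrative mutant operators: each maps a request to an equivalent variant ---

def percent_encode_uri(request: str) -> str:
    """Percent-encode alphabetic characters of the URI to defeat literal matching."""
    method, uri, rest = request.split(" ", 2)
    encoded = "".join("%%%02X" % ord(c) if c.isalpha() else c for c in uri)
    return " ".join([method, encoded, rest])


def insert_self_references(request: str) -> str:
    """Insert redundant '/./' path components that the web server normalizes away."""
    method, uri, rest = request.split(" ", 2)
    path, sep, query = uri.partition("?")
    return " ".join([method, path.replace("/", "/./") + sep + query, rest])


def pad_with_headers(request: str) -> str:
    """Add a benign header so the signature-relevant bytes shift within the stream."""
    head, _, body = request.partition("\r\n\r\n")
    return head + "\r\nX-Padding: " + "A" * 64 + "\r\n\r\n" + body


MUTANT_OPERATORS = [percent_encode_uri, insert_self_references, pad_with_headers]


def generate_mutants(template: str, max_depth: int = 2):
    """Yield (operator names, mutant) pairs by composing up to max_depth operators."""
    for depth in range(1, max_depth + 1):
        for combo in itertools.permutations(MUTANT_OPERATORS, depth):
            mutant = template
            for op in combo:
                mutant = op(mutant)
            yield [op.__name__ for op in combo], mutant


if __name__ == "__main__":
    # A full harness would send each mutant to the victim host and then inspect
    # the IDS alert log; here we only print the generated variations.
    for ops, mutant in generate_mutants(TEMPLATE_REQUEST):
        print("+".join(ops))
        print(mutant.replace("\r\n", "\\r\\n"))
        print()
```

Composing operators (here via itertools.permutations) is what makes the space of mutants large enough to stress a signature: each composition leaves the attack's effect on the victim unchanged but alters the byte stream the network sensor must match.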
