Keep your friends close: the necessity for updating an anomaly sensor with legitimate environment changes

Large-scale distributed systems have dense, complex code-bases that are expected to perform multiple, inter-dependent tasks while users interact with them. The way users interact with systems can differ and evolve over time, as can the systems themselves. Consequently, anomaly detection (AD) sensors must be able to cope with updates to their operating environment. Otherwise, a sensor may incorrectly classify new, legitimate patterns as malicious (false positives) or continue to treat outdated patterns as normal (false negatives). This problem of "model drift" is an almost universally acknowledged hazard for anomaly sensors. However, relatively little work has been done to understand how to identify legitimate modifications, such as changes to a file system or back-end database, and seamlessly apply them to an operational network AD sensor. In this paper, we highlight some of the challenges of keeping an anomaly sensor updated, an important step toward making anomaly sensors a practical intrusion detection tool for real-world network and host environments. Our goal is to eliminate needless false positives arising from the gradual de-synchronization of the sensor from the environment it monitors. To that end, we investigate the feasibility of automatically deriving and applying a "data" or "model patch" that describes the changes necessary to update a "reasonable" AD behavioral model (i.e., a model whose structure follows the core design principles of existing anomaly models). We propose an update procedure that is holistic in nature: specifically, we present preliminary results on updating a sensor that monitors the request and response messages of non-dynamic HTTP traffic in the presence of software patches. In addition, we propose extensions for dynamic, database-driven requests and responses.
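To make the notion of a "model patch" concrete, the sketch below shows one possible interpretation for a simple n-gram content model over HTTP payloads. It is a minimal illustration under assumed design choices, not the sensor or update procedure described in the paper: the set-based model and the names train, derive_patch, and apply_patch are hypothetical.

```python
# Minimal sketch (illustrative, not the paper's implementation):
# deriving and applying a hypothetical "model patch" for an
# n-gram-based content anomaly model. Assumption: the behavioral
# model is the set of byte n-grams seen in legitimate HTTP payloads.

from typing import Iterable, Set, Tuple


def ngrams(payload: bytes, n: int = 5) -> Set[bytes]:
    """Extract the set of byte n-grams from one message payload."""
    return {payload[i:i + n] for i in range(len(payload) - n + 1)}


def train(payloads: Iterable[bytes], n: int = 5) -> Set[bytes]:
    """Build a simple content model: the union of n-grams over payloads."""
    model: Set[bytes] = set()
    for p in payloads:
        model |= ngrams(p, n)
    return model


def derive_patch(old_model: Set[bytes],
                 new_model: Set[bytes]) -> Tuple[Set[bytes], Set[bytes]]:
    """A "model patch": n-grams to add (newly legitimate content) and
    n-grams to retire (content no longer served), i.e., a diff between
    the deployed model and a model trained after a legitimate change."""
    return new_model - old_model, old_model - new_model


def apply_patch(model: Set[bytes],
                additions: Set[bytes],
                removals: Set[bytes]) -> Set[bytes]:
    """Return the updated model; only the delta is applied, so the
    deployed sensor is not retrained from scratch."""
    return (model | additions) - removals


# Example: after a legitimate software patch changes the served pages,
# re-train on the new traffic and apply only the difference.
old = train([b"<html>old page</html>"])
new = train([b"<html>new page</html>"])
additions, removals = derive_patch(old, new)
assert apply_patch(old, additions, removals) == new
```

Representing the patch as an explicit add/remove delta is what would allow a deployed sensor to be updated incrementally as the environment changes, rather than being taken offline and fully retrained.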