Poster: Design of an Anomaly-based Threat Detection & Explication System

The poster accompanying this summary presents a proposed system that explains anomalous behavior within a user session, where anomalies are identified by their deviation from a set of baseline process graphs. We adapt star structures, a bipartite representation used to approximate the edit distance between two graphs. Relevant processes are selected from a dictionary of benign and malicious traces generated by a sentiment-like bigram extraction and scoring scheme based on the log-likelihood ratio test. We prototypically implemented anomaly explication through a set of competency questions derived from, and evaluated by, a decision tree. The key factors determined this way are ultimately mapped to a dedicated APT attack-stage ontology that covers actions, actors, and target assets.
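To illustrate the star-structure idea, the following is a minimal sketch (not the authors' implementation) of approximating graph edit distance via an optimal assignment between star structures, where a node's star is its label plus the multiset of its neighbors' labels. The graph encoding (`labels`/`adj` dictionaries) and the brute-force assignment are illustrative assumptions suitable only for small graphs:

```python
from itertools import permutations
from collections import Counter

def star(graph, v):
    # Star of v: its own label plus the multiset of neighbor labels.
    # `graph` is assumed to be {"labels": {node: label}, "adj": {node: [neighbors]}}.
    return graph["labels"][v], Counter(graph["labels"][u] for u in graph["adj"][v])

def star_distance(s1, s2):
    # Edit distance between two stars: root relabel cost plus leaf multiset difference.
    (r1, n1), (r2, n2) = s1, s2
    root = 0 if r1 == r2 else 1
    size1, size2 = sum(n1.values()), sum(n2.values())
    common = sum((n1 & n2).values())          # multiset intersection size
    leaves = abs(size1 - size2) + max(size1, size2) - common
    return root + leaves

def star_assignment_cost(g1, g2):
    s1 = [star(g1, v) for v in g1["adj"]]
    s2 = [star(g2, v) for v in g2["adj"]]
    # Pad the smaller graph with empty stars so both sides match in size.
    while len(s1) < len(s2): s1.append(("", Counter()))
    while len(s2) < len(s1): s2.append(("", Counter()))
    # Brute-force optimal bipartite assignment; a Hungarian-method solver
    # (e.g. scipy.optimize.linear_sum_assignment) would scale better.
    return min(sum(star_distance(a, b) for a, b in zip(s1, perm))
               for perm in permutations(s2))
```

The assignment cost bounds the true graph edit distance, which lets the deviation of a session's process graph from each baseline graph be scored cheaply.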
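The bigram scoring step can be sketched as follows, assuming Dunning's log-likelihood ratio (G²) statistic over a 2x2 contingency table of bigram counts in malicious versus benign traces; the trace representation (lists of process names) and function names are illustrative assumptions, not the authors' code:

```python
import math
from collections import Counter

def g2(k11, k12, k21, k22):
    # Dunning's log-likelihood ratio for a 2x2 contingency table:
    # G^2 = 2 * sum_ij k_ij * ln(k_ij * N / (row_i * col_j)).
    n = k11 + k12 + k21 + k22
    r1, r2 = k11 + k12, k21 + k22
    c1, c2 = k11 + k21, k12 + k22
    total = 0.0
    for k, r, c in [(k11, r1, c1), (k12, r1, c2), (k21, r2, c1), (k22, r2, c2)]:
        if k > 0:
            total += k * math.log(k * n / (r * c))
    return 2.0 * total

def score_bigrams(malicious_traces, benign_traces):
    # Extract consecutive process-name pairs from each trace and score each
    # bigram by how strongly its occurrence separates the two corpora.
    def bigram_counts(traces):
        counts = Counter()
        for trace in traces:
            counts.update(zip(trace, trace[1:]))
        return counts

    mal, ben = bigram_counts(malicious_traces), bigram_counts(benign_traces)
    mal_total, ben_total = sum(mal.values()), sum(ben.values())
    scores = {}
    for bg in set(mal) | set(ben):
        k11, k21 = mal[bg], ben[bg]
        scores[bg] = g2(k11, mal_total - k11, k21, ben_total - k21)
    return scores
```

Bigrams with high scores that occur predominantly in one corpus are candidates for the malicious (or benign) dictionary; a bigram appearing equally often in both corpora scores near zero.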