The development of a multimodal decision support system for network intrusion detection analysis

The increasing accessibility of information and the growing volume of online transactions have been accompanied by a rise in the number and sophistication of computer security incidents on the Internet. While an intrusion detection system may be one component of a sound security model, deploying intrusion detection systems on networks and hosts requires both a broad understanding of computer security and the ability to interpret the massive amounts of textual data the systems retrieve. Given the sensitivity of the security posture, interpreting this data rapidly enough to maintain operational security is one of the more prevalent problems in security operations. The purpose of this study is to increase situational awareness of the security posture through the development of a multimodal decision support system for network intrusion analysis. Modality fusion can extend the capabilities of computer systems to better match the natural communication means of human beings and assist them in comprehending and exploring large, unfamiliar datasets or information spaces. In this way, modality fusion embodies intelligence amplification and cognitive augmentation: using computers to aid and enhance human intelligence by exploiting inherent human cognitive abilities, building upon the skills humans already have, and augmenting the areas that are lacking. This research evaluates conventional and multimodal intrusion analysis environments, comparing the effectiveness, efficiency, and user preference of modality combinations in attack identification while gauging performance. By integrating multiple modalities to provide a full picture of the security posture, we can better communicate the information the analyst needs, increase situational awareness, reduce the time required for decision-making, and thus improve performance. Preliminary results show that both auditory and haptic modalities are beneficial in search and alerting tasks. Incorporating auditory and haptic feedback reduced task completion time to less than half that of the traditional visual-only approach.