Malware Detection for Forensic Memory Using Deep Recurrent Neural Networks

Memory forensics is a young but fast-growing area of research and a promising one for the field of computer forensics. The learned model is proposed to reside in an isolated core with strict communication restrictions, achieving both incorruptibility and efficiency and thereby providing a probabilistic memory-level view of the system that is consistent with the user-level view. Lower-level memory blocks are constructed from primary block sequences of varying sizes, which are fed as input into Long Short-Term Memory (LSTM) models. Four configurations of the LSTM model are explored by adding bidirectionality and attention. Assembly-level data are extracted from 50 Windows portable executable (PE) files, and basic blocks are constructed using the IDA disassembler toolkit. The results show that longer primary block sequences yield richer LSTM hidden-layer representations. The hidden states are fed as features into max-pooling or attention layers, depending on the configuration under test, and the final classification is performed by logistic regression with a single hidden layer. The bidirectional LSTM with attention, applied to basic block sequences of size 29, proved to be the best model. The differences between the models' ROC curves indicate a strong reliance on the lower-level instruction features, as opposed to metadata or string features.

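The pipeline described above can be sketched roughly as follows in PyTorch; this is a minimal illustration under stated assumptions, not the authors' implementation. The class name, vocabulary size, embedding and hidden dimensions are all placeholders, and the "logistic regression with a single hidden layer" head is approximated here as a two-layer feed-forward network ending in a sigmoid.

    import torch
    import torch.nn as nn

    class BiLSTMAttentionClassifier(nn.Module):
        """Illustrative bidirectional LSTM with additive attention over
        basic-block token sequences, followed by a small classifier head.
        All sizes are placeholder assumptions, not the paper's values."""

        def __init__(self, vocab_size=256, embed_dim=64, hidden_dim=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            # Additive attention: score each time step, softmax, weighted sum.
            self.attn = nn.Linear(2 * hidden_dim, 1)
            # Approximation of "logistic regression with a single hidden layer":
            # one hidden layer followed by a sigmoid output.
            self.classifier = nn.Sequential(
                nn.Linear(2 * hidden_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer-encoded basic-block instructions
            h, _ = self.lstm(self.embed(tokens))          # (batch, seq, 2*hidden)
            weights = torch.softmax(self.attn(h), dim=1)  # (batch, seq, 1)
            context = (weights * h).sum(dim=1)            # (batch, 2*hidden)
            return torch.sigmoid(self.classifier(context)).squeeze(-1)

    # Usage sketch: a batch of 4 sequences of 29 basic-block tokens.
    model = BiLSTMAttentionClassifier()
    scores = model(torch.randint(0, 256, (4, 29)))  # per-sample malware scores

In practice, the integer tokens would come from the encoded basic-block instructions produced by the disassembly step, and the max-pooling configurations would replace the attention-weighted sum with a pooling operation over the hidden states.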