CVSS-BERT: Explainable Natural Language Processing to Determine the Severity of a Computer Security Vulnerability from its Description

When a new computer security vulnerability is publicly disclosed, only a textual description of it is available. Cybersecurity experts later provide an analysis of its severity using the Common Vulnerability Scoring System (CVSS). Specifically, the different characteristics of the vulnerability are summarized into a vector (a set of metrics), from which a severity score is computed. However, because of the high number of vulnerabilities disclosed every day, this process requires a lot of manpower, and several days may pass before a vulnerability is analyzed. We propose to leverage recent advances in Natural Language Processing (NLP) to determine the CVSS vector and the associated severity score of a vulnerability from its textual description, in an explainable manner. To this end, we trained multiple BERT classifiers, one for each metric composing the CVSS vector. Experimental results show that our trained classifiers determine the values of the CVSS metrics with high accuracy, and that the severity score computed from the predicted CVSS vector is very close to the score attributed by a human expert. For explainability, a gradient-based input saliency method is used to identify the input words most relevant to a given prediction. The top-ranked words often agree with the rationale of a human cybersecurity expert, making the explanations comprehensible to end users.
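The abstract notes that a severity score is computed deterministically from the CVSS vector, which is why predicting the vector's metrics suffices to recover the score. This excerpt does not state which CVSS version the paper targets; as an illustrative sketch, the following implements the base-score equations of the public CVSS v3.1 specification (metric weights, impact/exploitability sub-scores, and the spec's round-up rule):

```python
import math

# Metric weights from the CVSS v3.1 specification.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {  # Privileges Required weights depend on Scope
        "U": {"N": 0.85, "L": 0.62, "H": 0.27},  # Scope unchanged
        "C": {"N": 0.85, "L": 0.68, "H": 0.50},  # Scope changed
    },
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},     # Confidentiality/Integrity/Availability
}

def _roundup(x: float) -> float:
    """Round up to one decimal place, as defined in the v3.1 spec."""
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector: str) -> float:
    """Compute the CVSS v3.1 base score from a vector string."""
    parts = dict(p.split(":") for p in vector.split("/") if ":" in p)
    scope_changed = parts["S"] == "C"
    # Impact sub-score from the C/I/A metrics.
    iss = 1 - (1 - WEIGHTS["CIA"][parts["C"]]) \
            * (1 - WEIGHTS["CIA"][parts["I"]]) \
            * (1 - WEIGHTS["CIA"][parts["A"]])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    # Exploitability sub-score from the attack-related metrics.
    exploitability = 8.22 * WEIGHTS["AV"][parts["AV"]] \
                          * WEIGHTS["AC"][parts["AC"]] \
                          * WEIGHTS["PR"]["C" if scope_changed else "U"][parts["PR"]] \
                          * WEIGHTS["UI"][parts["UI"]]
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope_changed:
        total *= 1.08
    return _roundup(min(total, 10.0))

# A fully critical vector maps to the well-known 9.8 score.
print(base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))  # → 9.8
```

Because this mapping is fixed and public, any error in the predicted severity score traces back entirely to errors in the per-metric classifications.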
