Trusting artificial intelligence in cybersecurity is a double-edged sword

Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from both the private and public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to US$34.8 billion by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users' trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but it can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of 'reliable AI' for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.

Developing standards and certification procedures will be important, and these should involve continuous monitoring and assessment of threats. The focus should be on the reliability of AI-based systems, rather than on eliciting users' trust in AI.
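To make the attack surface concrete, the sketch below shows one well-studied form of attack on AI applications themselves: an evasion attack using the fast gradient sign method (FGSM), which perturbs an input in the direction that most increases the model's loss. This is a minimal illustration, not a method from the article; the toy "detector", the random data and the epsilon value are all illustrative assumptions.

    # Minimal FGSM sketch: small input perturbations can change a
    # classifier's output. The model and data are toy stand-ins.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy "malware detector": a linear classifier over 20 features.
    model = nn.Sequential(nn.Linear(20, 2))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 20, requires_grad=True)   # an input sample
    y = torch.tensor([0])                        # its true label

    # FGSM: step the input in the sign of the loss gradient,
    # bounded by epsilon, to push the model toward a wrong output.
    loss = loss_fn(model(x), y)
    loss.backward()
    epsilon = 0.5
    x_adv = x + epsilon * x.grad.sign()

    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

Because the perturbation is bounded by epsilon, it can remain small enough to look benign to a human analyst while still altering the model's decision, which is what makes such attacks a severe security threat.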

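In contrast with eliciting trust, a reliability-oriented deployment keeps checking the system rather than delegating to it. The sketch below is a hypothetical runtime monitor consistent with that stance: it flags individual low-confidence predictions and a degraded mean confidence as triggers for reassessment. The confidence threshold and the alert hook are assumptions for illustration, not a procedure described in the article.

    # Hypothetical runtime reliability check, assuming a deployed
    # classifier exposes a confidence score per prediction.
    import statistics

    CONF_THRESHOLD = 0.7   # assumed minimum acceptable confidence

    def alert(message: str) -> None:
        # Placeholder escalation: in practice this might page an
        # analyst or quarantine the input for offline review.
        print("ALERT:", message)

    def monitor(confidences: list[float]) -> None:
        low = [c for c in confidences if c < CONF_THRESHOLD]
        if low:
            alert(f"{len(low)} prediction(s) below confidence threshold")
        # A sustained drop in mean confidence can indicate input drift
        # or an ongoing evasion attempt and warrants reassessment.
        if statistics.mean(confidences) < CONF_THRESHOLD:
            alert("mean confidence degraded; trigger model reassessment")

    monitor([0.95, 0.91, 0.42, 0.88])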