A Lipschitz-Shapley Explainable Defense Methodology Against Adversarial Attacks

Every learning algorithm has a specific bias. This may stem from the choice of its hyperparameters, from the characteristics of its classification methodology, or even from the way the considered information is represented. As a result, Machine Learning models are vulnerable to specialized attacks. Moreover, training datasets are not always an accurate image of the real world: their selection process, and the assumption that they share the same distribution as all unknown cases, introduce another level of bias. Global and Local Interpretability (GLI) is a very important process that supports the determination of the right architectures to counter Adversarial Attacks (ADA). It contributes to a holistic view of the intelligent model, through which we can determine the most important features, understand the way decisions are made, and examine the interactions between the involved features. This research paper introduces an innovative hybrid Lipschitz-Shapley approach for Explainable Defense Against Adversarial Attacks. The introduced methodology employs the Lipschitz constant and tracks its evolution during the training process of the intelligent model, while the use of Shapley values offers clear explanations for the specific decisions made by the model.
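To make the two ingredients of the approach concrete, the sketch below illustrates (a) an empirical lower bound on a model's local Lipschitz constant and (b) the exact Shapley value formula on a toy cooperative game. Both the sampling scheme and the toy functions are illustrative assumptions for exposition, not the paper's actual models, data, or procedure.

```python
import itertools
import math
import numpy as np

def empirical_lipschitz(f, x, n_pairs=1000, radius=0.1, seed=0):
    """Lower-bound the local Lipschitz constant of f near x by the
    largest ratio ||f(a) - f(b)|| / ||a - b|| over random point pairs.
    (Illustrative estimator, not the paper's exact procedure.)"""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_pairs):
        a = x + rng.uniform(-radius, radius, size=x.shape)
        b = x + rng.uniform(-radius, radius, size=x.shape)
        dist = np.linalg.norm(a - b)
        if dist > 1e-12:
            best = max(best, np.linalg.norm(f(a) - f(b)) / dist)
    return best

def shapley_values(players, value_fn):
    """Exact Shapley values: each player's marginal contribution,
    averaged over all subsets with the standard combinatorial weights."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(n):
            for subset in itertools.combinations(others, k):
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                phi[p] += weight * (value_fn(set(subset) | {p})
                                    - value_fn(set(subset)))
    return phi

# A linear map x -> W x has Lipschitz constant equal to the spectral
# norm of W (here 3.0), so the estimate should approach it from below.
W = np.array([[3.0, 0.0], [0.0, 1.0]])
L_hat = empirical_lipschitz(lambda v: W @ v, np.zeros(2))

# For a purely additive game v(S) = sum of feature weights, each
# feature's Shapley value equals its own weight.
weights = {"f1": 2.0, "f2": 5.0, "f3": 1.0}
phi = shapley_values(list(weights), lambda S: sum(weights[p] for p in S))
```

In a training-time defense along the lines the abstract describes, the Lipschitz estimate would be recomputed per epoch to track the model's growing sensitivity to perturbations, while the Shapley attribution (in practice via an approximation such as sampling, since the exact formula is exponential in the number of features) explains which inputs drive individual decisions.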
