Blockchain for explainable and trustworthy artificial intelligence

Growing computational power and the proliferation of big data have enabled Artificial Intelligence (AI) to achieve massive adoption and applicability across many fields. However, the lack of explanation for the decisions made by today's AI algorithms is a major drawback in critical decision-making systems. For example, deep learning offers neither control over nor insight into its internal processes or outputs. More importantly, current black-box AI implementations are subject to bias and adversarial attacks that may poison the learning or inference process. Explainable AI (XAI) is an emerging class of AI algorithms that provide explanations for their decisions. In this paper, we propose a framework for achieving more trustworthy and explainable AI by leveraging key features of blockchain, smart contracts, trusted oracles, and decentralized storage. We specify a framework for complex AI systems in which decision outcomes are reached through decentralized consensus among multiple AI and XAI predictors. The paper discusses how our proposed framework can be utilized in key application areas with practical use cases.
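As a rough illustration of the consensus step described above, the sketch below shows how a coordinator might aggregate predictions and explanation pointers from multiple registered predictors. This is a minimal off-chain sketch of logic a smart contract could enforce; the class names, the strict-majority rule, the quorum parameter, and the use of IPFS content identifiers for stored explanations are all illustrative assumptions, not the paper's actual specification.

```python
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Prediction:
    """One response from a registered AI or XAI predictor (hypothetical schema)."""
    predictor_id: str      # identity/address of the predictor oracle
    label: str             # the predicted decision outcome
    explanation_cid: str   # pointer (e.g., an IPFS CID) to the stored explanation


def reach_consensus(predictions: list[Prediction], quorum: int) -> tuple[str, list[str]]:
    """Return the majority decision and the explanation pointers backing it.

    Raises ValueError if fewer than `quorum` predictors responded or if no
    label wins a strict majority of the received votes.
    """
    if len(predictions) < quorum:
        raise ValueError("not enough predictor responses to reach quorum")
    votes = Counter(p.label for p in predictions)
    label, count = votes.most_common(1)[0]
    if count <= len(predictions) // 2:
        raise ValueError("no strict majority among predictors")
    supporting = [p.explanation_cid for p in predictions if p.label == label]
    return label, supporting


if __name__ == "__main__":
    responses = [
        Prediction("oracle-1", "approve", "QmExplA"),
        Prediction("oracle-2", "approve", "QmExplB"),
        Prediction("oracle-3", "reject", "QmExplC"),
    ]
    decision, evidence = reach_consensus(responses, quorum=3)
    print(decision, evidence)  # approve ['QmExplA', 'QmExplB']
```

In the framework's setting, this voting logic would live on-chain in a smart contract, while the explanation artifacts themselves would be kept off-chain in decentralized storage, with only their content hashes recorded on-chain for auditability.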
