Leveraging Federated Learning & Blockchain to counter Adversarial Attacks in Incremental Learning

Data labelling in IoT applications is costly, and training a supervised Machine Learning (ML) algorithm is time consuming. Hence, a human oracle is required to gradually annotate data patterns at run-time to improve the model's learning behavior, through an active learning strategy in the form of a User Feedback Process (UFP). However, the UFP may contain malicious content that leaves the learning model vulnerable to adversarial attacks, in particular manipulative attacks. We argue in this position paper that there are instances during incremental learning where the local data model may produce wrong output if retraining is done on data that has already been subjected to an adversarial attack. We propose a Distributed Interactive Secure Federated Learning (DISFL) framework that utilizes the UFP at the edge and fog nodes, thereby increasing the amount of labelled personal local data available to the ML model during incremental training. Furthermore, the DISFL framework addresses data privacy by leveraging federated learning, where only the model's knowledge is moved to a global unit, herein referred to as the Collective Intelligence Node (CIN). A blockchain then allows the creation of an immutable chain of the data to be trained during incremental learning, which in its entirety is tamper-free, increasing trust between parties. With a degree of certainty, this approach counters adversarial manipulation during incremental learning in an active learning context, strengthens data privacy, and reduces computation costs.
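To make the privacy argument concrete, the sketch below illustrates the federated-learning step assumed by such a framework: edge/fog nodes train on their UFP-labelled data locally, and only model weights, never the raw data, are sent to the aggregating node. This is a minimal FedAvg-style illustration; the function and variable names (`local_update`, `cin_aggregate`) are our own and not part of the DISFL specification.

```python
def local_update(weights, gradient, lr=0.1):
    """One local training step on an edge/fog node (raw data stays local)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def cin_aggregate(client_weights):
    """The global unit (CIN in DISFL terms) averages the received
    weight vectors, FedAvg-style; it never sees the local data."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Illustrative example: three edge nodes refine a shared 2-parameter model.
global_model = [0.0, 0.0]
local_gradients = [[0.2, -0.4], [0.1, -0.2], [0.3, -0.6]]  # from local UFP data

updates = [local_update(global_model, g) for g in local_gradients]
global_model = cin_aggregate(updates)
print(global_model)  # averaged weights; only these left the edge nodes
```

In a DISFL-like setting, each such weight update could additionally be hashed into a blockchain ledger before aggregation, so that a poisoned update injected through a manipulated UFP is detectable against the immutable history.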