Privacy-Preserving Verifiable Collaborative Learning with Chain Aggregation

As many countries have enacted laws protecting user data privacy, how to use user data lawfully has become a pressing question. With the emergence of collaborative learning, also known as federated learning, multiple participants can jointly train a common, robust, and secure machine learning model, addressing critical data-sharing issues such as privacy, security, and access control. Unfortunately, existing research shows that collaborative learning is not as secure as claimed, and gradient leakage remains a key problem. To address it, a collaborative learning scheme based on chained secure multi-party computation was proposed recently. However, two security issues in that scheme remain unsolved. First, if semi-honest users collude, an honest user's gradient can still leak. Second, if one of the users fails, the scheme cannot guarantee the correctness of the aggregation result. In this paper, we propose a privacy-preserving and verifiable chained collaborative learning scheme that solves both problems. First, we design a gradient encryption method that eliminates the gradient leakage. Second, we construct a verification method based on homomorphic hashing, which ensures the correctness of the aggregation result and, at the same time, can trace users who aggregate incorrectly. Compared with other solutions, our scheme is more efficient.
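
To make the collusion issue concrete, the sketch below models the chained aggregation pattern that the prior scheme builds on: a masked running sum is passed from user to user, and the server removes the mask at the end. It is a simplified illustration, not the exact prior protocol; the single-mask design and the NumPy setup are assumptions made for exposition. The final two lines show the collusion attack named in the abstract: two neighbors in the chain subtract the values they observe and recover the gradient of the honest user between them.

```python
import numpy as np

rng = np.random.default_rng(0)

def chain_aggregate(gradients, mask):
    """Chained aggregation: a running sum, seeded with a random mask,
    is passed along the chain; the server subtracts the mask at the end."""
    running = mask.copy()
    transcript = []                       # the value each user sends onward
    for g in gradients:
        running = running + g             # each user adds its local gradient
        transcript.append(running.copy())
    return transcript, running - mask     # server removes the mask

# Three users with 4-dimensional gradients (toy sizes).
grads = [rng.normal(size=4) for _ in range(3)]
mask = rng.normal(size=4)
transcript, total = chain_aggregate(grads, mask)
assert np.allclose(total, sum(grads))     # aggregation itself is correct

# Collusion attack: user 1 knows transcript[0] (what it sent), and user 3
# receives transcript[1]; their difference is exactly user 2's gradient.
leaked = transcript[1] - transcript[0]
assert np.allclose(leaked, grads[1])
```

The gradient encryption method proposed in this paper is designed precisely so that this kind of difference attack by colluding neighbors no longer reveals an honest user's gradient.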
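The verification side rests on a homomorphic hash, i.e., a hash H satisfying H(x + y) = H(x) * H(y), so that published per-user hashes let anyone check every partial aggregate without seeing the gradients. The toy sketch below uses the discrete-log hash H(x) = g^x mod p; the small prime, the generator, and the scalar (quantized) gradients are illustrative assumptions, not the parameters of this paper's construction.

```python
# Toy homomorphic hash: H(x) = pow(g, x, p), so H(x + y) == H(x) * H(y) mod p.
# Real deployments use large primes or elliptic curves; these parameters
# are purely illustrative.
P = 2_147_483_647          # Mersenne prime 2**31 - 1
G = 16807                  # a primitive root mod P

def hhash(x: int) -> int:
    return pow(G, x % (P - 1), P)

# Each user publishes the hash of its (quantized) gradient.
grads = [42, -7, 19]                       # toy scalar gradients
hashes = [hhash(g) for g in grads]

# Honest chained aggregation: partial sums s_k = g_1 + ... + g_k.
partials, s = [], 0
for g in grads:
    s += g
    partials.append(s)

# Verification: the hash of each partial sum must equal the running
# product of the individual hashes.
expected = 1
for p_k, h_k in zip(partials, hashes):
    expected = (expected * h_k) % P
    assert hhash(p_k) == expected          # holds for honest aggregation

# Tracing: if user 2 adds a wrong value, the check first fails at step 2,
# pinpointing the faulty aggregator.
bad_partials = [grads[0], grads[0] + 999, grads[0] + 999 + grads[2]]
expected = 1
for k, (p_k, h_k) in enumerate(zip(bad_partials, hashes)):
    expected = (expected * h_k) % P
    if hhash(p_k) != expected:
        print(f"aggregation error first detected at user {k + 1}")
        break
```

Because the check is applied link by link, a verifier can walk the chain and stop at the first failing step, which is what makes it possible to trace the user who aggregated incorrectly rather than merely reject the final result.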