SecureGBM: Secure Multi-Party Gradient Boosting

Federated machine learning systems have been widely used to facilitate joint data analytics across distributed datasets owned by different parties that do not trust each other. In this paper, we propose SecureGBM, a novel Gradient Boosting Machines (GBM) framework built on a multi-party computation model based on semi-homomorphic encryption, in which the involved parties jointly obtain a shared gradient boosting model while protecting their own data from potential privacy leakage and inferential identification. More specifically, our work focuses on a “dual-party” secure learning scenario: each of the two parties owns a unique view (i.e., a distinct set of attributes or features) of the same group of samples, while only one party owns the labels, and neither feature nor label data may be shared with the other party. To achieve this goal, we first extend LightGBM, a well-known implementation of tree-based GBM, by covering its key training and inference operations with SEAL homomorphic encryption schemes. However, the performance of this re-implementation is significantly bottlenecked by the explosive inflation of the communication payloads, as the ciphertexts grow with the length of the plaintexts. We therefore propose stochastic approximation techniques that reduce the communication payloads while accelerating the overall training procedure in a statistical manner. Our experiments on real-world data show that SecureGBM secures the communication and computation of the LightGBM training and inference procedures for both parties while losing less than 3% AUC, using the same number of boosting iterations, on a wide range of benchmark datasets. Compared to LightGBM, SecureGBM incurs a $3\times \sim 64\times$ slowdown per training iteration, but it becomes relatively more efficient as the scale of the training dataset increases (i.e., the larger the training set, the lower the slowdown ratio).
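For context, the split criterion that the label-holding party ultimately needs to evaluate is the standard second-order gain used by LightGBM/XGBoost-style GBMs (this is the textbook formula, not a SecureGBM-specific variant). Only the aggregated gradient sums $G$ and hessian sums $H$ on each side of a candidate split enter it, which is what makes an encrypted-aggregate protocol possible:

$$\mathrm{Gain}(L,R) = \frac{1}{2}\left[\frac{G_L^2}{H_L+\lambda} + \frac{G_R^2}{H_R+\lambda} - \frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\right] - \gamma,$$

where $G_{L/R} = \sum_{i \in L/R} g_i$ and $H_{L/R} = \sum_{i \in L/R} h_i$ are the sums of the first- and second-order gradients of the loss over the samples falling left/right of the split, and $\lambda, \gamma$ are regularization constants.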
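The paper covers LightGBM's key operations with SEAL; since the exact SEAL calls are not given in the abstract, the sketch below instead uses the additively homomorphic Paillier scheme from the `phe` package (python-paillier) to illustrate the protocol shape described above, as a minimal reading of the dual-party setting: the label party encrypts per-sample gradients, the feature party aggregates ciphertexts into feature-histogram bins (additions only, so additive HE suffices), and only the label party decrypts the per-bin aggregates. All function names here are hypothetical, not the authors' API.

```python
# Minimal sketch of dual-party encrypted histogram aggregation.
# The paper builds on SEAL; this uses Paillier (pip install phe)
# purely for illustration -- the protocol shape is the same.
from phe import paillier
import numpy as np

# --- Party A (holds the labels): generate keys, encrypt per-sample gradients.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

def encrypt_gradients(grad, hess):
    """Party A encrypts first/second-order gradients before sending them."""
    enc_g = [public_key.encrypt(float(g)) for g in grad]
    enc_h = [public_key.encrypt(float(h)) for h in hess]
    return enc_g, enc_h

# --- Party B (holds the features, sees only the public key):
#     sum ciphertexts into feature-histogram bins without decrypting.
def encrypted_histogram(enc_g, enc_h, bin_ids, n_bins):
    g_bins = [public_key.encrypt(0.0) for _ in range(n_bins)]
    h_bins = [public_key.encrypt(0.0) for _ in range(n_bins)]
    for i, b in enumerate(bin_ids):
        g_bins[b] += enc_g[i]   # homomorphic addition on ciphertexts
        h_bins[b] += enc_h[i]
    return g_bins, h_bins

# --- Party A: decrypt only the aggregates and score candidate splits
#     (the constant 1/2 factor and gamma are dropped; argmax is unchanged).
def best_split_gain(g_bins, h_bins, lam=1.0):
    G = np.array([private_key.decrypt(c) for c in g_bins])
    H = np.array([private_key.decrypt(c) for c in h_bins])
    G_tot, H_tot = G.sum(), H.sum()
    GL, HL = np.cumsum(G)[:-1], np.cumsum(H)[:-1]   # left prefix sums
    GR, HR = G_tot - GL, H_tot - HL
    gains = GL**2 / (HL + lam) + GR**2 / (HR + lam) - G_tot**2 / (H_tot + lam)
    return int(np.argmax(gains)), float(np.max(gains))
```

The key design point is that Party B never sees plaintext gradients (and hence cannot infer labels from them), while Party A never sees raw feature values: only per-bin aggregates are ever decrypted.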
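The abstract describes the stochastic approximation only at a high level; one plausible minimal reading, in the spirit of Friedman's stochastic gradient boosting, is per-iteration row subsampling, which shrinks the number of ciphertexts shipped per round roughly in proportion to the sampling rate. A hypothetical sketch:

```python
import numpy as np

def minibatch_rows(n_rows, rate=0.1, seed=0):
    """Hypothetical stochastic-approximation step: encrypt and transmit
    gradients only for a random fraction of rows each boosting iteration,
    cutting the ciphertext payload roughly in proportion to `rate`."""
    rng = np.random.default_rng(seed)
    return rng.choice(n_rows, size=max(1, int(rate * n_rows)), replace=False)
```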
