Securing Distributed Gradient Descent in High Dimensional Statistical Learning

We consider unreliable distributed learning systems wherein the training data is kept confidential by external workers, and the learner has to interact closely with those workers to train a model. In particular, we assume that there exists a system adversary that can adaptively compromise some of the workers; the compromised workers deviate from their designed local specifications and send out arbitrarily malicious messages.
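
To make the threat model concrete, the following is a minimal sketch (not the paper's specific algorithm) of one such distributed gradient descent loop under Byzantine workers, using coordinate-wise median aggregation as an illustrative robust defense. The synthetic least-squares setup, the number of workers `m`, the number of compromised workers `q`, and the step size are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def honest_gradient(w, X, y):
    """Least-squares gradient (1/n) X^T (X w - y) on a worker's local data."""
    return X.T @ (X @ w - y) / len(y)

# Hypothetical setup: m workers hold private local data; q are compromised.
m, q, dim, n_local = 10, 3, 5, 200
w_star = rng.normal(size=dim)                  # ground-truth model
workers = []
for _ in range(m):
    X = rng.normal(size=(n_local, dim))
    y = X @ w_star + 0.1 * rng.normal(size=n_local)
    workers.append((X, y))

w = np.zeros(dim)
for t in range(200):
    msgs = []
    for i, (X, y) in enumerate(workers):
        if i < q:
            # Compromised workers may send arbitrary messages;
            # here they report large random vectors.
            msgs.append(rng.normal(scale=1e3, size=dim))
        else:
            msgs.append(honest_gradient(w, X, y))
    # Coordinate-wise median tolerates q < m/2 malicious messages,
    # whereas plain averaging is ruined by a single bad worker.
    g = np.median(np.stack(msgs), axis=0)
    w -= 0.1 * g

print("error of robust iterate:", np.linalg.norm(w - w_star))
```

Replacing the median with a plain average in this sketch lets even one compromised worker drive the iterate arbitrarily far from `w_star`, which is exactly the failure mode the adversary model above captures.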