Student research highlight: Secure and resilient distributed machine learning under adversarial environments

Machine learning algorithms such as support vector machines (SVMs), neural networks, and decision trees (DTs) are widely used in data processing for estimation and detection; they classify samples using a model built from training data. These methods, however, rest on the assumption that training and testing samples are drawn from the same natural distribution. An attacker who can generate or modify training data violates this assumption and can induce misclassification or misestimation. For example, a spam filter will fail to recognize spam messages after being trained on crafted data supplied by an attacker [1].
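The spam-filter scenario can be illustrated with a small poisoning sketch. The example below is a toy illustration, not any specific attack from the literature: it uses synthetic 2-D features and a simple nearest-centroid classifier (standing in for the SVMs, neural networks, and DTs mentioned above), and all data, class labels, and quantities are invented for the demonstration. The attacker injects spam-like training points mislabeled as "ham", which drags the ham class model into the spam region and lowers the filter's spam recall at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spam filter": class 0 = ham, class 1 = spam, in a 2-D feature
# space (hypothetical synthetic data standing in for message features).
X_ham = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X_spam = rng.normal(loc=2.0, scale=1.0, size=(100, 2))
X = np.vstack([X_ham, X_spam])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    """Nearest-centroid classifier: store one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    """Assign each sample to the class with the closest centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1)
                      for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

clean_model = train_centroids(X, y)

# Poisoning: the attacker injects 300 spam-like training points that are
# deliberately labeled "ham", dragging the ham centroid into the spam
# region so that genuine spam is later misclassified as ham.
X_crafted = rng.normal(loc=2.0, scale=0.3, size=(300, 2))
X_poisoned = np.vstack([X, X_crafted])
y_poisoned = np.concatenate([y, np.zeros(300, dtype=int)])
poisoned_model = train_centroids(X_poisoned, y_poisoned)

# Fresh test spam drawn from the true spam distribution.
X_test_spam = rng.normal(loc=2.0, scale=1.0, size=(200, 2))
recall_clean = (predict(clean_model, X_test_spam) == 1).mean()
recall_poisoned = (predict(poisoned_model, X_test_spam) == 1).mean()
print(f"spam recall, clean model:    {recall_clean:.2f}")
print(f"spam recall, poisoned model: {recall_poisoned:.2f}")
```

Even though both models see the same legitimate data, the poisoned model's spam recall drops noticeably, showing how control over a fraction of the training set is enough to break the same-distribution assumption.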