A New Approach for Machine Learning Security Risk Assessment – Work in Progress
We propose a new security risk assessment approach for Machine Learning-based AI systems (ML systems). Assessing the security risks of an ML system requires expertise in ML security, so ML system developers, who may not be familiar with ML security, cannot assess the security risks of their own systems. With our approach, an ML system developer can easily assess the security risks of an ML system: in performing the assessment, the developer only has to answer yes/no questions about the system's specification. In a trial, we confirmed that our approach works correctly.

CCS CONCEPTS • Security and privacy; • Computing methodologies → Artificial intelligence; Machine learning;
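The yes/no-questionnaire idea can be sketched as a simple rule table that maps answers about the system's specification to the attack classes they can enable. The following Python sketch is purely illustrative: the question set, the risk mapping, and all identifiers are assumptions for this example, not the authors' actual checklist.

```python
# Hypothetical sketch of a questionnaire-based ML security risk screen.
# The questions and the risk rules below are illustrative assumptions,
# not the actual question set proposed in the paper.

QUESTIONS = {
    "exposes_prediction_api": "Is the model's prediction API externally accessible?",
    "returns_confidence_scores": "Does the API return confidence scores with predictions?",
    "trains_on_user_data": "Is the model trained or retrained on user-supplied data?",
    "uses_sensitive_training_data": "Does the training set contain personal or sensitive records?",
}

# Each well-known attack class is flagged only if all of its enabling
# specification properties are answered "yes".
RISK_RULES = {
    "model stealing": ["exposes_prediction_api"],
    "model inversion": ["exposes_prediction_api", "returns_confidence_scores"],
    "membership inference": ["exposes_prediction_api", "uses_sensitive_training_data"],
    "data poisoning": ["trains_on_user_data"],
}

def assess(answers: dict) -> list:
    """Return the attack classes whose enabling conditions all hold."""
    return [risk for risk, conditions in RISK_RULES.items()
            if all(answers.get(c, False) for c in conditions)]

if __name__ == "__main__":
    # Example: a public API that returns confidence scores and was
    # trained on sensitive data, but is not retrained on user input.
    answers = {
        "exposes_prediction_api": True,
        "returns_confidence_scores": True,
        "trains_on_user_data": False,
        "uses_sensitive_training_data": True,
    }
    print(assess(answers))
    # → ['model stealing', 'model inversion', 'membership inference']
```

The design choice here is that the developer supplies only specification-level facts; the ML-security expertise is encoded once, in the rule table, which is the division of labor the abstract describes.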