Monitoring AI Services for Misuse

Given the surge in interest in AI, we now see the emergence of Artificial Intelligence as a Service (AIaaS). AIaaS entails service providers offering remote access to ML models and capabilities 'at arm's length', through networked APIs. Such services will grow in popularity, as they enable access to state-of-the-art ML capabilities 'on demand' and 'out of the box', at low cost and without requiring training data or ML expertise. However, there is much public concern regarding AI, and AIaaS raises particular considerations, given the potential for such services to underpin and drive problematic, inappropriate, undesirable, controversial, or possibly even illegal applications. A key way forward is for service providers to monitor their AI services to identify potential situations of problematic use. Towards this, we elaborate the potential for 'misuse indicators' as a mechanism for uncovering patterns of usage behaviour warranting consideration or further investigation. We introduce a taxonomy for describing these indicators and their contextual considerations, and use exemplars to demonstrate the feasibility of analysing AIaaS usage to highlight situations of possible concern. We also seek to draw more attention to AI services and the issues they raise, given AIaaS's increasing prominence and the general calls for more responsible and accountable use of AI.
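To make the idea of a 'misuse indicator' concrete, the following is a minimal sketch, not the paper's actual method: it assumes a provider holds per-request usage logs and defines one simple indicator, flagging any client whose call volume to a single endpoint exceeds a threshold. The client names, endpoint names, and threshold are all hypothetical.

```python
from collections import defaultdict

# Hypothetical usage records: (client_id, api_endpoint) pairs drawn from a
# provider's request logs. All identifiers here are illustrative only.
usage_log = [
    ("client_a", "face_detect"), ("client_a", "face_detect"),
    ("client_a", "face_match"),  ("client_b", "sentiment"),
    ("client_c", "face_detect"), ("client_c", "face_detect"),
    ("client_c", "face_detect"), ("client_c", "face_detect"),
    ("client_c", "face_detect"), ("client_c", "face_detect"),
]

def flag_misuse_indicators(log, volume_threshold=5):
    """One simple 'misuse indicator' over usage behaviour: flag clients
    whose call volume to a single endpoint exceeds a threshold.
    Flagged clients warrant further investigation, not automatic sanction."""
    counts = defaultdict(int)
    for client, endpoint in log:
        counts[(client, endpoint)] += 1
    return sorted({c for (c, e), n in counts.items() if n > volume_threshold})

print(flag_misuse_indicators(usage_log))  # → ['client_c']
```

In practice an indicator would likely combine several behavioural signals (volume, query diversity, temporal patterns) rather than a single count, but the shape is the same: a function from usage logs to a set of cases warranting human consideration.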
