Monitoring Misuse for Accountable 'Artificial Intelligence as a Service'

AI is increasingly offered 'as a service' (AIaaS): service providers give customers access to pre-built AI models and services for tasks such as object recognition, text translation, text-to-voice conversion, and facial recognition. These offerings enable customers to easily integrate a range of powerful AI-driven capabilities into their applications. Customers access the models through the provider's APIs, sending data to which the models are applied and receiving the results in return. However, there are many situations in which the use of AI can be problematic. AIaaS services typically represent generic functionality, available 'at a click'. Providers may therefore, for reasons of reputation or responsibility, seek to ensure that the AIaaS services they offer are used by customers for 'appropriate' purposes. This paper introduces and explores the concept of AIaaS providers monitoring customer usage to uncover situations of possible service misuse. Illustrated through topical examples, we consider the technical usage patterns that could signal situations warranting scrutiny, and we raise some of the legal and technical challenges of monitoring for misuse. In all, by introducing this concept, we indicate a potential area for further inquiry from a range of perspectives.
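
To make the idea concrete, below is a minimal Python sketch of how a provider-side monitor might flag usage patterns warranting scrutiny. All names here (ApiCall, MisuseMonitor, the "face-recognition" service label) and the threshold-based signal are illustrative assumptions, not the paper's method or any real provider's API; actual monitoring would combine many weaker signals with human review.

    from collections import defaultdict, deque
    from dataclasses import dataclass
    import time

    # Hypothetical API call record; field names are illustrative,
    # not a real provider's request schema.
    @dataclass
    class ApiCall:
        customer_id: str
        service: str       # e.g. "face-recognition", "text-to-speech"
        timestamp: float
        n_inputs: int      # number of items (images, strings, ...) in the request

    class MisuseMonitor:
        """Flags customers whose usage pattern may warrant human review.

        A minimal sketch: the thresholds and the single volume-based
        signal are placeholder assumptions, not a definitive design.
        """

        def __init__(self, window_s: float = 3600.0, call_threshold: int = 10_000):
            self.window_s = window_s
            self.call_threshold = call_threshold
            # (customer_id, service) -> recent call timestamps
            self._calls = defaultdict(deque)

        def record(self, call: ApiCall) -> bool:
            q = self._calls[(call.customer_id, call.service)]
            q.append(call.timestamp)
            # Drop calls that have fallen out of the sliding window.
            while q and call.timestamp - q[0] > self.window_s:
                q.popleft()
            # Sustained high-volume use of a sensitive service is one
            # coarse example of a pattern that could warrant scrutiny.
            return call.service == "face-recognition" and len(q) > self.call_threshold

    monitor = MisuseMonitor()
    flagged = monitor.record(ApiCall("cust-42", "face-recognition", time.time(), 1))

The sliding-window volume count is only one coarse signal; patterns such as repeated queries against the same identities, or abrupt shifts in the character of submitted inputs, could serve a similar role, and any flag would mark a situation for review rather than establish misuse.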
