How should we regulate artificial intelligence?

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations’.
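To make the ex post transparency claim concrete, the sketch below shows one retrospective, model-agnostic analysis technique: a counterfactual explanation, which reports the smallest change to an input that would have flipped a black-box model's decision. This is an illustrative sketch only, not the article's own method; the model, data, and the random-search strategy are all hypothetical stand-ins.

```python
# Illustrative sketch of "ex post transparency": explaining a single
# black-box decision after the fact via a counterfactual explanation
# (the nearest input change that flips the outcome). The model and
# data here are hypothetical stand-ins for a deployed decision system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for a deployed decision-making model (e.g., loan approval).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def counterfactual(model, x, target, n_trials=5000, scale=1.0, seed=0):
    """Randomly search for the smallest perturbation of x that makes
    the model output `target` -- a minimal, model-agnostic approach
    that never needs to open the black box."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_trials):
        candidate = x + rng.normal(0.0, scale, size=x.shape)
        if model.predict(candidate.reshape(1, -1))[0] == target:
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best

x = X[0]                                   # a past decision to explain
decision = model.predict(x.reshape(1, -1))[0]
cf = counterfactual(model, x, target=1 - decision)
if cf is not None:
    print("Decision:", decision)
    print("Feature changes that would have flipped it:", np.round(cf - x, 2))
```

An explanation of this kind may be enough to establish what went wrong and compensate the victim of an incorrect decision, but it offers no ex ante guarantee about how the system will behave on future inputs, which is why the article treats ex ante transparency as the harder, and more rarely justified, regulatory demand.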
