AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines

With Artificial Intelligence (AI) entering our lives in novel ways, both known and unknown to us, existing ethical issues associated with AI are amplified and new ones arise. Much attention is focused on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions, especially morally salient decisions. However, some applications of AI that are undoubtedly beneficial to society rely upon these black boxes. Rather than requiring algorithms to be transparent, we should focus on constraining AI, and the machines it powers, within microenvironments, both physical and virtual, that allow these machines to realize their function whilst preventing harm to humans. In the field of robotics this is called ‘envelopment’. To put an ‘envelope’ around AI-powered machines, however, we need to know some basic things about them that we are often in the dark about: their training data, inputs, functions, outputs, and boundaries. This knowledge is a necessary first step towards the envelopment of AI-powered machines. Only with this knowledge can we responsibly regulate, use, and live in a world populated by these machines.