Some ethical and legal consequences of the application of artificial intelligence in the field of medicine

Artificial intelligence platforms are driven by sophisticated algorithms which have been incorporated into A.I. robots. These algorithms are also programmed to be self-teaching, a capability which has produced 'super intelligent' systems, the best current example of which is IBM's Watson. Watson is being applied to an increasing variety of tasks in the medical field, tasks which had formerly been the exclusive preserve of doctors. A.I. is replacing doctors in fields such as the interpretation of X-rays and scans and the diagnosis of patients' symptoms, on what can be described as a 'consulting physician' basis. A.I. is also being used in psychology, where robots are programmed to speak to patients and counsel them. Robots have also been designed to perform delicate surgical techniques. One can therefore confidently predict that the role of robots in medicine will increase exponentially in the future. Because medicine is not an exact science, it is possible that Watson, to use one example of an existing system, can make errors which result in injury to patients. The injured patient should then be entitled to sue for damages, as they would have been able to do had the injury been caused by a human doctor. The problem which arises in this regard, however, is that the law of torts has developed to regulate the actions of natural persons. Watson and similar A.I. platforms are not natural persons. This means that a patient seeking redress cannot rely on the existing law of medical negligence or malpractice to recover damages. It is therefore imperative that appropriate legislation be passed to bridge this gap and allow a patient to recover damages for injury resulting from the actions of an A.I. robot.

*Correspondence to: Michael Lupton, Professor, Bond University, Queensland, Australia, E-mail: mlupton@bond.edu.au

Received: June 19, 2018; Accepted: July 04, 2018; Published: July 09, 2018
Definition of A.I. and some applications

A.I. is usually defined as 'the capability of a computer program to perform tasks or reasoning processes that we usually associate with intelligence in a human being' [1]. Artificial intelligence is inextricably linked to the ever-increasing capabilities of algorithms. A.I. has been insidiously infiltrating our lives for a number of years, for instance in the form of the GPS built into or attached to motor cars: from its humble beginnings as an animated map, it has now evolved to the point where it can control, or 'drive', the motor car. Spam filters are based on A.I. The Google Translate service, which is now capable of translating to and from more than 70 languages, is the product of statistical machine learning, which in turn is embedded in A.I. The face-recognition technology employed for security purposes at airports and railway stations is also driven by A.I. The much-used iPhone app Siri, which understands us when we speak to it and mostly responds in an intelligent way, is based on A.I. algorithms developed to facilitate speech understanding. These are just a few examples of how A.I. is increasingly becoming an essential component of everyday life for the average citizen in developed countries. The examples above do not even include the so-called Internet of Things, which is linked to the application of cognitive computing capabilities [2]. Computing giant IBM continues to invest massive resources in applying its Watson cognitive computing system to finance, to personalised education and, of particular interest to this article, to the field of medicine [1]. Definitions of A.I. usually note that the field can be divided into so-called 'strong' A.I., which refers to the creation of computer systems whose behaviour at certain levels would be indistinguishable from that of humans, and 'weak' A.I.
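To make the phrase 'statistical machine learning' concrete, the sketch below shows the kind of probabilistic reasoning behind a spam filter: a minimal naive Bayes classifier. It is an illustrative sketch only; the toy training messages and word counts are invented for demonstration and bear no relation to any real filter.

```python
import math
from collections import Counter

# Tiny invented training corpus: a few spam and non-spam ("ham") messages.
spam = ["win money now", "free money offer", "win a free prize"]
ham = ["meeting at noon", "project notes attached", "lunch at noon"]


def train(docs):
    """Count word occurrences across all documents of one class."""
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts


spam_counts, ham_counts = train(spam), train(ham)
vocab = set(spam_counts) | set(ham_counts)


def log_score(msg, counts, class_size):
    """Log of class prior times Laplace-smoothed word likelihoods."""
    total_words = sum(counts.values())
    score = math.log(class_size / (len(spam) + len(ham)))  # class prior
    for w in msg.split():
        # Add-one smoothing so unseen words never zero out the score.
        score += math.log((counts[w] + 1) / (total_words + len(vocab)))
    return score


def classify(msg):
    s = log_score(msg, spam_counts, len(spam))
    h = log_score(msg, ham_counts, len(ham))
    return "spam" if s > h else "ham"


print(classify("free money"))            # -> spam
print(classify("notes for the meeting")) # -> ham
```

Real filters use far larger vocabularies and additional signals, but the statistical core, learning word likelihoods from labelled examples, is the same idea.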
The alternative to 'strong' A.I. is 'weak' A.I., which examines human cognition and decides how computing can be applied to assist and support our limited human cognition in multiple situations; modern fighter aircraft, for example, are filled with such 'weak' A.I. systems. 'Weak' A.I. systems help pilots to maximize the potential of their sophisticated aircraft, but they are not empowered to have an independent existence and decision-making process [3]. The goal with which A.I. systems in medicine have been created is to assist and support healthcare workers in executing their normal duties more efficiently, especially in those areas which require the manipulation of data and knowledge [4]. This characteristic allows such a system to evaluate an electronic medical record system on an ongoing basis. This constant analysis of the records enables it to alert the clinician when it detects patterns in clinical data which suggest significant changes in a patient's condition, or when it detects a probable contraindication to a planned treatment [5]. The fact that the algorithms in A.I. systems have the capacity to learn will lead to the discovery of new phenomena and thus the creation of new medical knowledge. On the other hand, A.I. is a form of automation that will reduce the number of current jobs in the medical field, and there is as yet no certainty that new jobs will be created in sufficient quantities to replace those lost [5].

Lupton M (2018) Some ethical and legal consequences of the application of artificial intelligence in the field of medicine. Trends Med 18(4): 2-7. doi: 10.15761/TiM.1000147

Major concerns arising from A.I.

Humans owe their dominant position in the world to their intelligence, not their speed or strength. Therefore, the development of A.I. systems that are 'super intelligent', in that they exceed the ability of the best human brains in practically every field, could impact drastically on humanity, and we should proceed down this road with care [6]. It is human intelligence which allowed man to develop tools and the technology to control our environment. It is therefore not illogical to deduce that a super intelligent system would likewise be capable of developing its own tools and technology for exerting control [7]. The danger is that such A.I. systems would not share our evolutionary history, and there is therefore no reason to believe that they would be driven by human characteristics such as a lust for power. Their default position is nonetheless likely to be to compete for and acquire resources currently used by humans, given that such systems are devoid of the human sense of fairness, compassion or conservatism [8]. An onus therefore rests on the creators of A.I. systems to construct and train them in such a way that they are wired to develop 'moral' and 'ethical' behaviour patterns, so as to ensure that these super intelligent A.I. systems have a positive rather than a negative impact on society; or, to use the terminology of A.I. scientists, that these systems are 'aligned with human interests'. To achieve this end, designers need to develop and employ agent architectures which avert the incentives of A.I. systems to manipulate and deceive their human operators, and which remain tolerant of programmer errors [9]. One example of the unexpected outcome of a task allocated to an A.I. agent is described by the authors Bird and Layzell. It involved a genetic algorithm which was tasked with making an oscillator. The algorithm instead repurposed the tracks of the printed circuit board on the motherboard to act as a makeshift radio, amplifying oscillating signals from nearby computers.
Had the algorithm been simulated on a virtual circuit board which possessed only the features that seemed relevant to the problem, it would have delivered an outcome closer to what its controllers had anticipated [4]. The above example clearly illustrates the ability of an A.I. agent operating in the real world to use resources in unexpected ways, for example by finding 'shortcuts' or 'cheats' not accounted for in a simplified model [10].

A.I. and medical diagnosis

The remarks above illustrate the scope and potential of A.I. systems. It is therefore not surprising that there is ample opportunity to employ A.I. systems in the field of medicine, some of which we discuss below.
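The record-monitoring behaviour described earlier, constant analysis of clinical data with alerts for significant changes or probable contraindications, can be sketched as a simple rule-based monitor. Everything in this sketch is hypothetical: the record field names, the creatinine threshold and the one-entry contraindication table are invented for illustration, and a real system would learn such patterns from data rather than hard-code them.

```python
def check_record(record, planned_drugs):
    """Scan one (hypothetical) patient record and return a list of alerts."""
    alerts = []

    # Rule 1: flag a significant change in the patient's condition,
    # here an invented threshold for a rapid rise in serum creatinine.
    readings = record.get("creatinine", [])
    if len(readings) >= 2 and readings[-1] - readings[-2] > 0.3:
        alerts.append("possible acute kidney injury: creatinine rising")

    # Rule 2: flag a probable contraindication to a planned treatment,
    # using a toy one-entry lookup table.
    contraindications = {"ibuprofen": "renal impairment"}
    for drug in planned_drugs:
        condition = contraindications.get(drug)
        if condition and condition in record.get("conditions", []):
            alerts.append(f"{drug} contraindicated with {condition}")

    return alerts


patient = {"creatinine": [1.0, 1.5], "conditions": ["renal impairment"]}
for alert in check_record(patient, ["ibuprofen"]):
    print(alert)
```

Run continuously over an electronic record system, rules of this kind are what allow the software to interrupt a clinician only when a pattern of concern appears, rather than requiring manual review of every record.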

References

[1] Sullins JP (2006) When is a robot a moral agent?

[2] Watson RNM, et al. (2010) The age of avatar realism. IEEE Robotics & Automation Magazine.

[3] Miller KW, et al. (2010) It's not nice to fool humans. IT Professional.

[4] Lusted L (1960) Logical analysis in roentgen diagnosis. Radiology.

[5] Coiera E, et al. (2010) Artificial intelligence in medicine: an introduction.

[6] de Blanc P (2011) Ontological crises in artificial agents' value systems. ArXiv.

[7] Damm L, et al. (2012) Moral machines: teaching robots right from wrong.

[8] McLean T (2002) Cybersurgery: an argument for enterprise liability. The Journal of Legal Medicine.

[9] Allen C, et al. (2008) Moral machines: teaching robots right from wrong.

[10] Yudkowsky E (2006) Artificial intelligence as a positive and negative factor in global risk.

[11] Pegalis SE, et al. (1980) American law of medical malpractice.

[12] Muehlhauser L, et al. (2012) Intelligence explosion: evidence and import.

[13] Fallenstein B, et al. (2014) Problems of self-reference in self-improving space-time embedded intelligence. AGI.

[14] Moore TA (2002) Medical malpractice: discovery and trial.

[15] Smith ML, et al. (2008) Intensive care telemedicine: evaluating a model for proactive remote monitoring and intervention in the critical care setting. Studies in Health Technology and Informatics.

[16] Tobey DL (2018) Software malpractice in the age of AI: a guide for the wary tech company. AIES.

[17] Topol E, et al. (2016) Adapting to artificial intelligence: radiologists and pathologists as information specialists. JAMA.

[18] Angaran DM (1999) Telemedicine and telepharmacy: current status and future implications. American Journal of Health-System Pharmacy.

[19] Rogers C (2010) A theory of therapy, personality, and interpersonal relationships, as developed in the client-centered framework.

[20] Veruggio G, et al. (2008) Roboethics: social and ethical implications of robotics. Springer Handbook of Robotics.

[21] Ng AY, et al. (2000) Pharmacokinetics of a novel formulation of ivermectin after administration to goats. ICML.

[22] Epstein R, et al. (1999) Time and the patient-physician relationship. Journal of General Internal Medicine.

[23] Robert C (2017) Superintelligence: paths, dangers, strategies.