Societal Issues Concerning the Application of Artificial Intelligence in Medicine

Background: Medicine is becoming an increasingly data-centred discipline and, beyond classical statistical approaches, artificial intelligence (AI) and, in particular, machine learning (ML) are attracting much interest for the analysis of medical data. It has been argued that AI is undergoing a fast process of commodification, a characterization that correctly reflects the current industrialization of AI and its reach into society. Societal issues related to the use of AI and ML should therefore no longer be ignored, least of all in the medical domain. These issues may take many forms, but they all entail designing models from a human-centred perspective that incorporates human-relevant requirements and constraints. In this brief paper, we discuss a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, as well as some broader societal issues, such as ethics and legislation. We consider all of these to be relevant aspects for fostering the acceptance of AI- and ML-based technologies and for complying with evolving legislation on the impact of digital technologies on ethically and privacy-sensitive matters. Our specific goal here is to reflect on how these topics affect medical applications of AI and ML. This paper includes some of the contents of the "2nd Meeting of Science and Dialysis: Artificial Intelligence," organized at the Bellvitge University Hospital, Barcelona, Spain.

Summary and Key Messages: AI and ML are attracting much interest from the medical community as key approaches to knowledge extraction from data. These approaches are increasingly colonizing ambits of social impact, such as medicine and healthcare. Issues of social relevance with an impact on medicine and healthcare include (but are not limited to) fairness, explainability, privacy, ethics and legislation.
