Human-centred artificial intelligence: a contextual morality perspective

ABSTRACT: The emergence of big data, combined with technical developments in Artificial Intelligence, has enabled novel opportunities for autonomous and continuous decision support. While initial work has begun to explore how human morality can inform the decision making of future Artificial Intelligence applications, these approaches typically treat human morals as static and immutable. In this work, we present an initial exploration of the effect of context on human morality from a Utilitarian perspective. Through an online narrative transportation study, in which participants are primed with either a positive story, a negative story, or a control condition (N = 82), we collect participants' perceptions of technology that must handle moral judgment in changing contexts. Based on an in-depth qualitative analysis of participant responses, we contrast participant perceptions with related work on Fairness, Accountability and Transparency. Our work highlights the importance of contextual morality for Artificial Intelligence and identifies opportunities for future work through a FACT-based (Fairness, Accountability, Context and Transparency) perspective.
