(When) Can AI Bots Lie?

The ability of an AI agent to build mental models of humans can open up pathways for manipulating and exploiting them in the hope of achieving some greater good. Such behavior does not necessarily require malicious intent; it can instead arise in cooperative scenarios. It also goes beyond the misinterpretation of intents familiar from value alignment problems, and thus can be effectively engineered if desired — that is, algorithms exist that can optimize for such behavior not because models were misspecified but because they were misused. These techniques pose several unresolved ethical and moral questions with regard to the design of autonomy. In this paper, we illustrate some of these issues in a teaming scenario and investigate how they are perceived by participants in a thought experiment. Finally, we close with a discussion of the moral implications of such behavior from the perspective of the doctor-patient relationship.
