Conversational AI: Social and Ethical Considerations

Conversational Agents are becoming ubiquitous in daily life, with applications in customer service, education, medicine, and entertainment. As tools that increasingly permeate these social domains, Conversational Agents can directly affect individuals' lives and social discourse more broadly; critical evaluation of this impact is therefore imperative. In this paper, we highlight emerging ethical issues and suggest ways for agent designers, developers, and owners to address them, with the goal of responsible development of Conversational Agents.