Respect for Human Autonomy in Recommender Systems

Recommender systems can influence human behavior in significant ways, in some cases making people more machine-like. In this sense, recommender systems may be deleterious to notions of human autonomy. Many ethical systems point to respect for human autonomy as a key principle arising from human rights considerations, and several emerging frameworks for AI include this principle. Yet no specific formalization of it has been defined. Separately, self-determination theory shows that autonomy is an innate psychological need for people, and it is supported by a significant body of experimental work that formalizes and measures levels of human autonomy. In this position paper, we argue that there is a need to specifically operationalize respect for human autonomy in the context of recommender systems. We further argue that such an operational definition can be developed based on well-established approaches from experimental psychology, and that it can then be used to design future recommender systems that respect human autonomy.
