Privacy-preserving Crowd-guided AI Decision-making in Ethical Dilemmas

With the rapid development of artificial intelligence (AI), the ethical issues surrounding AI have attracted increasing attention. In particular, autonomous vehicles may face moral dilemmas in accident scenarios, such as staying the course and injuring pedestrians, or swerving and injuring passengers. To investigate such ethical dilemmas, recent studies have adopted preference aggregation, in which each voter expresses her/his preferences over the decisions for possible ethical dilemma scenarios, and a centralized system aggregates these preferences to obtain the winning decision. Although this is a useful methodology for building ethical AI systems, such an approach can potentially violate the privacy of voters: moral preferences are sensitive information, and their disclosure can be exploited by malicious parties with negative consequences. In this paper, we report a first-of-its-kind privacy-preserving crowd-guided AI decision-making approach for ethical dilemmas. We adopt the formal and popular notion of differential privacy to quantify privacy, and consider four granularities of privacy protection by combining voter-level or record-level protection with centralized or distributed perturbation, yielding four approaches: VLCP, RLCP, VLDP, and RLDP. Moreover, we propose different algorithms to achieve these privacy protection granularities while retaining the accuracy of the learned moral preference model. Specifically, VLCP and RLCP are implemented with the data aggregator setting a universal privacy parameter and perturbing the averaged moral preference to protect the privacy of voters' data, whereas in VLDP and RLDP each voter perturbs her/his local moral preference with a personalized privacy parameter. Extensive experiments on both synthetic data and real-world data of voters' moral decisions demonstrate that the proposed approaches achieve high accuracy of preference aggregation while protecting individual voters' privacy.
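The abstract contrasts centralized perturbation (the aggregator adds noise to the averaged moral preference under a single, universal privacy parameter) with distributed/local perturbation (each voter perturbs her/his own preference with a personalized privacy parameter before sharing). The following minimal Python sketch illustrates that distinction using the Laplace mechanism on bounded preference vectors; the function names, noise calibration, and bounds are illustrative assumptions, not the paper's actual VLCP/RLCP/VLDP/RLDP algorithms.

```python
# Illustrative sketch (assumed Laplace mechanism, not the paper's algorithms):
# contrasting centralized vs. local perturbation of averaged preference vectors.
import numpy as np

def centralized_perturbation(preferences, epsilon, sensitivity):
    """Aggregator averages raw preference vectors, then adds Laplace noise
    calibrated to a universal privacy parameter epsilon.
    `sensitivity` is the assumed per-voter contribution bound."""
    avg = np.mean(preferences, axis=0)
    scale = sensitivity / (epsilon * len(preferences))  # noise scale for the mean
    return avg + np.random.laplace(0.0, scale, size=avg.shape)

def local_perturbation(preferences, epsilons, sensitivity):
    """Each voter perturbs her/his own preference vector with a personalized
    epsilon before sharing; the aggregator only averages the noisy reports."""
    noisy = [
        p + np.random.laplace(0.0, sensitivity / eps, size=p.shape)
        for p, eps in zip(preferences, epsilons)
    ]
    return np.mean(noisy, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prefs = rng.uniform(-1, 1, size=(100, 5))  # 100 voters, 5 preference weights
    print(centralized_perturbation(prefs, epsilon=1.0, sensitivity=2.0))
    print(local_perturbation(prefs, epsilons=[1.0] * 100, sensitivity=2.0))
```

In this sketch the local setting injects noise once per voter, so the aggregate is noisier for the same nominal epsilon, which is consistent with the usual centralized-versus-local differential privacy trade-off the four approaches navigate.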
