Fair and Responsible AI: A Focus on the Ability to Contest

Copyright held by the owner/author(s). Fair and Responsible AI Workshop (CHI'20), April 25–30, 2020, Honolulu, HI, USA. ACM 978-1-4503-6819-3/20/04. https://doi.org/10.1145/3334480.XXXXXXX

Abstract

As the use of artificial intelligence (AI) in high-stakes decision-making increases, the ability to contest such decisions is being recognised in AI ethics guidelines as an important safeguard for individuals. Yet there is little guidance on how AI systems can be designed to support contestation. In this paper we explain that the design of a contestation process matters because of its impact on perceptions of fairness and on satisfaction with outcomes. We also consider design challenges, including a lack of transparency and the numerous design options that decision-making entities will face. We argue for a human-centred approach to designing for contestability, to ensure that the needs of decision subjects, and of the wider community, are met.
