Explaining Algorithmic Decisions with respect to Fairness

Abstract. Decision-making software (DMS) may exhibit bias against people on the grounds of protected characteristics such as gender and ethnicity. Such undesirable behavior should not only be detected but also explained. To avoid complicated explanations and expensive fixes, fairness awareness has to be embedded proactively in the design phase of system development. With fairness by design, system developers have to be supported with tools that detect and explain discrimination during system architecture design [Ra18a].
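The abstract does not prescribe a concrete detection mechanism. As a minimal, illustrative sketch of what "detecting discrimination" can mean at the level of observed decisions, the following Python snippet checks a hypothetical decision log for disparate impact across a protected attribute; the group labels, the four-fifths threshold, and the example log are assumptions for illustration and are not taken from the paper.

from collections import defaultdict

def selection_rates(decisions):
    """Positive-decision rate per protected group.
    `decisions` is a list of (group, outcome) pairs, where outcome is
    True for a favourable decision (e.g. loan approved)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the common 'four-fifths' rule of thumb)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical decision log: (gender, approved) -- assumed data, not from the paper.
    log = [("female", True), ("female", False), ("female", False),
           ("male", True), ("male", True), ("male", False)]
    for group, (rate, ok) in disparate_impact(log).items():
        print(f"{group}: selection rate {rate:.2f}"
              f" ({'ok' if ok else 'below four-fifths threshold'})")

Such a check operates on observed outcomes; the fairness-by-design tooling discussed in the abstract aims to surface comparable issues earlier, at the level of the system architecture models, before any decisions are produced.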

[1] Steffen Staab et al. Model-Based Discrimination Analysis: A Position Paper. 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), 2018.

[2] Jan Jürjens et al. From Secure Business Process Modeling to Design-Level Security Verification. 2017 ACM/IEEE 20th International Conference on Model Driven Engineering Languages and Systems (MODELS), 2017.

[3] Yuriy Brun et al. Fairness Testing: Testing Software for Discrimination. ESEC/SIGSOFT FSE, 2017.

[4] Jan Jürjens et al. Detecting Conflicts Between Data-Minimization and Security Requirements in Business Process Models. ECMFA, 2018.

[5] Jan Jürjens et al. Model-Based Privacy and Security Analysis with CARiSMA. ESEC/SIGSOFT FSE, 2017.

[6] Dorothy E. Denning et al. A Lattice Model of Secure Information Flow. Communications of the ACM (CACM), 1976.