HCI sustaining the rule of law and democracy

This kind of design is optimized via, for example, A/B testing, and privileges interaction that generates observable behavior such as impressions, clicks, or conversions (a minimal sketch of this optimization loop appears at the end of this section). Clearly, other objectives abound, such as successful medical interventions, coordination of public transport, enhancing educational performance in disadvantaged groups, improving food security, preventing unlawful police violence, and so on. Some of these systems are data-driven, ultimately based on the brute force of complex statistical calculations; others are model-driven, in the sense of decision trees based on logic rather than statistics. Both types of systems are often meant to engage users, and are designed in ways that invite intuitive interaction in line with the purpose for which the system was developed.

This raises the question of the extent to which such design should support legal requirements, thus contributing to interactions that fit the system of checks and balances typical of a society that demands that all of its human and institutional agents be "under the rule of law." At a time when many believe AI poses serious threats to democratic politics, democratic institutions, and our capacity and right to engage freely in democratic practices, AI activism is on the rise. This activism is geared not only toward the AI community itself and the giant tech organizations that are de facto defining the field, but also toward governments and their responsibility to shape the societal and ethical implications of AI (see [1] for a survey of AI activism over the past six years). As a result, human-centered AI is being advocated as the way forward, one to which the multidisciplinary methods and approaches long used by the HCI community are core.
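To make the optimization loop described above concrete, here is a deliberately reductive sketch of an A/B test that selects whichever interface variant yields the higher click-through rate. The variant names, the numbers, and the choose_variant helper are hypothetical illustrations, not drawn from any system discussed in this article; the point is simply that nothing in the objective reflects legal or democratic requirements.

```python
from dataclasses import dataclass

# Hypothetical sketch: each design variant is judged solely by the
# observable behavior it generates (clicks per impression), and the
# "winner" is whichever variant maximizes that metric. Legal or
# democratic considerations never enter the objective function.

@dataclass
class Variant:
    name: str
    impressions: int
    clicks: int

    @property
    def click_through_rate(self) -> float:
        return self.clicks / self.impressions if self.impressions else 0.0

def choose_variant(variants: list[Variant]) -> Variant:
    """Pick the variant with the highest click-through rate."""
    return max(variants, key=lambda v: v.click_through_rate)

if __name__ == "__main__":
    # Made-up numbers for two competing interface designs.
    a = Variant("design_A", impressions=10_000, clicks=420)
    b = Variant("design_B", impressions=10_000, clicks=515)
    winner = choose_variant([a, b])
    print(f"Deploy {winner.name} (CTR = {winner.click_through_rate:.2%})")
```

A compliance constraint, a fairness check, or any other legal requirement would have to be added to this loop explicitly; by default, whatever maximizes measurable engagement wins.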