Artificial Canaries: Early Warning Signs for Anticipatory and Democratic Governance of AI

We propose a method for identifying early warning signs of transformative progress in artificial intelligence (AI), and discuss how these can support the anticipatory and democratic governance of AI. We call these early warning signs ‘canaries’, after the canaries once used in coal mines to give early warning of toxic gases. Our method combines expert elicitation with collaborative causal graphs to identify key milestones and map the relationships between them. We present two illustrations of how this method could be used: to identify early warnings of the harmful impacts of language models, and of progress towards high-level machine intelligence. Identifying early warning signs of transformative applications can support more efficient monitoring and timely regulation of progress in AI: as AI advances, its impacts on society may become too great to be governed retrospectively. It is essential that those impacted by AI have a say in how it is governed. Early warnings can give the public the time and focus needed to influence emerging technologies through democratic, participatory technology assessment. We discuss the challenges in identifying early warning signs and propose directions for future work.
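The abstract describes the method only at a high level; the following is a minimal, hypothetical sketch of how a collaboratively built causal graph of milestones might be queried for candidate canaries. The milestone names, edges, and scoring heuristic are illustrative assumptions (not the authors' elicitation results): it assumes milestones are nodes in a directed graph (built here with networkx), with an edge A -> B meaning experts judge progress on A to enable or accelerate B, and it ranks candidates by how many downstream milestones they would unlock relative to their own prerequisites.

```python
import networkx as nx

# Hypothetical milestone graph: nodes are capability milestones; a directed
# edge A -> B records an elicited judgment that progress on A enables or
# accelerates progress on B.
edges = [
    ("few-shot text generation", "automated persuasive messaging"),
    ("automated persuasive messaging", "large-scale disinformation"),
    ("few-shot text generation", "code synthesis"),
    ("code synthesis", "automated ML research"),
    ("automated ML research", "high-level machine intelligence"),
]

graph = nx.DiGraph(edges)

# One simple heuristic for canary candidates: milestones with many downstream
# consequences (descendants) but few prerequisites of their own (ancestors).
def canary_scores(g: nx.DiGraph) -> dict:
    return {
        node: len(nx.descendants(g, node)) - len(nx.ancestors(g, node))
        for node in g.nodes
    }

for milestone, score in sorted(canary_scores(graph).items(),
                               key=lambda kv: kv[1], reverse=True):
    print(f"{score:+d}  {milestone}")
```

In practice the graph would be produced through structured expert elicitation and the ranking would be only one input to monitoring; the sketch is meant only to show how a shared causal map of milestones could be interrogated for early warning signs.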
