Catastrophic Risk from Rapid Developments in Artificial Intelligence

Abstract This article describes plausible scenarios in which rapid advances in artificial intelligence (AI) pose multiple risks, including risks to democracy and of inter-state conflict. In parallel with other countries, New Zealand needs policies to monitor, anticipate and mitigate global catastrophic and existential risks from advanced new technologies. A dedicated policy capacity could translate emerging research and policy options into the New Zealand context, and could identify how New Zealand might best contribute to global solutions. It is desirable that the potential benefits of AI be realised while the risks are mitigated to the greatest extent possible.