Global Solutions vs. Local Solutions for the AI Safety Problem

There are two types of artificial general intelligence (AGI) safety solutions: global and local. Most previously suggested solutions are local: they explain how to align or “box” a specific AI, but do not explain how to prevent the creation of dangerous AI elsewhere. Global solutions are those that ensure that no AI on Earth is dangerous. Far fewer global solutions have been suggested than local ones. Global solutions can be divided into four groups: 1. No AI: AGI technology is banned or its use is otherwise prevented; 2. One AI: the first superintelligent AI is used to prevent the creation of any others; 3. Net of AIs as AI police: a balance is created among many AIs, so they evolve as a network and can prevent any rogue AI from taking over the world; 4. Humans inside AI: humans are augmented by, or become part of, AI. We explore many ideas, both old and new, regarding global solutions for AI safety. They include changing the number of AI teams, different forms of “AI Nanny” (a non-self-improving global control system able to prevent the creation of dangerous AIs), selling AI safety solutions, and sending messages to future AI. Not every local solution scales into a global solution, or does so ethically and safely. The choice of the best local solution should therefore take into account how it would be scaled up. Human-AI teams or a superintelligent AI Service, as suggested by Drexler, may be examples of such ethically scalable local solutions, but the final choice depends on unknown variables such as the speed of AI progress.
