Good and safe uses of AI Oracles

It is possible that powerful and potentially dangerous artificial intelligence (AI) might be developed in the future. An Oracle is a design that aims to restrain the impact of a potentially dangerous AI by restricting it to taking no actions other than answering questions. Unfortunately, most Oracles will be motivated to gain more control over the world by manipulating users through the content of their answers, and Oracles of potentially high intelligence might be very successful at this \citep{DBLP:journals/corr/AlfonsecaCACAR16}. In this paper we present two Oracle designs which, even under pessimistic assumptions, will not manipulate their users into releasing them and yet will still be incentivised to provide their users with helpful answers. The first design is the counterfactual Oracle, which chooses its answer as if it expected nobody to ever read it. The second design is the low-bandwidth Oracle, which is limited in the quantity of information it can transmit.
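
As a concrete illustration of the two designs, the sketch below (written in Python; the class names, the erasure probability, and the quadratic scoring rule are illustrative assumptions rather than the paper's implementation) rewards a counterfactual Oracle only on episodes whose answer is erased before anyone reads it, scoring it with a strictly proper scoring rule against the outcome that is later observed, while a low-bandwidth Oracle must select its answer from a small, fixed menu of options.

import random

# Illustrative sketch only: the names, the erasure probability and the
# quadratic scoring rule are assumptions for exposition, not the paper's code.

def proper_score(prediction, outcome):
    # Quadratic (Brier-style) scoring rule: strictly proper, so expected
    # reward is maximised by reporting an honest estimate.
    return -(prediction - outcome) ** 2

class CounterfactualOracle:
    # Rewarded only on 'erased' episodes, where the answer is never read,
    # so the answer is chosen as if nobody will ever see it.
    def __init__(self, erasure_prob=0.1):
        self.erasure_prob = erasure_prob

    def episode(self, predict_fn, observe_outcome_fn):
        prediction = predict_fn()
        if random.random() < self.erasure_prob:
            # Answer erased before anyone reads it: score the prediction
            # against the outcome of the unread world.
            return None, proper_score(prediction, observe_outcome_fn())
        # Answer shown to the user: no reward is given, so there is no
        # incentive to shape the world through the answer's content.
        return prediction, 0.0

class LowBandwidthOracle:
    # Restricted to a small, pre-specified answer set, bounding the
    # information (and the room for manipulation) in each answer.
    def __init__(self, answer_set):
        self.answer_set = list(answer_set)

    def answer(self, estimated_value_fn):
        # Pick the option the Oracle expects to be rated best; nothing
        # outside the fixed menu can be output.
        return max(self.answer_set, key=estimated_value_fn)

The point of the sketch is the incentive structure: the counterfactual Oracle's reward never depends on a world in which its answer was read, and the low-bandwidth Oracle can only emit a few bits per answer, whatever it "wants".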

References

[1] Tilmann Gneiting and Adrian E. Raftery. Strictly Proper Scoring Rules, Prediction, and Estimation, 2007.

[2] Stuart Armstrong, et al. The errors, insights and lessons of famous AI predictions – and what they mean for the future, 2014, J. Exp. Theor. Artif. Intell.

[3] Dylan Hadfield-Menell, et al. Cooperative Inverse Reinforcement Learning, 2016, NIPS.

[4] Paul F. Christiano, et al. Deep Reinforcement Learning from Human Preferences, 2017, NIPS.

[5] Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking Inside the Box: Controlling and Using an Oracle AI, 2012, Minds and Machines.

[6] Marcus Hutter. Can Intelligence Explode?, 2012, arXiv.

[7] James Babcock, et al. The AGI Containment Problem, 2016, AGI.

[8] Stuart J. Russell, et al. Research Priorities for Robust and Beneficial Artificial Intelligence, 2015, AI Mag.

[9] Eliezer Yudkowsky. Artificial Intelligence as a Positive and Negative Factor in Global Risk, 2006.

[10] Roman V. Yampolskiy. Leakproofing the Singularity: Artificial Intelligence Confinement Problem, 2012.

[11] Jan Leike, et al. AI Safety Gridworlds, 2017, arXiv.

[12] Dario Amodei, et al. Concrete Problems in AI Safety, 2016, arXiv.

[13] Stephen M. Omohundro. The Basic AI Drives, 2008, AGI.

[14] Nick Bostrom. Superintelligence: Paths, Dangers, Strategies, 2014.

[15] Vincent C. Müller and Nick Bostrom. Future Progress in Artificial Intelligence: A Survey of Expert Opinion, 2013, PT-AI.

[16] Katja Grace, et al. When Will AI Exceed Human Performance? Evidence from AI Experts, 2017, arXiv.

[18] Manuel Alfonseca, et al. Superintelligence cannot be contained: Lessons from Computability Theory, 2016, J. Artif. Intell. Res.

[19] Tom Everitt, et al. AGI Safety Literature Review, 2018, IJCAI.

[20] Andrew Y. Ng and Stuart J. Russell. Algorithms for Inverse Reinforcement Learning, 2000, ICML.

[21] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning, 2004, ICML.