Artificial Intelligence as a Positive and Negative Factor in Global Risk

By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. Of course this problem is not limited to the field of AI. Jacques Monod wrote: "A curious aspect of the theory of evolution is that everybody thinks he understands it." (Monod 1974.) My father, a physicist, complained about people making up their own theories of physics; he wanted to know why people did not make up their own theories of chemistry. (Answer: They do.) Nonetheless the problem seems to be unusually acute in Artificial Intelligence. The field of AI has a reputation for making huge promises and then failing to deliver on them. Most observers conclude that AI is hard; as indeed it is. But the embarrassment does not stem from the difficulty. It is difficult to build a star from hydrogen, but the field of stellar astronomy does not have a terrible reputation for promising to build stars and then failing. The critical inference is not that AI is hard, but that, for some reason, it is very easy for people to think they know far more about Artificial Intelligence than they actually do.
