The Emperor's Real Mind: Review of Roger Penrose's The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics

Abstract: The Emperor's New Mind by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the “strong” AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion of an algorithm as central to AI, whereas it is argued here that, for the purpose of explaining mental capabilities, the architecture of an intelligent system is more important than the concept of an algorithm, on the grounds that what makes something intelligent is not what it does but how it does it. What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Gödel's formula to be true depends on it. He also suggests that quantum phenomena underlie consciousness. This is rebutted by arguing that our existing concept of “consciousness” is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Gödel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed.
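
To make the contested inference concrete, the following is a minimal sketch of the Gödel sentence that the Penrose (and Lucas) argument appeals to; it is a standard textbook formulation, not quoted from the review, and the symbols $F$, $G_F$ and $\mathrm{Prov}_F$ are introduced here purely for illustration. For any consistent, effectively axiomatized theory $F$ that includes elementary arithmetic, the diagonal lemma yields a sentence $G_F$ such that

\[
  F \vdash G_F \;\leftrightarrow\; \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),
\]

so $F$ cannot prove $G_F$, and, on the further assumption that $F$ is sound, $G_F$ is true in the standard model of arithmetic. The step the review attacks is the move from “a mathematician can see that $G_F$ is true” to “human mathematical judgement cannot be captured by any such $F$”.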

[1] Aaron Sloman, et al. What Enables a Machine to Understand?, 1985, IJCAI.

[2] Brian Cantwell Smith. The Semantics of Clocks, 1988.

[3] Aaron Sloman, et al. Prolegomena to a Theory of Communication and Affect, 1992.

[4] A. M. Turing, et al. Computing Machinery and Intelligence, 1950, The Philosophy of Artificial Intelligence.

[5] Aaron Sloman, et al. Did Searle attack strong strong or weak strong AI?, 1987.

[6] H. Simon, et al. Motivational and emotional controls of cognition, 1967, Psychological Review.

[7] Aaron Sloman, et al. Interactions Between Philosophy and Artificial Intelligence: The Role of Intuition and Non-Logical Reasoning in Intelligence, 1971, IJCAI.

[8] J. Moor. The Pseudorealization Fallacy and the Chinese Room Argument, 1988.

[10] D. McDermott. Computation and consciousness, 1990, Behavioral and Brain Sciences.

[11] Christopher J. Taylor, et al. A Formal Logical Analysis of Causal Relations, 1993.

[12] Aaron Sloman, et al. Reference without Causal Links, 1986, ECAI.

[13] Brian V. Funt, et al. Problem-Solving with Diagrammatic Representations, 1980, Artif. Intell.

[14] William J. Rapaport, et al. Syntactic Semantics: Foundations of Computational Natural-Language Understanding, 1988.

[15] Aaron Sloman, et al. Motives, Mechanisms, and Emotions, 1987, The Philosophy of Artificial Intelligence.

[16] Aaron Sloman, et al. On designing a visual system (towards a Gibsonian computational model of vision), 1990.

[17] J. Lucas. Minds, Machines and Gödel, 1961, Philosophy.

[18] John R. Searle, et al. Minds, brains, and programs, 1980, Behavioral and Brain Sciences.