The main thesis of this book is briefly stated: The brain is a biological computer. Cognitive science has shown that several mental capacities can be explained computationally, and there is no reason to believe that phenomenal consciousness will be an exception. Indeed, far from being an inexplicable by-product of the brain's information processing, phenomenal consciousness is a necessary component of it. For in order to interact successfully with its physical environment, a computational system must have, as part of its model of the world, a model of itself as a perceiver and decision maker. An entity with the requisite self-model will have "virtual consciousness", and the principle of parsimony should lead us to identify virtual consciousness with the real thing.

The argument in support of this thesis is complex, and it involves a detailed analysis of the very nature of computation and of the crucial role the brain's self-model plays in its various intelligent dealings with the environment. An appreciation of this role will help us to understand not only how computer consciousness is possible but also how robots can be said to have reasons for what they do and free choice in what they do, and thus how they can be viewed as subjects and objects of moral judgement. In the course of articulating and defending the argument, McDermott also endeavors to show that many of the popular claims one finds in the philosophical literature on computation are "false, meaningless, or at least questionable" (p. xii). Among these are the claims that there is a paradigmatic distinction between symbolic and connectionist computation, that whatever meaning computer symbols have depends entirely on the meaning their users ascribe to them, and that there could be a world of computational "zombies" whose behavior is totally indistinguishable from ours but who entirely lack consciousness. Refuting the last-mentioned claim is part of addressing what David Chalmers (1996) has labeled the "hard problem" of consciousness: the problem of explaining how a physical system, no matter how well organized, can have experiences, or qualia. It is to Chalmers' "great, if totally misguided book" (p. xv) that McDermott's book is in part a response.

Nothing much hangs, however, on the brain's being a biological computer. Had we been made of a different material, such as silicon, we would still have a mind as long as our silicon brain performed the same computations as our biological brain does. There is no doubt something special about our biological brain, for it constitutes a mind not merely in virtue of being a computer (none of the non-biological computers presently around us obviously has a mind), but in virtue of the kinds of computations it performs.