Grounding Symbolic Capacity in Robotic Capacity

Despite the considerations in favor of symbol grounding, neither pure connectionism nor pure nonsymbolic robotics can yet be counted out on the path to the robotic Turing Test. So far only computationalism and pure AI have fallen by the wayside. If it turns out that no internal symbols at all underlie our symbolic (email Turing Test) capacity, if dynamic states of neural nets alone, or the sensorimotor mechanisms subserving robotic capacities alone, can successfully generate our full robotic performance capacity without symbols, that is still the decisive test for the presence of mind, and everyone should be ready to accept the verdict. For even if we should happen to be wrong about such a robot, it is clear that no one (not even an advocate of the stronger neural-equivalence version of the Turing Test, nor even the Blind Watchmaker who designed us, who is no more a mind-reader than we are) can ever hope to be the wiser.
