Semantics in an intelligent control system

Much research on intelligent systems has concentrated on low-level mechanisms or limited subsystems. We need to understand how to assemble the components into an architecture for a complete agent with its own mind, driven by its own desires. A mind is a self-modifying control system, with a hierarchy of levels of control and a different hierarchy of levels of implementation. AI needs to explore alternative control architectures and their implications for human, animal, and artificial minds. Only when we have a good theory of actual and possible architectures can we solve old problems about the concept of mind and the causal roles of desires, beliefs, intentions, etc. The global, information-level ‘virtual machine’ architecture is more relevant to this than detailed mechanisms: for example, differences between connectionist and symbolic implementations may be of minor importance. An architecture provides a framework for systematically generating concepts of possible states and processes. Lacking this, philosophers cannot provide good analyses of concepts, psychologists and biologists cannot specify what they are trying to explain, let alone explain it, and psychotherapists and educationalists are left groping with ill-understood problems. The paper outlines some requirements for such architectures, showing the importance of an idea shared between engineers and philosophers: the concept of ‘semantic information’.