What Sort of Architecture is Required for a Human-Like Agent?

This paper is about how to give human-like powers to complete agents. For this, the most important design choice concerns the overall architecture. Questions about detailed mechanisms, forms of representation, inference capabilities, knowledge and so on are best addressed in the context of a global architecture in which different design decisions can be linked. Such a design would assemble various kinds of functionality into a complete, coherent working system containing many concurrent processes, partly independent, partly mutually supportive, partly potentially incompatible, addressing a multitude of issues on different time scales and including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on earth shows.
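To make the idea of concurrent processes on different time scales with asynchronous motive generators more concrete, here is a minimal sketch, not taken from the paper, of one way such an architecture could be organised in code. It assumes an event loop in which fast reactive behaviour, slower deliberation, and independent motive generators run as separate concurrent tasks; all class and function names (Motive, motive_generator, deliberator, reactive_layer, run_agent) are illustrative assumptions, not part of the original design.

```python
# Illustrative sketch (not the paper's architecture): concurrent processes on
# different time scales, with asynchronous motive generators posting goals
# that a slower deliberative process may adopt or defer.
import asyncio
import random


class Motive:
    """A hypothetical motive: a description plus an urgency value."""
    def __init__(self, description: str, urgency: float):
        self.description = description
        self.urgency = urgency


async def motive_generator(motives: asyncio.Queue, label: str, period: float):
    """Asynchronously posts new motives, independently of deliberation."""
    while True:
        await asyncio.sleep(period)
        motive = Motive(f"{label}-need-{random.randint(0, 99)}", random.random())
        await motives.put(motive)


async def deliberator(motives: asyncio.Queue):
    """Slower process: adopts or defers motives posted by the generators."""
    while True:
        motive = await motives.get()
        if motive.urgency > 0.5:
            print(f"adopting goal: {motive.description} (urgency {motive.urgency:.2f})")
        else:
            print(f"deferring: {motive.description}")


async def reactive_layer(period: float):
    """Fast process: routine monitoring that runs regardless of deliberation."""
    while True:
        await asyncio.sleep(period)
        # e.g. low-level monitoring, reflexes, housekeeping
        pass


async def run_agent(seconds: float = 2.0):
    """Run all processes concurrently for a fixed time, then shut down."""
    motives: asyncio.Queue = asyncio.Queue()
    tasks = [
        asyncio.create_task(motive_generator(motives, "hunger", 0.3)),
        asyncio.create_task(motive_generator(motives, "curiosity", 0.5)),
        asyncio.create_task(deliberator(motives)),
        asyncio.create_task(reactive_layer(0.05)),
    ]
    await asyncio.sleep(seconds)
    for task in tasks:
        task.cancel()


if __name__ == "__main__":
    asyncio.run(run_agent())
```

The use of cooperative asyncio tasks here is only a stand-in for genuinely independent, possibly incompatible processes; the point of the sketch is the decoupling of motive generation from the processes that decide what to do about the motives.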
