Embodiment in GLAIR: A Grounded Layered Architecture with Integrated Reasoning for Autonomous Agents

In order to function robustly in the world, autonomous agents need to assimilate concepts for physical entities and relations, grounded in perception and action. They also need to assimilate concepts for perceptual properties like color, shape, and weight, and perhaps eventually even for nonphysical objects like unicorns. We call the process of acquiring concepts that carry meaning in terms of the agent's own physiology embodiment. Unlike current robotic agents, agents endowed with embodied concepts will more readily understand high-level instructions; as a consequence, they will not have to be instructed at a low level. We have developed an autonomous agent architecture that facilitates embodiment of action and perception, and accommodates embodied concepts for both physical and non-physical objects, properties, and relations.

1 GLAIR

We present an architecture for intelligent autonomous agents which we call GLAIR (Grounded Layered Architecture with Integrated Reasoning). A major motivation for GLAIR, and the focus of our attention in this paper, is the concept of embodiment: what it is, why it is important, and how it can be given a concrete form in an agent architecture. As such, our definition is both more concrete and narrower than the one in [Lak87], for instance. Figure 1 schematically presents our architecture.

Concept learning provides an important motivation for embodiment. Winston's Arch program [Win75] is an early example of a system that learns concepts through examples; it relies heavily on feature analysis. However, the feature concepts and the object concepts it learns lack embodiment, as is typical of traditional symbolic AI work. Most of this work never gets implemented in actual agents that can physically interact with their environment (other than via a keyboard and monitor, which is a rather uninteresting case), i.e. the symbols never inhabit a body. It is this kind of approach that Brooks criticizes in papers like [Bro90]. According to Brooks, symbolic representations should be matched
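To make the layered idea concrete, the following is a minimal, hypothetical sketch of a three-level agent in the spirit of GLAIR's grounded, layered design. The level names follow the knowledge / perceptuo-motor / sensori-actuator division usually used to describe GLAIR; all class names, methods, and sensor values below are illustrative assumptions for this sketch, not the paper's implementation.

# A minimal, illustrative sketch of a layered agent in the spirit of GLAIR.
# The level names are assumptions based on how GLAIR is usually described;
# every class, method, and value here is hypothetical.

class SensoriActuatorLevel:
    """Lowest level: raw sensor readings and actuator commands."""
    def read_color_sensor(self):
        # Stand-in for real hardware: return an (R, G, B) reading.
        return (200, 30, 40)

    def drive(self, left_speed, right_speed):
        print(f"motors: left={left_speed}, right={right_speed}")


class PerceptuoMotorLevel:
    """Middle level: grounds symbolic terms in sensing and motor routines."""
    def __init__(self, sal):
        self.sal = sal

    def perceive_color(self):
        # Map a raw sensor reading onto a color symbol the knowledge level can use.
        r, g, b = self.sal.read_color_sensor()
        return "red" if r > max(g, b) + 50 else "unknown"

    def approach(self):
        # A primitive motor routine realized as actuator commands.
        self.sal.drive(left_speed=0.5, right_speed=0.5)


class KnowledgeLevel:
    """Top level: symbolic reasoning over embodied, grounded concepts."""
    def __init__(self, pml):
        self.pml = pml

    def act_on_instruction(self, instruction):
        # A high-level instruction is understood in terms of the agent's own
        # perception and action routines, not hand-coded low-level commands.
        if instruction == "go to the red object":
            if self.pml.perceive_color() == "red":
                self.pml.approach()


if __name__ == "__main__":
    agent = KnowledgeLevel(PerceptuoMotorLevel(SensoriActuatorLevel()))
    agent.act_on_instruction("go to the red object")

The only point of the sketch is that a high-level symbolic instruction bottoms out in the agent's own perception and action routines, which is the sense in which its concepts are embodied.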

References

[1] Stuart C. Shapiro, et al. SNePS Considered as a Fully Intensional Propositional Semantic Network. AAAI, 1986.

[2] Patrick Henry Winston, et al. Learning structural descriptions from examples. 1970.

[3] R. James Firby, et al. An Investigation into Reactive Planning in Complex Domains. AAAI, 1987.

[4] J. Lammens. A computational model of color perception and color naming. 1995.

[5] Rodney A. Brooks, et al. A Robust Layered Control System for a Mobile Robot. 1986.

[6] Henry Hexmoor, et al. Methods for deciding what to do next and learning. 1992.

[7] James S. Albus, et al. Theory and Practice of Hierarchical Control. 1981.

[8] Geoffrey E. Hinton, et al. The appeal of parallel distributed processing. 1986.

[9] P. Kay, et al. Basic Color Terms: Their Universality and Evolution. 1973.

[10] B. Habibi, et al. Pengi: An Implementation of a Theory of Activity. 1998.

[11] G. Lakoff. Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. 1989.

[12] D. Regan, et al. Looming detectors in the human visual pathway. Vision Research, 1978.

[13] Rodney A. Brooks, et al. Elephants don't play chess. Robotics Auton. Syst., 1990.

[14] Paul R. Cohen, et al. Two ways to act. SIGART Bulletin, 1991.

[15] Leslie Pack Kaelbling, et al. Action and planning in embedded agents. Robotics Auton. Syst., 1990.

[16] John L. Pollock, et al. New foundations for practical reasoning. Minds and Machines, 1992.

[17] Stuart C. Shapiro, et al. Autonomous Agent Architecture for Integrating Perception and Acting with Grounded Embodied Symbolic Reasoning. 1992.

[18] David Chapman, et al. Vision, instruction, and action. 1990.