The YODA Robot Project at the University of Southern California/Information Sciences Institute consists of a group of young researchers who share a passion for autonomous systems that can bootstrap their knowledge from real environments through exploration, experimentation, learning, and discovery. Our goal is to create a mobile agent that can autonomously learn from its environment based on its own actions, percepts, and missions. Our participation in the Fifth Annual AAAI Mobile Robot Competition and Exhibition, held as part of the Thirteenth National Conference on Artificial Intelligence, served as the first milestone toward this goal. YODA's software architecture is a hierarchy of abstraction layers, ranging from a set of behaviors at the bottom layer to a dynamic, mission-oriented planner at the top. The planner uses a map of the environment to determine a sequence of goals for the robot to accomplish and delegates detailed execution to the behaviors at the lower layer. This layered abstraction architecture has proven robust in dynamic and noisy environments, as shown by YODA's performance at the robot competition.
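For illustration only, the sketch below shows one way such a layered architecture can be organized: a top-level planner that uses a map to turn a mission into a goal sequence, and bottom-level behaviors that handle detailed execution of each goal. All class, method, and goal names here are hypothetical assumptions for the sketch, not YODA's actual implementation.

```python
# Minimal sketch of a layered planner/behavior architecture (illustrative only;
# names and structure are assumptions, not YODA's actual code).

from typing import Dict, List


class Behavior:
    """Bottom-layer behavior: handles detailed execution of one kind of goal."""

    def __init__(self, name: str):
        self.name = name

    def can_handle(self, goal: str) -> bool:
        # Goals are tagged with the behavior that should execute them,
        # e.g. "navigate:room-12".
        return goal.startswith(self.name)

    def execute(self, goal: str) -> bool:
        # On a real robot this would issue motor commands and monitor sensors;
        # here we simply report success.
        print(f"[{self.name}] executing {goal}")
        return True


class Planner:
    """Top-layer planner: turns a mission and a map into a sequence of goals."""

    def __init__(self, world_map: Dict[str, List[str]], behaviors: List[Behavior]):
        self.world_map = world_map
        self.behaviors = behaviors

    def plan(self, mission: str) -> List[str]:
        # Stand-in for map-based planning: look up the goal sequence
        # associated with the mission.
        return self.world_map.get(mission, [])

    def run(self, mission: str) -> None:
        for goal in self.plan(mission):
            behavior = next(b for b in self.behaviors if b.can_handle(goal))
            if not behavior.execute(goal):
                # In a dynamic environment a failed goal would trigger replanning.
                print(f"Goal {goal} failed; replanning")
                return


if __name__ == "__main__":
    behaviors = [Behavior("navigate"), Behavior("dock")]
    world_map = {"deliver": ["navigate:room-12", "dock:charger"]}
    Planner(world_map, behaviors).run("deliver")
```

The key design point the sketch tries to capture is the delegation boundary: the planner reasons only about which goals to pursue and in what order, while each behavior encapsulates the low-level execution details for its kind of goal.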