An architecture for integrating various vision devices in an autonomous agent is presented and demonstrated in the case of vision-based navigation for an autonomous mobile robot. The architecture follows the behavioural approach: applying it to vision yields a collection of vision-based behaviours, each performing only a simple task in isolation but becoming far more capable when working in cooperation. In the work presented here, an experimental behavioural architecture was applied to vision by developing a large number and variety of vision devices and behaviours. Having shown the approach to be successful in robot navigation tasks, we believe the behavioural architecture is also relevant to other complex vision-based applications involving real-world, real-time interaction.
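The idea of simple behaviours cooperating under a behavioural (subsumption-style) architecture can be illustrated with a minimal sketch. This is not the authors' implementation; the behaviour names, sensor keys, and fixed-priority arbiter below are illustrative assumptions.

```python
# Sketch of a behaviour-based control loop (assumed names, not the paper's code).
# Each behaviour maps sensor readings to an optional motor command; a
# fixed-priority arbiter picks the first behaviour that produces a command,
# so higher-priority behaviours subsume lower ones.

from typing import Callable, Optional

SensorData = dict   # e.g. {"obstacle_distance": 0.4, "target_bearing": 0.2}
Command = str       # e.g. "turn_left", "go_forward"

Behaviour = Callable[[SensorData], Optional[Command]]

def avoid_obstacle(s: SensorData) -> Optional[Command]:
    # Reactive safety behaviour: fires only when an obstacle is close.
    if s.get("obstacle_distance", float("inf")) < 0.5:
        return "turn_left"
    return None

def follow_target(s: SensorData) -> Optional[Command]:
    # Vision-based behaviour: steer toward a visually tracked target.
    bearing = s.get("target_bearing")
    if bearing is None:
        return None
    return "turn_right" if bearing > 0 else "go_forward"

def wander(s: SensorData) -> Optional[Command]:
    # Default exploratory behaviour when nothing else applies.
    return "go_forward"

def arbitrate(behaviours: list, sensors: SensorData) -> Command:
    # Earlier entries in the list have higher priority.
    for b in behaviours:
        cmd = b(sensors)
        if cmd is not None:
            return cmd
    raise RuntimeError("no behaviour produced a command")

stack = [avoid_obstacle, follow_target, wander]
print(arbitrate(stack, {"obstacle_distance": 0.3}))  # → turn_left
print(arbitrate(stack, {"target_bearing": 0.2}))     # → turn_right
```

Each behaviour alone is trivial, but layering them produces coherent navigation: obstacle avoidance overrides target following, which in turn overrides wandering.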