Interactive Model Based Vision System for Telerobotic Vehicles

Abstract: Attempts to control multiple vehicles by increasing the autonomy of individual vehicles must confront the fact that autonomous perceptual systems capable of reliably interpreting complex outdoor imagery do not yet exist. These problems are exacerbated when dealing with passive sensors and limited a priori information, as would be the case in a rapidly changing environment such as a battlefield. Our approach to these problems has been to develop a model-based vision system that a human controls interactively. The human uses this system to rapidly interpret sensory information from a distributed team of telerobots. The resulting interpretation is a model of the world that the telerobots can refine autonomously and also use to control their behavior. In this way the human can direct the telerobots by initializing processing that can then be handled autonomously. In addition, communication between the robots and the human takes place in the context of a shared model of the world. The top of Figure 1-1 shows the framework towards which we are building: multiple telerobots with limited autonomy interacting with a human controller through a common model of the world.