Vision system for telerobotics operation

This paper presents a knowledge-based vision system for a telerobotics guidance project. The system recognizes and locates 3-D objects from unrestricted viewpoints in a simulated, unconstrained space environment. It constructs object representations for vision tasks from wireframe models, recognizes and locates objects in a 3-D scene, and provides a world modeling capability to establish, maintain, and update a 3-D environment description for telerobotic manipulation. An object model is represented by an attributed hypergraph that encodes direct structural (relational) information, with features grouped according to their multiple views so that the interpretations of the 3-D object and of its 2-D projections are coupled. With this representation, object recognition proceeds by a knowledge-directed hypothesis refinement strategy. The strategy starts by identifying 2-D local feature characteristics to initiate feature and relation matching; it then refines the matching by adding 2-D image features subject to viewpoint and geometric consistency, and finally links the successful matches back to the 3-D model to recover the feature, relation, and location information of the recognized object. The paper also presents the implementation and experimental evaluation of the vision prototype.
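
To make the representation and matching strategy concrete, the following is a minimal, hypothetical sketch, not the paper's implementation: the class names, fields, and the feature-kind consistency test are illustrative assumptions. Features act as hypergraph vertices, each hyperedge groups the features visible from one characteristic view (which is what couples the 2-D projections to the 3-D model), and a toy three-stage loop seeds, refines, and verifies per-view hypotheses against observed 2-D features.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class Feature:
    fid: int
    kind: str                               # e.g. "edge", "corner", "arc"
    attrs: Dict[str, float]                 # geometric attributes (length, angle, ...)

@dataclass
class ViewGroup:                            # one hyperedge per characteristic view
    view_id: int
    members: Set[int]                       # feature ids visible from this view
    relations: Dict[Tuple[int, int], str]   # e.g. (0, 1) -> "parallel"

@dataclass
class AttributedHypergraph:
    features: Dict[int, Feature] = field(default_factory=dict)
    views: List[ViewGroup] = field(default_factory=list)

def recognize(model: AttributedHypergraph, observed: List[Feature], min_score: int = 2):
    """Toy stand-in for the three-stage hypothesis refinement: seed a
    hypothesis per characteristic view from 2-D local feature kinds,
    grow it with further consistent observations, then keep views whose
    match score clears a threshold (the link back to the 3-D model)."""
    results = []
    for view in model.views:                                # stage 1: seed per view
        unmatched = set(view.members)
        score = 0
        for obs in observed:                                # stage 2: refine the match
            for mid in sorted(unmatched):
                if model.features[mid].kind == obs.kind:    # placeholder consistency test
                    unmatched.remove(mid)
                    score += 1
                    break
        if score >= min_score:                              # stage 3: verified hypothesis
            results.append((view.view_id, score))
    return results

# Usage: a wireframe block modeled with two characteristic views.
model = AttributedHypergraph(
    features={0: Feature(0, "edge", {"length": 10.0}),
              1: Feature(1, "edge", {"length": 10.0}),
              2: Feature(2, "corner", {"angle": 1.57})},
    views=[ViewGroup(0, {0, 1}, {(0, 1): "parallel"}),
           ViewGroup(1, {1, 2}, {(1, 2): "adjacent"})],
)
scene = [Feature(10, "edge", {"length": 9.8}), Feature(11, "corner", {"angle": 1.6})]
print(recognize(model, scene))   # [(1, 2)]: view 1 best explains the observed features
```

In this sketch the verification step simply reports the winning characteristic view; in the system described above, the surviving matches are traced back through the hypergraph to the 3-D wireframe model to recover the object's identity and pose.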