Visually-guided haptic object recognition
Sensory capabilities are vital if a robot is to function autonomously in unknown or partially specified environments, to carry out complex, roughly detailed tasks, and to interact with and learn from the world around it. Perception forms the all-important interface between the cogitative organism and the world in which it must act and survive; hence the first step toward intelligent, autonomous robots is to develop this interface, that is, to provide robots with perceptual capabilities. This work presents a model for robotic perception. Within the framework of this model, we have developed a system that uses passive vision and active touch for the task of object categorization. The system is organized as a highly modular, distributed hierarchy of domain-specific, informationally encapsulated knowledge-based experts. The visual subsystem is passive and consists of a two-dimensional region analysis and a three-dimensional edge analysis. The haptic subsystem is active and consists of a set of modules that either execute exploratory procedures to extract information from the world or combine information from lower-level modules into more complex representations. We also address the issues of visually-guided haptic exploration and intersensory integration. Finally, we establish representational and reasoning paradigms for dealing with generic objects. Both representation and reasoning are feature-based: the representation includes definitional information in the form of a hierarchy of frames and spatial/geometric information in the form of the spatial polyhedron.
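To make the feature-based, frame-hierarchy representation concrete, the following is a minimal Python sketch, not the thesis implementation: each frame holds definitional feature slots and may point to a parent frame, forming a hierarchy of generic object categories, and a simple matching score checks observed features (fused from vision and touch) against the inherited slots. All class, function, and feature names here are illustrative assumptions; the spatial polyhedron and exploratory procedures are not modeled.

```python
# Hypothetical sketch of a frame hierarchy with feature-based matching.
# Not the author's system; names and feature slots are illustrative only.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Frame:
    """Definitional node in the frame hierarchy (e.g. 'container' -> 'mug')."""
    name: str
    features: dict = field(default_factory=dict)   # feature name -> expected value
    parent: Optional["Frame"] = None

    def all_features(self) -> dict:
        """Inherit feature expectations from ancestor frames, child slots override."""
        inherited = self.parent.all_features() if self.parent else {}
        inherited.update(self.features)
        return inherited


def match_score(frame: Frame, observed: dict) -> float:
    """Fraction of the frame's (inherited) features confirmed by observation."""
    expected = frame.all_features()
    if not expected:
        return 0.0
    hits = sum(1 for k, v in expected.items() if observed.get(k) == v)
    return hits / len(expected)


# Usage: observed features would come from the visual and haptic subsystems.
container = Frame("container", {"has_cavity": True})
mug = Frame("mug", {"has_handle": True, "rigid": True}, parent=container)

observed = {"has_cavity": True, "has_handle": True, "rigid": True}
print(mug.name, match_score(mug, observed))   # -> mug 1.0
```

In this sketch, categorization amounts to scoring candidate frames against the fused feature set and preferring the most specific frame with a high score, which mirrors the feature-based reasoning paradigm described above without committing to any particular control strategy.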