Cooperative Integration of Vision and Touch

Vision and touch have proved to be powerful sensing modalities in humans. In order to build robots capable of complex behavior, analogues of human vision and taction need to be created. In addition, strategies for the intelligent use of these sensors in tasks such as object recognition need to be developed. Two overriding principles that dictate a good strategy for the cooperative use of these sensors are the following: 1) the sensors should complement each other in the kind and quality of data they report, and 2) each sensor system should be used in the most robust manner possible. We demonstrate this with a contour-following algorithm that recovers the shape of surfaces of revolution from sparse tactile sensor data. The absolute location in depth of an object can be found more accurately through touch than through vision, while the global properties of where to actively explore with the hand are better determined through vision.
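
To make the shape-recovery step concrete, the following is a minimal sketch of one way to recover the radius profile of a surface of revolution from sparse contact points, assuming vision has already supplied an estimate of the object's axis (a point on it and a unit direction). The function name, the polynomial fit, and all parameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def radius_profile(contacts, axis_point, axis_dir, degree=3):
    """Fit r(z), the radius as a function of height along the axis.

    contacts   : (N, 3) array of 3-D contact points from tactile sensing
    axis_point : point on the axis of revolution (assumed known from vision)
    axis_dir   : unit vector along the axis (assumed known from vision)
    """
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = contacts - axis_point
    z = rel @ axis_dir                    # height of each contact along the axis
    radial = rel - np.outer(z, axis_dir)  # component perpendicular to the axis
    r = np.linalg.norm(radial, axis=1)    # radius at each contact point
    coeffs = np.polyfit(z, r, degree)     # least-squares polynomial fit of r(z)
    return np.poly1d(coeffs)

# Usage: sparse, slightly noisy contacts sampled from a cylinder of radius 2
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 20)
z = rng.uniform(0.0, 5.0, 20)
pts = np.column_stack([2.0 * np.cos(theta), 2.0 * np.sin(theta), z])
pts += rng.normal(scale=0.01, size=pts.shape)

profile = radius_profile(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(profile(2.5))  # approximately 2.0, the cylinder radius at mid-height
```

This division of labor mirrors the two principles above: vision contributes the global estimate (the axis), while the sparse but metrically accurate tactile contacts pin down the local profile.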
