Vision-based multisensor integration in remote-brained robots

This paper presents a method for integrating multiple sensors in a "remote-brained robot", that is, one whose processing power is located off-board and accessed over wireless links. The remote-brained approach allowed the authors to build a robot with a free body and a heavy brain. A central issue in integrating multiple sensors in a remote-brained robot is how to multiplex the multisensor information. The authors examined several configurations and developed a method that uses a video screen to combine touch, force, and visual information into a single sensor image. The paper describes the possible configurations for multisensor integration in remote-brained robots and presents real examples of vision-based sensor integration on a dynamic biped robot and a multi-limbed ape-like robot.
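
To make the video-multiplexing idea concrete, the sketch below encodes scalar touch and force readings into a reserved strip of the camera frame, so that a single wireless video channel carries all of the sensor information, and the brain side decodes the strip back into numbers. This is a hypothetical Python/NumPy sketch under stated assumptions, not the authors' implementation: the strip width, band layout, and all function names are assumptions made for illustration.

```python
# Hypothetical sketch: multiplexing scalar sensor readings into a video frame
# so one wireless video link carries touch, force and visual data together.
# The 8-column strip and band layout are illustrative assumptions.
import numpy as np

STRIP_WIDTH = 8      # columns on the right edge reserved for sensor data
SENSOR_MAX = 255.0   # full-scale sensor reading mapped to pixel value 255

def encode_sensors(frame: np.ndarray, readings: list[float]) -> np.ndarray:
    """Overwrite a strip of the camera frame with sensor values.

    Each reading occupies one horizontal band of the strip, written as a
    uniform 8-bit intensity; a solid band survives a noisy video link
    better than a single pixel would.
    """
    out = frame.copy()
    band_h = frame.shape[0] // len(readings)
    for i, r in enumerate(readings):
        level = int(np.clip(r / SENSOR_MAX, 0.0, 1.0) * 255)
        out[i * band_h:(i + 1) * band_h, -STRIP_WIDTH:] = level
    return out

def decode_sensors(frame: np.ndarray, n_sensors: int) -> list[float]:
    """Recover the readings on the 'brain' side by averaging each band,
    which rejects per-pixel transmission noise."""
    band_h = frame.shape[0] // n_sensors
    strip = frame[:, -STRIP_WIDTH:]
    return [float(strip[i * band_h:(i + 1) * band_h].mean())
            for i in range(n_sensors)]

if __name__ == "__main__":
    camera = np.zeros((240, 320), dtype=np.uint8)      # stand-in camera image
    tx = encode_sensors(camera, [30.0, 120.0, 200.0])  # touch, force x, force y
    print(decode_sensors(tx, 3))                       # ~[30.0, 120.0, 200.0]
```

Averaging each band on the decode side is one plausible reason to spend a whole strip of pixels per reading: the redundancy trades image area for robustness of the non-visual channels over the same link.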
