GripSee: A Robot for Visually-Guided Grasping

We have designed an anthropomorphic robot system at our institute as a research platform and demonstrator for the next generation of service robots. GripSee is a visually guided robot endowed with a number of skills, each based on a different neural network architecture. The skills include the interpretation of human gestures, the localization and recognition of objects, the planning and generation of grasping movements, and the automatic calibration of eye-hand coordination. This paper gives an overview of the system and reports on our experiences in applying diverse neural network architectures on a real robot.

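The abstract only names the skills; as a rough, hypothetical illustration of what automatic eye-hand calibration can involve, the Python sketch below learns a mapping from observed gripper image coordinates to workspace coordinates using a nearest-prototype local linear map, a technique in the spirit of the incremental local linear mappings cited by the authors. Everything here (the LocalLinearMap class, the learning rates, and the toy true_mapping) is invented for illustration and is not taken from the paper. The idea is that each unit fits a local tangent plane, so the learned map can track a nonlinear camera-to-arm transform.

    import numpy as np

    class LocalLinearMap:
        """Nearest-prototype local linear mapping (illustrative sketch).
        Each unit stores an input prototype w_in, an output offset w_out,
        and a local Jacobian A; near w_in the prediction is
        w_out + A @ (x - w_in)."""

        def __init__(self, n_units, in_dim, out_dim, seed=0):
            rng = np.random.default_rng(seed)
            self.w_in = rng.uniform(-1.0, 1.0, size=(n_units, in_dim))
            self.w_out = np.zeros((n_units, out_dim))
            self.A = np.zeros((n_units, out_dim, in_dim))

        def _winner(self, x):
            # index of the prototype closest to x in input space
            return np.argmin(np.sum((self.w_in - x) ** 2, axis=1))

        def predict(self, x):
            k = self._winner(x)
            return self.w_out[k] + self.A[k] @ (x - self.w_in[k])

        def train_step(self, x, y, lr_in=0.05, lr_out=0.2, lr_A=0.1):
            k = self._winner(x)
            d = x - self.w_in[k]
            err = y - (self.w_out[k] + self.A[k] @ d)
            self.w_in[k] += lr_in * d             # move prototype toward input
            self.w_out[k] += lr_out * err         # delta rule on the offset
            self.A[k] += lr_A * np.outer(err, d)  # delta rule on the Jacobian

    # Usage: learn a toy image-to-workspace mapping. On a real robot the
    # training pairs would come from observing the gripper in the camera
    # images at arm poses known from the kinematics.
    def true_mapping(uv):
        # stand-in for the unknown camera-to-arm transform
        u, v = uv
        return np.array([0.5 * u + 0.1 * v, 0.3 * v - 0.2 * u, 0.1 * u * v])

    rng = np.random.default_rng(1)
    net = LocalLinearMap(n_units=20, in_dim=2, out_dim=3)
    for _ in range(5000):
        uv = rng.uniform(-1.0, 1.0, size=2)   # gripper position in the image
        net.train_step(uv, true_mapping(uv))  # paired with its workspace pose

    test = rng.uniform(-1.0, 1.0, size=2)
    print("prediction:", net.predict(test))
    print("target:    ", true_mapping(test))

Because the training pairs are generated by the robot observing its own gripper, such a calibration loop needs no external measurement rig, which is presumably what makes the calibration "automatic" in the sense of the abstract.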