Sensorimotor processes for learning object representations

Learning object representations by exploration is of great importance for cognitive robots that need to learn about their environment without external help. In this paper we present sensorimotor processes that enable a robot to observe grasped objects from all relevant viewpoints, which makes it possible to learn viewpoint-independent object representations. Taking control of the object allows the robot to focus on the relevant parts of the images, thus bypassing potential pitfalls of purely bottom-up attention and segmentation. We propose a systematic method for controlling the robot so that it achieves the maximum range of motion across the 3-D view sphere. This is done by exploiting the task redundancies typically available in a humanoid arm and by avoiding the joint limits of the robot. The proposed method brings the robot into configurations that are well suited for observing objects and enables a wider range of snapshots to be acquired without regrasping the object.
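
The abstract does not include code, so the following is only a minimal sketch of the kind of redundancy-resolution scheme it describes: the desired task velocity (e.g. rotating the grasped object in front of the camera) is tracked through the pseudoinverse of the task Jacobian, while the null-space projection is used as a secondary objective to push joints away from their limits. The 7-DOF joint limits, the jacobian() stub, and all function names are illustrative assumptions, not values or interfaces from the paper.

# Minimal sketch (not the authors' implementation) of resolved-rate control for a
# redundant arm: the primary task velocity is tracked with the Jacobian
# pseudoinverse, and the null space is used for joint-limit avoidance.
import numpy as np

# Assumed symmetric joint limits for a 7-DOF humanoid arm (illustrative only).
Q_MIN = np.deg2rad([-170, -120, -170, -120, -170, -120, -170])
Q_MAX = -Q_MIN

def jacobian(q: np.ndarray) -> np.ndarray:
    """Placeholder 6x7 task Jacobian; a real system would query its kinematic model."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((6, q.size))

def joint_limit_gradient(q: np.ndarray) -> np.ndarray:
    """Gradient of a quadratic distance-from-mid-range cost; descending it
    drives each joint toward the centre of its range."""
    q_mid = 0.5 * (Q_MIN + Q_MAX)
    q_range = Q_MAX - Q_MIN
    return (q - q_mid) / q_range**2

def redundant_velocity(q: np.ndarray, x_dot: np.ndarray, k_null: float = 1.0) -> np.ndarray:
    """q_dot = J# x_dot + (I - J# J) q0_dot: task tracking plus
    null-space joint-limit avoidance."""
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    null_proj = np.eye(q.size) - J_pinv @ J
    q0_dot = -k_null * joint_limit_gradient(q)  # secondary objective: stay near mid-range
    return J_pinv @ x_dot + null_proj @ q0_dot

if __name__ == "__main__":
    q = np.zeros(7)                                    # current joint configuration
    x_dot = np.array([0.0, 0.0, 0.0, 0.1, 0.0, 0.0])   # desired object rotation rate
    print(redundant_velocity(q, x_dot))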
