Two Neural Approaches for Solving Reaching Tasks with Redundant Robots

In this paper, two solutions for learning to reach targets in 3D space with redundant robots are presented. The first is based on Hyper Radial Basis Function (HRBF) networks that learn to compute a locally linear approximation of the Jacobian pseudoinverse at each robot joint configuration. The second is an alternative solution inspired by the human ability to map sensory stimuli onto the motor actuators at the joints, sending motor commands to each articulation and avoiding, during most phases of the movement, feedback of visual information. Both models have been implemented on a visuo-motor robotic platform for reaching and grasping applications. The results obtained allow the robustness and accuracy of the two neural models to be compared on reaching and tracking tasks, and show how each provides a solution when redundant robots are considered.
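
As a point of reference for the quantity the HRBF networks learn to approximate, the following is a minimal, hypothetical sketch of classical resolved-rate reaching with the Jacobian pseudoinverse on an assumed planar 3-link redundant arm. The link lengths, gain, and function names are illustrative assumptions and are not taken from the paper or its robotic platform.

```python
# Hypothetical sketch (not the paper's implementation): resolved-rate reaching
# with the Jacobian pseudoinverse, the mapping that the HRBF network is said
# to approximate locally. A planar 3-link arm (3 joints, 2-D task space) is
# assumed as a simple example of a redundant robot.
import numpy as np

LINKS = np.array([0.4, 0.3, 0.2])  # assumed link lengths (m)

def forward_kinematics(q):
    """End-effector position of the planar 3-link arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(angles)),
                     np.sum(LINKS * np.sin(angles))])

def jacobian(q):
    """Analytic 2x3 Jacobian of the end-effector position."""
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        # Column i sums the contributions of links i..n-1
        J[0, i] = -np.sum(LINKS[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(LINKS[i:] * np.cos(angles[i:]))
    return J

def reach(q, target, gain=0.5, tol=1e-4, max_iters=200):
    """Iteratively drive the end effector toward `target` via dq = J^+ dx."""
    for _ in range(max_iters):
        error = target - forward_kinematics(q)
        if np.linalg.norm(error) < tol:
            break
        dq = np.linalg.pinv(jacobian(q)) @ (gain * error)
        q = q + dq
    return q

q_final = reach(np.array([0.1, 0.2, 0.3]), target=np.array([0.5, 0.4]))
print(forward_kinematics(q_final))  # close to the target after convergence
```

In the paper's first approach, the analytic `jacobian` and `pinv` step above would be replaced by the locally linear map produced by the trained HRBF network at the current joint configuration.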