Visual Servoing of a Robotic Manipulator in a Virtual Learned Articular Space

Abstract A position control approach for a robotic manipulator based on visual feedback is presented. This feedback, originating from a pair of fixed stereo cameras, is converted into a Virtual Articular Space, so called because it is an approximation of the real articular space of the robot. This approximation is generated by a multilayer neural network trained to build the correspondence between the visual information of the robot hand and the articular position of the arm. By using this mapping we avoid the complexity of the analytic approach, which requires both the robot inverse kinematics and the inverse camera-space mapping, including camera calibration. The approach is tested experimentally in real time on a 5-degree-of-freedom laboratory manipulator, including the required cameras and image-processing boards.
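
The sketch below illustrates the idea stated in the abstract: a multilayer network is trained on sampled (visual features, joint angles) pairs so that, at run time, measured hand-image features are mapped directly to a joint-space estimate, with no inverse kinematics or camera calibration. It is a minimal illustration only; the paper does not specify the network topology, feature vector, or training procedure, so the feature dimension, hidden-layer size, and the synthetic stand-in stereo model (stereo_features) are all assumptions introduced here for demonstration.

    import numpy as np

    rng = np.random.default_rng(0)

    N_JOINTS, N_FEAT, N_HID = 5, 4, 30  # 5-DOF arm (from the paper); 4 image
                                        # features and 30 hidden units are assumed

    # Hypothetical stand-in for the fixed stereo pair: in the real setup these
    # values come from the cameras and image-processing boards; here a smooth
    # nonlinear map of the joint angles plays that role.
    C1 = rng.normal(size=(N_FEAT, N_JOINTS))
    C2 = rng.normal(size=(N_FEAT, N_JOINTS))

    def stereo_features(q):
        return np.sin(q) @ C1.T + np.cos(q) @ C2.T

    # Training set: sampled arm postures and their visual appearance.
    Q = rng.uniform(-np.pi / 2, np.pi / 2, size=(2000, N_JOINTS))
    X = stereo_features(Q)

    # One-hidden-layer network: image features -> virtual articular space.
    W1 = rng.normal(scale=0.5, size=(N_FEAT, N_HID)); b1 = np.zeros(N_HID)
    W2 = rng.normal(scale=0.5, size=(N_HID, N_JOINTS)); b2 = np.zeros(N_JOINTS)
    lr = 1e-3

    for epoch in range(5000):
        H = np.tanh(X @ W1 + b1)           # hidden activations
        Y = H @ W2 + b2                    # predicted joint angles
        dY = 2.0 * (Y - Q) / len(Q)        # gradient of the MSE loss w.r.t. Y
        dH = (dY @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
        W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

    def virtual_articular_position(features):
        """Map measured hand-image features to the learned joint estimate."""
        return np.tanh(features @ W1 + b1) @ W2 + b2

    # Servo step: drive the arm toward the virtual articular position of a
    # visually specified target (here a synthetic target posture).
    q_target = rng.uniform(-1.0, 1.0, size=N_JOINTS)
    f_target = stereo_features(q_target[None, :])
    q_cmd = virtual_articular_position(f_target)[0]
    print("target :", np.round(q_target, 2))
    print("command:", np.round(q_cmd, 2))

In this reading, the network output is treated as a joint-space reference for the arm's low-level position controller, which is what makes the learned space "articular": the visual goal is expressed directly in (approximate) joint coordinates rather than in image or Cartesian space.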