Image-based visual servoing for robot positioning tasks

Abstract. Visual servoing has become a popular paradigm for the control of complex robotic systems: this sensor-based approach exploits the image information provided by one or more cameras in a feedback control loop to drive the system to the desired configuration. This work considers a monocular system in which the camera is mounted on the end-effector of a 6-DOF manipulator. Among the different visual servoing approaches, Image-Based Visual Servoing (IBVS) has been the most widely investigated in the literature because of its robustness with respect to both robot modeling and camera calibration errors: in IBVS the control loop is in fact closed directly in the image space; moreover, IBVS does not require knowledge of the target/scene model (model-free approach). Despite these advantages, IBVS may suffer from singularities and local minima of the control law: these drawbacks arise especially when the initial and goal camera images, corresponding respectively to the current and desired system configurations, are very different (i.e., for large system displacements). To overcome these problems, path planning in the image can be exploited to ensure system convergence. This paper presents an off-line image path planning method that allows positioning tasks to be executed even in the presence of large camera displacements: the planned trajectories make the robot end-effector move along a 3D helix connecting the initial and desired arm configurations, generating feasible robot twist-screws while keeping the target in the camera field of view. During control execution, 3D target information is also retrieved through an adaptive estimation law. Both simulation and experimental results show the feasibility of the proposed approach.
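As background on the control law mentioned above, a minimal sketch of the classical IBVS formulation in standard notation is given here (assuming current point image features $s$, desired features $s^{*}$, a positive gain $\lambda$, and an approximation $\widehat{L_e}$ of the interaction matrix, which depends on the unknown feature depths):

\[
e = s - s^{*}, \qquad v_c = -\lambda\, \widehat{L_e}^{+}\, e ,
\]

where $v_c$ is the commanded camera twist and $\widehat{L_e}^{+}$ denotes the Moore-Penrose pseudo-inverse. Singularities of $\widehat{L_e}$ and local minima of this law under large displacements are precisely the issues that motivate the image path planning proposed in this paper.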