Visual Servoing Based on a Task Function Approach

Recent advances in vision sensor technology and image processing make it reasonable to hope that using vision data directly in the control loop of a robot is no longer utopian. The general approach in robot vision is commonly the following: vision data are processed in the frame attached to the sensor, converted into the frame attached to the scene by means of the inverse calibration matrix, used to compute, with respect to the robot task, the control vector of the robot in the scene frame, and the robot is finally controlled using the inverse kinematic model. This scheme works in open loop with respect to the vision data and cannot take into account inaccuracies and uncertainties occurring during the processing. Such an approach requires a perfect knowledge of the constraints of the problem: the geometry of the sensor (for example, in a stereovision method), the model of the environment and the model of the robot. In some cases this approach is the only one possible but, in many cases, an alternative is to specify the robot task in terms of control directly in the sensor frame. This approach is often referred to as visual servoing [8],
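To make the contrast concrete, the following is a minimal sketch of the classical open-loop ("look-then-move") pipeline described above. All function names, frames and numerical values are hypothetical placeholders introduced for illustration only; they are not the paper's notation.

```python
import numpy as np

def extract_feature_sensor_frame(image):
    """Hypothetical image processing: return a 3D point expressed in the
    sensor (camera) frame, in homogeneous coordinates."""
    return np.array([0.1, 0.0, 0.5, 1.0])

def compute_control_vector(point_scene, goal_scene):
    """Hypothetical task: drive the end effector toward the target point.
    Returns a desired Cartesian displacement in the scene frame."""
    return goal_scene - point_scene[:3]

def inverse_kinematics(displacement_scene):
    """Hypothetical inverse kinematic model mapping a Cartesian
    displacement to joint commands (identity placeholder)."""
    return displacement_scene

# Inverse calibration matrix: transformation from the sensor frame to the
# scene frame (assumed to be known exactly, which is precisely the
# weakness of the open-loop scheme).
T_scene_sensor = np.eye(4)

image = None                                     # stand-in for an acquired image
p_sensor = extract_feature_sensor_frame(image)   # 1. process vision data in the sensor frame
p_scene = T_scene_sensor @ p_sensor              # 2. convert into the scene frame
goal_scene = np.array([0.0, 0.0, 0.3])           # desired position in the scene frame
v_scene = compute_control_vector(p_scene, goal_scene)  # 3. compute the control vector
q_cmd = inverse_kinematics(v_scene)              # 4. apply the inverse kinematic model

# Vision data are not used again after step 1: errors in T_scene_sensor or
# in the models are never corrected, which is what motivates closing the
# loop directly in the sensor frame instead.
```

The sketch only illustrates the data flow; in a visual servoing scheme, steps 2 to 4 would be replaced by a control law expressed on the image features themselves.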