Combining vision-based information and partial geometric models in automatic grasping

The problem of making sensing and acting techniques cooperate to achieve a manipulation task in a partially structured environment is treated in the context of automatic grasping, where the decisional process is guided by a combination of partial geometric models and vision data. The geometric models represent the known information about the robot workspace and the object to be grasped. The vision-based information is collected at execution time using a 2D camera and a 3D vision sensor, both mounted on the robot end effector. Robot motions and sensing operations must therefore be combined both to acquire the missing information and to guide the grasping movements. This is achieved through three processing phases, aimed respectively at selecting a viewpoint that avoids occlusions, modeling the local environment of the object to be grasped, and determining the grasping parameters.
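
For concreteness, the following is a minimal Python sketch of how the three processing phases could be chained. All names, data structures, and heuristics here are illustrative assumptions; the abstract does not specify the authors' actual implementation.

```python
"""Hypothetical sketch of the three-phase grasping pipeline.

Assumptions (not from the source): viewpoint quality is given by an
occlusion-scoring callable, the local environment is a voxel set, and
grasp candidates are tested against a collision-checking callable.
"""
from dataclasses import dataclass
import numpy as np


@dataclass
class GraspPlan:
    approach_pose: np.ndarray  # 4x4 homogeneous transform of the end effector
    finger_opening: float      # gripper opening width in meters


def select_viewpoint(candidate_poses, occlusion_score):
    """Phase 1: choose the sensor pose whose view of the target object is
    least occluded, using a score derived from the partial geometric model
    (passed in here as a callable)."""
    return min(candidate_poses, key=occlusion_score)


def model_local_environment(known_points, sensed_points, voxel=0.02):
    """Phase 2: merge obstacle points known from the geometric model with
    3D points sensed at execution time, discretized into a voxel set that
    stands in for the local environment model."""
    cells = {tuple(np.floor(p / voxel).astype(int)) for p in sensed_points}
    cells |= {tuple(np.floor(p / voxel).astype(int)) for p in known_points}
    return cells


def determine_grasp(grasp_candidates, environment, collides) -> GraspPlan:
    """Phase 3: return the first candidate grasp whose approach pose and
    finger opening are collision-free with respect to the local model."""
    for plan in grasp_candidates:
        if not collides(plan, environment):
            return plan
    raise RuntimeError("no collision-free grasp found")
```

Passing the occlusion score and collision test in as callables keeps the sketch self-contained while leaving the model-specific computations, which the abstract leaves unspecified, to the caller.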