Reasoning About Grasping

The promise of robots for the future is that of intelligent, autonomous machines functioning in a variety of tasks and situations. If this promise is to be met, robots must be capable of grasping and manipulating a wide range of objects in the execution of highly variable tasks. A current model of human grasping divides the grasp into two stages, a precontact stage and a postcontact stage. In this paper, we present a rule-based reasoning system and an object representation paradigm for a robotic system that uses this model to reason about grasping during the precontact stage. Sensed object features and their spatial relations are used to invoke a set of hand preshapes and reach parameters for the robot arm/hand. The system has been implemented in PROLOG, and results are presented to illustrate how the system functions.
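The abstract describes rules that map sensed object features and spatial relations to hand preshapes and reach parameters. As a rough illustration of that idea (not the paper's actual PROLOG rules), the following sketch encodes a few hypothetical feature-to-preshape rules; all feature names, preshape labels, and thresholds here are assumptions for illustration only.

```python
# Illustrative sketch of rule-based precontact grasp selection.
# The feature keys, preshape names, and numeric thresholds are
# hypothetical, not taken from the paper.

def select_preshape(features):
    """Return (preshape, reach_params) for a dict of sensed object features."""
    width = features.get("width_cm", 0.0)
    has_handle = features.get("has_handle", False)
    is_flat = features.get("is_flat", False)

    # Ordered rules: the first matching condition fires, as in a
    # Prolog-style clause search.
    if has_handle:
        preshape = "hook"             # graspable handle: hook grip
    elif is_flat or width < 2.0:
        preshape = "pinch"            # small or flat object: fingertip opposition
    elif width < 8.0:
        preshape = "cylindrical"      # medium object: palm opposition wrap
    else:
        preshape = "spherical"        # large object: whole-hand enclosure

    # Reach parameters: approach along the object's major axis with a
    # hand aperture slightly larger than the sensed width.
    reach = {"approach_axis": features.get("major_axis", "z"),
             "aperture_cm": width + 1.5}
    return preshape, reach

print(select_preshape({"width_cm": 1.2, "is_flat": True}))
```

A real system would derive such rules from a richer object representation and a taxonomy of grasps; the point here is only the structure of the mapping from sensed features to a preshape plus reach parameters.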
