Grasp programming by demonstration in virtual reality with automatic environment reconstruction

A virtual reality system enabling high-level programming of robot grasps is described. The system supports programming by demonstration (PbD), an approach aimed at simplifying robot programming and enabling even inexperienced users to transfer task knowledge to a robotic system with ease. Programming robot grasps from human demonstrations requires an analysis phase, comprising learning and classification of human grasps, as well as a synthesis phase, in which an appropriate human-demonstrated grasp is imitated and adapted to a specific robotic device and to the object to be grasped. The virtual reality system described in this paper supports both phases, thereby enabling end-to-end imitation-based programming of robot grasps. Moreover, since in the PbD approach robot-environment interactions are no longer explicitly programmed, the system includes a method for automatic environment reconstruction that relieves the designer from manually editing the pose of each object in the scene and enables intelligent manipulation. A workspace modeling technique based on monocular vision and on the computation of edge-face graphs is proposed. The modeling algorithm runs in real time and supports registration of multiple views. Object recognition and workspace reconstruction, along with grasp analysis and synthesis, have been tested in simulated tasks involving 3D user interaction and the programming of assembly operations. The experiments reported in the paper assess the capabilities of the three main components of the system: the grasp recognizer, the vision-based environment modeling system, and the grasp synthesizer.
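The analysis phase mentioned above hinges on classifying a demonstrated human grasp from hand-posture data (e.g., data-glove joint angles). As a minimal sketch of this idea, the snippet below assigns a posture to the nearest class prototype; the class names, prototype vectors, and five-angle encoding are illustrative assumptions, not the paper's actual taxonomy or learned model.

```python
import numpy as np

# Hypothetical prototype joint-angle vectors (radians) for three grasp classes
# in a simplified taxonomy; a real PbD system would learn these prototypes
# from glove recordings of human demonstrations.
PROTOTYPES = {
    "power": np.array([1.2, 1.3, 1.1, 1.2, 0.9]),
    "precision": np.array([0.4, 0.5, 0.4, 0.3, 0.6]),
    "lateral": np.array([0.8, 0.2, 0.1, 0.1, 1.0]),
}

def classify_grasp(joint_angles):
    """Return the grasp class whose prototype is nearest in Euclidean distance."""
    x = np.asarray(joint_angles, dtype=float)
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(PROTOTYPES[k] - x))
```

A nearest-prototype rule is only one option; the literature surveyed here also uses neural networks and hidden Markov models for the same recognition step, trading simplicity for robustness to temporal variation in the demonstration.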