Spatio-temporal modeling of grasping actions

Understanding the spatial dimensionality and temporal context of human hand actions can provide representations for programming grasping actions in robots and inspire the design of new robotic and prosthetic hands. The natural representation of human hand motion is high-dimensional. For specific activities such as the handling and grasping of objects, the commonly observed hand motions lie on a lower-dimensional, non-linear manifold in hand posture space. Although full-body human motion is well studied in computer vision and biomechanics, there is very little work on the analysis of hand motion with non-linear dimensionality reduction techniques. In this paper we use Gaussian Process Latent Variable Models (GPLVMs) to model the lower-dimensional manifold of human hand motions during object grasping. We show how the technique can be used to embed high-dimensional grasping actions in a lower-dimensional space suitable for modeling, recognition and mapping.
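
As a concrete illustration of the modeling step, the sketch below fits a GPLVM to hand-posture data and recovers a two-dimensional embedding. This is a minimal sketch under stated assumptions, not the paper's implementation: it uses the GPy library's GPLVM, and randomly generated joint-angle trajectories stand in for real grasping recordings (e.g., from a data glove or motion capture).

```python
# Minimal GPLVM sketch (assumption: GPy is used; data are synthetic stand-ins
# for recorded joint angles of grasping actions).
import numpy as np
import GPy

# Synthetic grasp-like data: N frames of D joint angles generated from a hidden
# 2D trajectory, so a low-dimensional manifold actually exists in the data.
rng = np.random.default_rng(0)
N, D, Q = 200, 20, 2                       # frames, joint angles, latent dims
t = np.linspace(0, 2 * np.pi, N)
latent = np.column_stack([np.sin(t), np.cos(2 * t)])      # hidden 2D trajectory
W = rng.normal(size=(Q, D))
Y = np.tanh(latent @ W) + 0.05 * rng.normal(size=(N, D))  # nonlinear lift + noise

# Fit a GPLVM with an ARD RBF kernel; the latent positions X are optimized
# jointly with the kernel hyperparameters by maximizing the GP likelihood.
kernel = GPy.kern.RBF(Q, ARD=True)
model = GPy.models.GPLVM(Y, input_dim=Q, kernel=kernel)
model.optimize(messages=False, max_iters=1000)

X = np.asarray(model.X)                    # learned low-dimensional embedding
print("embedding shape:", X.shape)         # (200, 2)

# New latent points map back to full joint-angle space through the GP mean,
# which is what makes the latent space usable for synthesis as well as
# recognition and mapping.
Y_rec, _ = model.predict(X[:5])
print("reconstruction shape:", Y_rec.shape)  # (5, 20)
```

In practice, Y would hold the recorded joint angles of grasping actions, and a back-constrained or dynamical variant of the GPLVM could be substituted when temporal smoothness of the embedding is important.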
