Fast recognition of postures for a simplified three-fingered artificial hand

Real-time navigation is one of the major trends in mobile robotics. Visual navigation of a hand requires the integration of visual processes with motor control. Fast computer analysis of articulated objects is possible only under strong constraints on the features to be identified, the postures obtained by grouping processes, and the motions connecting postures. In this paper we obtain a symbolic, or virtual, local representation of a two-fingered articulated hand in terms of angles measured at 0-dimensional features (double and triple junctions), which are connected by meaningful segments in planar representations. Knowledge of the geometry of the artificial hand and of the kinematics of articulated mechanisms is applied to provide a fast symbolic representation of postures in terms of matrix spaces, which we store as a certain type of Grassmannian. Geometric information about the kinematic multichains corresponding to the fingers is used to predict and track the evolution of the symbolic representation associated with postures, by describing linear symbolic models for physiological restrictions based on artificial mechanisms.
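The paper gives no implementation details, but the planar chain model it describes (junctions connected by meaningful segments, with postures encoded by the angles at each junction) can be illustrated with a minimal forward-kinematics sketch. The function name, argument layout, and the two-link example below are illustrative assumptions, not part of the paper:

```python
import math

def junction_positions(joint_angles, link_lengths, base=(0.0, 0.0)):
    """Illustrative planar forward kinematics for one finger modelled
    as an articulated chain of segments.

    joint_angles -- relative rotation (radians) at each junction
    link_lengths -- length of the segment after each junction
    Returns the 2-D positions of the base and of every junction/tip,
    which is the kind of 0-dimensional feature set the symbolic
    representation is built from.
    """
    x, y = base
    theta = 0.0
    points = [(x, y)]
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                      # accumulate relative rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# A two-segment finger flexed 90 degrees at the base junction:
pts = junction_positions([math.pi / 2, 0.0], [1.0, 1.0])
```

Tracking a posture then amounts to updating the angle vector over time and recomputing the junction positions, which is consistent with the linear symbolic models for posture evolution mentioned in the abstract.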
