Virtual Agent for Deaf Signing Gestures

We describe in this paper a system for automatically synthesizing deaf signing animations from motion data captured on real deaf subjects, and we build a virtual agent endowed with expressive gestures. We focus on the expressiveness of gesture (e.g., fluidity, tension, anger) and on its semantic representation. Our approach relies on a data-driven animation scheme: from motion data captured with an optical motion-capture system and data gloves, we extract relevant features of communicative gestures and then re-synthesize them with style variations. Within this framework, a motion database containing whole-body motion, hand motion, and facial expressions has been built. Signal analysis enables the enrichment of this database with segmentation and annotation descriptors. Analysis and synthesis algorithms are applied to generate a set of French Sign Language gestures.

Keywords: communication for deaf people, sign language gestures, virtual signer agent, gesture database.
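The abstract mentions a motion database whose clips combine body, hand, and face channels and are enriched with segmentation and annotation descriptors. As a minimal sketch of what one such database entry might look like, here is a hypothetical Python structure; the paper does not specify a schema, so every field and label below is an illustrative assumption, not the authors' actual format.

```python
from dataclasses import dataclass, field

@dataclass
class MotionClip:
    """One captured sequence: body, hand, and face channels plus annotations.

    Field names are illustrative assumptions; the paper does not
    describe the actual database schema.
    """
    body_frames: list      # per-frame skeleton poses from the optical system
    hand_frames: list      # per-frame finger data from the data gloves
    face_frames: list      # per-frame facial expression parameters
    annotations: list = field(default_factory=list)  # (start, end, label) segments

    def segment(self, start: int, end: int, label: str) -> None:
        """Attach a segmentation/annotation descriptor to a frame range."""
        self.annotations.append((start, end, label))

# Hypothetical usage: mark a French Sign Language sign inside a clip.
clip = MotionClip(body_frames=[], hand_frames=[], face_frames=[])
clip.segment(0, 120, "LSF:BONJOUR")
```

Such per-range labels are what would let a synthesis stage retrieve a sign by name and apply a style variation to just that segment.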
