Generating gestures from speech

This article describes a first version of a system for translating speech into Spanish Sign Language. The proposed system consists of four modules: speech recognition, semantic analysis, gesture sequence generation, and gesture animation. For speech recognition and semantic analysis, we use modules developed by IBM and the University of Colorado, respectively. The gesture sequence generation module maps the semantic concepts obtained in the semantic analysis onto Spanish Sign Language gestures; this mapping is driven by a set of generation rules. For gesture animation, we have developed an animated character together with a strategy for reducing the effort of gesture creation: the system automatically generates all the agent positions needed to animate a gesture. In this process, the system uses a small number of key agent positions (two to three per second) and a set of interpolation strategies, both defined in advance by the service developer.
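The keyframe-plus-interpolation strategy can be sketched as follows. The joint-value pose representation, the output frame rate, and the use of plain linear interpolation are illustrative assumptions; the article only states that a few key agent positions per second are expanded into the full animation by interpolation strategies supplied by the service developer.

```python
def interpolate_gesture(keyframes, fps=20):
    """Expand sparse (time, pose) keyframes into per-frame poses.

    keyframes: list of (time_in_seconds, pose) pairs sorted by time,
               where pose is a tuple of joint values (hypothetical
               representation; the real agent's pose format may differ).
    Returns one pose per output frame, linearly interpolated between
    the surrounding keyframes.
    """
    frames = []
    start, end = keyframes[0][0], keyframes[-1][0]
    n_frames = int((end - start) * fps) + 1
    for i in range(n_frames):
        t = start + i / fps
        # Find the pair of keyframes that brackets time t.
        for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
            if t0 <= t <= t1:
                a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                frames.append(tuple(v0 + a * (v1 - v0)
                                    for v0, v1 in zip(p0, p1)))
                break
        else:
            # t falls at or beyond the last keyframe: hold the final pose.
            frames.append(keyframes[-1][1])
    return frames

# Two keyframes half a second apart (two keyframes per second), one joint:
# the developer supplies only the endpoints, the system fills in the rest.
frames = interpolate_gesture([(0.0, (0.0,)), (0.5, (1.0,))], fps=20)
```

With these toy numbers the two hand-authored positions expand to eleven animation frames, which is the point of the strategy: the service developer specifies only a few main positions and the system derives every intermediate one.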