Towards a Phonological Construction of Classifier Handshapes in 3D Sign Language

3D sign language generation has made real progress over the past several years. Many systems have been proposed to generate animated sign language through avatars; however, the technology is still young, and many fundamental features of sign language, such as facial expressions and other iconic devices, have been ignored in existing systems. In this paper, we focus on the generation and analysis of descriptive classifiers, also called Size and Shape Specifiers (SASSes), in 3D sign language data. We propose a new adaptation of the phonological structure of handshapes introduced by Brentari. Our adapted framework can encode 3D descriptive classifiers that express different amounts or sizes of shapes. We describe how our model is implemented through an XML framework. Our model links the phonological level to the 3D physical animation level: it is compliant both with sign language phonology as described by Brentari and by Liddell & Johnson, and with 3D animation standards.
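The XML framework itself is not detailed in this abstract. As a rough illustration only, a handshape description organized around Brentari-style feature classes (selected fingers, joint configuration, aperture) might be serialized along these lines; every element and attribute name below is a hypothetical placeholder, not the paper's actual schema.

```xml
<!-- Hypothetical SASS handshape encoding; names are illustrative,
     not taken from the paper's schema. -->
<handshape type="SASS">
  <!-- Selected-fingers node of the prosodic model -->
  <selectedFingers>
    <finger id="thumb"/>
    <finger id="index"/>
  </selectedFingers>
  <!-- Joint configuration of the selected fingers -->
  <joints flexion="curved"/>
  <!-- Aperture between thumb and fingers; could encode the depicted size -->
  <aperture state="open" width="medium"/>
</handshape>
```

A description at this level could, in principle, be mapped to avatar joint rotations at the 3D animation layer, which is the kind of phonology-to-animation link the abstract claims.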