Content-Based Video Analysis and Access for Finnish Sign Language: A Multidisciplinary Research Project

This paper outlines a multidisciplinary research project in which computer vision techniques for recognising and analysing gestures and facial expressions in video are developed and applied to the processing of sign language in general and Finnish Sign Language in particular. The project is a collaboration between four partners: Helsinki University of Technology, the University of Jyväskylä, the University of Art and Design, and the Finnish Association of the Deaf. The project has several objectives, of which the following four are the focus of this paper: (i) to adapt the existing PicSOM framework, developed at Helsinki University of Technology for content-based analysis of multimedia data, to the content-based analysis of sign language videos containing continuous signing; (ii) to develop a computer system that can identify sign and gesture boundaries and indicate, in the video, the sequences that correspond to signs and gestures; (iii) to apply the studied and developed methods and the resulting system to the automatic and semi-automatic indexing of sign language corpora; and (iv) to conduct a feasibility study on implementing mobile video access to sign language dictionaries and corpora. Methods for reaching these objectives are presented in the paper.
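The abstract does not describe how sign and gesture boundaries (objective ii) might be detected. As a purely illustrative baseline, not the project's actual method, one common starting point is to threshold a per-frame motion-energy signal and report the contiguous high-activity runs as candidate signing segments; the sketch below assumes such a signal is already available (the function name and values are hypothetical):

```python
# Illustrative baseline for locating sign/gesture boundaries:
# threshold a per-frame motion-energy signal and report contiguous
# runs of high activity as candidate signing segments. In practice
# the energies would come from frame differencing or tracked hand
# trajectories; here they are plain numbers for clarity.

def segment_by_motion_energy(energies, threshold):
    """Return (start, end) frame-index pairs of runs where energy >= threshold."""
    segments = []
    start = None
    for i, energy in enumerate(energies):
        if energy >= threshold and start is None:
            start = i                        # a candidate segment begins
        elif energy < threshold and start is not None:
            segments.append((start, i - 1))  # segment ended at the previous frame
            start = None
    if start is not None:                    # segment runs to the final frame
        segments.append((start, len(energies) - 1))
    return segments

# Example: low energy = rest pose, high energy = active signing.
print(segment_by_motion_energy([0, 0, 5, 6, 7, 0, 0, 4, 4, 0], threshold=3))
```

Real continuous signing lacks clean rest poses between signs, so such a baseline would at best propose candidate boundaries for refinement by the recognition system.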