Tracking Complex Movements under Assorted Backgrounds in Sign Language using Active Contour Models

This paper addresses an important problem in gesture recognition systems. Apart from hand shape, three important characteristics of sign language are hand and head orientation, location, and movement. To incorporate these characteristics into a sign language recognition system, we employ active contours for tracking the hands and head in sign language videos. Tracking is accomplished by fusing skin color, texture, boundary, and shape information. Skin color is modeled in the RGB (Red, Green, Blue) color space, from which a single color plane is selected based on the video background. Texture information is computed using a statistical co-occurrence matrix. Boundary information is obtained by computing the divergence of the extracted color and texture feature vector. The shape is computed dynamically and adapted to each video frame, which allows the hands and head to be tracked through occlusions and in complex video backgrounds. Tracking is achieved by level set energy minimization on each video frame. The performance of our tracking model is illustrated by tracking the signer's hands in image sequences recorded against simple, natural, and complex backgrounds.
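To make the pipeline described above concrete, the following is a minimal, illustrative sketch of such a tracker: a single color plane is taken as the skin cue, a co-occurrence (GLCM) contrast map serves as the texture cue, and an active contour is evolved by energy minimization of the Chan-Vese type (here via scikit-image's morphological approximation). The fusion weights, window size, and function names are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import morphological_chan_vese


def skin_color_plane(frame_rgb, plane=0):
    """Select one color plane (e.g. R) depending on the video background (assumed choice)."""
    return frame_rgb[..., plane].astype(np.float64) / 255.0


def glcm_contrast(gray_u8, window=16):
    """Coarse per-block GLCM contrast map used as a texture cue (assumed feature)."""
    h, w = gray_u8.shape
    tex = np.zeros((h // window, w // window))
    for i in range(tex.shape[0]):
        for j in range(tex.shape[1]):
            block = gray_u8[i * window:(i + 1) * window, j * window:(j + 1) * window]
            glcm = graycomatrix(block, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            tex[i, j] = graycoprops(glcm, 'contrast')[0, 0]
    # Upsample the block-wise map back to image size.
    up = np.kron(tex, np.ones((window, window)))
    out = np.zeros((h, w))
    out[:up.shape[0], :up.shape[1]] = up
    return out


def track_frame(frame_rgb, prev_mask=None, iterations=60):
    """Fuse color and texture cues, then evolve a Chan-Vese-style active contour."""
    color = skin_color_plane(frame_rgb)
    texture = glcm_contrast((color * 255).astype(np.uint8))
    texture = texture / (texture.max() + 1e-8)
    feature = 0.7 * color + 0.3 * (1.0 - texture)   # assumed fusion weights
    # Use the previous frame's mask as the initial level set so the contour
    # adapts frame to frame; fall back to a generic initialization otherwise.
    init = prev_mask if prev_mask is not None else 'checkerboard'
    mask = morphological_chan_vese(feature, iterations,
                                   init_level_set=init, smoothing=2)
    return mask  # reused as the shape/initialization prior for the next frame
```

In a tracking loop, `track_frame` would be called once per video frame, feeding each resulting mask back in as `prev_mask` so the contour follows the hands and head across frames.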