Activity detection in conversational sign language video for mobile telecommunication

The goal of the MobileASL project is to increase accessibility by making the mobile telecommunications network available to the signing Deaf community. Video cell phones enable Deaf users to communicate in their native language, American Sign Language (ASL). However, encoding and transmission of real-time video over cell phones is a power-intensive task that can quickly drain the battery.
