Vision-Based Interpretation of Natural Sign Languages
This manuscript outlines our current demonstration system for translating visual sign language into written text. The system is built around a broad description of scene activity that naturally generalizes, reducing training requirements and allowing the knowledge base to be stated explicitly. As a result, the same system can be applied to different sign languages with only a change of knowledge base.