Facial Expression Sequence Recognition for a Japanese Sign Language Training System
Considerable research has been conducted on techniques for analyzing facial expressions. Many researchers have employed general analytical methods using whole-face data; however, significant results focusing on sign language have not been obtained. Because facial expressions are very important in sign language, it is crucial that they are accurately captured in sign language training systems. Our research investigates a facial expression discrimination method specialized for a Japanese sign language training system. First, the system acquires a teacher's sign language and facial expressions and automatically adjusts its parameters. The system then acquires the shape of the learner's face and segments it into sections via machine learning. Finally, the sections and the degrees to which they have changed are integrated by fuzzy inference and judged as a facial expression. By referring to the system's facial expression judgment, the learner can better understand the type and degree of facial expression necessary for proper sign language. An experiment conducted on eight subjects showed that the system verified the validity of learners' sign language expressions with approximately 90.7% accuracy. We expect that this system can be used for facial expression training in specific applications.
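The fuzzy-inference step described above can be illustrated with a minimal sketch. The section names, membership-function breakpoints, and rule base below are illustrative assumptions for exposition only, not the actual parameters or rules used in the paper's system.

```python
# Hypothetical sketch: fuzzy inference over per-section facial change degrees.
# All section names, breakpoints, and rules are assumptions, not the paper's.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify(change):
    """Fuzzify a normalized change degree in [0, 1] into linguistic labels."""
    return {
        "small":  tri(change, -0.5, 0.0, 0.5),
        "medium": tri(change,  0.0, 0.5, 1.0),
        "large":  tri(change,  0.5, 1.0, 1.5),
    }

def judge_expression(sections):
    """Combine fuzzified section changes with simple min/max rules.

    sections: dict of normalized change degrees per facial section,
    e.g. {"eyebrows": 0.8, "eyes": 0.9, "mouth": 0.2}.
    Returns (expression_label, rule_strength).
    """
    eb = fuzzify(sections["eyebrows"])
    ey = fuzzify(sections["eyes"])
    mo = fuzzify(sections["mouth"])

    # Illustrative rule base (min acts as fuzzy AND):
    rules = {
        # raised eyebrows AND widened eyes -> interrogative expression
        "question": min(eb["large"], ey["large"]),
        # large mouth change AND medium eyebrow change -> emphatic expression
        "emphasis": min(mo["large"], eb["medium"]),
        # all sections nearly unchanged -> neutral expression
        "neutral":  min(eb["small"], ey["small"], mo["small"]),
    }
    label = max(rules, key=rules.get)
    return label, rules[label]

label, strength = judge_expression({"eyebrows": 0.8, "eyes": 0.9, "mouth": 0.2})
print(label, round(strength, 2))  # → question 0.6
```

The rule strength doubles as the "degree" feedback mentioned in the abstract: it tells the learner not only which expression was recognized but how strongly it was performed.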