Vision Based Acquisition of Mouth Actions for Human-Computer Interaction
We describe a computer-vision-based system that allows movements of the mouth to be used for human-computer interaction (HCI). The lower region of the face is tracked by locating and following the position of the nostrils. The nostril locations determine a sub-region of the image from which the cavity of the open mouth may be segmented. Shape features of the open mouth can then be used as continuous, real-time input for human-computer interaction. Several applications of the head-tracking mouth controller are described.
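The pipeline described above (nostril-anchored ROI, segmentation of the dark mouth cavity, extraction of shape features) can be sketched as follows. This is a minimal illustration under assumed parameters, not the authors' implementation: the function name `segment_mouth_cavity`, the ROI dimensions, and the darkness threshold are all hypothetical choices, and a synthetic frame stands in for real camera input.

```python
import numpy as np

def segment_mouth_cavity(frame, nostril_y, nostril_x,
                         roi_h=40, roi_w=60, dark_thresh=60):
    """Segment the dark open-mouth cavity in a sub-region below the nostrils.

    frame: 2-D uint8 grayscale image.
    Returns (mask, area, aspect_ratio); mask is a boolean ROI mask.
    All parameter values here are illustrative assumptions.
    """
    top = nostril_y + 5                      # ROI starts just below the nostrils
    left = max(nostril_x - roi_w // 2, 0)
    roi = frame[top:top + roi_h, left:left + roi_w]
    mask = roi < dark_thresh                 # open-mouth cavity pixels are dark
    area = int(mask.sum())
    if area == 0:
        return mask, 0, 0.0
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    aspect = width / height                  # wide vs. tall mouth opening
    return mask, area, aspect

# Synthetic test frame: bright "face" with a dark elliptical "mouth"
# placed below the assumed nostril position.
frame = np.full((120, 120), 200, dtype=np.uint8)
yy, xx = np.mgrid[:120, :120]
mouth = ((yy - 80) / 8.0) ** 2 + ((xx - 60) / 15.0) ** 2 <= 1.0
frame[mouth] = 20
mask, area, aspect = segment_mouth_cavity(frame, nostril_y=55, nostril_x=60)
```

The cavity area and aspect ratio are the kind of continuous shape features that could drive the real-time control applications mentioned in the abstract.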