Towards an American Sign Language interface

In this paper, we present two major parts of an interface, currently under development, between American Sign Language (ASL) and computer applications: a hand tracker and an ASL-parser. The hand tracker extracts information about handshape, position, and motion from image sequences. As an aid in this process, the signer wears a pair of gloves with colour-coded markers on the joints and fingertips. We also present a computational model of American Sign Language. This model is realized in an ASL-parser consisting of a DCG-grammar and a non-lexical component that records non-manual and spatial information over an ASL discourse.
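To illustrate the DCG-style parsing idea mentioned above, the following is a minimal sketch in Python rather than Prolog. The grammar, the gloss vocabulary (JOHN, BOOK, READ, SLEEP), and the rule names are hypothetical toy examples, not the paper's actual ASL grammar; the sketch only shows how DCG rules thread an input token list through the rules, each rule consuming a prefix and yielding the remainder.

```python
# Toy DCG-style recognizer (hypothetical example, not the paper's grammar).
# A "parser" takes a list of gloss tokens and yields each possible remainder
# after consuming a prefix, mirroring how a Prolog DCG threads difference lists.

def terminal(word):
    """Rule that consumes exactly one matching gloss token."""
    def parse(tokens):
        if tokens and tokens[0] == word:
            yield tokens[1:]
    return parse

def seq(*parsers):
    """Rule body: apply sub-rules in sequence, threading the remainder."""
    def parse(tokens):
        def step(ps, rest):
            if not ps:
                yield rest
            else:
                for r in ps[0](rest):
                    yield from step(ps[1:], r)
        yield from step(parsers, tokens)
    return parse

def alt(*parsers):
    """Alternative rule bodies, like multiple DCG clauses for one nonterminal."""
    def parse(tokens):
        for p in parsers:
            yield from p(tokens)
    return parse

# Toy rules: sentence --> noun, verb, noun  |  noun, verb
noun = alt(terminal("JOHN"), terminal("BOOK"))
verb = alt(terminal("READ"), terminal("SLEEP"))
sentence = alt(seq(noun, verb, noun), seq(noun, verb))

def accepts(tokens):
    """A token list is accepted if some derivation consumes it entirely."""
    return any(rest == [] for rest in sentence(tokens))

print(accepts(["JOHN", "READ", "BOOK"]))  # True
print(accepts(["READ", "JOHN"]))          # False
```

In a full system such as the one described, the non-lexical component would additionally carry non-manual and spatial state across the discourse alongside this purely syntactic thread.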