Give Me a Sign: A Person Independent Interactive Sign Dictionary

Abstract

This paper presents a method to perform person independent sign recognition. This is achieved by implementing generalising features based on sign linguistics. These are combined using two methods. The first is traditional Markov models, which are shown to lack the required generalisation. The second is a discriminative approach called Sequential Pattern Boosting, which combines feature selection with learning. The resulting system is introduced as a dictionary application, allowing signers to query by performing a sign in front of a Kinect™. Two data sets are used and results shown for both, with the query-return rate reaching 99.9% on a 20 sign multi-user dataset and 85.1% on a more challenging and realistic subject independent, 40 sign test set.

1 Introduction

While image indexes into search engines are becoming commonplace, the ability to search using an action or gesture is still an open research question. For sign language users, this makes looking up new signs in a dictionary a non-trivial task. All existing sign language dictionaries are complex to navigate due to the lack of a universal indexing feature (like the alphabet in written language). This work attempts to address this by proposing an interactive sign dictionary. The proposed dictionary can be queried by enacting the sign, live, in front of a Microsoft Kinect.
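To make the Sequential Pattern Boosting step concrete, the sketch below shows the test performed by a single weak classifier: does a learnt sequential pattern (an ordered list of feature sets) occur, in order, within the stream of binary linguistic features extracted from a query video? This is a minimal illustration under an assumed binary feature encoding; the function and variable names are hypothetical and do not reproduce the authors' implementation. A boosted strong classifier would combine many such pattern tests with learnt weights.

```python
# Minimal sketch of the pattern-containment test used as a weak classifier
# in Sequential Pattern Boosting. The binary-feature encoding and all names
# here are illustrative assumptions, not the paper's actual code.

from typing import FrozenSet, Sequence


def pattern_matches(pattern: Sequence[FrozenSet[int]],
                    video: Sequence[FrozenSet[int]]) -> bool:
    """Return True if `pattern` occurs as an ordered subsequence of `video`.

    Each frame of `video` is the set of binary linguistic features active
    at that instant (e.g. "hands moving apart", "right hand at chin").
    Each itemset of `pattern` must be fully contained in some frame, and
    the matching frames must appear in the same order as the pattern.
    Greedy earliest matching is sufficient: consuming an itemset at the
    first frame that contains it never prevents a later match.
    """
    i = 0  # index of the next pattern itemset to satisfy
    for frame in video:
        if i == len(pattern):
            break
        if pattern[i] <= frame:  # itemset contained in this frame
            i += 1
    return i == len(pattern)


# Toy query: feature 0 active, then features 1 and 2 active together.
pattern = [frozenset({0}), frozenset({1, 2})]
video = [frozenset({0, 3}), frozenset({3}), frozenset({1, 2, 4})]
print(pattern_matches(pattern, video))  # True
```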
