Tale of a robot: Humanoid robot assisted sign language tutoring

An ongoing study aims to assist in teaching Sign Language (SL) to hearing-impaired children through non-verbal communication and imitation-based interaction games between a humanoid robot and a child. In this study, the robot expresses words in SL from a chosen set using hand movements and body and facial gestures. Having comprehended a word, the child gives relevant feedback to the robot. In the current study, we propose an interactive storytelling game between a NAO H25 humanoid robot and preschool children based on Turkish Sign Language (TSL). Since most of the children cannot yet read or write and are not familiar with sign language, we prepared a short story containing specially selected words, which the robot performs both verbally and in sign language. The children are expected to give feedback to the robot by showing matching colour flashcards when it performs a word in sign language. The study involved 106 preschool children. The aim is to evaluate the children's ability to learn sign language from a robot and to compare these results with those of video-based studies.
