Teaching American Sign Language in Mixed Reality

This paper presents a holistic system for scaling up the teaching and learning of American Sign Language (ASL) vocabulary. The system leverages recent mixed-reality technology to let learners perceive their own hands in an immersive learning environment, with first- and third-person views for motion demonstration and practice. Precise motion sensing records and evaluates each attempt, providing real-time feedback tailored to the individual learner. As part of this evaluation, learner motions are matched against features derived from the Hamburg Notation System (HNS) developed by sign-language linguists. We built a prototype to evaluate the efficacy of mixed-reality-based interactive motion teaching. Results with 60 participants show a statistically significant improvement in learning ASL signs with our system, compared with traditional desktop-based, non-interactive learning. We expect this approach to ultimately enable teaching and guided practice of thousands of signs.
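
The abstract describes matching recorded learner motions against HamNoSys-derived features to generate feedback, but gives no implementation details. As a rough illustration only, the sketch below shows one plausible shape for that step in Python: per-frame hand features compared against per-sign reference sequences using dynamic time warping and a nearest-neighbor decision. Every name and design choice here (frame_features, the DTW matcher, the template dictionary) is an assumption for illustration, not the authors' actual pipeline.

```python
import numpy as np

# Minimal sketch of sign matching. All names and the DTW + nearest-neighbor
# pipeline are illustrative assumptions, not the paper's implementation.

def frame_features(joints: np.ndarray) -> np.ndarray:
    """Collapse one frame of tracked 3-D joint positions (J x 3) into a
    feature vector: absolute palm location plus finger offsets from the palm,
    loosely mirroring HamNoSys's location and handshape categories."""
    palm = joints[0]                      # assumes index 0 is the palm center
    offsets = joints[1:] - palm           # finger joints relative to the palm
    return np.concatenate([palm, offsets.ravel()])

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Length-normalized dynamic-time-warping distance between two
    feature sequences of shapes (n, F) and (m, F)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = float(np.linalg.norm(a[i - 1] - b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m]) / (n + m)

def score_attempt(attempt, templates):
    """Nearest-neighbor match of one learner attempt (T x J x 3 array of
    joint positions) against reference sequences {sign name: (T' x F) array}."""
    feats = np.stack([frame_features(f) for f in attempt])
    name, dist = min(
        ((sign, dtw_distance(feats, ref)) for sign, ref in templates.items()),
        key=lambda pair: pair[1],
    )
    return name, dist
```

A full system would presumably go further than a single distance, scoring each feature group separately so that feedback can be targeted (for example, handshape correct but movement path off), which is the kind of learner-specific, real-time feedback the abstract describes.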
