Gesture4all: a framework for 3D gestural interaction to improve accessibility of web videos

Recently, interest in video consumption and sharing has increased considerably. Although initiatives such as WCAG 2.0 set guidelines for the development of video players, there is still a need to investigate how to improve the accessibility of Web videos for blind and low-vision users. To offer researchers and developers new ways of interaction based on 3D gestures, we introduce a JavaScript-based framework, called Gesture4All, for interacting with Web videos using four sensors (smartphone, webcam, Myo, and Leap Motion). Our framework is seamlessly integrated with the interface, receiving and interpreting the gesture interactions performed by the user in order to control the Web video player's functionalities. In this paper, we present the features of Gesture4All as well as details of its architecture, and we demonstrate how the framework improves accessibility through a case study with 20 blind and visually impaired users. We also discuss the results and future work.
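
The abstract describes the framework only at a high level. As a rough illustration of the kind of gesture-to-player mapping it mentions, the JavaScript sketch below shows how recognized gestures could be dispatched to standard HTML5 video controls. The class name, gesture labels, and sensor-adapter wiring are hypothetical assumptions for illustration, not Gesture4All's actual API; only the HTMLMediaElement calls (play, pause, currentTime, volume) are standard Web APIs.

```javascript
// Minimal sketch (hypothetical API, not the paper's actual code): a dispatcher
// that maps recognized gesture names to HTML5 <video> player actions.
// Sensor adapters (smartphone, webcam, Myo, Leap Motion) would call
// dispatcher.handle(gestureName) after recognizing a gesture.
class GestureDispatcher {
  constructor(videoElement) {
    this.video = videoElement;
    // Gesture labels are illustrative; real mappings depend on the recognizer.
    this.actions = {
      'swipe-right': () => { this.video.currentTime += 10; }, // seek forward
      'swipe-left':  () => { this.video.currentTime -= 10; }, // seek backward
      'palm-open':   () => { this.video.paused ? this.video.play() : this.video.pause(); },
      'swipe-up':    () => { this.video.volume = Math.min(1, this.video.volume + 0.1); },
      'swipe-down':  () => { this.video.volume = Math.max(0, this.video.volume - 0.1); },
    };
  }

  // Called by a sensor adapter once a gesture has been recognized.
  handle(gestureName) {
    const action = this.actions[gestureName];
    if (action) action();
  }
}

// Usage: wire the dispatcher to the page's video player.
const dispatcher = new GestureDispatcher(document.querySelector('video'));
// A sensor adapter (e.g., a WebSocket bridge to a smartphone or desktop
// recognition service) would then forward recognized gestures:
// socket.onmessage = (msg) => dispatcher.handle(JSON.parse(msg.data).gesture);
```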
