GeeAir: a universal multimodal remote control device for home appliances

In this paper, we present a handheld device called GeeAir for remotely controlling home appliances via a mixture of modalities: speech, gesture, joystick, button, and light. This solution improves on existing universal remote controllers in that users with physical or vision impairments can operate it in a natural manner. By combining diverse interaction techniques in a single device, GeeAir enables different user groups to control home appliances effectively, satisfying even the previously unmet needs of physically and vision-impaired users while maintaining high usability and reliability. The experiments demonstrate that the GeeAir prototype achieves strong performance by standardizing a small set of verbal and gesture commands and introducing feedback mechanisms.
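The core idea of standardizing a small command vocabulary shared across modalities can be illustrated with a minimal sketch. This is not the paper's implementation; all names, mappings, and the dispatch logic below are hypothetical, showing only how raw events from several input channels might be normalized into one small command set, with unrecognized input signaled back to the user.

```python
# Hypothetical sketch (not from the paper): routing raw events from several
# input modalities into one small, standardized command vocabulary, in the
# spirit of the GeeAir design. All names and mappings are illustrative.

COMMANDS = {"power_on", "power_off", "volume_up", "volume_down", "next", "prev"}

# Each modality maps its own raw events onto the same shared command set.
MODALITY_MAPS = {
    "speech":   {"turn on": "power_on", "turn off": "power_off", "louder": "volume_up"},
    "gesture":  {"flick_up": "volume_up", "flick_down": "volume_down"},
    "button":   {"btn_power": "power_on"},
    "joystick": {"right": "next", "left": "prev"},
}

def dispatch(modality, raw_event):
    """Normalize a raw event into a standardized command, or None."""
    cmd = MODALITY_MAPS.get(modality, {}).get(raw_event)
    if cmd in COMMANDS:
        return cmd
    # Unrecognized input: a real device would emit audio or light feedback here.
    return None

print(dispatch("speech", "louder"))     # volume_up
print(dispatch("gesture", "flick_up"))  # volume_up
print(dispatch("gesture", "wave"))      # None
```

The design point this sketch captures is that a small fixed vocabulary keeps recognition tractable for every modality, and a single dispatch path makes it easy to attach uniform feedback for both accepted and rejected commands.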
