Human-Centered, Ergonomic Wearable Device with Computer Vision Augmented Intelligence for VR Multimodal Human-Smart Home Object Interaction
In the future, Human-Robot Interaction should be enabled by a compact, human-centered, and ergonomic wearable device that merges human and machine seamlessly by continuously identifying each other's intentions. In this paper, we showcase an ergonomic, lightweight wearable device that identifies a user's eye and facial gestures from physiological signal measurements. Since human intentions are often coupled with eye movements and facial expressions, interactions designed around these gestures let people communicate with robots or smart home objects naturally. Combined with computer vision object recognition algorithms, this allows people to use very simple and straightforward communication strategies to operate a telepresence robot and control smart home objects remotely, completely hands-free. A user can wear a VR head-mounted display, see through the robot's eyes (a remote camera mounted on the robot), and interact with smart home devices intuitively through simple facial gestures or eye blinks. As an assistive tool, this is tremendously beneficial for people with motor impairments. People without disabilities can likewise keep their hands free for other tasks while operating smart home devices, as part of a multimodal control strategy.
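To make the multimodal control strategy concrete, the sketch below shows one plausible way to fuse the wearable's gesture classifier output with the vision pipeline's object labels into smart home commands: a gesture selects the action, while the object recognized at the center of the robot's camera frame (the user's VR viewpoint) selects the target device. This is a minimal illustration under our own assumptions; the gesture labels, action names, and stream interfaces (`GESTURE_ACTIONS`, `control_loop`, `send_command`) are hypothetical placeholders, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class Command:
    device: str  # smart home object identified by the vision pipeline
    action: str  # action selected by the eye/facial gesture


# Hypothetical gesture vocabulary: a small, deliberate set of eye/facial
# gestures mapped to device actions (labels are illustrative only).
GESTURE_ACTIONS = {
    "double_blink": "toggle_power",
    "smile": "confirm",
    "frown": "cancel",
}


def send_command(cmd: Command) -> None:
    """Placeholder for forwarding a command to the smart home hub."""
    print(f"-> {cmd.device}: {cmd.action}")


def control_loop(gestures: Iterable[Optional[str]],
                 gaze_targets: Iterable[Optional[str]]) -> None:
    """Fuse synchronized gesture and vision streams into commands.

    `gestures` yields the wearable classifier's latest gesture label
    (or None when idle); `gaze_targets` yields the label of the
    recognized smart home object nearest the center of the robot's
    camera frame, i.e. the object the user is looking at in VR.
    """
    for gesture, target in zip(gestures, gaze_targets):
        if gesture not in GESTURE_ACTIONS:
            continue  # no deliberate gesture detected
        if target is None:
            continue  # no recognized device in the user's view
        send_command(Command(device=target, action=GESTURE_ACTIONS[gesture]))


# Simulated streams: the user double-blinks while looking at a lamp.
control_loop(["neutral", "double_blink"], [None, "lamp"])
```

Requiring both a deliberate gesture and a recognized gaze target before issuing a command is one simple way to avoid triggering devices on involuntary blinks or stray expressions.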