Research on Intention Flexible Mapping Algorithm for Elderly Escort Robot

With advances in science and technology and the accelerating aging of the world's population, elderly care robots have begun to enter daily life. However, current elderly care systems lack intelligence and are little more than a patchwork of traditional eldercare products, which fails to meet older users' need for systems that are easy to understand and operate. This paper therefore proposes a flexible mapping algorithm (FMFD), in which a single gesture can be flexibly mapped to multiple semantics within the same interactive context. First, the input gesture is combined with the surrounding environment to establish the current interactive context. Second, when the user expresses different semantics with the same gesture, the feature differences arising from differing cognition are used as the basis for mapping one gesture to multiple semantics. Finally, four commonly used gestures are designed to demonstrate the results of flexible mapping. Experiments show that, compared with traditional gesture-based human-computer interaction, the proposed flexible mapping scheme greatly reduces the number of gestures users must remember, improves the fault tolerance of gestures during interaction, and fits the design goals of elderly care robots.
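To make the one-gesture-to-many-semantics idea concrete, the sketch below shows a minimal context-dependent mapping in Python. This is an illustration only, not the paper's FMFD implementation: the `Context` attributes, gesture names, and semantic labels are all hypothetical, and a real system would derive the context from perception rather than hand-coded fields.

```python
# Minimal sketch (not the paper's implementation): one gesture maps to
# several candidate semantics, and the current interactive context
# selects which semantic is intended.
from dataclasses import dataclass


@dataclass
class Context:
    # Hypothetical context attributes an escort robot might observe.
    scene: str          # e.g. "living_room", "kitchen"
    nearby_object: str  # e.g. "tv", "cup", "door"


# Flexible mapping table: gesture -> {(scene, object): semantic}.
# All entries are illustrative only.
FLEX_MAP = {
    "point": {
        ("living_room", "tv"): "turn_on_tv",
        ("kitchen", "cup"): "fetch_cup",
        ("living_room", "door"): "open_door",
    },
    "wave": {
        ("living_room", "tv"): "turn_off_tv",
        ("kitchen", "cup"): "cancel_request",
    },
}


def resolve(gesture: str, ctx: Context) -> str:
    """Map one gesture to a context-dependent semantic; fall back to
    an 'unknown' intent so unmatched contexts stay recoverable."""
    return FLEX_MAP.get(gesture, {}).get(
        (ctx.scene, ctx.nearby_object), "unknown_intent"
    )


print(resolve("point", Context("kitchen", "cup")))     # fetch_cup
print(resolve("point", Context("living_room", "tv")))  # turn_on_tv
print(resolve("wave", Context("kitchen", "door")))     # unknown_intent
```

Because the same gesture resolves to different commands per context, the user's vocabulary stays small, and the explicit fallback gives the fault tolerance the abstract describes.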
