Real-time detection of the interaction between an upper-limb power-assist robot user and another person for perception-assist

Abstract Assisting the aged population or people with disabilities is a critical problem in today's world. Power-assist wearable robots have been proposed to compensate for their declined motor function. In some cases, however, the cognitive function of these populations may have declined as well, so compensating only for the motor-function deficiency may be insufficient. Perception-assist wearable robots, which perceive environmental information through visual sensors attached to them, have been proposed to address this problem. This study addresses the problem of identifying the motion intentions of the user of an upper-limb power-assist wearable robot while the user engages in desired interactions with others. Both interacting parties must be considered in order to predict the interaction accurately. Therefore, this paper presents an interaction recognition methodology that combines the motion intentions of both the user and the other party with environmental information. A fuzzy reasoning model is proposed to semantically combine these three sources of information. In this method, the motion intentions of the user and the other party are estimated simultaneously from kinematic information and visual information, respectively, and are then used to predict the interaction between the two parties. The effectiveness of the proposed approach is evaluated experimentally.
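As a rough illustration of how a fuzzy reasoning model might semantically combine the two estimated intentions with environmental information, the following Python sketch fuzzifies each party's intention-confidence score and the inter-person distance, applies a small Mamdani-style rule base, and defuzzifies the result into an interaction likelihood. The membership functions, rule set, value ranges, and all names (`tri`, `fuzzify_intention`, `interaction_likelihood`) are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch (NOT the paper's model): a Mamdani-style fuzzy
# combiner for (a) the user's motion-intention confidence, (b) the other
# party's motion-intention confidence, and (c) the inter-person distance
# as a stand-in for environmental information.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_intention(score):
    """Fuzzify an intention-confidence score in [0, 1] (assumed scale)."""
    return {"low": tri(score, -0.5, 0.0, 0.5),
            "high": tri(score, 0.5, 1.0, 1.5)}

def fuzzify_distance(d):
    """Fuzzify the distance between the parties in metres (assumed ranges)."""
    return {"near": tri(d, -1.0, 0.0, 1.5),
            "far": tri(d, 1.0, 2.5, 4.0)}

def interaction_likelihood(user_intent, other_intent, distance):
    """min = fuzzy AND, max = fuzzy OR; weighted-average defuzzification."""
    u = fuzzify_intention(user_intent)
    o = fuzzify_intention(other_intent)
    d = fuzzify_distance(distance)

    # Illustrative rule base: an interaction is likely only when BOTH
    # parties intend to interact AND the environment (distance) permits it.
    rules = [
        (min(u["high"], o["high"], d["near"]), 1.0),    # -> interact
        (min(u["high"], o["low"]), 0.3),                # -> probably not
        (min(u["low"], o["high"]), 0.3),                # -> probably not
        (max(min(u["low"], o["low"]), d["far"]), 0.0),  # -> no interaction
    ]
    total = sum(w for w, _ in rules)
    return sum(w * y for w, y in rules) / total if total > 0 else 0.0

if __name__ == "__main__":
    # Both parties show strong intention at 0.8 m apart: high likelihood.
    print(interaction_likelihood(0.9, 0.8, 0.8))  # ~1.0
    # The other party shows no intention: low likelihood.
    print(interaction_likelihood(0.9, 0.1, 0.8))  # ~0.3
```

Using min for fuzzy AND, max for fuzzy OR, and a weighted average for defuzzification keeps the sketch dependency-free; an actual implementation would tune the membership functions and rules to the robot's sensing pipeline.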
