A Multi-sensor Fusion Approach for Intention Detection

For assistive devices to seamlessly and promptly assist users with activities of daily living (ADL), it is important to understand the user's intention. Current assistive systems are mostly driven by unimodal sensory input, which limits their accuracy and responsiveness. In this paper, we propose a context-aware sensor fusion framework for intention detection in assistive robotic devices that fuses information from a wearable video camera and wearable inertial measurement unit (IMU) sensors. A Naive Bayes classifier predicts the intent to move from the IMU data together with object classification results obtained from the video data. The proposed approach achieves an accuracy of 85.2% in detecting movement intention.
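To make the fusion step concrete, the sketch below shows one plausible way to combine IMU features with a camera-derived object class in a Naive Bayes classifier, where the posterior over intent is proportional to the prior times the IMU likelihood (modelled as per-class diagonal Gaussians) times the likelihood of the detected object class (modelled as a categorical distribution). The specific IMU features, the two-class intent labels (rest vs. move), the number of object categories, and the smoothing constants are illustrative assumptions, not the exact pipeline reported in the paper.

```python
# Minimal sketch of Naive Bayes fusion of wearable IMU features and a
# camera-based object class for movement-intention detection.
# Assumptions: 3-D IMU feature vectors per window, integer object-class ids
# (e.g. from a YOLO detector on the wearable camera), two intent classes.
import numpy as np

class IntentNaiveBayes:
    """P(intent | imu, object) ∝ P(intent) · P(imu | intent) · P(object | intent)."""

    def fit(self, imu_feats, obj_ids, labels, n_objects):
        self.classes = np.unique(labels)
        self.priors, self.means, self.vars, self.obj_probs = {}, {}, {}, {}
        for c in self.classes:
            mask = labels == c
            x = imu_feats[mask]
            self.priors[c] = mask.mean()
            self.means[c] = x.mean(axis=0)
            self.vars[c] = x.var(axis=0) + 1e-6              # variance smoothing
            counts = np.bincount(obj_ids[mask], minlength=n_objects) + 1.0
            self.obj_probs[c] = counts / counts.sum()        # Laplace smoothing
        return self

    def predict_proba(self, imu_feat, obj_id):
        log_post = {}
        for c in self.classes:
            # Diagonal-Gaussian log-likelihood of the IMU feature vector
            ll = -0.5 * np.sum(np.log(2 * np.pi * self.vars[c])
                               + (imu_feat - self.means[c]) ** 2 / self.vars[c])
            log_post[c] = (np.log(self.priors[c]) + ll
                           + np.log(self.obj_probs[c][obj_id]))
        m = max(log_post.values())
        exp = {c: np.exp(v - m) for c, v in log_post.items()}
        z = sum(exp.values())
        return {c: v / z for c, v in exp.items()}


# Toy usage: intent 0 = rest, 1 = move; IMU features could be, e.g.,
# mean |acc|, acc variance, and gyro energy over a sliding window.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.2, 0.05, (50, 3)), rng.normal(1.0, 0.2, (50, 3))])
objs = np.concatenate([rng.integers(0, 5, 50), np.full(50, 2)])  # id 2 ≈ graspable object
y = np.concatenate([np.zeros(50, int), np.ones(50, int)])

model = IntentNaiveBayes().fit(X, objs, y, n_objects=5)
print(model.predict_proba(np.array([0.9, 0.9, 1.1]), obj_id=2))
```

Working in log-space and normalizing with a max-shifted softmax keeps the posterior numerically stable; any comparable Naive Bayes implementation (e.g. a mixed Gaussian/categorical model) would serve the same illustrative purpose.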
