A feasibility study of depth image based intent recognition for lower limb prostheses

This paper presents our preliminary work on a depth-camera-based intent recognition system intended for future use in robotic prosthetic legs. The approach infers the subject's activity mode (standing, walking, running, stair ascent, or stair descent) using only data from the depth camera. Depth difference images are also used to improve performance by discriminating between static and dynamic instances. After confidence-map-based filtering, simple features such as the mean, maximum, minimum, and standard deviation are extracted from rectangular regions of each frame. A support vector machine with a cubic kernel performs the classification, and the classification results are post-processed by a voting filter to increase the robustness of activity mode recognition. Experiments with a healthy subject wearing the depth camera on his lower leg demonstrated the efficacy of the approach: the system successfully identified 28 activity mode transitions. The only incorrect mode switch occurred during an intended run-to-stand transition, where an intermediate run-to-walk transition was recognized before the intended standing mode was reached.
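The pipeline described above (per-region statistics as features, followed by majority-vote smoothing of the classifier outputs) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the region grid size, the voting window length, and the function names are assumptions, and the cubic-kernel SVM step is omitted since any frame-level classifier can be slotted in between the two stages shown.

```python
# Hedged sketch of two stages of the recognition pipeline:
# (1) simple statistics over rectangular regions of a depth frame,
# (2) a voting filter over the per-frame classifier labels.
# Grid size and window length below are illustrative assumptions.
from statistics import mean, stdev
from collections import Counter

def region_features(region):
    """Mean, maximum, minimum, and standard deviation of one
    rectangular depth region, given as a flat list of depth values."""
    return [mean(region), max(region), min(region), stdev(region)]

def extract_features(frame, rows, cols):
    """Split a 2-D depth frame (list of lists) into a rows x cols grid of
    rectangular regions and concatenate each region's statistics into
    one feature vector for the classifier."""
    h, w = len(frame), len(frame[0])
    rh, rw = h // rows, w // cols
    feats = []
    for r in range(rows):
        for c in range(cols):
            region = [frame[i][j]
                      for i in range(r * rh, (r + 1) * rh)
                      for j in range(c * rw, (c + 1) * rw)]
            feats.extend(region_features(region))
    return feats

def vote_filter(labels, window=5):
    """Replace each per-frame classifier label with the majority label
    over the last `window` frames, suppressing spurious mode switches."""
    smoothed = []
    for i in range(len(labels)):
        recent = labels[max(0, i - window + 1): i + 1]
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed
```

In practice the feature vectors from `extract_features` would be fed to the classifier (a cubic-kernel SVM in the paper), and its per-frame predictions passed through `vote_filter` before triggering a mode switch, so that a single misclassified frame cannot change the prosthesis mode.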
