Robust Multi-User In-Hand Object Recognition in Human-Robot Collaboration Using a Wearable Force-Myography Device

Practical human-robot collaboration requires intuitive recognition of human intention during shared work. An object grasped by the human, such as a tool, provides vital information about the upcoming task. In this letter, we explore the use of a wearable device to non-visually recognize objects held in the human hand under various possible grasps. The device is based on Force-Myography (FMG), in which simple and affordable force sensors measure perturbations of the forearm muscles. We propose a novel deep neural network architecture, termed Flip-U-Net, inspired by the well-known U-Net architecture used for image segmentation. The Flip-U-Net is trained on data collected from several human participants manipulating multiple objects of each class across different grasps and arm postures. The data are pre-processed with data augmentation and used to train a Variational Autoencoder (VAE) for dimensionality reduction. Whereas prior work did not provide a transferable FMG-based model, we show that the proposed network can classify objects grasped by multiple new users without additional training effort. Experiments with 12 test participants show a classification accuracy of approximately 95% over multiple grasps and objects. We also report correlations between classification accuracy and various anthropometric measures. Furthermore, we show that the model can be fine-tuned to a particular user based on such a measure.
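To illustrate the VAE-based dimensionality-reduction step mentioned above, the sketch below shows a minimal Variational Autoencoder over FMG sensor vectors in PyTorch. This is an illustration only, not the architecture from the letter: the sensor count (16 channels), hidden width (32), and latent size (4) are assumed values, and the Flip-U-Net classifier itself is not reproduced here.

```python
import torch
import torch.nn as nn

class FMGVAE(nn.Module):
    """Minimal VAE mapping an FMG sensor vector to a low-dimensional
    latent code. Dimensions below are illustrative assumptions."""

    def __init__(self, n_sensors: int = 16, latent_dim: int = 4):
        super().__init__()
        # Encoder: sensor vector -> mean and log-variance of latent code.
        self.encoder = nn.Sequential(nn.Linear(n_sensors, 32), nn.ReLU())
        self.fc_mu = nn.Linear(32, latent_dim)
        self.fc_logvar = nn.Linear(32, latent_dim)
        # Decoder: latent code -> reconstructed sensor vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_sensors)
        )

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) via the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the unit Gaussian prior.
    recon_err = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl
```

A standard training loop would minimize `vae_loss` over batches of recorded FMG vectors; the encoder mean `mu` then serves as the low-dimensional representation passed to a downstream classifier.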
