Activity recognition in manufacturing: The roles of motion capture and sEMG+inertial wearables in detecting fine vs. gross motion

In safety-critical environments, robots need to reliably recognize human activity to be effective and trustworthy partners. Since most human activity recognition (HAR) approaches rely on unimodal sensor data (e.g., motion capture or wearable sensors), it is unclear how the relationship between the sensor modality and the motion granularity (e.g., gross or fine) of the activities impacts classification accuracy. To our knowledge, we are the first to investigate the efficacy of using motion capture as compared to wearable sensor data for recognizing human motion in manufacturing settings. We introduce the UCSD-MIT Human Motion dataset, composed of two assembly tasks that entail either gross or fine-grained motion. For both tasks, we compared the accuracy of a Vicon motion capture system to that of a Myo armband using three widely used HAR algorithms. We found that motion capture yielded higher accuracy than the wearable sensor for gross motion recognition (up to 36.95%), while the wearable sensor yielded higher accuracy for fine-grained motion (up to 28.06%). These results suggest that the two sensor modalities are complementary, and that robots may benefit from systems that use multiple modalities to simultaneously, but independently, detect gross and fine-grained motion. Our findings will help guide researchers in numerous fields of robotics, including learning from demonstration and grasping, to choose the sensor modalities most suitable for their applications.
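To make the per-modality comparison concrete, the sketch below (Python, scikit-learn) shows one common way such an evaluation can be structured: slice each sensor stream into fixed-length windows, compute simple per-channel statistical features, and cross-validate a classifier on each modality separately. The synthetic streams, channel counts, window length, feature set, and random-forest classifier are illustrative assumptions made for this sketch; they are not the paper's pipeline, algorithms, or dataset layout.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(stream, labels, win=100, hop=50):
    # Slice a (T, channels) stream into fixed-length windows and compute
    # simple per-channel statistics (mean, std, min, max) as features.
    X, y = [], []
    for start in range(0, len(stream) - win + 1, hop):
        seg = stream[start:start + win]
        X.append(np.concatenate([seg.mean(0), seg.std(0), seg.min(0), seg.max(0)]))
        # Label each window by the majority activity label inside it.
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.array(X), np.array(y)

# Synthetic placeholder streams standing in for the two modalities
# (channel counts are illustrative guesses, not the dataset's layout).
rng = np.random.default_rng(0)
T = 5000
activity = (np.arange(T) // 500) % 4                           # 4 synthetic activities
mocap    = rng.normal(size=(T, 39)) + 0.3 * activity[:, None]  # e.g., 13 markers x 3 axes
wearable = rng.normal(size=(T, 18)) + 0.2 * activity[:, None]  # e.g., 8 sEMG + 10 IMU channels

for name, stream in [("motion capture", mocap), ("sEMG+inertial", wearable)]:
    X, y = window_features(stream, activity)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```

Keeping the windowing, features, and evaluation identical across modalities (as in this sketch) helps ensure that any accuracy gap is attributable to the sensor data itself rather than to differences in the processing pipeline.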
