Are Accelerometers for Activity Recognition a Dead-end?

Accelerometer-based research for Human Activity Recognition (HAR), and by extension research based on other inertial sensors, is a dead end. The sensor does not offer enough information for us to progress on the core problem of HAR: recognizing everyday activities from sensor data. Despite continued and prolonged efforts to improve feature engineering and machine learning models, the set of activities we can recognize reliably has expanded only slightly, and many of the flaws of early models persist today. Instead of relying on acceleration data, we should consider modalities with much richer information; images are a logical choice. Given the rapid advances in image sensing hardware and modelling techniques, we believe that widespread adoption of image sensors will open many opportunities for accurate and robust inference across a wide spectrum of human activities. In this paper, we make the case for imagers in place of accelerometers as the default sensor for human activity recognition. Our review of past work leads to the observation that progress in HAR has stalled, a stall we attribute to our reliance on accelerometers. We further argue for the suitability of images for activity recognition by illustrating their richness of information and the marked progress in computer vision. Through a feasibility analysis, we find that deploying imagers and CNNs on device poses no substantial burden on modern mobile hardware. Overall, our work highlights the need to move away from accelerometers and calls for further exploration of imagers for activity recognition.
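To make the feasibility claim concrete, the sketch below estimates the footprint of the kind of compact CNN that could classify imager frames on device. It is a minimal illustration under stated assumptions, not the paper's actual analysis: the MobileNetV2 backbone, the 10 activity classes, and the single 224x224 RGB frame are all choices made for the example.

```python
import time

import torch
from torchvision import models

# Minimal sketch: footprint of a compact CNN for on-device activity
# recognition. MobileNetV2 and the 10-class output are assumptions,
# standing in for any small imager-frame classifier.
model = models.mobilenet_v2(num_classes=10)
model.eval()

# Parameter count gives a rough memory footprint at 32-bit precision.
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.1f}M (~{n_params * 4 / 1e6:.0f} MB fp32)")

# Single-frame latency on CPU, as a crude stand-in for a mobile SoC.
frame = torch.randn(1, 3, 224, 224)  # one RGB frame from the imager
with torch.no_grad():
    start = time.perf_counter()
    logits = model(frame)
    elapsed = time.perf_counter() - start
print(f"single-frame inference: {elapsed * 1e3:.0f} ms, "
      f"output shape {tuple(logits.shape)}")
```

A few million parameters and tens of milliseconds per frame on a laptop CPU is the right order of magnitude for this class of model; quantization and pruning would shrink the footprint further on actual mobile hardware.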
