WiLAR: A Location-adapted Action Recognition System based on WiFi

In modern society, wireless signals are ubiquitous in indoor environments such as homes, offices, and shopping malls, facilitating daily life in many ways. Action recognition is a technique in soaring demand in the field of human-computer interaction. Although previous studies have proposed various methods for action recognition using wireless signals, recognizing actions in locations with limited data remains very challenging. To realize accurate action recognition with wireless signals, we propose WiLAR, a location-adapted action recognition system based on WiFi, which enables action detection, segmentation, and recognition with commodity WiFi devices in locations with different amounts of data. WiLAR extracts informative features from fine-grained WiFi channel state information (CSI) and feeds them into elaborately designed deep learning models to realize action recognition across locations. In our dedicated experiments, WiLAR achieves an average accuracy of 97% for workout recognition in locations with plenty of data, and it also outperforms other recognition models in locations with limited training data.
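The pipeline summarized above (extracting features from CSI windows, then feeding them to a learned classifier) can be sketched roughly as follows. This is a minimal illustration, not WiLAR's actual design: the statistical feature set, the window shape (time steps × subcarriers), and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def extract_csi_features(csi_amplitude):
    """Extract simple per-subcarrier statistics from a CSI amplitude window.

    csi_amplitude: array of shape (time_steps, subcarriers).
    Returns a 1-D feature vector: per-subcarrier mean, standard
    deviation, and amplitude range. This feature set is hypothetical,
    chosen only to illustrate the feature-extraction step.
    """
    mean = csi_amplitude.mean(axis=0)
    std = csi_amplitude.std(axis=0)
    amp_range = csi_amplitude.max(axis=0) - csi_amplitude.min(axis=0)
    return np.concatenate([mean, std, amp_range])

# Synthetic CSI window: 100 time steps x 30 subcarriers (illustrative shapes).
rng = np.random.default_rng(0)
window = rng.normal(size=(100, 30))
features = extract_csi_features(window)
# The resulting vector would then be passed to a downstream model.
```

In practice the feature vector would be consumed by the recognition model (e.g. a deep network trained per location); the classifier itself is omitted here since the abstract does not specify its architecture.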
