Qualitative Action Recognition by Wireless Radio Signals in Human–Machine Systems

Human–machine systems require a deep understanding of human behaviors. Most existing research on action recognition has focused on discriminating between different actions; however, the quality with which an action is executed has received little attention thus far. In this paper, we study the quality assessment of driving behaviors and present WiQ, a system that assesses action quality from radio signals. The system comprises three key components: a deep-neural-network-based learning engine that extracts quality information from changes in signal strength, a gradient-based method that detects the signal boundary of an individual action, and an activity-based fusion policy that improves recognition performance in noisy environments. Using the quality information, WiQ differentiates among three body statuses with an accuracy of 97%, and identifies individual drivers among 15 with an average accuracy of 88%. Our results show that, through dedicated analysis of radio signals, fine-grained action characterization can be achieved, which can facilitate a wide variety of applications, such as smart driving assistants.
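To make the segmentation step concrete, the sketch below shows one plausible form of gradient-based boundary detection on a received-signal-strength (RSSI) trace: abrupt changes between consecutive samples are treated as candidate action boundaries. The function name, thresholds, and synthetic trace are illustrative assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def detect_action_boundaries(rssi, grad_threshold=2.0, min_gap=20):
    """Segment an RSSI trace into candidate action windows (illustrative sketch).

    A sharp change in signal strength between consecutive samples is treated
    as a potential action boundary; boundaries closer than `min_gap` samples
    are merged. Threshold values are hypothetical, not from the paper.
    """
    gradient = np.abs(np.diff(rssi))              # first-order difference of signal strength
    candidates = np.where(gradient > grad_threshold)[0]

    boundaries = []
    for idx in candidates:
        if not boundaries or idx - boundaries[-1] >= min_gap:
            boundaries.append(int(idx))
    # Pair successive boundaries as (start, end) action windows.
    return list(zip(boundaries[::2], boundaries[1::2]))

# Example: a synthetic RSSI trace (in dBm) with two abrupt level shifts.
trace = np.concatenate([np.full(50, -40.0), np.full(50, -55.0),
                        np.full(50, -42.0)]) + np.random.normal(0, 0.2, 150)
print(detect_action_boundaries(trace))
```

In practice, the segmented windows would then be fed to the learning engine, and per-window decisions could be combined by the fusion policy described above.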
