HeadScan: A Wearable System for Radio-Based Sensing of Head and Mouth-Related Activities

The popularity of wearables continues to rise. However, their functionalities and applications are constrained by the types of sensors currently available. Accelerometers and gyroscopes struggle to capture complex user activities. Microphones and image sensors are more powerful but capture privacy-sensitive information. Physiological sensors are obtrusive to users since they often require skin contact and must be placed at specific body positions to function. In contrast, radio-based sensing uses wireless radio signals to capture the movements of different parts of the body caused by human activities, and therefore provides a contactless and privacy-preserving approach to detecting and monitoring human activities. In this paper, we contribute to the search for a new sensing modality for the next generation of wearable devices by exploring the feasibility of radio-based human activity sensing and recognition in a wearable setting. We envision that radio-based sensing has the potential to fundamentally transform wearables as we currently know them. As the first step toward this vision, we have designed and developed HeadScan, a first-of-its-kind wearable for radio-based sensing of a number of human activities that involve head and mouth movements. HeadScan requires only a pair of small antennas placed on the shoulder and collar and one wearable unit worn on the arm or the belt of the user. HeadScan uses fine-grained channel state information (CSI) measurements extracted from the radio signals and incorporates a radio signal processing pipeline that converts the raw CSI measurements into the targeted human activities. To examine the feasibility and performance of HeadScan, we collected about 50.5 hours of data from seven users. Our wide-ranging experiments, including comparisons to a conventional skin-contact audio-based sensing approach for tracking the same set of head and mouth-related activities, highlight the enormous potential of our radio-based sensing approach and provide guidance for future explorations.
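The abstract does not detail the stages of the CSI processing pipeline, but a minimal illustrative sketch of a generic pipeline of this kind might look as follows. Everything here is an assumption for illustration rather than HeadScan's actual method: the 100 Hz sampling rate, the band-pass cutoffs, the window statistics, and the SVM classifier are all hypothetical choices, and the synthetic sinusoid-plus-noise signals merely stand in for real CSI amplitude streams.

```python
# Hypothetical sketch: CSI amplitude -> band-pass filter -> windowed
# features -> classifier. Parameter choices are assumptions, not HeadScan's.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 100  # assumed CSI sampling rate in Hz (not specified in the abstract)

def bandpass(x, low=0.3, high=8.0, fs=FS):
    """Keep the frequency band where head/mouth movements plausibly live."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def window_features(x, win=2 * FS, step=FS):
    """Slide a 2 s window over the filtered signal; compute simple statistics."""
    feats = []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win]
        feats.append([seg.mean(), seg.std(),
                      np.abs(np.diff(seg)).mean(),
                      np.percentile(seg, 90) - np.percentile(seg, 10)])
    return np.array(feats)

# Synthetic stand-ins for CSI amplitude streams of two activities.
rng = np.random.default_rng(0)
t = np.arange(60 * FS) / FS
activity_a = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
activity_b = np.sin(2 * np.pi * 4.0 * t) + 0.3 * rng.standard_normal(t.size)

X = np.vstack([window_features(bandpass(activity_a)),
               window_features(bandpass(activity_b))])
y = np.array([0] * (len(X) // 2) + [1] * (len(X) - len(X) // 2))

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The design point this sketch captures is the one the abstract implies: raw CSI is too noisy and high-rate to classify directly, so filtering and windowed feature extraction reduce it to a representation a standard classifier can label with an activity.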
