Exploring audio and kinetic sensing on earable devices

In this paper, we explore audio and kinetic sensing on earable devices in a commercial off-the-shelf form factor. For the study, we prototyped earbud devices with a 6-axis inertial measurement unit and a microphone. We systematically investigate the differential characteristics of the audio and inertial signals to assess their feasibility for human activity recognition. Our results demonstrate that earable devices offer a superior signal-to-noise ratio under the influence of motion artefacts and are less susceptible to ambient acoustic noise. We then present a set of activity primitives and corresponding signal processing pipelines that showcase the capabilities of earbud devices in translating accelerometer, gyroscope, and audio signals into the targeted human activities, with a mean accuracy of up to 88% under varying environmental conditions.
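
To make the pipeline concrete, below is a minimal sketch of a windowed activity-recognition pipeline of the kind described above, operating on the 6-axis IMU stream. The sampling rate, window length, feature set, and random-forest classifier are illustrative assumptions, not the pipeline evaluated in the paper; the audio channel is omitted for brevity, and synthetic data stands in for real earbud recordings.

```python
# Illustrative sketch of a windowed activity-recognition pipeline for
# earbud IMU data. Window length, feature set, and classifier choice
# are assumptions for demonstration, not the paper's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 100          # assumed IMU sampling rate (Hz)
WIN = 2 * FS      # assumed 2-second analysis window

def extract_features(window):
    """Per-axis time-domain features over one window of shape (WIN, 6):
    3 accelerometer + 3 gyroscope axes."""
    feats = []
    for axis in window.T:
        feats += [axis.mean(), axis.std(),
                  np.sqrt(np.mean(axis ** 2)),   # RMS
                  axis.max() - axis.min()]       # peak-to-peak amplitude
    return np.array(feats)

def windows(signal, step=WIN // 2):
    """Slide a half-overlapping window over an (n_samples, 6) IMU stream."""
    for start in range(0, len(signal) - WIN + 1, step):
        yield signal[start:start + WIN]

# Synthetic stand-in for one minute of labelled earbud IMU recordings.
rng = np.random.default_rng(0)
X = np.stack([extract_features(w)
              for w in windows(rng.normal(size=(FS * 60, 6)))])
y = rng.integers(0, 3, size=len(X))  # three placeholder activity classes

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```

With real recordings, the synthetic arrays would be replaced by labelled accelerometer and gyroscope windows, and audio features (e.g., spectral statistics over the same windows) could be concatenated onto the IMU feature vectors before classification.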
